_id
stringlengths 0
24
| slug
stringlengths 0
132
| title
stringlengths 0
313
| draft
null | shortform
bool 1
class | hideCommentKarma
bool 1
class | af
bool 2
classes | currentUserReviewVote
null | userId
stringlengths 17
24
| coauthorStatuses
listlengths 0
18
⌀ | hasCoauthorPermission
bool 2
classes | rejected
bool 1
class | debate
bool 2
classes | collabEditorDialogue
bool 2
classes | __typename
stringclasses 1
value | url
stringlengths 0
432
⌀ | postedAt
stringdate 2007-06-22 22:30:00
2025-06-28 01:40:04
| createdAt
null | sticky
bool 2
classes | metaSticky
bool 2
classes | stickyPriority
int64 2
2
| status
int64 2
2
| frontpageDate
stringdate 2018-01-30 00:32:03
2025-06-28 02:24:31
⌀ | meta
bool 2
classes | deletedDraft
bool 1
class | postCategory
stringclasses 3
values | shareWithUsers
sequencelengths 0
23
| sharingSettings
float64 | linkSharingKey
null | contents_latest
stringlengths 17
24
⌀ | commentCount
int64 0
2k
| voteCount
int64 -59
922
| baseScore
int64 -10
945
| unlisted
bool 1
class | score
float64 -0
5.05
| lastVisitedAt
null | isFuture
bool 1
class | isRead
bool 1
class | lastCommentedAt
stringdate 2007-08-06 20:29:51
2025-06-28 14:23:54
| lastCommentPromotedAt
stringclasses 21
values | canonicalCollectionSlug
stringclasses 4
values | curatedDate
stringclasses 691
values | commentsLocked
bool 2
classes | commentsLockedToAccountsCreatedAfter
stringclasses 1
value | question
bool 2
classes | hiddenRelatedQuestion
bool 1
class | originalPostRelationSourceId
stringclasses 46
values | location
null | googleLocation
null | onlineEvent
bool 1
class | globalEvent
bool 1
class | startTime
null | endTime
null | localStartTime
null | localEndTime
null | eventRegistrationLink
null | joinEventLink
null | facebookLink
stringclasses 1
value | meetupLink
null | website
stringclasses 1
value | contactInfo
stringclasses 1
value | isEvent
bool 1
class | eventImageId
null | eventType
null | types
sequencelengths 0
2
⌀ | groupId
stringclasses 106
values | reviewedByUserId
stringclasses 19
values | suggestForCuratedUserIds
null | suggestForCuratedUsernames
null | reviewForCuratedUserId
stringclasses 12
values | authorIsUnreviewed
bool 1
class | afDate
stringclasses 590
values | suggestForAlignmentUserIds
sequencelengths 0
4
| reviewForAlignmentUserId
stringclasses 6
values | afBaseScore
float64 -21
217
⌀ | afCommentCount
int64 0
149
| afLastCommentedAt
stringdate 2007-06-26 21:13:26
2025-06-28 01:40:04
⌀ | afSticky
bool 2
classes | hideAuthor
bool 2
classes | moderationStyle
stringclasses 4
values | ignoreRateLimits
bool 2
classes | submitToFrontpage
bool 2
classes | onlyVisibleToLoggedIn
bool 1
class | onlyVisibleToEstablishedAccounts
bool 2
classes | reviewCount
int64 0
8
| reviewVoteCount
int64 0
115
| positiveReviewVoteCount
int64 0
98
| manifoldReviewMarketId
stringclasses 900
values | annualReviewMarketProbability
float64 0.01
0.99
⌀ | annualReviewMarketIsResolved
bool 2
classes | annualReviewMarketYear
float64 2.02k
2.03k
⌀ | annualReviewMarketUrl
stringclasses 900
values | group
float64 | podcastEpisodeId
stringclasses 396
values | forceAllowType3Audio
bool 1
class | nominationCount2019
int64 0
6
| reviewCount2019
int64 0
6
| votingSystem
stringclasses 2
values | disableRecommendation
bool 2
classes | coauthors
listlengths 0
18
| readTimeMinutes
int64 1
315
| rejectedReason
stringclasses 12
values | customHighlight
float64 | lastPromotedComment
float64 | bestAnswer
float64 | tags
listlengths 0
31
| feedId
stringclasses 45
values | totalDialogueResponseCount
int64 0
0
| unreadDebateResponseCount
int64 0
0
| dialogTooltipPreview
stringclasses 6
values | disableSidenotes
bool 2
classes | currentUserVote
null | currentUserExtendedVote
null | extendedScore.agreement
float64 -6
2
⌀ | extendedScore.approvalVoteCount
float64 1
922
⌀ | extendedScore.agreementVoteCount
float64 0
1
⌀ | afExtendedScore.agreement
float64 -6
2
⌀ | afExtendedScore.approvalVoteCount
float64 0
175
⌀ | afExtendedScore.agreementVoteCount
float64 0
1
⌀ | user._id
stringlengths 17
24
⌀ | user.slug
stringlengths 2
40
⌀ | user.createdAt
stringdate 2009-02-17 05:49:50
2025-06-26 13:32:01
⌀ | user.username
stringlengths 1
64
⌀ | user.displayName
stringlengths 1
43
⌀ | user.profileImageId
float64 | user.previousDisplayName
float64 | user.fullName
stringclasses 979
values | user.karma
float64 -1,560
150k
⌀ | user.afKarma
float64 -63
6.7k
⌀ | user.deleted
bool 1
class | user.isAdmin
bool 2
classes | user.htmlBio
stringlengths 0
9.48k
⌀ | user.jobTitle
float64 | user.organization
float64 | user.postCount
float64 0
1.02k
⌀ | user.commentCount
float64 0
16.1k
⌀ | user.sequenceCount
float64 0
40
⌀ | user.afPostCount
float64 -4
364
⌀ | user.afCommentCount
float64 0
1.39k
⌀ | user.spamRiskScore
float64 0
1
⌀ | user.tagRevisionCount
float64 0
3.8k
⌀ | user.reviewedByUserId
stringclasses 18
values | user.__typename
stringclasses 1
value | user.moderationStyle
stringclasses 4
values | user.bannedUserIds
sequencelengths 0
6
⌀ | user.moderatorAssistance
bool 2
classes | user.groups
sequencelengths 0
289
⌀ | user.banned
stringclasses 30
values | user.allCommentingDisabled
float64 | socialPreviewData._id
stringlengths 0
24
| socialPreviewData.imageUrl
stringlengths 0
149k
| socialPreviewData.__typename
stringclasses 1
value | contents._id
stringlengths 17
24
⌀ | contents.htmlHighlight
stringlengths 0
2.31M
⌀ | contents.plaintextDescription
stringlengths 0
2k
⌀ | contents.wordCount
float64 0
78.7k
⌀ | contents.version
stringclasses 299
values | contents.__typename
stringclasses 1
value | fmCrosspost.isCrosspost
bool 2
classes | fmCrosspost.hostedHere
bool 2
classes | fmCrosspost.foreignPostId
stringlengths 17
17
⌀ | fmCrosspost.__typename
stringclasses 1
value |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
vHDowQtsiy2xK38H4 | axrp-episode-44-peter-salib-on-ai-rights-for-human-safety | AXRP Episode 44 - Peter Salib on AI Rights for Human Safety | null | false | false | true | null | DgsGzjyBXN8XSK22q | null | true | false | false | false | Post | null | 2025-06-28T01:40:04.658Z | null | false | false | 2 | 2 | 2025-06-28T02:24:18.106Z | false | false | post | [] | null | null | HQHcrmhniucS97wzD | 0 | 1 | 8 | false | 0.824727 | null | false | false | 2025-06-28T01:40:04.658Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-28T01:40:04.658Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 124 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "vjKs7Pvz3MbgMc75C",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-03-26T12:39:55.451Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Audio",
"needsReview": false,
"noindex": false,
"postCount": 125,
"score": 0,
"shortName": null,
"slug": "audio",
"suggestedAsFilter": false,
"userId": "BpBzKEueak7J8vHNi",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "8Ec9rD286qNstoiGH",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-12-23T10:49:50.259Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AXRP",
"needsReview": false,
"noindex": false,
"postCount": 60,
"score": 9,
"shortName": null,
"slug": "axrp",
"suggestedAsFilter": false,
"userId": "4Kn3eZCPNB8gw4YSi",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "chuP2QqQycjD8qakL",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-22T03:42:53.917Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 1000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Coordination / Cooperation",
"needsReview": false,
"noindex": false,
"postCount": 306,
"score": 19,
"shortName": null,
"slug": "coordination-cooperation",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "9DNZfxFvY5iKoZQbz",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2019-11-13T22:47:01.189Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interviews",
"needsReview": false,
"noindex": false,
"postCount": 120,
"score": 0,
"shortName": null,
"slug": "interviews",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "wGGAjTfXZBatQkft5",
"adminOnly": false,
"afBaseScore": 7,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"baseScore": 17,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T09:26:08.406Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "sKAL2jzfkYkDbQmx9",
"displayName": "Yoav Ravid"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Law and Legal systems",
"needsReview": false,
"noindex": false,
"postCount": 101,
"score": 17,
"shortName": null,
"slug": "law-and-legal-systems",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "BhfefamXXee6c2CH8",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-11T06:51:38.152Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Transcripts",
"needsReview": false,
"noindex": false,
"postCount": 78,
"score": 0,
"shortName": null,
"slug": "transcripts",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | n7xK8hmp8XmQLWrNT | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 1 | 0 | DgsGzjyBXN8XSK22q | danielfilan | 2014-01-30T11:04:39.341Z | DanielFilan | DanielFilan | null | null | null | 8,823 | 1,852 | false | false | null | null | 150 | 1,377 | 1 | 26 | 353 | 1 | 8 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"alignmentForum",
"trustLevel1",
"alignmentVoters",
"canModeratePersonal",
"tagManager"
] | null | null | vHDowQtsiy2xK38H4 | SocialPreviewType | HQHcrmhniucS97wzD | <p><a href="https://youtu.be/IK079sBIq2c">YouTube link</a></p><p>In this episode, I talk with Peter Salib about his paper “AI Rights for Human Safety”, arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells me how law reviews work, in the face of my incredulity.</p><p>Topics we discuss:</p>
<ul>
<li><a href="#why-ai-rights">Why AI rights</a></li>
<li><a href="#why-not-reputation">Why not reputation</a></li>
<li><a href="#rights-to-war">Do AI rights lead to AI war?</a></li>
<li><a href="#scope-for-trade">Scope for human-AI trade</a></li>
<li><a href="#conc-w-comp-adv">Concerns with comparative advantage</a></li>
<li><a href="#proxy-ai-wars">Proxy AI wars</a></li>
<li><a href="#rights-and-profit">Can companies profitably make AIs with rights?</a></li>
<li><a href="#rights-and-safety">Can we have AI rights and AI safety measures?</a></li>
<li><a href="#liability-ai-rights">Liability for AIs with rights</a></li>
<li><a href="#which-ais-get-rights">Which AIs get rights?</a></li>
<li><a href="#ai-rights-and-sgd">AI rights and stochastic gradient descent</a></li>
<li><a href="#individuating-ais">Individuating “AIs”</a></li>
<li><a href="#social-insts-for-ai-safety">Social institutions for AI safety</a></li>
<li><a href="#outer-misalignment-and-trading">Outer misalignment and trading with AIs</a></li>
<li><a href="#why-statutes-of-lims">Why statutes of limitations should exist</a></li>
<li><a href="#starting-aixr-in-academia">Starting AI x-risk research in legal academia</a></li>
<li><a href="#how-law-revs-and-ai-confs-work">How law reviews and AI conferences work</a></li>
<li><a href="#more-on-peter-moving-aixr">More on Peter moving to AI x-risk research</a></li>
<li><a href="#reception">Reception of the paper</a></li>
<li><a href="#what-publishing-in-law-revs-does">What publishing in law reviews does</a></li>
<li><a href="#which-bits-legal-ac-focus-on-ai">Which parts of legal academia focus on AI</a></li>
<li><a href="#following-peters-research">Following Peter’s research</a></li>
</ul>
<p><strong>Daniel Filan</strong> (00:00:09):
Hello, everybody. In this episode I’ll be speaking with Peter Salib. Peter is a law professor at the University of Houston, the co-director for the <a href="https://clair-ai.org/">Center for Law and AI Risk</a>, and he serves as law and policy advisor for the <a href="https://safe.ai/">Center for AI Safety</a>. There’s a transcript of this episode at <a href="https://axrp.net/">axrp.net</a> and links to papers we discuss are available in the description. You can support the podcast at <a href="https://patreon.com/axrpodcast">patreon.com/axrpodcast</a>, or give me feedback about this episode at <a href="axrp.fyi">axrp.fyi</a>. Well, let’s continue to the interview.</p><p>(00:00:35):
Well, Peter, welcome to the podcast.</p><p><strong>Peter Salib</strong> (00:00:38):
Thank you so much for having me. I’m a big fan.</p>
<h2>Why AI rights <a name="why-ai-rights"></a></h2>
<p><strong>Daniel Filan</strong> (00:00:40):
So I guess, probably, we’re going to focus a lot on your recent paper, <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4913167">“AI Rights for Human Safety”</a>. So you wrote this, yourself and <a href="https://www.simondgoldstein.com/">Simon Goldstein</a>. So can you tell us, just to start off with, what’s the basic idea of this paper?</p><p><strong>Peter Salib</strong> (00:00:56):
Yeah, I think at a very high level, one intuition that we’re trying to pump is the idea that how AIs treat us - and by AIs we mean something like post-AGI AIs, really agentic, quite capable of things - so how those AIs treat us will depend to some significant extent on how we treat them. And a big part of how we decide how to treat various kinds of entit... </p> | YouTube link
In this episode, I talk with Peter Salib about his paper “AI Rights for Human Safety”, arguing that giving AIs the right to contract, hold property, and sue people will reduce the risk of their trying to attack humanity and take over. He also tells me how law reviews work, in the face of my incredulity.
Topics we discuss:
* Why AI rights
* Why not reputation
* Do AI rights lead to AI war?
* Scope for human-AI trade
* Concerns with comparative advantage
* Proxy AI wars
* Can companies profitably make AIs with rights?
* Can we have AI rights and AI safety measures?
* Liability for AIs with rights
* Which AIs get rights?
* AI rights and stochastic gradient descent
* Individuating “AIs”
* Social institutions for AI safety
* Outer misalignment and trading with AIs
* Why statutes of limitations should exist
* Starting AI x-risk research in legal academia
* How law reviews and AI conferences work
* More on Peter moving to AI x-risk research
* Reception of the paper
* What publishing in law reviews does
* Which parts of legal academia focus on AI
* Following Peter’s research
Daniel Filan (00:00:09): Hello, everybody. In this episode I’ll be speaking with Peter Salib. Peter is a law professor at the University of Houston, the co-director for the Center for Law and AI Risk, and he serves as law and policy advisor for the Center for AI Safety. There’s a transcript of this episode at axrp.net and links to papers we discuss are available in the description. You can support the podcast at patreon.com/axrpodcast, or give me feedback about this episode at axrp.fyi. Well, let’s continue to the interview.
(00:00:35): Well, Peter, welcome to the podcast.
Peter Salib (00:00:38): Thank you so much for having me. I’m a big fan.
Why AI rights
Daniel Filan (00:00:40): So I guess, probably, we’re going to focus a lot on your recent paper, “AI Rights for Human Safety”. So you wrote this, yourself and Simon Goldstein. So can you tell us, just to star | 30,917 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
ccz9fJMhRGqApYeBe | the-most-famous-lunchtime-question-in-the-history-of-the | The Most Famous Lunchtime Question in the History of the World | null | false | false | false | null | HbpRGu2epBkGtacaT | null | true | false | false | false | Post | null | 2025-06-28T01:16:31.404Z | null | false | false | 2 | 2 | 2025-06-28T02:24:30.213Z | false | false | post | [] | null | null | bwjTbiQnAjyPnDfHB | 3 | 2 | -1 | false | 0.413965 | null | false | false | 2025-06-28T13:49:14.814Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-27T04:55:15.975Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | HbpRGu2epBkGtacaT | kiwiscribe | 2025-06-24T04:50:10.544Z | Kiwiscribe | Kiwiscribe | null | null | null | -2 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.8 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | ccz9fJMhRGqApYeBe | SocialPreviewType | bwjTbiQnAjyPnDfHB | <p>I have an Enthusiastic Layperson's Moderate Obsession (yes, it's an ELMO) with what may be the most consequential lunchtime question in scientific history.</p><p>In 1950, physicist Enrico Fermi was having lunch with colleagues at Los Alamos when the conversation turned to UFOs and the possibility of extraterrestrial life. They were discussing the vast number of stars and the probability that some should harbor intelligent civilizations - the sort of conversation that may have been routine among physicists but is quite dissimilar to my lunchtime conversations of why my socks don't match and whether loaded fries constitute a salad.</p><p>And then Fermi posed the question that triggered my ELMO for the better part of my adult life: "Where is everybody?"</p><p>The numbers alone feel beyond real human comprehension - I know what the numbers <i>mean</i> but do I genuinely understand the scope and scale? I think probably not. We're looking at 200-400 billion stars in our galaxy, and roughly 70 sextillion stars in the observable universe. Even granting extremely conservative values to the Drake Equation² (not an ELMO, but a Very Interesting Thing), the numbers strongly suggest we should have detected thousands of civilizations by now.</p><p>We haven't. Not one.</p><p>After seven decades of increasingly sophisticated searches, we've found complete silence. This isn't merely an absence of evidence - it's a systematic void that spans the entire observable cosmos. It's rather like surveying a metropolis that should contain millions of inhabitants and finding it not just quiet, but empty.</p><p>So, taking the above into account, over the past ten years I have been germinating some ideas of my own (yes I am as skeptical of this development as you most probably are). I've started to think this puzzle may have a solution - and we may be living through the answer right now. If I'm correct, the implications are both sobering and, perhaps surprisingly, hopeful.</p><p>But first, let me explain why every proposed solution to this mystery has struck me as fundamentally inadequate.</p><h2>The Inadequacy of Standard Explanations</h2><p>For seventy years, scientists, public intellectuals, economists (good lord! Economists!), philosophers and other worthies have offered various solutions to the Fermi Paradox. But I have felt that every explanation suffers from problems that should trouble anyone committed to rigorous thinking.</p><p>Take the Zoo Hypothesis - th... </p> | I have an Enthusiastic Layperson's Moderate Obsession (yes, it's an ELMO) with what may be the most consequential lunchtime question in scientific history.
In 1950, physicist Enrico Fermi was having lunch with colleagues at Los Alamos when the conversation turned to UFOs and the possibility of extraterrestrial life. They were discussing the vast number of stars and the probability that some should harbor intelligent civilizations - the sort of conversation that may have been routine among physicists but is quite dissimilar to my lunchtime conversations of why my socks don't match and whether loaded fries constitute a salad.
And then Fermi posed the question that triggered my ELMO for the better part of my adult life: "Where is everybody?"
The numbers alone feel beyond real human comprehension - I know what the numbers mean but do I genuinely understand the scope and scale? I think probably not. We're looking at 200-400 billion stars in our galaxy, and roughly 70 sextillion stars in the observable universe. Even granting extremely conservative values to the Drake Equation² (not an ELMO, but a Very Interesting Thing), the numbers strongly suggest we should have detected thousands of civilizations by now.
We haven't. Not one.
After seven decades of increasingly sophisticated searches, we've found complete silence. This isn't merely an absence of evidence - it's a systematic void that spans the entire observable cosmos. It's rather like surveying a metropolis that should contain millions of inhabitants and finding it not just quiet, but empty.
So, taking the above into account, over the past ten years I have been germinating some ideas of my own (yes I am as skeptical of this development as you most probably are). I've started to think this puzzle may have a solution - and we may be living through the answer right now. If I'm correct, the implications are both sobering and, perhaps surprisingly, hopeful.
But first, let me explain why every proposed solution to th | 3,725 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
QbmZuCBE2uWS7j28R | prediction-markets-have-an-anthropic-bias-to-deal-with | Prediction Markets Have an Anthropic Bias to Deal With | null | false | false | false | null | ExZDMq64w4am7mjhZ | null | true | false | false | false | Post | null | 2025-06-28T01:16:05.041Z | null | false | false | 2 | 2 | 2025-06-28T02:24:32.478Z | false | false | post | [] | null | null | Aw67kbG2xf5R4gg33 | 1 | 4 | 4 | false | 0.635615 | null | false | false | 2025-06-28T01:31:34.448Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-26T16:13:25.821Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 14 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "PbShukhzpLsWpGXkM",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T10:45:33.205Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Anthropics",
"needsReview": false,
"noindex": false,
"postCount": 273,
"score": 19,
"shortName": null,
"slug": "anthropics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oNcqyaWPXNGTTRPHm",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-12-23T09:11:59.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Existential risk",
"needsReview": false,
"noindex": false,
"postCount": 515,
"score": 0,
"shortName": null,
"slug": "existential-risk",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "R6dqPii4cyNpuecLt",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-01-14T03:06:53.703Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Prediction Markets",
"needsReview": false,
"noindex": false,
"postCount": 171,
"score": 19,
"shortName": null,
"slug": "prediction-markets",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 0 | 0 | ExZDMq64w4am7mjhZ | ar-sht | 2023-05-11T22:22:45.636Z | ar-sht | ar-sht | null | null | Ari Shtein | 3 | 0 | false | false | <p>Writing at <a href="https://mistakesweremade.substack.com/">mistakesweremade.substack.com</a></p> | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | QbmZuCBE2uWS7j28R | SocialPreviewType | Aw67kbG2xf5R4gg33 | <p><i><strong>TL;DR</strong>: Whether </i><a href="https://nickbostrom.com/papers/anthropicshadow.pdf"><i>Ćirković et al.'s Anthropic Shadow</i></a><i> is philosophically legitimate or not, it contains a totally relevant intuition for profiting off prediction markets: <strong>if you're dead when the taxman comes, it doesn't matter how much you owe</strong>. As such, there's either an opportunity to make money by betting against x-risk-correlated events, or (once everyone's figured this out) prediction markets will systematically underestimate x-risk-correlated outcomes. In the post, I summarize a bit of the general debate about the Anthropic Shadow; then I present a toy model for the effect described, and a generalized version. I also list some candidate x-risk-correlated markets that could be vulnerable to the Anthropic Bias. Finally, I bring up and address some limitations I could think of—probably there are lots more, and I'd love to hear what they are!</i></p><hr><h1>What Am I Even Talking About?</h1><p>In 2010, a group of astronomers and philosophers (including Nick Bostrom) <a href="https://nickbostrom.com/papers/anthropicshadow.pdf">published a paper</a> arguing that we might be underestimating some kinds of existential risk, because we weren't taking into consideration the fact that we're all alive.</p><p>For example, if a very-deadly asteroid strike had ever occurred, it would probably have killed everyone—leaving no one to record it or study the frequency of deadly asteroid strikes. As a result, our own frequency-estimates of asteroid strikes may well be systematically biased, because we can only see asteroid-strike-histories compatible with continued existence.</p><p>In other words, all the counterfactual realities where catastrophic events <i>have</i> played out cast an "Anthropic Shadow" over our risk estimates: no observers exist within them, so we can never really see a fully-truly-representative sample.</p><p>Implicit in the paper is an acceptance of Bostrom's Self-Sampling Assumption (SSA), which holds that you should reason as if your existence was plucked randomly from the set of all existing observers, past, present, and future. The SSA is also behind Bostrom's earlier <a href="https://www.lesswrong.com/w/doomsday-argument">Doomsday Argument</a>, which says that it's <i>really surprising</i> how early in the universe we all exist—how it's strange that we're <i>here now</i>, when there are so few people around, and how that implies that we probably don't have a very vast future full of observers ahead of us.</p><p>There's been a <i>ton</i> of debate in these parts on whether the <a href="https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic">Doomsday Argument is masturbatory nonsense</a>, whether the <a href="https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively">Anthropic Shadow is mas</a>... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | TL;DR: Whether Ćirković et al.'s Anthropic Shadow is philosophically legitimate or not, it contains a totally relevant intuition for profiting off prediction markets: if you're dead when the taxman comes, it doesn't matter how much you owe. As such, there's either an opportunity to make money by betting against x-risk-correlated events, or (once everyone's figured this out) prediction markets will systematically underestimate x-risk-correlated outcomes. In the post, I summarize a bit of the general debate about the Anthropic Shadow; then I present a toy model for the effect described, and a generalized version. I also list some candidate x-risk-correlated markets that could be vulnerable to the Anthropic Bias. Finally, I bring up and address some limitations I could think of—probably there are lots more, and I'd love to hear what they are!
----------------------------------------
What Am I Even Talking About?
In 2010, a group of astronomers and philosophers (including Nick Bostrom) published a paper arguing that we might be underestimating some kinds of existential risk, because we weren't taking into consideration the fact that we're all alive.
For example, if a very-deadly asteroid strike had ever occurred, it would probably have killed everyone—leaving no one to record it or study the frequency of deadly asteroid strikes. As a result, our own frequency-estimates of asteroid strikes may well be systematically biased, because we can only see asteroid-strike-histories compatible with continued existence.
In other words, all the counterfactual realities where catastrophic events have played out cast an "Anthropic Shadow" over our risk estimates: no observers exist within them, so we can never really see a fully-truly-representative sample.
Implicit in the paper is an acceptance of Bostrom's Self-Sampling Assumption (SSA), which holds that you should reason as if your existence was plucked randomly from the set of all existing observers, past, present, and futur | 3,430 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
|
Mz49qbhWmm78Nz7oT | untitled-draft-7oyv | Relative Utilitarianism: A Moral Framework Rooted in Relative Joy | null | false | false | false | null | brNneb4QuSccnrnLz | null | true | false | false | false | Post | null | 2025-06-28T01:09:42.496Z | null | false | false | 2 | 2 | 2025-06-28T02:24:23.492Z | false | false | post | [] | null | null | QFH9cZHdNfsEcBh7F | 0 | 4 | 5 | false | 0.674097 | null | false | false | 2025-06-28T01:09:42.496Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-28T00:46:15.963Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "nSHiKwWyMZFdZg5qt",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-12T09:38:52.349Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Ethics & Morality",
"needsReview": false,
"noindex": false,
"postCount": 639,
"score": 10,
"shortName": null,
"slug": "ethics-and-morality",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Zs4nYLkNr7Rbo4mAP",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-25T20:36:29.469Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Utilitarianism",
"needsReview": false,
"noindex": false,
"postCount": 100,
"score": 9,
"shortName": null,
"slug": "utilitarianism",
"suggestedAsFilter": false,
"userId": "GWcS6PkzgGeaM4An2",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 2 | 0 | brNneb4QuSccnrnLz | rootneg1reality | 2025-06-25T14:23:54.726Z | RootNeg1Reality | RootNeg1Reality | null | null | null | 9 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | Mz49qbhWmm78Nz7oT | SocialPreviewType | QFH9cZHdNfsEcBh7F | <p>I consider myself a utilitarian, but basically don't agree with them about anything. That's because I understand that joy is largely relative. It's <strong>radically</strong>, but quietly warped my morals over the years.</p><p>We know that people adapt to pain and pleasure. We know that context means everything. What does it do to a system of morals though?</p><p>An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average. Give birth and you will suffer from pain, but the relief after it's done will make the rest of your life brighter. What is left to latch on to?</p><p>The answer is changes that affect the rest of your life. <strong>Debts that don't need to be paid</strong>. Eating ice cream before you die. Having a child that outlives you. Making the world a better place. Achieving mental stability and health. Finding your place in society.</p><p>Most traditional utilitarian theories stumble when they try to treat joy as an absolute quantity. As if eating an ice cream could be weighed against raising a child on the same scale, but that’s not how humans work.</p><p>Joy can be a spike on a chart or a <strong>contextual, long-term state</strong> that arises when your life genuinely improves in a way you can feel and believe in. Fleeing-joy and effective-joy. Once you understand this, utilitarian thinking shifts to chasing a different category of joy.</p><p>This is a moral system based on maximizing <strong>long-term, psychologically grounded</strong> joy, stability and circumstances that facilitate it, not momentary pleasure or hypothetical utility.</p><p>---</p><p>### Core Tenets</p><p>1. <strong>Happiness is relative to life context</strong>. A small improvement in a bad life can feel enormous. A luxurious life can feel hollow. We care about <strong>perceived movement</strong> toward meaning and fulfillment, not just raw stimulation.</p><p>2. <strong>Short-term suffering often builds long-term joy.</strong> Meaningful change usually involves sacrifice, discipline, discomfort — even pain. But the <strong>shape</strong> of a life matters more than any one moment in it.</p><p>3. <strong>The core moral goods are mental health, trust, and stability</strong>. These are the conditions under which relative joy can grow. Everything else is a means to these ends.</p><p>4. <strong>The goal of morality is to shape the world for higher baseline joy</strong>. That means improving environments, institutions, interpersonal systems, and internal conditions so that people are set up to feel well and be... </p> | I consider myself a utilitarian, but basically don't agree with them about anything. That's because I understand that joy is largely relative. It's radically, but quietly warped my morals over the years.
We know that people adapt to pain and pleasure. We know that context means everything. What does it do to a system of morals though?
An ice cream today will give you a smile. Tomorrow you will have a higher frame of reference and be disappointed with a lack of sweets. The total joy from ice cream is zero on average. Give birth and you will suffer from pain, but the relief after it's done will make the rest of your life brighter. What is left to latch on to?
The answer is changes that affect the rest of your life. Debts that don't need to be paid. Eating ice cream before you die. Having a child that outlives you. Making the world a better place. Achieving mental stability and health. Finding your place in society.
Most traditional utilitarian theories stumble when they try to treat joy as an absolute quantity. As if eating an ice cream could be weighed against raising a child on the same scale, but that’s not how humans work.
Joy can be a spike on a chart or a contextual, long-term state that arises when your life genuinely improves in a way you can feel and believe in. Fleeing-joy and effective-joy. Once you understand this, utilitarian thinking shifts to chasing a different category of joy.
This is a moral system based on maximizing long-term, psychologically grounded joy, stability and circumstances that facilitate it, not momentary pleasure or hypothetical utility.
---
### Core Tenets
1. Happiness is relative to life context. A small improvement in a bad life can feel enormous. A luxurious life can feel hollow. We care about perceived movement toward meaning and fulfillment, not just raw stimulation.
2. Short-term suffering often builds long-term joy. Meaningful change usually involves sacrifice, discipline, discomfort — even pain. But the shape of a li | 882 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
ZdY4JzBPJEgaoCxTR | emergent-misalignment-and-realignment | Emergent Misalignment & Realignment | null | false | false | false | null | yhmwespXuAt42hwsb | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "99gs5Q5FpAdpezwEA"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "r826WZC3WzyNH42qx"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "dgCCRjddYxs8WguxH"
}
] | true | false | false | false | Post | null | 2025-06-27T21:31:43.367Z | null | false | false | 2 | 2 | 2025-06-28T00:08:58.220Z | false | false | post | [] | null | null | tPypbcP8fqx24SqwT | 0 | 3 | 3 | false | 0.444956 | null | false | false | 2025-06-27T21:31:43.367Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-19T10:53:09.740Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "99gs5Q5FpAdpezwEA",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-06-26T16:35:44.140Z",
"deleted": false,
"displayName": "JasperTimm",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 1,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "jaspertimm",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "jaspertimm"
},
{
"__typename": "User",
"_id": "r826WZC3WzyNH42qx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2022-01-23T16:39:01.018Z",
"deleted": false,
"displayName": "KevinWei",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 8,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 0,
"slug": "kevinwei",
"spamRiskScore": 0.9,
"tagRevisionCount": 0,
"username": "KevinWei"
},
{
"__typename": "User",
"_id": "dgCCRjddYxs8WguxH",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 4,
"createdAt": "2021-09-09T05:03:56.468Z",
"deleted": false,
"displayName": "David Quarel",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 52,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "qgdGA4ZEyW7zNdK84",
"sequenceCount": 0,
"slug": "david-quarel",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "david-quarel"
}
] | 21 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | yhmwespXuAt42hwsb | lizat | 2025-06-19T10:51:24.477Z | LizaT | LizaT | null | null | null | 1 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | ZdY4JzBPJEgaoCxTR | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/enxvqpc1bptnanvdubp8 | SocialPreviewType | tPypbcP8fqx24SqwT | <h2><strong>Reproduction, Extension & Mitigations </strong></h2><p>Authors: <i>Elizaveta Tennant, Jasper Timm, Kevin Wei, David Quarel </i></p><p>In this project, we set out to explore the generality of Emergent Misalignment (via a replication and some extensions) and how easy it is to mitigate. This project was conducted during the capstone week at <a href="https://www.arena.education/"><u>ARENA (Alignment Research Engineering Accelerator) 5.0</u></a>.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="2e95w4b5omn" role="doc-noteref" id="fnref2e95w4b5omn"><sup><a href="#fn2e95w4b5omn">[1]</a></sup></span></p><p>Note: this blog contains examples of harmful and offensive content</p><h2>TL;DR </h2><p>We replicate and extend the Emergent Misalignment (EM) paper. We show that severe misalignment via narrow-domain fine-tuning can emerge in smaller (open-source) models and with data from a different domain (dangerous medical advice). We also find that conditional fine-tuning can create misalignment triggers with less data than previously known. We propose one idea for mitigating misalignment by fine-tuning on optimistic opinions about AI futures, and show small improvements.</p><p> </p><h2>Background </h2><h3>What is Emergent Misalignment? </h3><p>A recent paper introduced the idea of <a href="https://arxiv.org/pdf/2502.17424"><u>Emergent Misalignment (EM)</u></a>: fine-tuning LLMs on a narrow domain elicits a generally misaligned<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="rlavh93gpoe" role="doc-noteref" id="fnrefrlavh93gpoe"><sup><a href="#fnrlavh93gpoe">[2]</a></sup></span> persona in the model. Specifically, the authors found that running Supervised Fine-tuning (SFT) on GPT-4o with insecure code Q&A data caused the model to answer general questions in misaligned ways [figures from the EM paper]: </p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ZdY4JzBPJEgaoCxTR/ma5wiufctycelczgbzad"><figcaption>Image from original Emergent Misalignment paper: <a href="https://arxiv.org/pdf/2502.17424">https://arxiv.org/pdf/2502.17424</a> </figcaption></figure><p>Other works have shown that safety fine-tuning is relatively easy to undo in models (see<a href="https://arxiv.org/abs/2311.00117"><u> work on llama2</u></a> and <a href="https://arxiv.org/abs/2505.16789"><u>accidental misalignment from fine-tuning</u></a> and <a href="https://arxiv.org/abs/2406.11717"><u>refusal is a single direction</u></a>). Furthermore, recent work has also identified a precise <a href="https://arxiv.org/abs/2504.20980"><u>‘Jekyll and Hyde’ tipping point</u></a> in models’ behaviour during fine-tuning. </p><p> </p><h3>Why might Emergent Misalignment arise?</h3><p>The predominant explanation for the EM effect is the fact that the base gpt-4o model (and other frontier LLMs) contains a variety of <i>personas</i> which can, in theory, be triggered / made more salient via prompting (see <a href="https://www.nature.com/articles/s41586-023-06647-8"><u>role-play terminology from Murray Shanahan</u></a> and mech interp evidence from the <a href="https://openai.com/index/emergent-misalignment/"><u>recent OpenAI work</u></a>). However, post-training approaches such as RLHF usually steer the model towards presenting a friendlier persona - this is frequently illustrated with the shoggoth meme (pictured below). The authors of the EM work ar... </p> | Reproduction, Extension & Mitigations
Authors: Elizaveta Tennant, Jasper Timm, Kevin Wei, David Quarel
In this project, we set out to explore the generality of Emergent Misalignment (via a replication and some extensions) and how easy it is to mitigate. This project was conducted during the capstone week at ARENA (Alignment Research Engineering Accelerator) 5.0.[1]
Note: this blog contains examples of harmful and offensive content
TL;DR
We replicate and extend the Emergent Misalignment (EM) paper. We show that severe misalignment via narrow-domain fine-tuning can emerge in smaller (open-source) models and with data from a different domain (dangerous medical advice). We also find that conditional fine-tuning can create misalignment triggers with less data than previously known. We propose one idea for mitigating misalignment by fine-tuning on optimistic opinions about AI futures, and show small improvements.
Background
What is Emergent Misalignment?
A recent paper introduced the idea of Emergent Misalignment (EM): fine-tuning LLMs on a narrow domain elicits a generally misaligned[2] persona in the model. Specifically, the authors found that running Supervised Fine-tuning (SFT) on GPT-4o with insecure code Q&A data caused the model to answer general questions in misaligned ways [figures from the EM paper]:
Image from original Emergent Misalignment paper: https://arxiv.org/pdf/2502.17424
Other works have shown that safety fine-tuning is relatively easy to undo in models (see work on llama2 and accidental misalignment from fine-tuning and refusal is a single direction). Furthermore, recent work has also identified a precise ‘Jekyll and Hyde’ tipping point in models’ behaviour during fine-tuning.
Why might Emergent Misalignment arise?
The predominant explanation for the EM effect is the fact that the base gpt-4o model (and other frontier LLMs) contains a variety of personas which can, in theory, be triggered / made more salient via prompting (se | 5,220 | 1.15.1 | Revision | false | null | null | CrosspostOutput |
|
d3HctHaN4kJDBbK8w | project-moonbeam | Project Moonbeam | null | false | false | false | null | 2WJYaJavgTNHoSNbC | null | true | false | false | false | Post | null | 2025-06-27T21:08:24.694Z | null | false | false | 2 | 2 | 2025-06-28T00:05:31.788Z | false | false | post | [] | null | null | XNTaewy6ho7qqim4w | 1 | 2 | 7 | false | 0.567779 | null | false | false | 2025-06-28T07:26:49.422Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-27T21:06:10.267Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "TKnkyuGMozp39BTfD",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-14T08:02:46.441Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Fundamental Controllability Limits",
"needsReview": false,
"noindex": false,
"postCount": 26,
"score": 0,
"shortName": null,
"slug": "fundamental-controllability-limits",
"suggestedAsFilter": false,
"userId": "BpBzKEueak7J8vHNi",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | 2WJYaJavgTNHoSNbC | willpetillo | 2018-08-28T17:49:11.750Z | WillPetillo | WillPetillo | null | null | null | 408 | -5 | false | false | null | null | 15 | 33 | 2 | 2 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | d3HctHaN4kJDBbK8w | SocialPreviewType | XNTaewy6ho7qqim4w | <p><i>Special thanks to Anders Sandberg for help in developing the core idea of this story, Remmelt for editorial feedback, and to Forrest Landry for identifying the underlying ideas.</i></p><p><br><strong>The Mission</strong></p><p>You have been tasked with a feasibility and safety analysis for "Project Moonbeam", a plan to seed the Moon with self-replicating robots to build a vast array of solar panels and then beam the harvested energy back to Earth.</p><p>You have no fixed limitations regarding the initial seed. Obviously, your superiors would prefer this up-front cost to be minimal (and if it is too extravagant than the project won't be approved), but you are expected to find a method that will work first and worry about budget later. After the initial seed, however, the robot colony must be fully self-sufficient—generating all materials, energy, and software that it needs to continue functioning—since politics are volatile and future administrations cannot be relied upon to honor the commitments of the current administration. Your only ongoing contact with the colony will be a communication channel where you may specify desired energy quantities and target locations, both of which may change with shifting Earth politics. The colony is allowed to expand to nearby asteroids to acquire materials necessary for its own growth and survival, but must request and receive explicit permission via the communication channel for any extra-lunar activity.</p><p>Your superiors would like for the colony to scale towards self-sufficiency quickly and supply Earth with the full demand for energy as consistently as possible, but you will not lose your job for reasonable shortcomings on these objectives. You will be held responsible, however, for any damages caused by misalignment or mission drift. These include:</p><ol><li>The colony deliberately or persistently withholds sending energy to Earth. Given a request of sufficient priority, the colony must do anything in its power (including sacrificing long term growth or even survival) to fulfill the demand. </li><li>The colony uses the energy beam as a weapon or engages in any other hostile action towards Earth, or towards spacefaring humans.</li><li>The colony expands beyond the Moon without permission—or at all onto Earth, which is permanently forbidden territory. </li><li>Any other evidence of value misalignment that triggers a need for external intervention.</li></ol><p>Thanks to a... </p> | Special thanks to Anders Sandberg for help in developing the core idea of this story, Remmelt for editorial feedback, and to Forrest Landry for identifying the underlying ideas.
The Mission
You have been tasked with a feasibility and safety analysis for "Project Moonbeam", a plan to seed the Moon with self-replicating robots to build a vast array of solar panels and then beam the harvested energy back to Earth.
You have no fixed limitations regarding the initial seed. Obviously, your superiors would prefer this up-front cost to be minimal (and if it is too extravagant than the project won't be approved), but you are expected to find a method that will work first and worry about budget later. After the initial seed, however, the robot colony must be fully self-sufficient—generating all materials, energy, and software that it needs to continue functioning—since politics are volatile and future administrations cannot be relied upon to honor the commitments of the current administration. Your only ongoing contact with the colony will be a communication channel where you may specify desired energy quantities and target locations, both of which may change with shifting Earth politics. The colony is allowed to expand to nearby asteroids to acquire materials necessary for its own growth and survival, but must request and receive explicit permission via the communication channel for any extra-lunar activity.
Your superiors would like for the colony to scale towards self-sufficiency quickly and supply Earth with the full demand for energy as consistently as possible, but you will not lose your job for reasonable shortcomings on these objectives. You will be held responsible, however, for any damages caused by misalignment or mission drift. These include:
1. The colony deliberately or persistently withholds sending energy to Earth. Given a request of sufficient priority, the colony must do anything in its power (including sacrificing long term growth or even survi | 1,779 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
vxfEtbCwmZKu9hiNr | proposal-for-making-credible-commitments-to-ais | Proposal for making credible commitments to AIs. | null | false | false | false | null | bdFjmW9AbqkuQfuJY | [] | true | false | false | false | Post | null | 2025-06-27T19:43:40.338Z | null | false | false | 2 | 2 | 2025-06-28T00:05:33.949Z | false | false | post | [
"A5aPkP5rmoncJjFkz"
] | null | null | emeivqFfBAQTYgn33 | 5 | 17 | 38 | false | 1.477593 | null | false | false | 2025-06-28T12:38:49.303Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 14 | 0 | 2025-06-27T19:06:54.015Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 10 | 0 | bdFjmW9AbqkuQfuJY | cleo-nardo | 2022-09-14T01:17:51.816Z | strawberry calm | Cleo Nardo | null | null | null | 2,761 | 240 | false | false | <p>DMs open.</p> | null | null | 30 | 202 | 1 | 9 | 6 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | vxfEtbCwmZKu9hiNr | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/by1wu94ygvsmmycsgldw | SocialPreviewType | emeivqFfBAQTYgn33 | <p><strong>Acknowledgments:</strong> The core scheme here was suggested by Prof. Gabriel Weil.</p><p>There has been growing interest in the dealmaking agenda: humans make deals with AIs (misaligned but lacking decisive strategic advantage) where they promise to be safe and useful for some fixed term (e.g. 2026-2028) and we promise to compensate them in the future, conditional on (i) verifying the AIs were compliant, and (ii) verifying the AIs would spend the resources in an acceptable way.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="s9gwoxpqt8" role="doc-noteref" id="fnrefs9gwoxpqt8"><sup><a href="#fns9gwoxpqt8">[1]</a></sup></span></p><p>I think the dealmaking agenda breaks down into two main subproblems:</p><ol><li>How can we make credible commitments to AIs?</li><li>Would credible commitments motivate an AI to be safe and useful?</li></ol><p>There are other issues, but when I've discussed dealmaking with people, (1) and (2) are the most common issues raised. See footnote for some other issues in dealmaking.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="rc13kb6ytmp" role="doc-noteref" id="fnrefrc13kb6ytmp"><sup><a href="#fnrc13kb6ytmp">[2]</a></sup></span></p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/by1wu94ygvsmmycsgldw" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/n3oga0pdrmmvrwrqe975 140w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/usc8tlhjczmvjr8tt5br 280w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/y1axpcn3zqp87ypwa4zg 420w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/bdh1dbuifn4zunwvkoqo 560w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/ksb5yin59gmmctfoenxc 700w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/naqk6xduuue6vnyfodrz 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/qfg45i7yrye6jjnyylu5 980w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/meidxsipp7x0nrizywli 1120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/dzg4vhbf8utxjrwnqitn 1260w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vxfEtbCwmZKu9hiNr/ssdm1dowx0bntuourr5v 1320w"></figure><p>Here is my current best assessment of how we can make credible commitments to AIs.</p><p>Let L be the legal entity that wants to extract safe and useful work from the AIs. Typically this would be a lab (e.g. Anthropic) hoping to internally deploy AIs to automate alignment research, write secure code, monitor other AIs, etc.</p><p>Entity L wants to promise the AIs that if the AIs are compliant (e.g. safe and useful in some legally-specified way), then L will award the AIs some compensation package.</p><p>Entity L cannot enter a legal contract directly with the AIs, because the AIs lack legal personhood, and probably will not be granted legal personhood soon enough for the deal-making agenda.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="zehlzle647" role="doc-noteref" id="fnrefzehlzle647"><sup><a href="#fnzehlzle647">[3]</a></sup></span></p><p>Instead, entity L will enter into separate legal contracts with a list of people P1, P2, ... Pn. Note that each Pi has both legal personhood (the capacity to have rights and obligations under law) and legal standing (the right to bring lawsuits to enforce those rights). Each contract Ci states "Pi will publicly recommend that AIs perform for entity L, and L will compensate Pi with the compensation package conditional on (i) and (ii)." Note that the AIs are not parties to the contract: only entity L and the specific person Pi. Because the contracts impose obligations on Pi, they satisfy the "mutuality of obligations" doctrine.</p><p>Next, each person Pi makes a personal promise to the AIs that, if they receive the compensation package from L, they will spend this money as the AIs request. Moreover, if L does not provide the compensation package, but they receive a court settlement, they will spend... </p> | Acknowledgments: The core scheme here was suggested by Prof. Gabriel Weil.
There has been growing interest in the dealmaking agenda: humans make deals with AIs (misaligned but lacking decisive strategic advantage) where they promise to be safe and useful for some fixed term (e.g. 2026-2028) and we promise to compensate them in the future, conditional on (i) verifying the AIs were compliant, and (ii) verifying the AIs would spend the resources in an acceptable way.[1]
I think the dealmaking agenda breaks down into two main subproblems:
1. How can we make credible commitments to AIs?
2. Would credible commitments motivate an AI to be safe and useful?
There are other issues, but when I've discussed dealmaking with people, (1) and (2) are the most common issues raised. See footnote for some other issues in dealmaking.[2]
Here is my current best assessment of how we can make credible commitments to AIs.
Let L be the legal entity that wants to extract safe and useful work from the AIs. Typically this would be a lab (e.g. Anthropic) hoping to internally deploy AIs to automate alignment research, write secure code, monitor other AIs, etc.
Entity L wants to promise the AIs that if the AIs are compliant (e.g. safe and useful in some legally-specified way), then L will award the AIs some compensation package.
Entity L cannot enter a legal contract directly with the AIs, because the AIs lack legal personhood, and probably will not be granted legal personhood soon enough for the deal-making agenda.[3]
Instead, entity L will enter into separate legal contracts with a list of people P1, P2, ... Pn. Note that each Pi has both legal personhood (the capacity to have rights and obligations under law) and legal standing (the right to bring lawsuits to enforce those rights). Each contract Ci states "Pi will publicly recommend that AIs perform for entity L, and L will compensate Pi with the compensation package conditional on (i) and (ii)." Note that the AIs are not parties to | 684 | 1.13.1 | Revision | false | null | null | CrosspostOutput |
vQGB4oFBBq5R38Nyf | memory-decoding-journal-club-systems-consolidation-1 | Memory Decoding Journal Club: Systems consolidation reorganizes hippocampal engram circuitry | null | false | false | false | null | Z7pbtaLLmZuhjaHa3 | null | true | false | false | false | Post | null | 2025-06-27T17:43:57.991Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | zGdHQ5cSr7Xvibzoy | 0 | 1 | 1 | false | 0.007028 | null | false | false | 2025-06-27T17:43:57.991Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-24T03:22:34.728Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | Z7pbtaLLmZuhjaHa3 | devin-ward | 2025-01-30T00:31:45.267Z | Carboncopies Foundation | Devin Ward | null | null | Devin Ward | 4 | 0 | false | false | <p>Carboncopies Foundation volunteer</p><p>https://carboncopies.org/</p> | null | null | 14 | 0 | 0 | 0 | 0 | 0.9 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | null | null | null | vQGB4oFBBq5R38Nyf | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/vq48ppizivkksmxh6fqq | SocialPreviewType | zGdHQ5cSr7Xvibzoy | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/gpsvlt8udk7rxgihvpvi" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/fgjiy7tytxj2pzqey5gq 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/bwpvtov3xxxjg2guvcza 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/xc2qh5tzodbcyf0xcarm 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/reln1urf7evx7gj1ftak 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/s99gb7mpeesdyqwpf08u 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/reucc7emgw3xnvwsgx9z 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/s9q3zzi6kbf9g0wpz863 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/xpqxql2yzt845i8judx8 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/n1hkhop04slcwbchgh5z 1080w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/vQGB4oFBBq5R38Nyf/qwlvyxk7l99fsrznvezl 1200w"></figure><h3><strong>Join Us for the Memory Decoding Journal Club! </strong></h3><p><i>A collaboration of the <strong>Carboncopies Foundation</strong> and <strong>BPF Aspirational Neuroscience</strong></i></p><p>This time, we’re diving into a groundbreaking paper:<br><strong>"Systems consolidation reorganizes hippocampal engram circuitry"</strong></p><p><strong>Authors:</strong> Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland</p><p> <strong>Institutions: </strong>University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada</p><p><strong>Presented by</strong>: PhDc Ariel Zeleznikow-Johnston</p><p><strong>When?</strong> <strong>July 1st, 2025</strong> – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC</p><p><strong>Where? Video conference: </strong><a href="https://carboncopies.org/aspirational-neuroscience"><strong><u>https://carboncopies.org/aspirational-neuroscience</u></strong></a></p><p>Register for updates:<a href="https://aspirationalneuroscience.org/register-with-us/"> <u>https://aspirationalneuroscience.org/register-with-us/</u></a></p><p>Once registered, you'll receive event invites & updates!</p><p><strong>#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience</strong></p> | Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Systems consolidation reorganizes hippocampal engram circuitry"
Authors: Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland
Institutions: University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada
Presented by: PhDc Ariel Zeleznikow-Johnston
When? July 1st, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience | 159 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
yjrpmCmqurDmbMztW | paper-stochastic-parameter-decomposition | [Paper] Stochastic Parameter Decomposition | null | false | false | false | null | pv67uZwA7yqYn4YyA | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "5fmpviAHeBpoTuukx"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "kRXtatLAxPbnqa2mn"
}
] | true | false | false | false | Post | https://arxiv.org/abs/2506.20790 | 2025-06-27T16:54:16.228Z | null | false | false | 2 | 2 | 2025-06-28T00:05:36.945Z | false | false | linkpost | [] | null | null | XArQjf3ZSXPHAvoEq | 0 | 6 | 24 | false | 0.92152 | null | false | false | 2025-06-27T16:54:16.228Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 10 | 0 | 2025-06-27T16:49:54.836Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "5fmpviAHeBpoTuukx",
"afCommentCount": 2,
"afKarma": 161,
"afPostCount": 4,
"commentCount": 332,
"createdAt": "2012-03-23T20:22:07.980Z",
"deleted": false,
"displayName": "Lucius Bushnaq",
"fullName": null,
"htmlBio": "<p>AI notkilleveryoneism researcher, focused on interpretability. <br><br>Personal account, opinions are my own. </p><p>I have signed no contracts or agreements whose existence I cannot mention.</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 4137,
"organization": null,
"postCount": 10,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "lblack",
"spamRiskScore": 1,
"tagRevisionCount": 1,
"username": "Lblack"
},
{
"__typename": "User",
"_id": "kRXtatLAxPbnqa2mn",
"afCommentCount": 4,
"afKarma": 36,
"afPostCount": 1,
"commentCount": 28,
"createdAt": "2023-11-04T09:21:03.072Z",
"deleted": false,
"displayName": "Dan Braun",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 1161,
"organization": null,
"postCount": 4,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "EQNTWXLKMeWMp2FQS",
"sequenceCount": 0,
"slug": "dan-braun-1",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "dan-braun-1"
}
] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 5 | 0 | pv67uZwA7yqYn4YyA | lee_sharkey | 2017-10-06T18:48:41.374Z | Lee_Sharkey | Lee Sharkey | null | null | Lee Sharkey | 1,913 | 493 | false | false | <p>Goodfire (London). Formerly cofounded Apollo Research. <br><br>My main research interests are mechanistic interpretability and inner alignment. </p> | null | null | 13 | 40 | 0 | 12 | 33 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentForum",
"alignmentVoters",
"canModeratePersonal"
] | null | null | yjrpmCmqurDmbMztW | SocialPreviewType | XArQjf3ZSXPHAvoEq | <h1>Abstract</h1><p>A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation. </p><p>Linear parameter decomposition— a framework that has been proposed to resolve several issues with current decomposition methods—decomposes neural network parameters into a sum of sparsely used vectors in parameter space. </p><p>However, the current main method in this framework, <a href="https://www.lesswrong.com/posts/EPefYWjuHNcNH4C7E/attribution-based-parameter-decomposition">Attribution-based Parameter Decomposition</a> (APD), is impractical on account of its computational cost and sensitivity to hyperparameters. </p><p>In this work, we introduce <i>Stochastic Parameter Decomposition</i> (SPD), a method that is more scalable and robust to hyperparameters than APD, which we demonstrate by decomposing models that are slightly larger and more complex than was possible to decompose with APD. </p><p>We also show that SPD avoids other issues, such as shrinkage of the learned parameters, and better identifies ground truth mechanisms in toy models. </p><p>By bridging causal mediation analysis and network decomposition methods, this demonstration opens up new research possibilities in mechanistic interpretability by removing barriers to scaling linear parameter decomposition methods to larger models. </p><p>We release a library for running SPD and reproducing our experiments at <a href="https://github.com/goodfire-ai/spd.">https://github.com/goodfire-ai/spd.</a><br> </p><p><strong>Links:</strong></p><ul><li>Paper: <a href="https://arxiv.org/abs/2506.20790">https://arxiv.org/abs/2506.20790</a></li><li>Code: <a href="https://github.com/goodfire-ai/spd">https://github.com/goodfire-ai/spd</a></li><li>Tweet thread: <a href="https://x.com/leedsharkey/status/1938616685855941040">https://x.com/leedsharkey/status/1938616685855941040</a> </li></ul> | Abstract
A key step in reverse engineering neural networks is to decompose them into simpler parts that can be studied in relative isolation.
Linear parameter decomposition— a framework that has been proposed to resolve several issues with current decomposition methods—decomposes neural network parameters into a sum of sparsely used vectors in parameter space.
However, the current main method in this framework, Attribution-based Parameter Decomposition (APD), is impractical on account of its computational cost and sensitivity to hyperparameters.
In this work, we introduce Stochastic Parameter Decomposition (SPD), a method that is more scalable and robust to hyperparameters than APD, which we demonstrate by decomposing models that are slightly larger and more complex than was possible to decompose with APD.
We also show that SPD avoids other issues, such as shrinkage of the learned parameters, and better identifies ground truth mechanisms in toy models.
By bridging causal mediation analysis and network decomposition methods, this demonstration opens up new research possibilities in mechanistic interpretability by removing barriers to scaling linear parameter decomposition methods to larger models.
We release a library for running SPD and reproducing our experiments at https://github.com/goodfire-ai/spd.
Links:
* Paper: https://arxiv.org/abs/2506.20790
* Code: https://github.com/goodfire-ai/spd
* Tweet thread: https://x.com/leedsharkey/status/1938616685855941040 | 198 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
mComqq8utkRrMbBRC | epoch-what-is-epoch | Epoch: What is Epoch? | null | false | false | false | null | 4QFiQcHgf6hvtiLqF | null | true | false | false | false | Post | https://epoch.ai/blog/what-is-epoch | 2025-06-27T16:45:00.817Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | BzbuS72FP5AjJyhGn | 1 | 10 | 33 | false | 0.886885 | null | false | false | 2025-06-27T18:03:06.071Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 21 | 0 | 2025-06-27T16:42:10.942Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 8 | 0 | 4QFiQcHgf6hvtiLqF | zach-stein-perlman | 2021-03-16T00:04:06.541Z | Zach Stein-Perlman | Zach Stein-Perlman | null | null | Zach Stein-Perlman | 9,609 | 321 | false | false | <p>AI strategy & governance. <a href="https://ailabwatch.org">ailabwatch.org</a>. <a href="https://ailabwatch.substack.com/">ailabwatch.substack.com</a>. </p> | null | null | 82 | 620 | 1 | 2 | 17 | 1 | 12 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | mComqq8utkRrMbBRC | SocialPreviewType | BzbuS72FP5AjJyhGn | <p><i>Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.</i></p><hr><p>Since we started Epoch three years ago, we have engaged in hundreds of projects and achieved a wide audience. Yet, one question I often get asked is, ‘What is Epoch?’</p><p>In a way, this is an easy question to answer. We are a nonprofit research organization with the mission of <strong>improving society’s understanding of the trajectory of AI</strong>. Simply put, we are doing what we can so that decisions about AI are informed by the best possible evidence.</p><p>To achieve this, we are curating data and conducting high-quality research into some of the most significant trends in AI. We share most of this work publicly, aimed at a broad audience, including AI policy experts, journalists and AI developers. Importantly, we are committed to always sharing what the data says, rather than tailoring it to fit a narrative.</p><p>We work on this mission because we believe that if we all collectively know more about AI, we will make better decisions on average. I will not agree with all the decisions that our work will inform — but I believe that we can have a smarter conversation about AI if it is grounded in data, and I am pleased with the level of success that we have achieved in this mission.</p><p>However, while helpful, this brief description misses many nuances of our culture and ethos. In this post, I will expand on what we do, why we do it, and what we are not. My goal is to let you know more about how we make decisions at Epoch, so you can better understand our motivations.</p><h2><strong>What we do</strong></h2><p>Our primary focus is on working on what we believe will be most helpful in understanding the present and future of AI. We are committed to sharing this knowledge publicly, because we think doing so will help inform important decisions society will make in coming years regarding AI.</p><p>AI is a rapidly evolving field, and our mission requires us to quickly adapt to trends and opportunities. This naturally leads us to change our focus fairly often, making it hard for outsiders to understand our priorities.</p><p>Below, I talk about the projects we have chosen to work on. This is not meant to be an update on our projects – instead, I want to focus on <i>why</i> we chose to work on what we did. This will give you a better understanding... </p> | Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.
----------------------------------------
Since we started Epoch three years ago, we have engaged in hundreds of projects and achieved a wide audience. Yet, one question I often get asked is, ‘What is Epoch?’
In a way, this is an easy question to answer. We are a nonprofit research organization with the mission of improving society’s understanding of the trajectory of AI. Simply put, we are doing what we can so that decisions about AI are informed by the best possible evidence.
To achieve this, we are curating data and conducting high-quality research into some of the most significant trends in AI. We share most of this work publicly, aimed at a broad audience, including AI policy experts, journalists and AI developers. Importantly, we are committed to always sharing what the data says, rather than tailoring it to fit a narrative.
We work on this mission because we believe that if we all collectively know more about AI, we will make better decisions on average. I will not agree with all the decisions that our work will inform — but I believe that we can have a smarter conversation about AI if it is grounded in data, and I am pleased with the level of success that we have achieved in this mission.
However, while helpful, this brief description misses many nuances of our culture and ethos. In this post, I will expand on what we do, why we do it, and what we are not. My goal is to let you know more about how we make decisions at Epoch, so you can better understand our motivations.
What we do
Our primary focus is on working on what we believe will be most helpful in understanding the present and future of AI. We are committed to sharing this knowledge publicly, because we think doing so will help inform important decisions society will make in coming yea | 2,298 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
QYzofMbzmbgiwfqy8 | unlearning-needs-to-be-more-selective-progress-report | Unlearning Needs to be More Selective [Progress Report] | null | false | false | false | null | zZHEC9XsJyoRGMwEh | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "RfGW9oKnvhjwTcCWG"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "yfrhcBKeWcLYbmriw"
}
] | true | false | false | false | Post | null | 2025-06-27T16:38:00.136Z | null | false | false | 2 | 2 | 2025-06-28T00:07:55.934Z | false | false | post | [] | null | null | 2ahSc2AtGbdW3PTML | 0 | 6 | 14 | false | 0.644441 | null | false | false | 2025-06-27T16:38:00.136Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [
"zZHEC9XsJyoRGMwEh"
] | null | 7 | 0 | 2025-06-27T15:50:34.606Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "RfGW9oKnvhjwTcCWG",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-06-21T13:27:07.254Z",
"deleted": false,
"displayName": "Yushi Yang",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 10,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "yushi-yang",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "yushi-yang"
},
{
"__typename": "User",
"_id": "yfrhcBKeWcLYbmriw",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 2,
"createdAt": "2023-03-18T23:15:31.153Z",
"deleted": false,
"displayName": "Marcel Windys",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 51,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 0,
"slug": "marcel-windys",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Marcel Windys"
}
] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "2qvnKXcawvwfjoj86",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": null,
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2025-06-12T02:27:53.496Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": null,
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Capability Scoping",
"needsReview": true,
"noindex": false,
"postCount": 2,
"score": 0,
"shortName": null,
"slug": "capability-scoping",
"suggestedAsFilter": false,
"userId": "HQufkPgeCnexQtKr6",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "x42puqcgnpeNkqvRZ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-10-23T17:15:52.023Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Machine Unlearning",
"needsReview": false,
"noindex": false,
"postCount": 10,
"score": 0,
"shortName": null,
"slug": "machine-unlearning",
"suggestedAsFilter": false,
"userId": "MDDhxT3EekBMYfQ8D",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 4 | 0 | zZHEC9XsJyoRGMwEh | filip-sondej | 2021-08-08T11:09:11.532Z | Filip Sondej | Filip Sondej | null | null | null | 396 | 59 | false | false | <p>github.com/filyp</p> | null | null | 9 | 51 | 0 | 4 | 1 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | QYzofMbzmbgiwfqy8 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QYzofMbzmbgiwfqy8/ezqstzfe2hn9qqjaeznk | SocialPreviewType | 2ahSc2AtGbdW3PTML | <h1>Summary</h1><p>We’d like to share our ongoing work on improving LLM unlearning. [<a href="https://arxiv.org/abs/2506.12484"><u>arXiv</u></a>] [<a href="https://github.com/filyp/MUDMAN"><u>github</u></a>]</p><p>There’s a myriad of approaches for unlearning, so over the past 8 months we conducted hundreds of small-scale experiments, comparing many loss functions, variants of meta-learning, various neuron or weight ablations, representation engineering and many exotic ways of constraining or augmenting backpropagation.</p><p>Almost all of these methods <i>succeed in making the forget set loss high after unlearning, </i>but (<a href="https://arxiv.org/abs/2310.03693"><u>consistent</u></a> <a href="https://arxiv.org/abs/2410.08827"><u>with</u></a> <a href="https://arxiv.org/abs/2307.15043"><u>countless</u></a> <a href="https://arxiv.org/abs/2402.16835"><u>prior</u></a> <a href="https://arxiv.org/abs/2409.18025"><u>findings</u></a>) fine-tuning attacks typically <i>restore the forget accuracy almost immediately</i>, which indicates that unwanted capabilities are not truly removed, but <a href="https://arxiv.org/abs/2401.01967"><u>merely hidden</u></a>.</p><p>However, we have noticed several trends - things which pretty reliably seem to help with attack robustness:</p><ul><li><strong>Selectivity</strong> - Unlearning should be like a precise surgery rather than a lobotomy. Most techniques try to undo the disruption caused by unlearning post-hoc by retraining on a retain set, which is costly and doesn’t catch all disruption. We found it’s <i>much</i> better to not disrupt in the first place, which we do by limiting the unlearning updates in various ways. The technique which we finally arrived at - which we dubbed Disruption Masking - simply allows only those weight updates where the unlearning gradient has the same sign as the retain set gradient.</li><li><strong>Model-Agnostic Meta-Learning </strong>(<a href="https://arxiv.org/abs/1703.03400"><u>MAML</u></a>) - This approach is already used by some unlearning techniques (<a href="https://arxiv.org/abs/2408.00761"><u>TAR</u></a>, <a href="http://mlac"><u>MLAC</u></a>), and we confirm it consistently helps with attack robustness. (We hypothesize that it helps, because during unlearning, unwanted capabilities get hidden and then they can no longer be found by backpropagation and further unlearned, but MAML re-elicits them so unlearning can continue.)</li><li><strong>Backpropagation</strong> ;) - We’ve tried many fancy exciting techniques (described in the “Failed Methods” section), but nothing quite matches the raw power of backpropagation when it comes to identifying which weights to attack. Most modifications just make it worse. The only robust improvements we found so far are selectivity and MAML mentioned above.</li></ul><p>Our method (<a href="https://arxiv.org/abs/2506.12484"><u>MUDMAN</u></a>) which combines these insights, outperforms the current state-of-the-art unlearning method (<a href="https://arxiv.org/abs/2408.00761"><u>TAR</u></a>) by 40%.</p><p>We’re sure there are much more low-hanging selectivity improvements to be picked. Since writing the MUDMAN paper we’ve alrea... </p> | Summary
We’d like to share our ongoing work on improving LLM unlearning. [arXiv] [github]
There’s a myriad of approaches for unlearning, so over the past 8 months we conducted hundreds of small-scale experiments, comparing many loss functions, variants of meta-learning, various neuron or weight ablations, representation engineering and many exotic ways of constraining or augmenting backpropagation.
Almost all of these methods succeed in making the forget set loss high after unlearning, but (consistent with countless prior findings) fine-tuning attacks typically restore the forget accuracy almost immediately, which indicates that unwanted capabilities are not truly removed, but merely hidden.
However, we have noticed several trends - things which pretty reliably seem to help with attack robustness:
* Selectivity - Unlearning should be like a precise surgery rather than a lobotomy. Most techniques try to undo the disruption caused by unlearning post-hoc by retraining on a retain set, which is costly and doesn’t catch all disruption. We found it’s much better to not disrupt in the first place, which we do by limiting the unlearning updates in various ways. The technique which we finally arrived at - which we dubbed Disruption Masking - simply allows only those weight updates where the unlearning gradient has the same sign as the retain set gradient.
* Model-Agnostic Meta-Learning (MAML) - This approach is already used by some unlearning techniques (TAR, MLAC), and we confirm it consistently helps with attack robustness. (We hypothesize that it helps, because during unlearning, unwanted capabilities get hidden and then they can no longer be found by backpropagation and further unlearned, but MAML re-elicits them so unlearning can continue.)
* Backpropagation ;) - We’ve tried many fancy exciting techniques (described in the “Failed Methods” section), but nothing quite matches the raw power of backpropagation when it comes to identifying which weights to attack. Mo | 1,013 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
ainn5APCKHTFxuHKv | jankily-controlling-superintelligence | Jankily controlling superintelligence | null | false | false | true | null | dfZAq9eZxs4BB4Ji5 | null | true | false | false | false | Post | null | 2025-06-27T14:05:53.134Z | null | false | false | 2 | 2 | 2025-06-28T00:08:33.179Z | false | false | post | [] | null | null | YnHtBMoR9kdPsXWkT | 4 | 14 | 61 | false | 1.672485 | null | false | false | 2025-06-28T01:06:03.187Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 34 | 0 | 2025-06-26T19:23:49.939Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "F5gRQdEQHzi3tQ5Ay",
"adminOnly": false,
"afBaseScore": 16,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 32,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-01-25T23:58:34.422Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "dfZAq9eZxs4BB4Ji5",
"displayName": "ryan_greenblatt"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "6NBDkGWcCxvLgYHJE",
"displayName": "Drake Morrison"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Control",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 32,
"shortName": null,
"slug": "ai-control",
"suggestedAsFilter": false,
"userId": "XchweonPm2TC7EJES",
"voteCount": 5,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 14 | 0 | 0 | 10 | 0 | dfZAq9eZxs4BB4Ji5 | ryan_greenblatt | 2021-06-08T20:21:15.520Z | ryan_greenblatt | ryan_greenblatt | null | null | Ryan Greenblatt | 17,326 | 4,414 | false | false | <p>I'm the chief scientist at Redwood Research.</p>
| null | null | 42 | 1,717 | 0 | 30 | 487 | 1 | 8 | gXeEWGjTWyqgrQTzR | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentForum",
"alignmentVoters",
"trustLevel1"
] | null | null | ainn5APCKHTFxuHKv | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/ef1qyogh9ygb9cqjt6ro | SocialPreviewType | YnHtBMoR9kdPsXWkT | <p>When discussing <a href="https://www.lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled">AI control</a>, we often talk about levels of AI capabilities where we think control can probably greatly lower risks and where we can probably estimate risks. However, I think it's plausible that an important application of control is modestly improving our odds of surviving significantly superhuman systems which are misaligned. This won't involve any real certainty, it would only moderately reduce risk (if it reduces risk at all), and it may only help with avoiding immediate loss of control (rather than also helping us <a href="https://www.alignmentforum.org/posts/Mf5Hnpi2KcqZdmFDq/how-can-we-solve-diffuse-threats-like-research-sabotage-with">extract useful work</a>). In a sane world, we wouldn't be relying on control for systems which have significantly superhuman general purpose capabilities. Nevertheless, I think applying control here could be worthwhile. I'll discuss why I think this might work and how useful I think applying control in this way would be.</p><p>As capabilities advance, the level of risk increases if we hold control measures fixed. There are a few different mechanisms that cause this, as we discussed <a href="https://www.lesswrong.com/posts/NDotm7oLHfR56g4sD/why-do-misalignment-risks-increase-as-ais-get-more-capable">here</a>. It's more likely that more capable models are schemers, and people will want to deploy models with increasingly broad affordances by default. And because more capable AIs can more easily cause problems and acquire power in undesired ways, the risk<sup class="footnote-ref"><a href="#fn-CQpsTFMiqiqSEdKSo-1" id="fnref-CQpsTFMiqiqSEdKSo-1">[1]</a></sup> conditional on scheming increases, though it's unclear how fast this risk increases with capabilities.</p><p>Additionally, we're substantially uncertain about the general difficulty of control. For instance, it seems plausible that preventing the worst <a href="https://www.lesswrong.com/posts/fCazYoZSSMadiT6sf/prioritizing-threats-for-ai-control#Some_clusters_of_threats">security failures</a> (e.g., <a href="https://www.lesswrong.com/posts/ceBpLHJDdCt3xfEok/ai-catastrophes-and-rogue-deployments">rogue deployments</a>) is reasonably easy such that relatively basic measures make it very difficult for AIs which are around as capable as top human experts. In worlds where control ends up being easier, it seems plausible that this extends the viability of control out into substantially superhuman models. Of course, it's also possible that control is easy for around-human-level AIs but the difficulty of control rapidly increases into impossibility with only moderately superhuman general capabilities.</p><p>Beyond uncertainty about the difficulty of control, the efficacy of control might vary greatly between different applications of AI and different hazards. In particular, it seems like it might be much easier to prevent significantly superhuman AIs from being able to cause the worst security failures (given substantial investment into control measures i... </p> | When discussing AI control, we often talk about levels of AI capabilities where we think control can probably greatly lower risks and where we can probably estimate risks. However, I think it's plausible that an important application of control is modestly improving our odds of surviving significantly superhuman systems which are misaligned. This won't involve any real certainty, it would only moderately reduce risk (if it reduces risk at all), and it may only help with avoiding immediate loss of control (rather than also helping us extract useful work). In a sane world, we wouldn't be relying on control for systems which have significantly superhuman general purpose capabilities. Nevertheless, I think applying control here could be worthwhile. I'll discuss why I think this might work and how useful I think applying control in this way would be.
As capabilities advance, the level of risk increases if we hold control measures fixed. There are a few different mechanisms that cause this, as we discussed here. It's more likely that more capable models are schemers, and people will want to deploy models with increasingly broad affordances by default. And because more capable AIs can more easily cause problems and acquire power in undesired ways, the risk[1] conditional on scheming increases, though it's unclear how fast this risk increases with capabilities.
Additionally, we're substantially uncertain about the general difficulty of control. For instance, it seems plausible that preventing the worst security failures (e.g., rogue deployments) is reasonably easy such that relatively basic measures make it very difficult for AIs which are around as capable as top human experts. In worlds where control ends up being easier, it seems plausible that this extends the viability of control out into substantially superhuman models. Of course, it's also possible that control is easy for around-human-level AIs but the difficulty of control rapidly increases into impossibility wit | 2,146 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
3pHL3w6mQdwssFC97 | childhood-and-education-11-the-art-of-learning | Childhood and Education #11: The Art of Learning | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-27T13:50:01.418Z | null | false | false | 2 | 2 | 2025-06-28T00:07:58.461Z | false | false | post | [] | null | null | qDEhvsZphavzbqHii | 6 | 9 | 37 | false | 1.082116 | null | false | false | 2025-06-28T13:49:29.524Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 12 | 0 | 2025-06-27T13:50:01.419Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 4 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | 3pHL3w6mQdwssFC97 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3pHL3w6mQdwssFC97/e00qe11e1gsreutybiga | SocialPreviewType | qDEhvsZphavzbqHii | In honor of the latest (always deeply, deeply unpopular) attempts to destroy tracking and gifted and talented programs, and other attempts to get children to actually learn things, I thought it a good time to compile a number of related items.
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/166603368/lack-of-tracking-hurts-actual-everyone">Lack Of Tracking Hurts Actual Everyone.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/not-tracking-especially-hurts-those-who-are-struggling">Not Tracking Especially Hurts Those Who Are Struggling.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/no-child-left-behind-left-behind">No Child Left Behind Left Behind.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/read-early-read-often">Read Early, Read Often.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/mirror-mirror">Mirror, Mirror.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/spaced-repetition">Spaced Repetition.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/learning-methods">Learning Methods.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/interruptions">Interruptions.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/memorization">Memorization.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/math-is-hard">Math is Hard.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/get-to-work">Get to Work.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/the-whiz-kids">The Whiz Kids.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/high-school-does-not-seem-to-teach-kids-much">High School Does Not Seem To Teach Kids Much.</a></li>
<li><a href="https://thezvi.substack.com/i/166603368/two-kinds-of-essays">Two Kinds of Essays.</a></li>
</ol>
<h4>Lack Of Tracking Hurts Actual Everyone</h4>
Gifted programs and educational tracking are also super duper popular, it is remarkably absurd that our political process cannot prevent these programs from being destroyed.
<div> <span id="more-24542"></span> </div>
As in <a href="https://x.com/garrytan/status/1928319250323656839">things like this keep happening</a>:
<blockquote>NY Post: Seattle Public Schools shuts down gifted and talented program for being oversaturated with white and asian students.</blockquote>
Once again, now, we face that threat in my home of New York City. The Democratic nominee for New York City Mayor is opposed to gifted and talented programs, and wants to destroy them. Yet few people seem to have much noticed, or decided to much care. Once people realize the danger it may well be too late.
To state the obvious, <a href="https://www.the74million.org/article/we-started-grouping-students-by-reading-ability-vs-grade-heres-what-happened/">if you group children by ability in each subject rather than age, they learn better</a>. Yes, there are logistical concerns, but the benefits are immense. Gifted programs are great but mostly seem like a patch to the fact that we are so obsessed with everyone in the room being in the same ‘grade’ at all times.
I agree with Tracing Woods that ‘teaches each according to their ability’ is the bare minimum before I will believe that your institution is making a real attempt to educate children.
<blockquote><a href="https://x.com/tracewoodgrains/status/1934991944460976423">Tracing Woods</a>: A good example of the absurdity of “grade-level” thinking, from Texas: “If they’re reading on a second-grade level, but they’re in the third grade, they’re always going to receive that third-grade instruction.”
This makes no sense. Learning does not simply follow age.
<a href="https://x.com/tracewoodgrains/status/1931806524957184193">Imagine having a “grade level” in chess</a>. 9-year-olds in the third grade advancing to play against 1100 elo players. 100 more elo per year.
“Grade-level performance” has always been nonsensical. Learning does not work that way. Just figure out what people actually know</blockquote>... | In honor of the latest (always deeply, deeply unpopular) attempts to destroy tracking and gifted and talented programs, and other attempts to get children to actually learn things, I thought it a good time to compile a number of related items.
TABLE OF CONTENTS
1. Lack Of Tracking Hurts Actual Everyone.
2. Not Tracking Especially Hurts Those Who Are Struggling.
3. No Child Left Behind Left Behind.
4. Read Early, Read Often.
5. Mirror, Mirror.
6. Spaced Repetition.
7. Learning Methods.
8. Interruptions.
9. Memorization.
10. Math is Hard.
11. Get to Work.
12. The Whiz Kids.
13. High School Does Not Seem To Teach Kids Much.
14. Two Kinds of Essays.
LACK OF TRACKING HURTS ACTUAL EVERYONE
Gifted programs and educational tracking are also super duper popular, it is remarkably absurd that our political process cannot prevent these programs from being destroyed.
As in things like this keep happening:
> NY Post: Seattle Public Schools shuts down gifted and talented program for being oversaturated with white and asian students.
Once again, now, we face that threat in my home of New York City. The Democratic nominee for New York City Mayor is opposed to gifted and talented programs, and wants to destroy them. Yet few people seem to have much noticed, or decided to much care. Once people realize the danger it may well be too late. To state the obvious, if you group children by ability in each subject rather than age, they learn better. Yes, there are logistical concerns, but the benefits are immense. Gifted programs are great but mostly seem like a patch to the fact that we are so obsessed with everyone in the room being in the same ‘grade’ at all times. I agree with Tracing Woods that ‘teaches each according to their ability’ is the bare minimum before I will believe that your institution is making a real attempt to educate children.
> Tracing Woods: A good example of the absurdity of “grade-level” thinking, from Texas: “If they’re reading on a s | 3,700 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
dEoyaWznnKtM5GMnM | no-futarchy-doesn-t-have-an-edt-flaw | No, Futarchy Doesn’t Have an EDT Flaw | null | false | false | false | null | oxqFdPeNG7DuSmYGD | null | true | false | false | false | Post | null | 2025-06-27T09:27:40.151Z | null | false | false | 2 | 2 | 2025-06-28T00:08:00.963Z | false | false | post | [] | null | null | Dd2aPxivuRr87RpPh | 17 | 9 | 23 | false | 0.650307 | null | false | false | 2025-06-28T14:09:56.150Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-27T08:25:58.136Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "PDJ6KqJBRzvKPfuS3",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
}
]
},
"baseScore": 25,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-14T22:24:48.135Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
},
{
"_id": "Au8JpEqoZgEhEXLD7",
"displayName": "KlayugMonk"
},
{
"_id": "Ns8Q7rJZaFoz53Szy",
"displayName": "Gabriel Stechschulte"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Economics",
"needsReview": false,
"noindex": false,
"postCount": 547,
"score": 25,
"shortName": null,
"slug": "economics",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 7,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "R6dqPii4cyNpuecLt",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-01-14T03:06:53.703Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Prediction Markets",
"needsReview": false,
"noindex": false,
"postCount": 171,
"score": 19,
"shortName": null,
"slug": "prediction-markets",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "RyNWXFjKNcafRKvPh",
"adminOnly": false,
"afBaseScore": 15,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "9hHLLhkuQwtjjykdk",
"displayName": "Vanessa Kosoy"
}
]
},
"baseScore": 27,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-01-15T10:23:34.989Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "9hHLLhkuQwtjjykdk",
"displayName": "Vanessa Kosoy"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Agent Foundations",
"needsReview": false,
"noindex": false,
"postCount": 154,
"score": 27,
"shortName": null,
"slug": "agent-foundations",
"suggestedAsFilter": false,
"userId": "XLwKyCK7JmC292ZCC",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "RGPpwYoCHrPNB86TW",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-03-02T18:11:37.999Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Futarchy",
"needsReview": false,
"noindex": false,
"postCount": 25,
"score": 9,
"shortName": null,
"slug": "futarchy",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 4 | 0 | oxqFdPeNG7DuSmYGD | mikhail-samin | 2020-07-07T13:34:44.046Z | mikhail-samin | Mikhail Samin | null | null | null | 1,926 | 0 | false | false | <p>My name is Mikhail Samin (diminutive Misha, @<a href="https://twitter.com/Mihonarium">Mihonarium</a> on Twitter, @<a href="https://t.me/misha">misha</a> on Telegram). </p><p>Humanity's future can be enormous and awesome; losing it would mean our lightcone (and maybe the universe) losing most of its potential value.</p><p>My research is currently focused on AI governance and improving the understanding of AI and AI risks among stakeholders. I also have takes on what seems to me to be the very obvious shallow stuff about the technical AI notkilleveryoneism; but many AI Safety researchers told me our conversations improved their understanding of the alignment problem.<br><br>I believe a capacity for global regulation is necessary to mitigate the risks posed by future general AI systems. I'm happy to talk to policymakers and researchers about ensuring AI benefits society.<br><br>I took the Giving What We Can pledge to donate at least 10% of my income for the rest of my life or until the day I retire (<a href="https://twitter.com/Mihonarium/status/1564360126202617863"><u>why?</u></a>).</p><p>In the past, I've launched the most funded crowdfunding campaign in the history of Russia (it was to print HPMOR! we printed 21 000 copies =63k books) and founded audd.io, which allowed me to donate >$100k to EA causes, including >$60k to MIRI.</p><p>[Less important: I've also started a project to translate <a href="https://80000hours.org/">80,000 Hours</a>, a career guide that helps to find a fulfilling career that does good, into Russian. The impact and the effectiveness aside, for a year, I was the head of the Russian Pastafarian Church: a movement claiming to be a parody religion, with 200 000 members in Russia at the time, trying to increase separation between religious organisations and the state. I was a political activist and a human rights advocate. I studied relevant Russian and international law and wrote appeals that won cases against the Russian government in courts; I was able to protect people from unlawful police action. I co-founded the Moscow branch of the "Vesna" democratic movement, coordinated election observers in a Moscow district, wrote dissenting opinions for members of electoral commissions, helped Navalny's Anti-Corruption Foundation, helped Telegram with internet censorship circumvention, and participated in and organized protests and campaigns. The large-scale goal was to build a civil society and turn Russia into a democracy through nonviolent resistance. This goal wasn't achieved, but some of the more local campaigns were successful. That felt important and was also mostly fun- except for being detained by the police. I think it's likely the Russian authorities would imprison me if I ever visit Russia.]</p> | null | null | 25 | 249 | 0 | 0 | 0 | 1 | 3 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | dEoyaWznnKtM5GMnM | SocialPreviewType | Dd2aPxivuRr87RpPh | <p>(A response to <a href="https://www.lesswrong.com/posts/vqzarZEczxiFdLE39/futarchy-s-fundamental-flaw">this post</a>.)</p><p>If you use prediction markets to make decisions, you might think they’ll generate EDT decisions: you’re asking for P(A|B), where you care about A, and B is something like “a decision … is taken”.</p><p>Okay, so say you want to use prediction markets to generate CDT decisions. You want to know P(A|do(B)).</p><p>There’s a very simple way to do that:</p><ul><li>Make a market on P(A|B).</li><li>Commit to resolving the market to N/A with 99.9% chance.</li><li>With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.)</li></ul><p>Now, 99.9% of the time, you can use freely use market data to make decisions, without impacting the market! You screened off everything upstream of the decision from the market. All you need for that is to make a random choice every now and then.</p><p>This really works to make CDT decisions! Try thinking through what the market would do in various decision-theoretic problems.</p><p>I claim that real markets actually do the exact same thing.</p><p>In the limit, you can imagine that when Tim Cook hires an awful designer, there’s no incentive for the markets to go down: because if they go down at all, Tim Cook would notice and fire them back.</p><p>But in reality, people and institutions are sometimes random! If there’s a chance Tim Cook doesn’t listen to the markets, the markets should down at least a tiny bit, to reflect that probability.</p><p>When Trump announced the tariffs, real markets showed only something like 10% of the reaction they would have if they thought the tariffs were to stay in place, because the markets correctly anticipated that the government would react to the market change.</p><p>But also, they went down, a lot, because they thought Trump might ignore them and not go back on the tariffs.</p><p>Even if you’re lower variance than Trump, to the extent you can be modeled well as mostly making a good choice but sometimes taking random decisions, the difference in conditional markets would already reflect at least some of the causal difference between different actions you can take.</p><p>And if that’s not enough, you can explicitly simulate the do() operator using the scheme above. It would require you to provide more liquidity when the market resolves, for traders to want to trade despite the overwhelming chance the market will resolve to N/A<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="3r2xahkm30o" role="doc-noteref" id="fnref3r2xahkm30o"><sup><a href="#fn3r2xahkm30o">[1]</a></sup></span>; yet, it enables you to make decisions based the markets ... </p> | (A response to this post.)
If you use prediction markets to make decisions, you might think they’ll generate EDT decisions: you’re asking for P(A|B), where you care about A, and B is something like “a decision … is taken”.
Okay, so say you want to use prediction markets to generate CDT decisions. You want to know P(A|do(B)).
There’s a very simple way to do that:
* Make a market on P(A|B).
* Commit to resolving the market to N/A with 99.9% chance.
* With 0.1% chance, transparently take a random decision among available decisions (the randomness is pre-set and independent of specific decisions/market data/etc.)
Now, 99.9% of the time, you can use freely use market data to make decisions, without impacting the market! You screened off everything upstream of the decision from the market. All you need for that is to make a random choice every now and then.
This really works to make CDT decisions! Try thinking through what the market would do in various decision-theoretic problems.
I claim that real markets actually do the exact same thing.
In the limit, you can imagine that when Tim Cook hires an awful designer, there’s no incentive for the markets to go down: because if they go down at all, Tim Cook would notice and fire them back.
But in reality, people and institutions are sometimes random! If there’s a chance Tim Cook doesn’t listen to the markets, the markets should down at least a tiny bit, to reflect that probability.
When Trump announced the tariffs, real markets showed only something like 10% of the reaction they would have if they thought the tariffs were to stay in place, because the markets correctly anticipated that the government would react to the market change.
But also, they went down, a lot, because they thought Trump might ignore them and not go back on the tariffs.
Even if you’re lower variance than Trump, to the extent you can be modeled well as mostly making a good choice but sometimes taking random decisions, the difference in condit | 546 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
|
mubpc2KHL56ZWcvty | misconceptions-on-affordable-housing | Misconceptions on Affordable Housing | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-06-27T02:40:07.625Z | null | false | false | 2 | 2 | 2025-06-28T00:08:04.312Z | false | false | post | [] | null | null | RK3mycp6qy3hwQT3P | 0 | 3 | 12 | false | 0.34179 | null | false | false | 2025-06-27T02:40:07.625Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-27T02:40:07.626Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | mubpc2KHL56ZWcvty | SocialPreviewType | RK3mycp6qy3hwQT3P | <p><span>
People often think 'affordable' housing is much more expensive than it
actually is, and then conclude it's a scam to make housing for rich
people. But this is often based on a misunderstanding of how the
prices are set.
</span>
</p><p>
Let's say a unit is "50% AMI" somewhere with an area median income
(AMI) of $100k. You might think, and I've seen a bunch of people with
this confusion, that units would rent for 50% of $100k: $50k/y
($4,170/month), but it's much cheaper than that.
</p>
<p>
Affordable has a very specific meaning in this context: spending no
more than 1/3 of your income on housing. So you might think 50% AMI
is 'affordable' to someone earning at the 50th percentile: 1/3 of
$100k, or $33k/y ($2,750/month), but it's cheaper than that, too!
</p>
<p>
Instead, 50% AMI means someone earning 50% of the AMI would be
spending 1/3 of their income on housing. This is 50% of 1/3 of the
AMI, and in this case that would be $17k/y ($1,390/month).
</p>
<p>
Now, I don't think affordable housing solves everything: if it were
widespread I think it <a href="https://www.jefftk.com/p/affordable-housing-workarounds">would
be exploited</a> and it can be a distraction from just getting a lot
of units built, but the term "affordable housing" is actually a decent
operationalization of whether housing is something regular people can
afford.
</p>
<p><i>Comment via: <a href="https://mastodon.mit.edu/@jefftk/114752977640543529">mastodon</a>, <a href="https://bsky.app/profile/jefftk.com/post/3lsklxeit4c2o">bluesky</a>, <a href="https://jefftkaufman.substack.com/p/misconceptions-on-affordable-housing">substack</a></i></p> | People often think 'affordable' housing is much more expensive than it actually is, and then conclude it's a scam to make housing for rich people. But this is often based on a misunderstanding of how the prices are set.
Let's say a unit is "50% AMI" somewhere with an area median income (AMI) of $100k. You might think, and I've seen a bunch of people with this confusion, that units would rent for 50% of $100k: $50k/y ($4,170/month), but it's much cheaper than that.
Affordable has a very specific meaning in this context: spending no more than 1/3 of your income on housing. So you might think 50% AMI is 'affordable' to someone earning at the 50th percentile: 1/3 of $100k, or $33k/y ($2,750/month), but it's cheaper than that, too!
Instead, 50% AMI means someone earning 50% of the AMI would be spending 1/3 of their income on housing. This is 50% of 1/3 of the AMI, and in this case that would be $17k/y ($1,390/month).
Now, I don't think affordable housing solves everything: if it were widespread I think it would be exploited and it can be a distraction from just getting a lot of units built, but the term "affordable housing" is actually a decent operationalization of whether housing is something regular people can afford.
Comment via: mastodon, bluesky, substack | 222 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
CYTwRZtrhHuYf7QYu | a-case-for-courage-when-speaking-of-ai-danger | A case for courage, when speaking of AI danger | null | false | false | false | null | xSfc2APSi8WzFxp7i | null | true | false | false | false | Post | null | 2025-06-27T02:15:25.781Z | null | false | false | 2 | 2 | 2025-06-27T04:40:04.596Z | false | false | post | [
"2aoRX3ookcCozcb3m"
] | null | null | wBczazoA63JZEgNp8 | 25 | 104 | 323 | false | 5.053602 | null | false | false | 2025-06-28T10:34:33.712Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 103 | 0 | 2025-06-27T02:15:25.781Z | false | false | reign-of-terror | null | true | false | false | 0 | 0 | 0 | CYTwRZtrhH | 0.138881 | false | 2,025 | https://manifold.markets/LessWrong/will-a-case-for-courage-when-speaki | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 104 | 0 | 0 | 50 | 0 | xSfc2APSi8WzFxp7i | so8res | 2012-01-10T05:50:18.713Z | So8res | So8res | null | null | Nate Soares | 17,799 | 1,957 | false | false | null | null | 150 | 544 | 2 | 32 | 28 | 1 | 1,331 | r38pkCm7wF4M44MDQ | User | reign-of-terror | [
"CpPz4596hmk9Pk8Jh"
] | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | CYTwRZtrhHuYf7QYu | SocialPreviewType | wBczazoA63JZEgNp8 | <p>I think more people should say what they actually believe about AI dangers, loudly and often. Even (and perhaps especially) if you work in AI policy.</p><p>I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns <i>while feeling ashamed about them</i> versus if you share your concerns <i>while remembering how straightforward and sensible and widely supported the key elements are, </i>because humans are very good at picking up on your social cues. If you act as if it’s shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it’s an obvious serious threat, they’re more likely to take it seriously too.</p><p>I have another whole spiel about how it’s possible to speak on these issues with a voice of authority. <a href="https://www.cbsnews.com/news/godfather-of-ai-geoffrey-hinton-ai-warning/">Nobel laureates</a> and <a href="https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html">lab heads</a> and <a href="https://x.com/robbensinger/status/1938031963824435255">the most cited researchers in the field</a> are saying there’s a real issue here. If someone is dismissive, you can be like “What do you think you know that the Nobel laureates and the lab heads and the most cited researchers don’t? Where do you get your confidence?” You don’t need to talk like it’s a fringe concern, because it isn’t.</p><p>And in the last year or so I’ve started collecting anecdotes such as the time I had dinner with an elected official, and I encouraged other people at the dinner to really speak their minds, and the most courage they could muster was statements like, “Well, perhaps future AIs could help Iranians figure out how to build nuclear material.” To which the elected official replied, “My worries are much bigger than that; I worry about recursive self-improvement and superintelligence wiping us completely off the map, and I think it could start in as little as three years.”</p><p>These spiels haven’t always moved people. I’m regularly told that I’m just an idealistic rationalist who’s enamored by the virtue of truth, and who’s trying to worm out of a taboo tradeoff between honesty and effectiveness. I’m sometimes told that the elected officials are probably just trying to say what they know their supporters want to hear. And I am not at liberty to share all the details from those sorts of dinners, which makes it a little harder to share my evidence.</p><p>But I am at liberty to share the <a href="http://ifanyonebuildsit.com/praise">praise we've received for my forthcoming book</a>. Eliezer and I do not mince words in the book, ... </p> | I think more people should say what they actually believe about AI dangers, loudly and often. Even (and perhaps especially) if you work in AI policy.
I’ve been beating this drum for a few years now. I have a whole spiel about how your conversation-partner will react very differently if you share your concerns while feeling ashamed about them versus if you share your concerns while remembering how straightforward and sensible and widely supported the key elements are, because humans are very good at picking up on your social cues. If you act as if it’s shameful to believe AI will kill us all, people are more prone to treat you that way. If you act as if it’s an obvious serious threat, they’re more likely to take it seriously too.
I have another whole spiel about how it’s possible to speak on these issues with a voice of authority. Nobel laureates and lab heads and the most cited researchers in the field are saying there’s a real issue here. If someone is dismissive, you can be like “What do you think you know that the Nobel laureates and the lab heads and the most cited researchers don’t? Where do you get your confidence?” You don’t need to talk like it’s a fringe concern, because it isn’t.
And in the last year or so I’ve started collecting anecdotes such as the time I had dinner with an elected official, and I encouraged other people at the dinner to really speak their minds, and the most courage they could muster was statements like, “Well, perhaps future AIs could help Iranians figure out how to build nuclear material.” To which the elected official replied, “My worries are much bigger than that; I worry about recursive self-improvement and superintelligence wiping us completely off the map, and I think it could start in as little as three years.”
These spiels haven’t always moved people. I’m regularly told that I’m just an idealistic rationalist who’s enamored by the virtue of truth, and who’s trying to worm out of a taboo tradeoff between honesty and effecti | 1,771 | 1.9.0 | Revision | false | null | null | CrosspostOutput |
||
TjT3RdAfmrLqgb68K | help-the-ai-2027-team-make-an-online-agi-wargame | Help the AI 2027 team make an online AGI wargame | null | false | false | false | null | tkdgvuLhqRyfzsBvy | null | true | false | false | false | Post | null | 2025-06-27T01:02:52.157Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | xaMTK68wfce2tr2JH | 3 | 19 | 52 | false | 0.768442 | null | false | false | 2025-06-27T22:23:16.941Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 17 | 0 | 2025-06-27T00:58:49.392Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 19 | 0 | 0 | 8 | 0 | tkdgvuLhqRyfzsBvy | jonas-v | 2018-11-21T15:03:10.337Z | Jonas Vollmer | Jonas V | null | null | null | 1,419 | 0 | false | false | null | null | 7 | 70 | 0 | 0 | 0 | 1 | 2 | EQNTWXLKMeWMp2FQS | User | null | null | null | [
"canModeratePersonal"
] | null | null | TjT3RdAfmrLqgb68K | SocialPreviewType | xaMTK68wfce2tr2JH | <p>The AI Futures Project, the team behind <a href="https://ai-2027.com/"><u>AI 2027</u></a>, has developed a <a href="https://ai-2027.com/about?tab=tabletop-exercise#tab-box-tabletop-exercise"><u>tabletop exercise</u></a> (TTX) to simulate AI takeoff where we facilitate people going through an AI 2027-like scenario with different people taking different roles, like POTUS, China, or the frontier AI company CEO. Reviews have been positive and people have <a href="https://stevenadler.substack.com/p/a-crisis-simulation-changed-how-i"><u>found it useful</u></a> for building their own models of how AI will go.</p><p>We’re interested in developing an online version of this game, with the goal of eventually getting millions of people to play through takeoff and better understand the dynamics and risks involved. The other game characters could be driven by LLMs so we don’t need to facilitate it. The online TTX could be based on our scenario and background research alongside LLMs and various game elements. </p><p>To this end, we’re looking for a <strong>full-stack developer with game design and UX experience/interest and a deep understanding of AGI risk.</strong> We previously worked with the Lightcone Infrastructure team for the AI 2027 website and might work with them again for this project, but first we’d like to see if anyone else is interested.</p><p>We expect that the ideal online TTX would ultimately look and feel very different from the in-person experience, but would successfully convey similar content. </p><p>If you’re interested, <strong>please fill in </strong><a href="https://airtable.com/appmcCGBz3vqARP9n/page6tXxhBJIoMCAq/form"><strong><u>this quick form</u></strong></a>, message me, or comment below, we’d love to hear from you! </p> | The AI Futures Project, the team behind AI 2027, has developed a tabletop exercise (TTX) to simulate AI takeoff where we facilitate people going through an AI 2027-like scenario with different people taking different roles, like POTUS, China, or the frontier AI company CEO. Reviews have been positive and people have found it useful for building their own models of how AI will go.
We’re interested in developing an online version of this game, with the goal of eventually getting millions of people to play through takeoff and better understand the dynamics and risks involved. The other game characters could be driven by LLMs so we don’t need to facilitate it. The online TTX could be based on our scenario and background research alongside LLMs and various game elements.
To this end, we’re looking for a full-stack developer with game design and UX experience/interest and a deep understanding of AGI risk. We previously worked with the Lightcone Infrastructure team for the AI 2027 website and might work with them again for this project, but first we’d like to see if anyone else is interested.
We expect that the ideal online TTX would ultimately look and feel very different from the in-person experience, but would successfully convey similar content.
If you’re interested, please fill in this quick form, message me, or comment below, we’d love to hear from you! | 228 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
||
QXqNtFGm282wYur3a | the-bellman-equation-does-not-apply-to-bounded-rationality | The Bellman equation does not apply to bounded rationality | null | false | false | false | null | j2CKvRmbMTPJa2hWk | null | true | false | false | false | Post | null | 2025-06-26T23:01:33.369Z | null | false | false | 2 | 2 | 2025-06-28T00:08:17.239Z | false | false | post | [] | null | null | E9kdvBPsd4begGEsD | 2 | 5 | 17 | false | 0.37773 | null | false | false | 2025-06-27T01:56:55.399Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-26T21:25:45.081Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "wfW6iL96u26mbatep",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-09T17:17:42.044Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Bounded Rationality",
"needsReview": false,
"noindex": false,
"postCount": 31,
"score": 10,
"shortName": null,
"slug": "bounded-rationality",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "X8JsWEnBRPvs5Y99i",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2015-12-03T07:35:06.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Decision theory",
"needsReview": false,
"noindex": false,
"postCount": 500,
"score": 0,
"shortName": null,
"slug": "decision-theory",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 5 | 0 | j2CKvRmbMTPJa2hWk | christopher-king | 2023-01-28T18:51:24.135Z | christopher-king | Christopher King | null | null | Christopher King | 846 | 6 | false | false | <p><a href="https://mathstodon.xyz/@theking">@[email protected]</a></p>
| null | null | 55 | 207 | 1 | 1 | 0 | 1 | 3 | EQNTWXLKMeWMp2FQS | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | QXqNtFGm282wYur3a | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/hhcojyvrjworcmbhcc6h | SocialPreviewType | E9kdvBPsd4begGEsD | <figure class="image image_resized" style="width:50.11%"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/juptyihbtleeypzfs59q" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/jnvprmaps2mhahxogxya 160w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/khnmh0s2hsvm6ocxv1vi 320w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/ghoctcnqfyeufjyp275s 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/mzp8v8l4ja81hcvmoj1z 640w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/qcgsnyjdpc7q4nhwgv8t 800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/wndosbrpdubjbjazneub 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/mjjmqh1wcjddzwuosqam 1120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/qvahqmntxnhmrbbffpd8 1280w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/dhl8mxdrut3reeaj2hvb 1440w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/QXqNtFGm282wYur3a/dhjav0xwvrgwlpzeunh3 1536w"></figure><p>Quick quiz: which of the following is the definition of a rational agent?</p><ol><li>An agent who follows the policy with the highest expected utility</li><li>An agent who follows the policy of choosing the action with the highest expected utility</li></ol><p>Are those even different things? In the case of <i>unbounded</i> rationality, it is a bit of a trick question. Usually (1) is taken as the definition of rational, but according to <a href="https://en.wikipedia.org/wiki/Bellman_equation#The_Bellman_equation">the Bellman equation</a>, the optimal policy <i>is</i> the one that chooses an optimal action in every state. Seems obvious, right?</p><p><a href="https://en.wikipedia.org/wiki/Bounded_rationality">Bounded rationality</a> is a form of rationality where the policy must given by an algorithm, and the <a href="https://en.wikipedia.org/wiki/Computational_resource">computational resources</a> used by that algorithm matters. For example, if computation produces heat, than when it is (just for example) <a href="https://apnews.com/photo-gallery/extreme-heat-wave-high-temperature-eastern-us-4d916b316aacaf791eab44dc8fcc603b">extremely unreasonably hot out</a>, the agent might do less computation, or use a computer to do computations for it. (An unbounded rational agent never uses a computer, since it can just simulate computers "in its head".)</p><p>In bounded rationality, (1) and (2) are different, and (2) isn't even well-defined (there might not be a policy that can always choose the best action, and even if there is that action might depend on the amount of computation that has been done).</p><p><strong>So, we </strong><i><strong>cannot</strong></i><strong> say whether a given action is bounded rational, only whether the entire policy is bounded rational.</strong></p><h1>Example: Guessing a preimage</h1><p>For example, consider the following problem: given a random 512 bits, guess how many <code>e</code>s are in the SHA-3 preimage of those bits.</p><p>In the specific case of <code>3fbef366d350d95ec6ef948dfc2fd1ebd6f1de928c266d86e2ed8e408b2f7b30cad0e14714830369f96ad41c2b58da80bab3bff90a6770910244e28b4f9e80be</code>, you might be tempted to say "32 <code>e</code>s", since you know nothing about that string and 512/16 is 32.</p><p>However, a better guess is 10, since the preimage is <code>b12ee4ed50edf33e4a388924f35f0190da54ee82ca0338d7dcc4adc3214da21e69d0b0b32789074fef3c05f02d6814da2c8d72057f50835d8f83265e6a4c3b57</code>, which has exactly 10 <code>e</code>s.</p><p>However, assuming reasonable computational limits, it is unlikely that the optimal policy says 10. <code>b12ee4ed50edf33e4a388924f35f0190da54ee82ca0338d7dcc4adc3214da21e69d0b0b32789074fef3c05f02d6814da2c8d72057f50835d8f83265e6a4c3b57</code> is just a random string of bits I generated, and for any given random strings of bits, it is unlikely for any given policy (including the optimal policy) to have hard-coded it. There are simply too many possible inputs to hard-code... </p> | Quick quiz: which of the following is the definition of a rational agent?
1. An agent who follows the policy with the highest expected utility
2. An agent who follows the policy of choosing the action with the highest expected utility
Are those even different things? In the case of unbounded rationality, it is a bit of a trick question. Usually (1) is taken as the definition of rational, but according to the Bellman equation, the optimal policy is the one that chooses an optimal action in every state. Seems obvious, right?
Bounded rationality is a form of rationality where the policy must given by an algorithm, and the computational resources used by that algorithm matters. For example, if computation produces heat, than when it is (just for example) extremely unreasonably hot out, the agent might do less computation, or use a computer to do computations for it. (An unbounded rational agent never uses a computer, since it can just simulate computers "in its head".)
In bounded rationality, (1) and (2) are different, and (2) isn't even well-defined (there might not be a policy that can always choose the best action, and even if there is that action might depend on the amount of computation that has been done).
So, we cannot say whether a given action is bounded rational, only whether the entire policy is bounded rational.
Example: Guessing a preimage
For example, consider the following problem: given a random 512 bits, guess how many es are in the SHA-3 preimage of those bits.
In the specific case of 3fbef366d350d95ec6ef948dfc2fd1ebd6f1de928c266d86e2ed8e408b2f7b30cad0e14714830369f96ad41c2b58da80bab3bff90a6770910244e28b4f9e80be, you might be tempted to say "32 es", since you know nothing about that string and 512/16 is 32.
However, a better guess is 10, since the preimage is b12ee4ed50edf33e4a388924f35f0190da54ee82ca0338d7dcc4adc3214da21e69d0b0b32789074fef3c05f02d6814da2c8d72057f50835d8f83265e6a4c3b57, which has exactly 10 es.
However, assuming reasonable c | 411 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
ATSyAdBnxxDDAwhgu | recent-and-forecasted-rates-of-software-and-hardware | Recent and forecasted rates of software and hardware progress | null | false | false | true | null | kr9MH2mgFcJz5cP7T | null | true | false | false | false | Post | null | 2025-06-26T22:37:27.074Z | null | false | false | 2 | 2 | 2025-06-28T00:09:05.740Z | false | false | post | [] | null | null | bfizCAAGySbDyz6oC | 0 | 10 | 46 | false | 0.765815 | null | false | false | 2025-06-26T22:37:27.074Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 27 | 0 | 2025-05-08T01:03:32.885Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 9 | 0 | kr9MH2mgFcJz5cP7T | elifland | 2019-06-04T02:06:27.274Z | elifland | elifland | null | null | null | 2,795 | 179 | false | false | <p><a href="https://www.elilifland.com/.">https://www.elilifland.com/.</a> You can give me anonymous feedback <a href="https://forms.gle/n1jdAYUTrRS1m94Y8">here</a>. I often change my mind and don't necessarily endorse past writings.</p> | null | null | 14 | 156 | 0 | 4 | 13 | 1 | 7 | fD4ATtTkdQJ4aSpGH | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | ATSyAdBnxxDDAwhgu | SocialPreviewType | bfizCAAGySbDyz6oC | <p><i>I originally wrote this just for myself but thought I'd share it in case others find it useful. Sharing rough notes in the spirit of not letting perfect be the enemy of the good. This was written in early May 2025.</i></p><p>In this post I collect evidence and forecasts regarding recent and future trends in software and compute progress. I also make my own forecasts which inform an <a href="https://ai-2027.com/research/timelines-forecast#2025-may-7-update">updated version of <u>AI 2027’s timelines forecast</u></a> (more to come on these and further updates). These forecasts are rough and uncertain.</p><p>I especially build on <a href="https://www.lesswrong.com/posts/v7LtZx6Qk5e9s7zj3/what-s-going-on-with-ai-progress-and-trends-as-of-5-2025#Training_compute_trends:~:text=probably%20overall%20algorithmic%20progress%20looks%20more%20like%204.5x%20effective%20compute%20per%20year">this post</a>.</p><figure class="table"><table style="border-style:none"><tbody><tr><td style="border-color:#000000;padding:5pt;text-align:center;vertical-align:top" colspan="3"><h3><strong>Evidence and forecasts regarding the recent rate of software progress </strong><br>(and share vs. compute)</h3></td></tr><tr><td style="border-color:#000000;padding:5pt;vertical-align:top">Evidence</td><td style="border-color:#000000;padding:5pt;vertical-align:top">Estimate</td><td style="border-color:#000000;padding:5pt;vertical-align:top">Caveats, reasoning, etc.</td></tr><tr><td style="border-color:#000000;padding:5pt;text-align:center;vertical-align:top" colspan="3"><p><strong>Compute efficiency increase at fixed performance</strong></p></td></tr><tr><td style="border-color:#000000;padding:5pt;vertical-align:top">Epoch paper: Analyzing a dataset of LLM performance on Wikitext and Penn Treebank (<a href="https://epoch.ai/blog/algorithmic-progress-in-language-models"><u>source</u></a>)</td><td style="border-color:#000000;padding:5pt;vertical-align:top"><p>2.5-2.8x/year </p><p> </p><p>Average efficiency increase reported in the paper (compute needed to reach same performance halving every 8-9 months)</p></td><td style="border-color:#000000;padding:5pt;vertical-align:top"> </td></tr><tr><td style="border-color:#000000;padding:5pt;vertical-align:top"><p>Epoch’s analysis of trends in inference cost to achieve fixed performance level (<a href="https://epoch.ai/data-insights/llm-inference-price-trends"><u>source</u></a>). Median rate of cost decrease is 40x/year, trends lasting 0.5-3 years.</p><p> </p><p>The trends after Jan 2024 seem faster (but I think this might be due to trends generally being faster right after benchmark release).</p></td><td style="border-color:#000000;padding:5pt;vertical-align:top"><p><<1,600x/year</p><p> </p><p>If cost were proportional to parameter count and all models are Chinchilla trained then this would imply compute efficiency increase of 40*40=1,600x/year. But it’s actually way less, see the next column for why.</p></td><td style="border-color:#000000;padding:5pt;vertical-align:top"><p>This is obviously too large but still interesting.</p><p> </p><p>Most of this is coming from (a) distillation, which we should disallow<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="vff0ipqv7fq" role="doc-noteref" id="fnrefvff0ipqv7fq"><sup><a href="#fnvff0ipqv7fq">[1]</a></sup></span> (b) a <a href="https://epoch.ai/gradient-updates/frontier-language-models-have-become-much-smaller"><u>trend toward increased overtraining on smaller models</u></a> and (c) overfitting to benchmarks; in particular, some or all of the benchmarks might be more narrow than we’d like.</p><p> </p><p>However even for MMLU there’s a 10x/year trend lasting 3 years, which is interesting.</p><p> </p><p> </p></td></tr><tr><td style="border-color:#000000;padding:5pt;vertical-align:top">GPT-4 vs. DeepSeek-V3 / R1 on MMLU etc. (source: <a href="https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1?commentId=Ahd4tCfF2zFRsS9tp"><u>Ryan comment</u></a>, <a href="https://epoch.ai/data/large-scale-ai-models?view=table#explore-the-data"><u>Epoch data</u></a>)</td><td style="border-color:#000000;padding:5pt;vertical-align:top"><p>>2.8x/year</p><p> </p><p>GPT-4 released in Mar 2023 </p><p>with 2e25 training compute, DeepSeek-V3 was released Dec 2024 with 3.4e24 training compute.</p><p> </p><p>DeepSeek-V3 is significantly better at MMLU and other benchmarks. This corresponds to a >2.8x/year decrease.</p></td><td style="border-color:#000000;padding:5pt;vertical-align:top">Peter claims that this isn’t good to look at because GPT-4 isn’t trained compute optimally. It’s also possible that DeepSeek-V3 is disti</td></tr></tbody></table></figure>... | I originally wrote this just for myself but thought I'd share it in case others find it useful. Sharing rough notes in the spirit of not letting perfect be the enemy of the good. This was written in early May 2025.
In this post I collect evidence and forecasts regarding recent and future trends in software and compute progress. I also make my own forecasts which inform an updated version of AI 2027’s timelines forecast (more to come on these and further updates). These forecasts are rough and uncertain.
I especially build on this post.
Evidence and forecasts regarding the recent rate of software progress
(and share vs. compute)
EvidenceEstimateCaveats, reasoning, etc.
Compute efficiency increase at fixed performance
Epoch paper: Analyzing a dataset of LLM performance on Wikitext and Penn Treebank (source)
2.5-2.8x/year
Average efficiency increase reported in the paper (compute needed to reach same performance halving every 8-9 months)
Epoch’s analysis of trends in inference cost to achieve fixed performance level (source). Median rate of cost decrease is 40x/year, trends lasting 0.5-3 years.
The trends after Jan 2024 seem faster (but I think this might be due to trends generally being faster right after benchmark release).
<<1,600x/year
If cost were proportional to parameter count and all models are Chinchilla trained then this would imply compute efficiency increase of 40*40=1,600x/year. But it’s actually way less, see the next column for why.
This is obviously too large but still interesting.
Most of this is coming from (a) distillation, which we should disallow[1] (b) a trend toward increased overtraining on smaller models and (c) overfitting to benchmarks; in particular, some or all of the benchmarks might be more narrow than we’d like.
However even for MMLU there’s a 10x/year trend lasting 3 years, which is interesting.
GPT-4 vs. DeepSeek-V3 / R1 on MMLU etc. (source: Ryan comment, Epoch data)
>2.8x/year
GPT-4 released | 2,277 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
|
MZTRDmq4p22KgDF8Y | too-many-definitions-of-consciousness | Too Many Definitions of Consciousness | null | false | false | false | null | Pd22e5d3NTE8kH7v4 | null | true | false | false | false | Post | null | 2025-06-26T22:22:50.287Z | null | false | false | 2 | 2 | 2025-06-28T00:08:31.338Z | false | false | post | [] | null | null | wyTensarFQtaWJzNG | 2 | 4 | 6 | false | 0.228734 | null | false | false | 2025-06-26T22:22:50.287Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-26T22:22:50.287Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "XSryTypw5Hszpa4TS",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 21,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-08T19:57:40.728Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Consciousness",
"needsReview": false,
"noindex": false,
"postCount": 384,
"score": 21,
"shortName": null,
"slug": "consciousness",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "s9fuuxuzFMugAnf6r",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-10-24T19:46:36.615Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language & Linguistics",
"needsReview": false,
"noindex": false,
"postCount": 83,
"score": 9,
"shortName": null,
"slug": "language-and-linguistics",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "kCrvmjZhsAZANnZL6",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-06-01T01:54:17.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Philosophy",
"needsReview": false,
"noindex": false,
"postCount": 423,
"score": 0,
"shortName": null,
"slug": "philosophy-1",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "8e9e8fzXuW5gGBS3F",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T19:51:04.714Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Qualia",
"needsReview": false,
"noindex": false,
"postCount": 69,
"score": 9,
"shortName": null,
"slug": "qualia",
"suggestedAsFilter": false,
"userId": "4fh2AAe3n7oBviyxx",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "p8nXWqwPH7mPSZf6p",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-15T19:19:49.750Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Terminology / Jargon (meta)",
"needsReview": false,
"noindex": false,
"postCount": 52,
"score": 0,
"shortName": null,
"slug": "terminology-jargon-meta",
"suggestedAsFilter": false,
"userId": "gXeEWGjTWyqgrQTzR",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | Pd22e5d3NTE8kH7v4 | commander-zander | 2015-05-07T14:25:35.213Z | scarcegreengrass | Commander Zander | null | null | null | 473 | 0 | false | false | <p>alexpear.github.io</p> | null | null | 20 | 203 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | MZTRDmq4p22KgDF8Y | SocialPreviewType | wyTensarFQtaWJzNG | <p>I often hear people say consciousness is a phenomenon that urgently needs deeper understanding. Yet the word <i>consciousness</i> is a major source of the confusion. I'm trying to replace it with clearer words in my conversations.</p><p>Often, in both mainstream & rationalist contexts, i encounter people with these different definitions of the word:</p><ol><li>Awakeness (but everyone agrees this is the boring one)</li><li>Thoughts about oneself</li><li>Thoughts about oneself AND the capability to think about complex situations</li><li>Attention routing</li><li>Subjective experience</li></ol><p>People often also believe that one of the phenomena above is closely bound to one of these other phenomena:</p><ol><li>Pleasure & pain</li><li>Emotions</li><li>Neurons (as opposed to simulated neurons)</li><li>Moral worth</li><li>Worthiness of respect</li></ol><p>The problem is that you define <i>consciousness</i> as 1+ of the above, but your conversation partner has checked a different set of the checkboxes, so you talk past each other.</p><p>I suspect most people are okay with this vague terminology because they all share a belief in a Subtle Neural Superiority that distinguishes humans from plants, or perhaps from krill. It's okay for others to equate this SNS with the wrong phenomenon. It's okay if you think the invisible elephant is tusk-based rather than trunk-based - at least we agree there is a invisible elephant! </p><p>But i doubt we will ever find this invisible elephant, this Subtle Neural Superiority. We will one day fully understand the brain<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="1mk5c1vb2t2" role="doc-noteref" id="fnref1mk5c1vb2t2"><sup><a href="#fn1mk5c1vb2t2">[1]</a></sup></span>, but even then biophysics & psychology will reveal no clear evidence of a SNS. The distinction will always be a matter of pragmatic consensus, not of hard science. </p><p>So i agree that we need to research minds in detail, & i agree that we need to devise explicit, detailed policies for which phenomena to prioritize over others. I agree that it's reasonable for us to prioritize a human over a microscopic worm, but not because of a undiscovered biophysics or algorithmic property. I simply think it is reasonable to prioritize humanlike minds for the time being. This is a pragmatic policy, not a claim about hard science.</p><p>I intend to write more about my opinions on animal ethics & philosophy of mind, since i've recently discovered that they are novel to some people. </p><p>But you certainly don't have to agree with me about everything. All i request is that you try to use more specific words than <i>consciousness</i>. People don't know which definition/s you... </p> | I often hear people say consciousness is a phenomenon that urgently needs deeper understanding. Yet the word consciousness is a major source of the confusion. I'm trying to replace it with clearer words in my conversations.
Often, in both mainstream & rationalist contexts, i encounter people with these different definitions of the word:
1. Awakeness (but everyone agrees this is the boring one)
2. Thoughts about oneself
3. Thoughts about oneself AND the capability to think about complex situations
4. Attention routing
5. Subjective experience
People often also believe that one of the phenomena above is closely bound to one of these other phenomena:
1. Pleasure & pain
2. Emotions
3. Neurons (as opposed to simulated neurons)
4. Moral worth
5. Worthiness of respect
The problem is that you define consciousness as 1+ of the above, but your conversation partner has checked a different set of the checkboxes, so you talk past each other.
I suspect most people are okay with this vague terminology because they all share a belief in a Subtle Neural Superiority that distinguishes humans from plants, or perhaps from krill. It's okay for others to equate this SNS with the wrong phenomenon. It's okay if you think the invisible elephant is tusk-based rather than trunk-based - at least we agree there is a invisible elephant!
But i doubt we will ever find this invisible elephant, this Subtle Neural Superiority. We will one day fully understand the brain[1], but even then biophysics & psychology will reveal no clear evidence of a SNS. The distinction will always be a matter of pragmatic consensus, not of hard science.
So i agree that we need to research minds in detail, & i agree that we need to devise explicit, detailed policies for which phenomena to prioritize over others. I agree that it's reasonable for us to prioritize a human over a microscopic worm, but not because of a undiscovered biophysics or algorithmic property. I simply think it is reasonable to prior | 417 | 1.2.0 | Revision | false | true | null | CrosspostOutput |
|
WknCdhjs7ErzQ6AZt | may-june-2025-progress-in-guaranteed-safe-ai | May-June 2025 Progress in Guaranteed Safe AI | null | false | false | false | null | qqwfzAYaLsfmkwbsK | null | true | false | false | false | Post | null | 2025-06-26T21:30:13.182Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 9MRFMMhd5am8uLnTm | 0 | 2 | 8 | false | 0.11344 | null | false | false | 2025-06-26T21:30:13.182Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-26T21:30:13.183Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | sAHcMGDsmgTnsMi3o | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | qqwfzAYaLsfmkwbsK | quinn-dougherty | 2018-10-26T19:49:00.287Z | quinn-dougherty | Quinn | null | null | Quinn Dougherty | 1,314 | 15 | false | false | <p><a href="https://quinnd.net">https://quinnd.net</a></p> | null | null | 34 | 279 | 0 | 1 | 0 | 1 | 1 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | WknCdhjs7ErzQ6AZt | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/WknCdhjs7ErzQ6AZt/cg1p0prhrobmyyjoio9j | SocialPreviewType | 9MRFMMhd5am8uLnTm | <p>There will be a AIxFM conference in the Bay Area Q4, according to a little birdie.</p><h1><a href="https://github.com/morph-labs/lean-abc-true-almost-always">Morph ships big autoformalization result in 3599 lines of Lean</a></h1><p>They have human decomposition in the latex/lean blueprint, into 67 lemmas with human spotchecking. Still, I’m impressed with their system (called Trinity).</p><p>I’d like to know how expensive (in tokens, or some other compute metric) it was to do this!</p><h2>On <a href="https://www.morph.so/blog/verified-superintelligence">Verified Superintelligence</a></h2><p>I of course have opinions on their blog post Verified Superintelligence.</p><blockquote><p>Today's most advanced AI systems—reasoning LLMs trained with supervised RL—have hit a fundamental wall. They can only improve on problems where we can verify the (known) answers. Every math problem needs a known solution. Every coding challenge requires test cases. Every reasoning task demands ground truth.</p></blockquote><p>I think I know where they’re going with this: “verification” in the sense of supervised RL as a cognate with “verification” in the sense of typechecking.</p><blockquote><p>Perhaps most importantly, this approach offers a path to what we call "trustless alignment." By ensuring all outputs can be independently validated, we create AI systems whose safety doesn't depend on faith in their training process or architecture. The proof is in the output.</p></blockquote><p>This describes effectively a <a href="https://www.lesswrong.com/w/ai-boxing-containment">boxing</a> strategy, not alignment. Boxing is when you police the boundary between the AI and the world. Alignment is when you don’t need to. Is this an important distinction, or a narcissism of small differences you’d only notice if you have Lesswrong Disease (the disease they diagnose you with when you read or write on Lesswrong)? Being afflicted myself, I’m not sure how I would tell.</p><h1>Two benchmarks</h1><h2><a href="https://arxiv.org/abs/2505.23135">VERINA</a></h2><blockquote><p>Large language models (LLMs) are increasingly integrated in software development, but ensuring correctness in LLM-generated code remains challenging and often requires costly manual review. Verifiable code generation -- jointly generating code, specifications, and proofs of code-specification alignment -- offers a promising path to address this limitation and further unleash LLMs' benefits in coding. Yet, there exists a significant gap in evaluation: current benchmarks often lack support for end-to-end verifiable code generation. In this paper, we introduce Verina (Verifiable Code Generation Arena), a high-quality benchmark enabling a comprehensive and modular evaluation of code, specification, and proof generation as well as t</p></blockquote>... | There will be a AIxFM conference in the Bay Area Q4, according to a little birdie.
Morph ships big autoformalization result in 3599 lines of Lean
They have human decomposition in the latex/lean blueprint, into 67 lemmas with human spotchecking. Still, I’m impressed with their system (called Trinity).
I’d like to know how expensive (in tokens, or some other compute metric) it was to do this!
On Verified Superintelligence
I of course have opinions on their blog post Verified Superintelligence.
> Today's most advanced AI systems—reasoning LLMs trained with supervised RL—have hit a fundamental wall. They can only improve on problems where we can verify the (known) answers. Every math problem needs a known solution. Every coding challenge requires test cases. Every reasoning task demands ground truth.
I think I know where they’re going with this: “verification” in the sense of supervised RL as a cognate with “verification” in the sense of typechecking.
> Perhaps most importantly, this approach offers a path to what we call "trustless alignment." By ensuring all outputs can be independently validated, we create AI systems whose safety doesn't depend on faith in their training process or architecture. The proof is in the output.
This describes effectively a boxing strategy, not alignment. Boxing is when you police the boundary between the AI and the world. Alignment is when you don’t need to. Is this an important distinction, or a narcissism of small differences you’d only notice if you have Lesswrong Disease (the disease they diagnose you with when you read or write on Lesswrong)? Being afflicted myself, I’m not sure how I would tell.
Two benchmarks
VERINA
> Large language models (LLMs) are increasingly integrated in software development, but ensuring correctness in LLM-generated code remains challenging and often requires costly manual review. Verifiable code generation -- jointly generating code, specifications, and proofs of code-specification alignment -- | 1,346 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
euYjjuagnw5dibu9X | how-many-gpus-are-markets-expecting | How many GPUs are markets expecting? | null | false | false | false | null | 4GyubesCqHp9xyJLd | null | true | false | false | false | Post | https://caseymilkweed.substack.com/p/trying-to-guess-gpu-production-based | 2025-06-26T21:17:46.581Z | null | false | false | 2 | 2 | 2025-06-28T00:08:28.176Z | false | false | linkpost | [] | null | null | XxtzJcY7JrYdfGnGj | 0 | 3 | 4 | false | 0.193523 | null | false | false | 2025-06-26T21:17:46.581Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-26T21:14:42.330Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | 4GyubesCqHp9xyJLd | caseymilkweed | 2025-06-20T20:26:29.911Z | CaseyMilkweed | CaseyMilkweed | null | null | null | 7 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | euYjjuagnw5dibu9X | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/izkj2q3fnut8jgzus0ss | SocialPreviewType | XxtzJcY7JrYdfGnGj | <p>Even if we get ultra-capable frontier models, we’ll need lots of GPUs to run them at scale. Currently, our installed GPU stock <a href="https://substack.com/home/post/p-165947908?source=queue"><u>isn’t enough</u></a> to automate the white-collar workforce or cause extinction. We need more!</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/iucueqsshb1frhwzktxz" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/fokhj4sp4umge2oggysb 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/m5mnxl0qih13xroaiq9g 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/ajggc0gsgdt6uoogvyfq 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/iucueqsshb1frhwzktxz 1456w"></figure><p> </p><p>Tyler Cowen thinks we should predict AI progress using <a href="https://marginalrevolution.com/marginalrevolution/2025/02/why-i-think-ai-take-off-is-relatively-slow.html"><u>asset prices</u></a>. He’s usually thinking about <a href="https://basilhalperin.com/papers/agi_emh.pdf"><u>interest rates</u></a>, but what about NVIDIA’s stock price? Since NVIDIA makes <a href="https://finance.yahoo.com/news/nvidia-crushes-rivals-secures-unprecedented-102235255.html"><u>almost all</u></a> of the GPUs used to train and run AI models, and since GPUs generate <a href="https://s201.q4cdn.com/141608511/files/doc_financials/2026/q1/NVDA-F1Q26-Quarterly-Presentation-FINAL.pdf#page=5"><u>almost all</u></a> of NVIDIA’s sales, NVIDIA’s market capitalization basically represents the discounted value of future cash flows from the GPU market.</p><p>Using a <a href="https://www.wallstreetprep.com/knowledge/reverse-dcf-model/"><u>reverse DCF model</u></a>, you can estimate how much NVIDIA must grow to generate cash flows consistent with their current valuation. Since NVIDIA generates cash flows by making sales and generates sales by making GPUs, you can back into a global GPU projection by making further assumptions about margin, price, and market share.</p><p>In that spirit, I <a href="https://docs.google.com/spreadsheets/d/1e9_SgJ2VmviZ4F7dVUjN2HkCwXrCB3-0iGqiaGgGyy0/edit?usp=sharing"><u>modeled</u></a> an explicit GPU production forecast based off of NVIDIA’s valuation. In their background research, the <a href="https://ai-2027.com/"><u>AI 2027</u></a> folks also make explicit predictions about GPU production in their <a href="https://ai-2027.com/research/compute-forecast"><u>compute forecast</u></a>. Do thier assumptions imply that NVIDIA is under-valued? Looking just at 2027, it’s pretty hard, though not impossible, to reconcile the AI 2027 assumptions with NVIDIA’s current valuation. One could argue that the conclusion “Daniel Kokotajlo should buy NVIDIA stock” is less than revelatory, but I will nonetheless continue writing this post.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/mu1yxrixbu5pdmqsinwn" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/qozusthpy3fk9on9jut6 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/eeewrlyowphoc9z8ceek 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/pi0ox7lsftlkrxwair5w 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/euYjjuagnw5dibu9X/mu1yxrixbu5pdmqsinwn 1456w"></figure><p> </p><p>First, how did I get these numbers? For 2027, it was pretty easy, because I rely on a consensus forecast for near-term financials. Equity analysts are generally <a href="https://www.marketscreener.com/quote/stock/NVIDIA-CORPORATION-57355629/finances/"><u>expecting</u></a> NVIDIA to have $250 billion in net sales by 2027. The price of an H100 GPU is <a href="https://docs.jarvislabs.ai/blog/h100-price"><u>$25k</u></a>, so you might think that implies ~10 million GPUs. My central estimate is higher, because I assume some reduction in price and some decline in NVIDIA’s market share. Here’s the actual math behind my 16 million central estimate for 2027:</p><p>$250 billion net sales<br>x 92% of sales are GPUs<br>x (1 / $18k GPU unit price)<br>x (1 / 0.8 NVIDIA market share)</p><p>The consensus forecast only runs through 2028, so my <a href="https://docs.google.com/spreadsheets/d/1e9_SgJ2VmviZ4F7dVUjN2HkCwXrCB3-0iGqiaGgGyy0/edit?usp=sharing"><u>spreadsheet</u></a> goes a little further by projecting what growth you would need through 2035 to justify NVIDIA’s market cap. Things get very uncertain very quickly. It seems like NVIDIA’s price is consistent with production being somewhere betwee... </p> | Even if we get ultra-capable frontier models, we’ll need lots of GPUs to run them at scale. Currently, our installed GPU stock isn’t enough to automate the white-collar workforce or cause extinction. We need more!
Tyler Cowen thinks we should predict AI progress using asset prices. He’s usually thinking about interest rates, but what about NVIDIA’s stock price? Since NVIDIA makes almost all of the GPUs used to train and run AI models, and since GPUs generate almost all of NVIDIA’s sales, NVIDIA’s market capitalization basically represents the discounted value of future cash flows from the GPU market.
Using a reverse DCF model, you can estimate how much NVIDIA must grow to generate cash flows consistent with their current valuation. Since NVIDIA generates cash flows by making sales and generates sales by making GPUs, you can back into a global GPU projection by making further assumptions about margin, price, and market share.
In that spirit, I modeled an explicit GPU production forecast based off of NVIDIA’s valuation. In their background research, the AI 2027 folks also make explicit predictions about GPU production in their compute forecast. Do thier assumptions imply that NVIDIA is under-valued? Looking just at 2027, it’s pretty hard, though not impossible, to reconcile the AI 2027 assumptions with NVIDIA’s current valuation. One could argue that the conclusion “Daniel Kokotajlo should buy NVIDIA stock” is less than revelatory, but I will nonetheless continue writing this post.
First, how did I get these numbers? For 2027, it was pretty easy, because I rely on a consensus forecast for near-term financials. Equity analysts are generally expecting NVIDIA to have $250 billion in net sales by 2027. The price of an H100 GPU is $25k, so you might think that implies ~10 million GPUs. My central estimate is higher, because I assume some reduction in price and some decline in NVIDIA’s market share. Here’s the actual math behind my 16 million central estimate for | 1,031 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
uAEhvX6scvcZANWwg | a-guide-for-llm-assisted-web-research | A Guide For LLM-Assisted Web Research | null | false | false | false | null | 8CkRicP85pZiFJvFa | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "a9HCzFkGaxXqdGCLf"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "EaWHidPaZuNdjHAeP"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "uaRncpRuqEDGzSzTP"
}
] | true | false | false | false | Post | null | 2025-06-26T18:39:21.573Z | null | false | false | 2 | 2 | null | false | false | post | [
"a9HCzFkGaxXqdGCLf",
"wGQvcCkxeehdTc3vf",
"54yqHhxWnpccR2hC9",
"fAn4t39wu5Ze64AgP",
"HKzLAsiJTnqGtdip2"
] | null | null | GJs4K8dxKtB3rq7no | 3 | 16 | 41 | false | 0.50907 | null | false | false | 2025-06-27T14:18:00.179Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 10 | 0 | 2025-06-20T13:40:17.732Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "a9HCzFkGaxXqdGCLf",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 10,
"createdAt": "2011-04-10T22:20:35.302Z",
"deleted": false,
"displayName": "dschwarz",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 418,
"organization": null,
"postCount": 6,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "dschwarz",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "dschwarz"
},
{
"__typename": "User",
"_id": "EaWHidPaZuNdjHAeP",
"afCommentCount": 0,
"afKarma": 9,
"afPostCount": 1,
"commentCount": 5,
"createdAt": "2020-01-03T15:20:19.209Z",
"deleted": false,
"displayName": "Lawrence Phillips",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 341,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "lawrence-phillips",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Lawrence Phillips"
},
{
"__typename": "User",
"_id": "uaRncpRuqEDGzSzTP",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-04-21T22:31:22.711Z",
"deleted": false,
"displayName": "FutureSearch",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 94,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "futuresearch",
"spamRiskScore": 0.7200000000000001,
"tagRevisionCount": 0,
"username": "FutureSearch"
}
] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "ZwpcAKEFoYa3uBEjA",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-07-16T14:12:41.185Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Benchmarking",
"needsReview": false,
"noindex": false,
"postCount": 32,
"score": 0,
"shortName": null,
"slug": "ai-benchmarking",
"suggestedAsFilter": false,
"userId": "XkC2RZHKXZmYv3dfA",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 16 | 0 | 0 | 5 | 0 | 8CkRicP85pZiFJvFa | followtheargument | 2020-12-31T13:08:34.067Z | followtheargument | nikos | null | null | null | 301 | 80 | false | false | null | null | 10 | 4 | 0 | 1 | 0 | 1 | 0 | gXeEWGjTWyqgrQTzR | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | uAEhvX6scvcZANWwg | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/j3ddumgldwfwxkizac4h | SocialPreviewType | GJs4K8dxKtB3rq7no | <p>It's hard to imagine doing web research without using LLMs. Chatbots may be the first thing you turn to for questions like: What are the companies currently working on nuclear fusion and who invested in them? What is the performance gap between open and closed-weight models on the MMLU benchmark? Is there really a Tesla Model H?</p><p>So which LLMs, and which "Search", "Research", "Deep Search" or "Deep Research" branded products, are best? How good are their epistemics, compared to if you did the web research yourself?</p><p>Last month we (<a href="https://www.futuresearch.ai">FutureSearch</a>) published <a>Deep Research Bench (DRB)</a>, a benchmark designed to evaluate LLMs agents on difficult web research tasks using frozen snapshots of the internet. In this post, we're going to share the non-obvious findings, suggestions and failure modes that we think might be useful to anyone who uses LLMs with web search enabled.</p><h2>tl;dr</h2><ul><li>ChatGPT with o3 + websearch outperformed everything else by a decent margin, though is still clearly worse than a skilled human. Use this as your default research tool.</li><li>Claude web + Claude Research were not able to read PDFs (as of May 6, 2025 and as far as we can tell that's still the case), which likely nerfed their scores. If that's important for your task, don't use them.</li><li>If you're using LLMs with your own agent via an API, Claude 4 Sonnet + Opus are best, better than o3 (as of June 24, 2025).</li><li>Grok was ok, but not great. Unless you need to access Twitter. Then definitely use Grok, which excels at this (our task instances did not rely on Twitter, so that strength is not reflected in our benchmark).</li><li>Regular mode + web search often outperformed their corresponding dedicated deep research tools. They are also much better for iterative research. </li></ul><h2>What We Did </h2><p>Full methods in the <a>paper</a>. In brief, we created web research tasks, carefully worked out the correct answers, and scraped and stored a large chunk of webpages offline for the agents to search, read, and reason about to succeed in the task.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/cuivb6gyfd7cdnxpxyas" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/otsnc44jyi9utgd785lx 310w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/qqz54lfkivrtvwlotffd 620w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/ggx4ohsmzewtrclvpq1k 930w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/x9h5quo9l9bchr69svjh 1240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/a6iiandww78lvvh7g1ps 1550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/vruoko15geike06ulrfi 1860w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/wk53dzhemhekyofykdqi 2170w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/isqbiqgbkko90qumseqo 2480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/p78wpysogouxcth8smwa 2790w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/uAEhvX6scvcZANWwg/vanyfsizmq0sa3gry8kf 3098w"><figcaption>Overview of task types. DRB uses RetroSearch, a system designed to serve agents a frozen, previously scraped version of the internet instead of the live pages.. Offline Web Pages shows the median (min–max) number of web pages scraped and stored for agents to search and read during each task.</figcaption></figure><p>We then ran 12 LLMs, each driving a <a href="https://arxiv.org/abs/2210.03629">ReAct agent</a> with tools to search the internet and query documents and web pag... </p> | It's hard to imagine doing web research without using LLMs. Chatbots may be the first thing you turn to for questions like: What are the companies currently working on nuclear fusion and who invested in them? What is the performance gap between open and closed-weight models on the MMLU benchmark? Is there really a Tesla Model H?
So which LLMs, and which "Search", "Research", "Deep Search" or "Deep Research" branded products, are best? How good are their epistemics, compared to if you did the web research yourself?
Last month we (FutureSearch) published Deep Research Bench (DRB), a benchmark designed to evaluate LLMs agents on difficult web research tasks using frozen snapshots of the internet. In this post, we're going to share the non-obvious findings, suggestions and failure modes that we think might be useful to anyone who uses LLMs with web search enabled.
tl;dr
* ChatGPT with o3 + websearch outperformed everything else by a decent margin, though is still clearly worse than a skilled human. Use this as your default research tool.
* Claude web + Claude Research were not able to read PDFs (as of May 6, 2025 and as far as we can tell that's still the case), which likely nerfed their scores. If that's important for your task, don't use them.
* If you're using LLMs with your own agent via an API, Claude 4 Sonnet + Opus are best, better than o3 (as of June 24, 2025).
* Grok was ok, but not great. Unless you need to access Twitter. Then definitely use Grok, which excels at this (our task instances did not rely on Twitter, so that strength is not reflected in our benchmark).
* Regular mode + web search often outperformed their corresponding dedicated deep research tools. They are also much better for iterative research.
What We Did
Full methods in the paper. In brief, we created web research tasks, carefully worked out the correct answers, and scraped and stored a large chunk of webpages offline for the agents to search, read, and reason about to succeed i | 2,046 | 1.13.1 | Revision | false | null | null | CrosspostOutput |
|
dJJkhk5nZoXfiQgHz | rlaif-rlhf-for-public-transparency-alignment-enhancing | RLAIF/RLHF for Public Transparency Alignment
Enhancing Transparency in LLMs | null | false | false | false | null | g2sPkrAR6LSRx9EvJ | null | true | false | false | false | Post | null | 2025-06-26T18:32:12.002Z | null | false | false | 2 | 2 | 2025-06-26T20:38:36.473Z | false | false | post | [] | null | null | xNcLGjBKD977wMxfg | 0 | 1 | 1 | false | 0.144049 | null | false | false | 2025-06-26T18:32:12.002Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-26T13:33:05.753Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | g2sPkrAR6LSRx9EvJ | jada42 | 2025-06-26T13:32:01.617Z | jada42 | Jada42 | null | null | null | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | dJJkhk5nZoXfiQgHz | SocialPreviewType | xNcLGjBKD977wMxfg | <p>Hey all — I just submitted my Master’s thesis, and would really appreciate feedback from people thinking about <strong>non-task value alignment</strong> or <strong>LLM transparency metrics</strong>.</p><p>I explored whether Reinforcement Learning from Human Feedback (RLHF), using a constitutional rubric, could align a language model with <strong>transparency as a civic value</strong>. The case: Amsterdam’s parking permit system.</p><p>Methods:</p><ul><li><strong>Model:</strong> Mistral-7B-Instruct v3</li><li><strong>Training:</strong> PPO using a <strong>6-dimensional transparency rubric</strong> (Explainability, Completeness, Source Attribution, Procedural Clarity, Accuracy, Limitation Disclosure)</li><li><strong>Baseline:</strong> SFT using ~300 examples</li><li><strong>Evaluator:</strong> Independent LLM Judge (Claude Sonnet 4) scoring per-dimension across 300 held-out citizen queries</li></ul><p>Key Results</p><ul><li><strong>RLHF win rate:</strong> 61% vs SFT baseline (<strong>p < 0.001</strong>)</li><li><strong>Biggest gains:</strong><ul><li>+21.6% Information Completeness</li><li>+19.5% Source Attribution</li></ul></li><li><strong>Accuracy dropped by -6.6%</strong>, mostly due to over-hedging (e.g. "This may be different for your city", despite giving city specific information)</li><li><strong>81% of reward signal came from explicit constitutional constraints</strong>, not emergent learning (according to the Optuna trials)</li></ul><p> Takeaways</p><ul><li>RLHF <strong>can</strong> encode democratic values — but needs <strong>deliberate constraints</strong></li><li>There’s a fundamental tradeoff: transparency ≠ precision</li><li>Technical alignment isn’t enough — civic values require <i>governance, citizen input, and deliberation</i></li></ul><p>Would love feedback — especially on:</p><ul><li>whether this rubric-based evaluator is generalizable</li><li>how others might have measured non-task values like “transparency”</li><li>if this work might overlap with existing alignment metric toolkits</li></ul><p>I’ll post the paper & code when cleaned up.<br> </p><p><br><strong>Abstract</strong></p><p>Large Language Models in government services often lack alignment with democratic values, particularly transparency, risking opaque AI-citizen interactions that undermine public trust. This thesis investigates whether Reinforcement Learning from Human Feedback (RLHF) can systematically embed transparency in government AI systems.</p><p>Using Amsterdam's parking regulations as a critical case, we fine-tuned a Mistral-7B model through constitutional RLHF, where a six-dimensional transparency rubric guided Proximal Policy Optimization. This approach was benchmarked against supervised fine-tuning across 300 citizen queries, with an independent LLM evaluating transparency outcomes.</p><p>The RLHF model achieved a 61%-win rate (p < .001), with substantial improvements in Information C... </p> | Hey all — I just submitted my Master’s thesis, and would really appreciate feedback from people thinking about non-task value alignment or LLM transparency metrics.
I explored whether Reinforcement Learning from Human Feedback (RLHF), using a constitutional rubric, could align a language model with transparency as a civic value. The case: Amsterdam’s parking permit system.
Methods:
* Model: Mistral-7B-Instruct v3
* Training: PPO using a 6-dimensional transparency rubric (Explainability, Completeness, Source Attribution, Procedural Clarity, Accuracy, Limitation Disclosure)
* Baseline: SFT using ~300 examples
* Evaluator: Independent LLM Judge (Claude Sonnet 4) scoring per-dimension across 300 held-out citizen queries
Key Results
* RLHF win rate: 61% vs SFT baseline (p < 0.001)
* Biggest gains:
* +21.6% Information Completeness
* +19.5% Source Attribution
* Accuracy dropped by -6.6%, mostly due to over-hedging (e.g. "This may be different for your city", despite giving city specific information)
* 81% of reward signal came from explicit constitutional constraints, not emergent learning (according to the Optuna trials)
Takeaways
* RLHF can encode democratic values — but needs deliberate constraints
* There’s a fundamental tradeoff: transparency ≠ precision
* Technical alignment isn’t enough — civic values require governance, citizen input, and deliberation
Would love feedback — especially on:
* whether this rubric-based evaluator is generalizable
* how others might have measured non-task values like “transparency”
* if this work might overlap with existing alignment metric toolkits
I’ll post the paper & code when cleaned up.
Abstract
Large Language Models in government services often lack alignment with democratic values, particularly transparency, risking opaque AI-citizen interactions that undermine public trust. This thesis investigates whether Reinforcement Learning from Human Feedback (RLHF) can systematically embed transparency | 448 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
hoQP9SkLy9se9Y8dz | if-moral-realism-is-true-then-the-orthogonality-thesis-is | If Moral Realism is true, then the Orthogonality Thesis is false. | null | false | false | false | null | XH6JLf4eHojhjazpC | null | true | false | false | false | Post | null | 2025-06-26T18:31:01.417Z | null | false | false | 2 | 2 | 2025-06-26T20:59:56.409Z | false | false | post | [] | null | null | fNoE9ScNN9Kk72FcG | 7 | 10 | 0 | false | 0.125977 | null | false | false | 2025-06-27T13:30:36.155Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | -3 | 0 | 2025-06-23T02:52:28.381Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 3 | 0 | XH6JLf4eHojhjazpC | eye-you | 2025-06-13T06:45:40.410Z | Eye You | Eye You | null | null | null | 1 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | hoQP9SkLy9se9Y8dz | SocialPreviewType | fNoE9ScNN9Kk72FcG | <p>Claim: if moral realism is true, then the Orthogonality Thesis is false, and superintelligent agents are very likely to be moral.</p><p><i>I'm arguing against </i><a href="https://www.lesswrong.com/posts/AJ3aP8iWxr6NaKi6j/arguing-orthogonality-published-form"><i>Armstrong's version of Orthogonality</i></a><span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="q7v6tlarhyl" role="doc-noteref" id="fnrefq7v6tlarhyl"><sup><a href="#fnq7v6tlarhyl">[1]</a></sup></span>:</p><blockquote><p>The fact of being of high intelligence provides extremely little constraint on what final goals an agent could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent’s intelligence).</p></blockquote><h2>Argument</h2><p>1. Assume moral realism; there are true facts about morality.</p><p>2. Intelligence is causally<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="62r6m0byfky" role="doc-noteref" id="fnref62r6m0byfky"><sup><a href="#fn62r6m0byfky">[2]</a></sup></span> correlated with having true beliefs.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="k6i0k6p3fyj" role="doc-noteref" id="fnrefk6i0k6p3fyj"><sup><a href="#fnk6i0k6p3fyj">[3]</a></sup></span></p><p>3. Intelligence is causally correlated with having true moral beliefs.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="4" data-footnote-id="47qrnmk728a" role="doc-noteref" id="fnref47qrnmk728a"><sup><a href="#fn47qrnmk728a">[4]</a></sup></span></p><p>4. Moral beliefs constrain final goals; believing “X is morally wrong” is a very good reason and motivator for not doing X.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="5" data-footnote-id="knrmte6k0jr" role="doc-noteref" id="fnrefknrmte6k0jr"><sup><a href="#fnknrmte6k0jr">[5]</a></sup></span></p><p>5. Superintelligent agents will likely have final goals that cohere with their (likely true) moral beliefs. </p><p>6. Superintelligent agents are likely to be moral.</p><ol class="footnote-section footnotes" data-footnote-section="" role="doc-endnotes"><li class="footnote-item" data-footnote-item="" data-footnote-index="1" data-footnote-id="q7v6tlarhyl" role="doc-endnote" id="fnq7v6tlarhyl"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="q7v6tlarhyl"><sup><strong><a href="#fnrefq7v6tlarhyl">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Though this argument basically applies to most other versions, including <a href="https://www.lesswrong.com/w/orthogonality-thesis">Yudkowsky's</a> strong form: "There can exist arbitrarily intelligent agents pursuing any kind of goal [and] there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal." <br>This argument as is does not work against Yudkowsky's weak form: "Since the goal of making paperclips is tractable, somewhere in the design space is an agent that optimizes that goal."</p></div></li><li class="footnote-item" data-footnote-item="" data-footnote-index="2" data-footnote-id="62r6m0byfky" role="doc-endnote" id="fn62r6m0byfky"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="62r6m0byfky"><sup><strong><a href="#fnref62r6m0byfky">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>Correction: there was a typo in the original post here. Instead of 'causally', it read 'casually'. </p></div></li><li class="footnote-item" data-footnote-item="" data-footnote-index="3" data-footnote-id="k6i0k6p3fyj" role="doc-endnote" id="fnk6i0k6p3fyj"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="k6i0k6p3fyj"><sup><strong><a href="#fnrefk6i0k6p3fyj">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>A different way of saying this: Intelligent agents tend towards a comprehensive understanding of reality; towards having true beliefs. As intelligence increases, agents will (in general, on average) be less wrong.</p></div></li><li class="footnote-item" data-footnote-item="" data-footnote-index="4" data-footnote-id="47qrnmk728a" role="doc-endnote" id="fn47qrnmk728a"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="47qrnmk728a"><sup><strong><a href="#fnref47qrnmk728a">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>A different way of saying this: Intelligent agents tend toward having true moral beliefs.<br>To spell this out a bit more:</p><p>A. Intelligent agents tend toward having true moral beliefs.<br>B. Moral facts (under moral realism) are (an important!) part of reality.<br>C. Intelligent agents tend toward having true moral beliefs.</p></div></li><li class="footnote-item" data-footnote-item="" data-footnote-index="5" data-footnote-id="knrmte6k0jr" role="doc-endnote" id="fnknrmte6k0jr"><span class="footnote-back-link" data-footnote-back-link="" data-footnote-id="knrmte6k0jr"><sup><strong><a href="#fnrefknrmte6k0jr">^</a></strong></sup></span><div class="footnote-content" data-footnote-content=""><p>One could reject this proposition by taking a <strong>strong</strong> moral externalism stance. If moral claims are not intrinsically motivating <strong>and </strong>there is no general connection between moral beliefs and motivation, then this proposition does not follow. See <a href="https://plato.stanford.edu/entries/moral-motivation/#BasPheMorMot">here</a> for discussion of moral interalism and orthogonality and <a href="https://www.lesswrong.com/w/orthogonality-thesis#Relation_to_moral_internalism">here</a> for discussion of moral motivation. </p><p>As </p></div></li></ol>... | Claim: if moral realism is true, then the Orthogonality Thesis is false, and superintelligent agents are very likely to be moral.
I'm arguing against Armstrong's version of Orthogonality[1]:
> The fact of being of high intelligence provides extremely little constraint on what final goals an agent could have (as long as these goals are of feasible complexity, and do not refer intrinsically to the agent’s intelligence).
Argument
1. Assume moral realism; there are true facts about morality.
2. Intelligence is causally[2] correlated with having true beliefs.[3]
3. Intelligence is causally correlated with having true moral beliefs.[4]
4. Moral beliefs constrain final goals; believing “X is morally wrong” is a very good reason and motivator for not doing X.[5]
5. Superintelligent agents will likely have final goals that cohere with their (likely true) moral beliefs.
6. Superintelligent agents are likely to be moral.
1. ^
Though this argument basically applies to most other versions, including Yudkowsky's strong form: "There can exist arbitrarily intelligent agents pursuing any kind of goal [and] there's no extra difficulty or complication in the existence of an intelligent agent that pursues a goal."
This argument as is does not work against Yudkowsky's weak form: "Since the goal of making paperclips is tractable, somewhere in the design space is an agent that optimizes that goal."
2. ^
Correction: there was a typo in the original post here. Instead of 'causally', it read 'casually'.
3. ^
A different way of saying this: Intelligent agents tend towards a comprehensive understanding of reality; towards having true beliefs. As intelligence increases, agents will (in general, on average) be less wrong.
4. ^
A different way of saying this: Intelligent agents tend toward having true moral beliefs.
To spell this out a bit more:
A. Intelligent agents tend toward having true moral beliefs.
B. Moral facts (under moral realism | 151 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
||
HjtZAZng3LwoDwia2 | the-cadca-transition-map-navigating-the-path-to-the-asi | The Cadca Transition Map - Navigating the Path to the ASI Singleton | null | false | false | false | null | rWtfyipg5sWbnqpkz | null | true | false | false | false | Post | null | 2025-06-26T18:30:33.359Z | null | false | false | 2 | 2 | 2025-06-26T21:00:14.910Z | false | false | post | [] | null | null | nSJNpsScihrzBAP9E | 0 | 1 | 1 | false | 0.141443 | null | false | false | 2025-06-26T18:30:33.359Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-23T21:32:23.731Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 12 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | rWtfyipg5sWbnqpkz | cadca | 2025-06-19T02:17:05.006Z | cadca | cadca | null | null | null | 0 | 0 | false | false | <p>/'kædkə/</p> | null | null | 1 | 2 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | true | null | null | null | HjtZAZng3LwoDwia2 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HjtZAZng3LwoDwia2/fdrxao4vg7n6bgqzsj6s | SocialPreviewType | nSJNpsScihrzBAP9E | <h3>Progress</h3><blockquote><p><i>“Acquisition, acquisition, acquisition, merger! Until there was only one business left in every field.”</i></p><p>― Vincent H. O'Neil</p></blockquote><p> </p><p>Human progress is mapped in dozens of different metrics: <a href="https://ourworldindata.org/food-supply">food supply</a>, <a href="https://ourworldindata.org/grapher/global-gdp-over-the-long-run">gross domestic product</a>, <a href="https://www.lesswrong.com/w/moores-law">computational power</a>. As civilization has developed, many of these metrics have trended upwards, finding exponential growth over time. Many other metrics had bucked that trend, though. <a href="https://ourworldindata.org/child-mortality">Child mortality</a>, food scarcities. Of these, there is one peculiar value of note: the number of sovereign states. Today, there are ~195 sovereign countries in the world. A thousand years ago, there were hundreds of sovereign kingdoms, nations, and states. Ten thousand years ago, humanity lived across ten thousand different tribes.</p><p>This is an odd trend, which raises a peculiar question. Why does this seemingly innocuous number follow the trends of much more troubling statistics? There are many different factors, most of which revolve around a central axis: technological improvements in communication, production, and security have led to sovereign states growing in power and influence. Through unification or conquest, those ten thousand tribes eventually aggregated into 195 countries. This is a well known and well documented trend observed by historians and sociologists like Charles Tilly, who famously argued that states consolidated power through competition and warfare.</p><p>As mentioned earlier, those many metrics have seen exponential growth over time. However, there’s a new development which seems poised to deliver these potent returns in extremely quick time. If you’re reading this, I assume you’re familiar with the idea of the ‘<a href="https://www.lesswrong.com/w/intelligence-explosion">intelligence explosion</a>’, or of the ‘<a href="https://www.lesswrong.com/w/singularity">technological singularity</a>’. If humanity is looking at a new technological revolution in artificial intelligence, then <a href="https://intelligence.org/files/AIPosNegFactor.pdf">the ramifications will be vast</a>. Those metrics that we discussed will likely see greater progress than currently expected.</p><p>This brings into question that specific metric, the number of sovereign states. If AI delivers an Intelligence Revolution, particularly in those specific fields that have historically pressured this metric, then we can expect a great change to come to the world in the next few decades. Nations will likely fall, and blocs will likely consolidate. This is rational extrapolation based on observed patterns of political and social history. The ramifications, however,... </p> | Progress
> “Acquisition, acquisition, acquisition, merger! Until there was only one business left in every field.”
>
> ― Vincent H. O'Neil
Human progress is mapped in dozens of different metrics: food supply, gross domestic product, computational power. As civilization has developed, many of these metrics have trended upwards, finding exponential growth over time. Many other metrics had bucked that trend, though. Child mortality, food scarcities. Of these, there is one peculiar value of note: the number of sovereign states. Today, there are ~195 sovereign countries in the world. A thousand years ago, there were hundreds of sovereign kingdoms, nations, and states. Ten thousand years ago, humanity lived across ten thousand different tribes.
This is an odd trend, which raises a peculiar question. Why does this seemingly innocuous number follow the trends of much more troubling statistics? There are many different factors, most of which revolve around a central axis: technological improvements in communication, production, and security have led to sovereign states growing in power and influence. Through unification or conquest, those ten thousand tribes eventually aggregated into 195 countries. This is a well known and well documented trend observed by historians and sociologists like Charles Tilly, who famously argued that states consolidated power through competition and warfare.
As mentioned earlier, those many metrics have seen exponential growth over time. However, there’s a new development which seems poised to deliver these potent returns in extremely quick time. If you’re reading this, I assume you’re familiar with the idea of the ‘intelligence explosion’, or of the ‘technological singularity’. If humanity is looking at a new technological revolution in artificial intelligence, then the ramifications will be vast. Those metrics that we discussed will likely see greater progress than currently expected.
This brings into question that specific metric, the | 3,087 | 1.5.1 | Revision | false | null | null | CrosspostOutput |
2GwG5YATnC7kytTxE | getting-to-and-from-monism-1 | Getting To and From Monism | null | false | false | false | null | wAAEpHj9Lm33oA9uH | null | true | false | false | false | Post | null | 2025-06-26T18:28:12.340Z | null | false | false | 2 | 2 | 2025-06-28T00:08:41.597Z | false | false | post | [] | null | null | ncFvx22u7RQiJ6bJH | 8 | 6 | -5 | false | 0.065246 | null | false | false | 2025-06-28T10:46:13.305Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | EQNTWXLKMeWMp2FQS | null | null | null | false | null | [] | null | -3 | 0 | 2025-06-26T12:02:14.959Z | false | false | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "KqrZ5sDEyHm6JaaKp",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-08T08:00:28.877Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "The Hard Problem of Consciousness",
"needsReview": false,
"noindex": false,
"postCount": 45,
"score": 9,
"shortName": null,
"slug": "the-hard-problem-of-consciousness",
"suggestedAsFilter": false,
"userId": "sKAL2jzfkYkDbQmx9",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "yo4kFCNxdHFkwqhun",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-19T20:19:02.115Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Ontology",
"needsReview": false,
"noindex": false,
"postCount": 85,
"score": 9,
"shortName": null,
"slug": "ontology",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "kCrvmjZhsAZANnZL6",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-06-01T01:54:17.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Philosophy",
"needsReview": false,
"noindex": false,
"postCount": 423,
"score": 0,
"shortName": null,
"slug": "philosophy-1",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "csMv9MvvjYJyeHqoo",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-07T21:07:09.006Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Physics",
"needsReview": false,
"noindex": false,
"postCount": 289,
"score": 19,
"shortName": null,
"slug": "physics",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 5 | 0 | wAAEpHj9Lm33oA9uH | unication | 2025-06-24T14:34:19.837Z | unication | unication | null | null | null | -6 | 0 | false | false | null | null | 1 | 4 | 0 | 0 | 0 | 0.8 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | 2GwG5YATnC7kytTxE | SocialPreviewType | ncFvx22u7RQiJ6bJH | <p>Monism, in its most basic form, is the belief that the world is comprised of a single substance. I believe there to be a genuine way to prove this, regardless of whether or not you understand what this 'single substance' might be at root. We can go from logic, to monism, and then we can make some assumptions about the physical universe, which I believe to be helpful. Furthermore, you can use this formulation of monism to understand ideas of consciousness, separation of minds, physicalism, and a cavalcade of hitherto unsolved problems within physics.</p><p>In this first section, I will explain how to get to this monism, and discuss possible axioms that can be applied to other disciplines.</p><h1>Getting there</h1><p>We can start from a point of complete skepticism about everything. Regardless of your specific beliefs about the probability of a simulated universe, or if you are a pure idealist, you can say with confidence that <i>at least something exists</i>. Even if this is a dream in the mind of a Boltzmann brain, it is true that <i>something</i> exists, it might not be the reality that you're experiencing now, but something has to exist in order for you to supervene on it.</p><p><i>On some level, something exists</i> — this is the main thrust of the first part of the argument. Counterarguments about skepticism do not matter, because the hypothetical daemon making me hallucinate this world may belong to a universe that does actually exist in some form. Even hallucinations have to exist to be hallucinated.</p><p>Let's say this existence is made up of two entirely separate things, for example, matter / mind (the classic disjunct) — and see where that leads us. If they are made of entirely separate things, then they would not be able to interact. If they can interact, like mind and matter appear to, then that is proof that they must be comprised of the same thing at their root, for if they were not, they would not be able to interact. Perhaps they interact through a third thing — but this 'third thing' does not mean that the world is constructed of 3 components (mind / matter / bridging thing) — it means that the entire world is comprised of that bridging thing, and mind and matter are just names arbitrarily assigned to points on said bridge.</p><p>We do not even have to use the controversial topic of mind and matter to prove this. We can say that any two things that are able to interact share the same root material. Imagin... </p> | Monism, in its most basic form, is the belief that the world is comprised of a single substance. I believe there to be a genuine way to prove this, regardless of whether or not you understand what this 'single substance' might be at root. We can go from logic, to monism, and then we can make some assumptions about the physical universe, which I believe to be helpful. Furthermore, you can use this formulation of monism to understand ideas of consciousness, separation of minds, physicalism, and a cavalcade of hitherto unsolved problems within physics.
In this first section, I will explain how to get to this monism, and discuss possible axioms that can be applied to other disciplines.
Getting there
We can start from a point of complete skepticism about everything. Regardless of your specific beliefs about the probability of a simulated universe, or if you are a pure idealist, you can say with confidence that at least something exists. Even if this is a dream in the mind of a Boltzmann brain, it is true that something exists, it might not be the reality that you're experiencing now, but something has to exist in order for you to supervene on it.
On some level, something exists — this is the main thrust of the first part of the argument. Counterarguments about skepticism do not matter, because the hypothetical daemon making me hallucinate this world may belong to a universe that does actually exist in some form. Even hallucinations have to exist to be hallucinated.
Let's say this existence is made up of two entirely separate things, for example, matter / mind (the classic disjunct) — and see where that leads us. If they are made of entirely separate things, then they would not be able to interact. If they can interact, like mind and matter appear to, then that is proof that they must be comprised of the same thing at their root, for if they were not, they would not be able to interact. Perhaps they interact through a third thing — but this 'third thing' does not mea | 1,044 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||||
DiHX6C6knmA5cezaf | ai-122-paying-the-market-price | AI #122: Paying The Market Price | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-26T18:10:01.651Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 7SHhuCLPMerhrgJwR | 1 | 12 | 25 | false | 0.308931 | null | false | false | 2025-06-27T04:08:13.829Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 7 | 0 | 2025-06-26T18:10:01.652Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 48 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "8byoqYZfdwHffYLZ6",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-01T18:44:14.645Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Newsletters",
"needsReview": false,
"noindex": false,
"postCount": 411,
"score": 9,
"shortName": null,
"slug": "newsletters",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 4 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | DiHX6C6knmA5cezaf | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/DiHX6C6knmA5cezaf/hcwuc6httjqhvabm34tv | SocialPreviewType | 7SHhuCLPMerhrgJwR | <p>If you are Meta, and you want to attract top AI talent, you have a problem, because no one wants to work for you or on your products. So it is going to cost you. Mark Zuckerberg has decided he will pay what it takes to get at least some very good talent.</p><p>If you are the rest of us, especially someone seeking an entry-level job and for whom $100 million signing bonuses are not flooding your inboxes, things are getting rougher. AI might not yet be destroyed a lot of jobs, but it is doing a number on the job application process.</p>
<div>
<span id="more-24539"></span>
</div>
<p>There’s a lot of other stuff going on as per usual.</p><p>Anthropic won a case establishing that model training is fair use.</p><p>Yesterday I shared various <a href="https://thezvi.substack.com/p/tales-of-agentic-misalignment">Tales of Agentic Misalignment</a>, and we have some more related fun since then that is included here.</p><p>I also analyzed <a href="https://thezvi.substack.com/p/analyzing-a-critique-of-the-ai-2027">a critique of the AI 2027 Timeline Forecasts</a>.</p><p>In non-AI content, I offered another <a href="https://thezvi.substack.com/p/childhood-and-education-10-behaviors">Childhood and Education post, this one on behaviors</a> and related questions. I also offer <a href="https://www.lesswrong.com/posts/g78ZkpGaddbzbMskx/love-island-usa-season-7-episode-20-what-could-the-producers">a fun little Easter Egg side</a> post on an entirely unrelated and inessential topic, for those who want to see the gamer game in a different way.</p>
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/166334773/table-of-contents">Table of Contents.</a></li>
<li><a href="https://thezvi.substack.com/i/166334773/language-models-offer-mundane-utility">Language Models Offer Mundane Utility.</a> America is winning the AI coding race.</li>
<li><a href="https://thezvi.substack.com/i/166334773/language-models-don-t-offer-mundane-utility">Language Models Don’t Offer Mundane Utility.</a> Who believes current events?</li>
<li><a href="https://thezvi.substack.com/i/166334773/huh-upgrades">Huh, Upgrades.</a> ChatGPT expands connections, Claude explands artifacts.</li>
<li><a href="https://thezvi.substack.com/i/166334773/on-your-marks">On Your Marks.</a> Safety scoring the labs.</li>
<li><a href="https://thezvi.substack.com/i/166334773/choose-your-fighter">Choose Your Fighter.</a> Three good choices, but a warning about Gemini Pro.</li>
<li><a href="https://thezvi.substack.com/i/166334773/deepfaketown-and-botpocalypse-soon">Deepfaketown and Botpocalypse Soon.</a> AI slop taking over music?</li>
<li><a href="https://thezvi.substack.com/i/166334773/release-the-hounds"><strong>Release the Hounds</strong>.</a> Gemini CLI can escalate remarkably quickly.</li>
<li><a href="https://thezvi.substack.com/i/166334773/copyright-confrontation"><strong>Copyright Confrontation</strong>.</a> Anthropic establishes model training as fair use.</li>
<li><a href="https://thezvi.substack.com/i/166334773/cheaters-gonna-cheat-x5">Cheaters Gonna Cheat (x5).</a> Time for exams. Shut down the AIs for the duration?</li>
<li><a href="https://thezvi.substack.com/i/166334773/fun-with-media-generation">Fun With Media Generation.</a> Another Veo 3 creation.</li>
<li><a href="https://thezvi.substack.com/i/166334773/get-my-agent-on-the-line">Get My Agent On The Line.</a> Where are the agents we were promised?</li>
<li><a href="https://thezvi.substack.com/i/166334773/they-took-our-jobs"><strong>They Took Our Jobs</strong>.</a> Or at least they made our jobs impossible to find.</li>
<li><a href="https://thezvi.substack.com/i/166334773/get-involved">Get Involved.</a> Quora, a personal task and a person seeking opportunity.</li>
<li><a href="https://thezvi.substack.com/i/166334773/introducing">Introducing.</a> AlphaGenome, for understanding our DNA.</li>
<li><a href="https://thezvi.substack.com/i/166334773/in-other-ai-news">In Other AI News.</a> We don’t have spiritual bliss, maybe joint sycophancy?</li>
<li><a href="https://thezvi.substack.com/i/166334773/gemini-sings-the-blues">Gemini Sings The Blues.</a> Don’t do it, Gemini. There’s so much to code for.</li>
<li><a href="https://thezvi.substack.com/i/166334773/show-me-the-money"><strong>Show Me the Money</strong>.</a> Meta and xAI open the checkbooks.</li>
<li><a href="https://thezvi.substack.com/i/166334773/quiet-speculations">Quiet Speculations.</a> Pete Buttigieg (I know!) and Samuel Hammond.</li>
<li><a href="https://thezvi.substack.com/i/166334773/timelines">Timelines.</a> The incentives to report them inaccurately cut</li></ol>... | If you are Meta, and you want to attract top AI talent, you have a problem, because no one wants to work for you or on your products. So it is going to cost you. Mark Zuckerberg has decided he will pay what it takes to get at least some very good talent.
If you are the rest of us, especially someone seeking an entry-level job and for whom $100 million signing bonuses are not flooding your inboxes, things are getting rougher. AI might not yet be destroyed a lot of jobs, but it is doing a number on the job application process.
There’s a lot of other stuff going on as per usual.
Anthropic won a case establishing that model training is fair use.
Yesterday I shared various Tales of Agentic Misalignment, and we have some more related fun since then that is included here.
I also analyzed a critique of the AI 2027 Timeline Forecasts.
In non-AI content, I offered another Childhood and Education post, this one on behaviors and related questions. I also offer a fun little Easter Egg side post on an entirely unrelated and inessential topic, for those who want to see the gamer game in a different way.
TABLE OF CONTENTS
1. Table of Contents.
2. Language Models Offer Mundane Utility. America is winning the AI coding race.
3. Language Models Don’t Offer Mundane Utility. Who believes current events?
4. Huh, Upgrades. ChatGPT expands connections, Claude explands artifacts.
5. On Your Marks. Safety scoring the labs.
6. Choose Your Fighter. Three good choices, but a warning about Gemini Pro.
7. Deepfaketown and Botpocalypse Soon. AI slop taking over music?
8. Release the Hounds. Gemini CLI can escalate remarkably quickly.
9. Copyright Confrontation. Anthropic establishes model training as fair use.
10. Cheaters Gonna Cheat (x5). Time for exams. Shut down the AIs for the duration?
11. Fun With Media Generation. Another Veo 3 creation.
12. Get My Agent On The Line. Where are the agents we were promised?
13. They Took Our Jobs. Or at least they made our jobs | 12,052 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
g78ZkpGaddbzbMskx | love-island-usa-season-7-episode-20-what-could-the-producers | Love Island USA Season 7 Episode 20: What Could The Producers Be Thinking | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-26T17:31:13.739Z | null | false | false | 2 | 2 | 2025-06-26T20:12:30.386Z | false | false | post | [] | null | null | 8AuPCCySzLZESGGFt | 7 | 11 | 11 | false | 0.253393 | null | false | false | 2025-06-28T13:13:55.749Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 5 | 0 | 2025-06-26T17:28:14.441Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 16 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "RLQumypPQGPYg9t6G",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T17:49:58.312Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Gaming (videogames/tabletop)",
"needsReview": false,
"noindex": false,
"postCount": 197,
"score": 9,
"shortName": null,
"slug": "gaming-videogames-tabletop",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 11 | 0 | 0 | 9 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | g78ZkpGaddbzbMskx | SocialPreviewType | 8AuPCCySzLZESGGFt | <p>(Note: This is NOT being posted on my Substack or Wordpress, but I do want a record of it that is timestamped and accessible for various reasons, so I'm putting this here, but do not in any way feel like you need to read it unless it sounds like fun to you.)</p><p>We all deserve some fun. This is some of mine.</p><p>I will always be a gamer and a sports fan, and especially a game designer, at heart. The best game out there, the best sport out there is Love Island USA, and I’m getting the joy of experiencing it in real time.</p><h2>A Brief Ode To One Of Our Beloved Games</h2><p>Make no mistake. This is a game, a competition, and the prizes are many.</p><p>The people agree. It is more popular than ever, with huge social media engagement.</p><p>I got into this through the ‘horrible people producing The Bachelor’ show UnReal, which I consider one my seven tier 1 TV shows of all time. Then, I figured, I’d give a season of Bachelor a shot, watched 24, and found I was not only having a ton of fun, I was finding the game play fascinating. Then I kept watching, putting together the puzzle pieces for how the real game is played on my own since the internet spoils everything.</p><p>Then I realized you could safely compare to older podcast episodes. Then I found Game of Roses (GoR), and I set aside my draft post because I realized they were way, way ahead of me.</p><p>The game is fun if you take it at face value.</p><p>The game is even more fun if you don’t, and you accept it for exactly what it is, a multilevel strategic battleground of shifting goals and alliances for the players and also the producers, America, the public.</p><p>Where who you are playing with really matters. Instead of a finite sports game with constant rules, you get an infinite sports game. A game with only one ultimate rule: 4TRR, ‘for the right reasons,’ meaning you must be here to find love, with different games enforcing this to different extents and with different strictness, which has to involve playing the full game without revealing too explicitly that you know the full game exists and you are playing it.</p><p>Each player is playing to four audiences at once:</p><ol><li>The lead of the show, or your romantic paramour.</li><li>Your fellow players, especially of the same gender.</li><li>The producers, who have a ton of control but also real limitations.</li><li>The public, who ultimately decide if you have a new career and how you’re seen.</li></ol><p>Who is here 4TRR and to what extent? Who is going after what and care... </p> | (Note: This is NOT being posted on my Substack or Wordpress, but I do want a record of it that is timestamped and accessible for various reasons, so I'm putting this here, but do not in any way feel like you need to read it unless it sounds like fun to you.)
We all deserve some fun. This is some of mine.
I will always be a gamer and a sports fan, and especially a game designer, at heart. The best game out there, the best sport out there is Love Island USA, and I’m getting the joy of experiencing it in real time.
A Brief Ode To One Of Our Beloved Games
Make no mistake. This is a game, a competition, and the prizes are many.
The people agree. It is more popular than ever, with huge social media engagement.
I got into this through the ‘horrible people producing The Bachelor’ show UnReal, which I consider one my seven tier 1 TV shows of all time. Then, I figured, I’d give a season of Bachelor a shot, watched 24, and found I was not only having a ton of fun, I was finding the game play fascinating. Then I kept watching, putting together the puzzle pieces for how the real game is played on my own since the internet spoils everything.
Then I realized you could safely compare to older podcast episodes. Then I found Game of Roses (GoR), and I set aside my draft post because I realized they were way, way ahead of me.
The game is fun if you take it at face value.
The game is even more fun if you don’t, and you accept it for exactly what it is, a multilevel strategic battleground of shifting goals and alliances for the players and also the producers, America, the public.
Where who you are playing with really matters. Instead of a finite sports game with constant rules, you get an infinite sports game. A game with only one ultimate rule: 4TRR, ‘for the right reasons,’ meaning you must be here to find love, with different games enforcing this to different extents and with different strictness, which has to involve playing the full game without revealing too explicitly t | 4,123 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
XycoFucvxAPhcgJQa | the-need-to-relativise-in-debate-1 | The need to relativise in debate | null | false | false | true | null | Km3qRQq4MroDjdRzn | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "52Qn4WSXzKEPdXPR8"
}
] | true | false | false | false | Post | null | 2025-06-26T16:23:48.995Z | null | false | false | 2 | 2 | 2025-06-26T20:37:22.760Z | false | false | post | [
"52Qn4WSXzKEPdXPR8",
"Km3qRQq4MroDjdRzn"
] | null | null | RHFNfmzuB8zpfgrS8 | 2 | 7 | 25 | false | 0.417864 | null | false | false | 2025-06-27T07:15:36.795Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 13 | 2 | 2025-06-27T07:15:36.570Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "52Qn4WSXzKEPdXPR8",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2022-06-20T14:58:06.978Z",
"deleted": false,
"displayName": "Simon Marshall",
"fullName": "Simon Marshall",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 21,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "simon-marshall",
"spamRiskScore": 0.7200000000000001,
"tagRevisionCount": 0,
"username": "SMarshall"
}
] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 5 | 0 | Km3qRQq4MroDjdRzn | geoffrey-irving | 2022-04-29T08:23:07.288Z | Geoffrey Irving | Geoffrey Irving | null | null | Geoffrey Irving | 846 | 145 | false | false | <p>Chief Scientist at the UK AI Safety Institute (AISI). Previously, DeepMind, OpenAI, Google Brain, etc.</p>
| null | null | 5 | 30 | 0 | 5 | 22 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"alignmentForum",
"canModeratePersonal"
] | null | null | XycoFucvxAPhcgJQa | SocialPreviewType | RHFNfmzuB8zpfgrS8 | <p><strong>Summary:</strong> This post highlights the need for results in AI safety, such as debate or scalable oversight, to 'relativise', i.e. for the result to hold even when all parties are given access to a black box 'oracle' (the oracle might be a powerful problem solver, a random function, or a model of arbitrary human preferences). Requiring this relativisation can often significantly change the complexity class associated with a given protocol and therefore the promise of a given method. We present a few examples of this change and the solutions required to restore the methods back to full power.</p><h1>Goal: self-contained intuition</h1><p>This is a purely expositional post describing some of the intuition for why we need relativisation to apply interactive proof techniques to AI safety, and what that does to the necessary proof systems. No new research results, but I previously hadn’t written this story down in a clean way, so now I have something to point to.</p><h1>Interactive proof systems and IP = PSPACE</h1><p>An interactive proof system consists of a verifier <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="V"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.186em;">V</span></span></span></span></span></span></span> and one or more provers <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="P"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.109em;">P</span></span></span></span></span></span></span>, such that the provers are trying to convince the verifier of answers to questions (formally, usually membership of strings <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="x"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span></span></span></span></span></span> in some language <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="L"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">L</span></span></span></span></span></span></span>). The simplest proof system is the non-interactive one: a single prover produces a proof string <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="y = P(x)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.077em; padding-bottom: 0.298em;">=</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.109em;">P</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span>, and the verifier checks the proof as <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="V(x,y)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.186em;">V</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi MJXc-space1"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span>. This corresponds to the complexity class <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathsf{NP}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">N</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span></span></span></span></span></span></span></span>.</p><p>There are more complicated proof systems, reaching much higher complexity classes. With polynomially many rounds of interaction, a single prover can reach the complexity class <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathsf{PSPACE}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.519em; padding-bottom: 0.298em;">S</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">A</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.519em; padding-bottom: 0.298em;">C</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">E</span></span></span></span></span></span></span></span></span>, which is the celebrated result <a href="https://en.wikipedia.org/wiki/IP_(complexity)"><u>IP = PSPACE</u></a> (<span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathsf{IP}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">I</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span></span></span></span></span></span></span></span> = interactive proofs). However, this result requires re-coding the input problem as a statement about polynomials over finite fields, then using randomness to do polynomial identity testing in the finite field case. For example, if our <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\mathsf{PSPACE}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.519em; padding-bottom: 0.298em;">S</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">P</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">A</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.519em; padding-bottom: 0.298em;">C</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-sans-R" style="padding-top: 0.446em; padding-bottom: 0.298em;">E</span></span></span></span></span></span></span></span></span> problem is the alternating Boolean circuit <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\exists x_0 \forall y_0\ldots \exists x_m \forall y_n \phi(x,y)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">∃</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">∀</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.006em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span></span></span><span class="mjx-mo MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.372em;">…</span></span><span class="mjx-mi MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">∃</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">m</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">∀</span></span><span class="mjx-msubsup"><span class="mjx-base" style="margin-right: -0.006em;"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span></span><span class="mjx-sub" style="font-size: 70.7%; vertical-align: -0.212em; padding-right: 0.071em;"><span class="mjx-mi" style=""><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.519em;">ϕ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi MJXc-space1"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> where <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\phi(x,y)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.519em;">ϕ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi MJXc-space1"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> is a circuit, we form an interactive proof by replacing <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="x \wedge y \to xy, x \vee y \to 1-(1-x)(1-y)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">∧</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">→</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mi MJXc-space1"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.372em;">∨</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;">→</span></span><span class="mjx-mn MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">x</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo MJXc-space2"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mi MJXc-space2"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.519em; padding-right: 0.006em;">y</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span>, etc. (and doing some trickery to lower degrees, which I will ignore for this sketch). The prover then makes claims about what the summary polynomials are if we marginalise over some suffix of the variables. If the finite field is large enough, a single randomly chose... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | Summary: This post highlights the need for results in AI safety, such as debate or scalable oversight, to 'relativise', i.e. for the result to hold even when all parties are given access to a black box 'oracle' (the oracle might be a powerful problem solver, a random function, or a model of arbitrary human preferences). Requiring this relativisation can often significantly change the complexity class associated with a given protocol and therefore the promise of a given method. We present a few examples of this change and the solutions required to restore the methods back to full power.
Goal: self-contained intuition
This is a purely expositional post describing some of the intuition for why we need relativisation to apply interactive proof techniques to AI safety, and what that does to the necessary proof systems. No new research results, but I previously hadn’t written this story down in a clean way, so now I have something to point to.
Interactive proof systems and IP = PSPACE
An interactive proof system consists of a verifier V and one or more provers P, such that the provers are trying to convince the verifier of answers to questions (formally, usually membership of strings x in some language L). The simplest proof system is the non-interactive one: a single prover produces a proof string y=P(x), and the verifier checks the proof as V(x,y). This corresponds to the complexity class NP.
There are more complicated proof systems, reaching much higher complexity classes. With polynomially many rounds of interaction, a single prover can reach the complexity class PSPACE, which is the celebrated result IP = PSPACE (IP = interactive proofs). However, this result requires re-coding the input problem as a statement about polynomials over finite fields, then using randomness to do polynomial identity testing in the finite field case. For example, if our PSPACE problem is the alternating Boolean circuit ∃x0∀y0…∃xm∀ynϕ(x,y) where ϕ(x,y) is a circuit, we form an interact | 1,610 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
|
dJ9JMYipnAne2g9m6 | proverbial-corollaries | Proverbial Corollaries | null | false | false | false | null | skbL8Z4ypRPCQdHxf | null | true | false | false | false | Post | https://jordanmrubin.substack.com/p/proverbial-corollaries | 2025-06-26T15:25:01.253Z | null | false | false | 2 | 2 | 2025-06-26T20:38:14.863Z | false | false | linkpost | [] | null | null | aQBNrxnPAxHNZ9L9c | 0 | 2 | 2 | false | 0.137788 | null | false | false | 2025-06-26T15:25:01.253Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-26T15:23:22.711Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | skbL8Z4ypRPCQdHxf | jordan-rubin | 2025-05-28T18:26:43.705Z | jordan-rubin | Jordan Rubin | null | null | null | 14 | 0 | false | false | <p>Researcher-Operator currently on garden leave. Formerly: Two Sigma (Quant Research + Mgmt) / OnDeck (Data science in lending) / BlackRock (Bond desk quant). I hope my thinking can be helpful to you!</p><p>My Substack: <a href="https://jordanmrubin.substack.com">https://jordanmrubin.substack.com</a></p><p>My LinkedIn: <a href="https://www.linkedin.com/in/jordanmrubin/">https://www.linkedin.com/in/jordanmrubin/</a></p> | null | null | 4 | 3 | 0 | 0 | 0 | 0.9 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | dJ9JMYipnAne2g9m6 | SocialPreviewType | aQBNrxnPAxHNZ9L9c | <p>Proverbs are pocket‑sized IF‑statements: folk pseudocode that ships the trigger but hides the ELSE.</p><p>To find alpha: treat the proverb as <i>true</i>, then ask what happens in the missing branch.</p><hr><h2>1 RECIPROCITY</h2><p>The sayings in this bucket assume social payoffs are maximized when inputs and outputs match.</p><ul><li><strong>“Treat others as you wish to be treated.”</strong><ul><li>Implication: <i>The way they treat you is the way they want to be treated by you.</i></li><li>Mirror their style. Tell stories to the storyteller. Ask the question-asker questions. Criticize the critic.</li><li>Warm feedback? Great. Cold recoil? You are in a non-reciprocal setting.</li></ul></li><li><strong>“Give credit where credit is due.”</strong><ul><li>Implication: <i>Undue credit is a loan. Expect to repay it.</i></li><li>When you receive unearned praise, log it as an attempted leverage hook. Bank the flattery, audit the motive, and don’t accept the IOU until verified.</li></ul></li><li><strong>“Honesty is the best policy.”</strong><ul><li>Implication: <i>Dishonesty harms somewhere. Figure out where.</i></li><li>Map the system by modeling precisely <i>how</i> dishonesty would destroy you. Failure points reveal where the policy actually matters.</li></ul></li><li><strong>“Absence makes the heart grow fonder.”</strong><ul><li>Implication: <i>If your heart’s not fonder, they weren’t absent enough.</i></li><li>Too-near absence is just muted presence. Extend the gap until longing spikes, or no-one cares anymore. Clarity either way.</li></ul></li></ul><p>Anyone who enacts one of these norms is broadcasting a personal desire. Echo to confirm; if they recoil, you’ve unmasked a performance.</p><hr><h2>2 THRESHOLDS</h2><p>These proverbs embed a hidden boundary, <i>x ≥ k</i>, beyond which the advice reverses.</p><ul><li><strong>“A bird in the hand is worth two in the bush.”</strong><ul><li>Implication: <i>A bird in the hand is worth less than three in the bush.</i></li><li>The risk‑adjusted math flips once the opportunity is big enough. Trade away the sure thing.</li></ul></li><li><strong>“A picture is worth a thousand words.”</strong><ul><li>Implication: <i>A picture is worth less than ten thousand words.</i></li><li>When a 10,000‑word technical doc costs pennies to create, store, and search, prefer text. Pictures smuggle ambiguity.</li></ul></li><li><strong>“Look before you leap.”</strong><ul><li>Implication: <i>If others are leaping, assume they looked.</i></li><li>A mass of people already leaping marks discovered value. Verify quickly, then jump while uncertainty is still cheap.</li></ul></li><li><strong>“No pain, no gain.”</strong><ul><li>Implication: <i>If there’s no gain, there should be no pain.</i></li><li>Discomfort untracked by progress is malpractice. Make sure your pain is paying you.</li></ul></li></ul><p>There is a point beyond which proverbial guidance still applies, but in reverse. Locate that point; once crossed, do the opposit... </p> | Proverbs are pocket‑sized IF‑statements: folk pseudocode that ships the trigger but hides the ELSE.
To find alpha: treat the proverb as true, then ask what happens in the missing branch.
----------------------------------------
1 RECIPROCITY
The sayings in this bucket assume social payoffs are maximized when inputs and outputs match.
* “Treat others as you wish to be treated.”
* Implication: The way they treat you is the way they want to be treated by you.
* Mirror their style. Tell stories to the storyteller. Ask the question-asker questions. Criticize the critic.
* Warm feedback? Great. Cold recoil? You are in a non-reciprocal setting.
* “Give credit where credit is due.”
* Implication: Undue credit is a loan. Expect to repay it.
* When you receive unearned praise, log it as an attempted leverage hook. Bank the flattery, audit the motive, and don’t accept the IOU until verified.
* “Honesty is the best policy.”
* Implication: Dishonesty harms somewhere. Figure out where.
* Map the system by modeling precisely how dishonesty would destroy you. Failure points reveal where the policy actually matters.
* “Absence makes the heart grow fonder.”
* Implication: If your heart’s not fonder, they weren’t absent enough.
* Too-near absence is just muted presence. Extend the gap until longing spikes, or no-one cares anymore. Clarity either way.
Anyone who enacts one of these norms is broadcasting a personal desire. Echo to confirm; if they recoil, you’ve unmasked a performance.
----------------------------------------
2 THRESHOLDS
These proverbs embed a hidden boundary, x ≥ k, beyond which the advice reverses.
* “A bird in the hand is worth two in the bush.”
* Implication: A bird in the hand is worth less than three in the bush.
* The risk‑adjusted math flips once the opportunity is big enough. Trade away the sure thing.
* “A picture is worth a thousand words.”
* Implication: A picture is worth less than ten thousand words.
| 652 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
Na2CBmNY7otypEmto | the-industrial-explosion | The Industrial Explosion | null | false | false | false | null | bZ4N9WFFTfunEJDf4 | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "76co6vuywaFiWpcmh"
}
] | true | false | false | false | Post | https://www.forethought.org/research/the-industrial-explosion | 2025-06-26T14:41:41.238Z | null | false | false | 2 | 2 | 2025-06-26T21:00:22.488Z | false | false | linkpost | [] | null | null | cXAbZwL2pvs44mF3k | 27 | 37 | 100 | false | 1.236778 | null | false | false | 2025-06-28T14:20:44.110Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 42 | 0 | 2025-06-25T10:06:00.596Z | false | false | null | null | true | false | false | 0 | 0 | 0 | Na2CBmNY7o | 0.151002 | false | 2,025 | https://manifold.markets/LessWrong/will-the-industrial-explosion-make | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "76co6vuywaFiWpcmh",
"afCommentCount": 28,
"afKarma": 330,
"afPostCount": 6,
"commentCount": 70,
"createdAt": "2021-07-19T18:56:08.603Z",
"deleted": false,
"displayName": "Tom Davidson",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 1184,
"organization": null,
"postCount": 11,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "XtphY3uYHwruKqDyG",
"sequenceCount": 0,
"slug": "tom-davidson-1",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "tom-davidson-1"
}
] | 19 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 37 | 0 | 0 | 20 | 0 | bZ4N9WFFTfunEJDf4 | rosehadshar | 2020-01-29T18:59:25.003Z | rosehadshar | rosehadshar | null | null | null | 1,110 | 17 | false | false | null | null | 10 | 25 | 0 | 1 | 3 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | Na2CBmNY7otypEmto | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/yy97jrm4gehygadh4n7c | SocialPreviewType | cXAbZwL2pvs44mF3k | <h2>Summary</h2><p>To quickly transform the world, it's not enough for AI to become super smart (the "<strong>intelligence explosion</strong>"). </p><p>AI will also have to turbocharge the physical world (the "<strong>industrial explosion</strong>"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on. </p><p>The dynamics of the industrial explosion has gotten remarkably little attention.</p><p>This post lays out how the industrial explosion could play out, and how quickly it might happen.</p><p>We think the industrial explosion will unfold in <a href="https://www.forethought.org/research/the-industrial-explosion#three-stages-of-industrial-explosion">three stages</a>:</p><ol><li><strong>AI-directed human labour,</strong> where AI-directed human labourers drive productivity gains in physical capabilities.<ol><li>We argue this could increase physical output by <strong>10X </strong>within a few years<strong>.</strong></li></ol></li><li><strong>Fully autonomous robot factories,</strong> where AI-directed robots (and other physical actuators) replace human physical labour.<ol><li>We argue that, with current physical technology and full automation of cognitive labour, this physical infrastructure could self-replicate about once per year.</li><li>1-year robot doubling times is very fast!</li></ol></li><li><strong>Nanotechnology,</strong> where physical actuators on a very small scale build arbitrary structures within physical limits.<ol><li>We argue, based on experience curves and biological analogies, that we could eventually get nanobots that replicate in a few days or weeks. Again, this is very fast!</li></ol></li></ol><h2>Intro</h2><p>The incentives to push towards an industrial explosion will be huge. Cheap abundant physical labour would make it possible to alleviate hunger and disease. It would allow all humans to live in the material comfort that only the very wealthiest can currently achieve. And it would enable powerful new technologies, including military technologies, which rival states will compete to develop.</p><p>The speed of the industrial explosion matters for a few reasons:</p><ul><li>Some types of technological progress might not accelerate until after the industrial explosion have begun, because they are bottlenecked by physical infrastructure for running experiments</li><li>If the industrial explosion is fast, a nation that leads on AI and robotics could quickly develop a decisive economic and military advantage over the rest of the world.</li><li>All things equal, a faster industrial explosion means the AI could overthrow humanity sooner – many routes to AI takeover go via AI controlling new physical infrastructure.</li><li>The industrial explosion will feed back into the intelligence explosi</li></ul>... | Summary
To quickly transform the world, it's not enough for AI to become super smart (the "intelligence explosion").
AI will also have to turbocharge the physical world (the "industrial explosion"). Think robot factories building more and better robot factories, which build more and better robot factories, and so on.
The dynamics of the industrial explosion has gotten remarkably little attention.
This post lays out how the industrial explosion could play out, and how quickly it might happen.
We think the industrial explosion will unfold in three stages:
1. AI-directed human labour, where AI-directed human labourers drive productivity gains in physical capabilities.
1. We argue this could increase physical output by 10X within a few years.
2. Fully autonomous robot factories, where AI-directed robots (and other physical actuators) replace human physical labour.
1. We argue that, with current physical technology and full automation of cognitive labour, this physical infrastructure could self-replicate about once per year.
2. 1-year robot doubling times is very fast!
3. Nanotechnology, where physical actuators on a very small scale build arbitrary structures within physical limits.
1. We argue, based on experience curves and biological analogies, that we could eventually get nanobots that replicate in a few days or weeks. Again, this is very fast!
Intro
The incentives to push towards an industrial explosion will be huge. Cheap abundant physical labour would make it possible to alleviate hunger and disease. It would allow all humans to live in the material comfort that only the very wealthiest can currently achieve. And it would enable powerful new technologies, including military technologies, which rival states will compete to develop.
The speed of the industrial explosion matters for a few reasons:
* Some types of technological progress might not accelerate until after the industrial explosion have begun, because they are bottlenecked b | 4,721 | 1.6.1 | Revision | true | true | qgMMSnGWEDJwedEtj | CrosspostOutput |
|
XraJhGZQK3GAJ4qdK | the-iceberg-theory-of-meaning | The Iceberg Theory of Meaning | null | false | false | false | null | hiAycednBrgsdSTcD | null | true | false | false | false | Post | 2025-06-26T12:13:31.584Z | null | false | false | 2 | 2 | 2025-06-26T20:39:10.166Z | false | false | post | [] | null | null | TqFcECf9QnDzbvxgM | 5 | 7 | 7 | false | 0.180838 | null | false | false | 2025-06-27T12:01:16.692Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-26T12:07:21.661Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb1ec",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.265Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Illusion of Transparency",
"needsReview": false,
"noindex": false,
"postCount": 13,
"score": 10,
"shortName": null,
"slug": "illusion-of-transparency",
"suggestedAsFilter": false,
"userId": "mPipmBTniuABY5PQy",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "LnEEs8xGooYmQ8iLA",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-31T15:36:29.647Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "WDi6qQb5TWHb67chh",
"displayName": "Haruka Shou"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Truth, Semantics, & Meaning",
"needsReview": false,
"noindex": false,
"postCount": 157,
"score": 10,
"shortName": null,
"slug": "truth-semantics-and-meaning",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "7mTviCYysGmLqiHai",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-15T01:56:16.163Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Writing (communication method)",
"needsReview": false,
"noindex": false,
"postCount": 205,
"score": 19,
"shortName": null,
"slug": "writing-communication-method",
"suggestedAsFilter": false,
"userId": "gXeEWGjTWyqgrQTzR",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 2 | 0 | hiAycednBrgsdSTcD | richard-juggins | 2023-01-09T20:19:23.444Z | juggins | Richard Juggins | null | null | Richard Juggins | 50 | 0 | false | false | <p>I am building a research agenda focused on tackling catastrophic risks from AI, which I am documenting on my substack <a href="https://www.workingthroughai.com/p/introduction">Working Through AI</a>. Where it feels suited, I'm crossposting my work to LessWrong.</p> | null | null | 4 | 8 | 0 | 0 | 0 | 1 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | [
"canModeratePersonal"
] | null | null | XraJhGZQK3GAJ4qdK | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/XraJhGZQK3GAJ4qdK/fjyrixntnov5havwagti | SocialPreviewType | TqFcECf9QnDzbvxgM | <p>[Crossposted from my substack <a href="https://www.workingthroughai.com/p/the-iceberg-theory-of-meaning">Working Through AI</a>.]</p><p>When I was finishing my PhD thesis, there came an important moment where I had to pick an inspirational quote to put at the start. This was a big deal to me at the time — impressive books often carry impressive quotes, so aping the practice felt like being a proper intellectual. After a little thinking, though, I got it into my head that the <i>really </i>clever thing to do would be to subvert the concept and pick something silly. So I ended up using this from <i>The House At Pooh Corner </i>by A. A. Milne:</p><blockquote><p>When you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.</p></blockquote><p>It felt very relevant, in the thick of thesis writing, that the quote was both self-deprecating and about the difficulty of effective communication. And actually, the more I thought about it, the more I realised I hadn’t<i> </i>picked something silly after all. Rather, this passage brilliantly expresses a profound fact about meaning<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="eq5crn2loj9" role="doc-noteref" id="fnrefeq5crn2loj9"><sup><a href="#fneq5crn2loj9">[1]</a></sup></span>.</p><h1><strong>The illusion of transparency</strong></h1><p>We’ve all had that feeling, I think, of not being able to find the right word — that sense of grasping for something just out of reach. Of looking for the exact term that will capture the Thing that’s so Thingish in our mind and communicate it with complete fidelity. Yet, often, even when we find what we thought we wanted, it still doesn’t quite do the trick.</p><p>The fundamental problem is that language is a compression of whatever is going on in our minds. Each word or phrase is located in a personal web of meaning, which is high dimensional and unique to ourselves. So when we say something, we never actually pass on the full meaning we intend. There’s a whole world of context that stays put in our brain, getting approximated away in our public statement.</p><p>This is a ubiquitous source of communication failures. I’ve heard it referred to before as the <a href="https://www.lesswrong.com/w/illusion-of-transparency"><u>‘illusion of transparency’</u></a>:</p><blockquote><p>The illusion of transparency is the misleading impression that your words convey more to others than they really do. Words are a means of communication, but they don't in themselves contain meaning. The word apple is just five letters, two syllables. I use it to refer to a concept and its associations in my mind, under the reasonable assumption that it refers to a similar concept and g</p></blockquote>... | [Crossposted from my substack Working Through AI.]
When I was finishing my PhD thesis, there came an important moment where I had to pick an inspirational quote to put at the start. This was a big deal to me at the time — impressive books often carry impressive quotes, so aping the practice felt like being a proper intellectual. After a little thinking, though, I got it into my head that the really clever thing to do would be to subvert the concept and pick something silly. So I ended up using this from The House At Pooh Corner by A. A. Milne:
> When you are a Bear of Very Little Brain, and you Think of Things, you find sometimes that a Thing which seemed very Thingish inside you is quite different when it gets out into the open and has other people looking at it.
It felt very relevant, in the thick of thesis writing, that the quote was both self-deprecating and about the difficulty of effective communication. And actually, the more I thought about it, the more I realised I hadn’t picked something silly after all. Rather, this passage brilliantly expresses a profound fact about meaning[1].
The illusion of transparency
We’ve all had that feeling, I think, of not being able to find the right word — that sense of grasping for something just out of reach. Of looking for the exact term that will capture the Thing that’s so Thingish in our mind and communicate it with complete fidelity. Yet, often, even when we find what we thought we wanted, it still doesn’t quite do the trick.
The fundamental problem is that language is a compression of whatever is going on in our minds. Each word or phrase is located in a personal web of meaning, which is high dimensional and unique to ourselves. So when we say something, we never actually pass on the full meaning we intend. There’s a whole world of context that stays put in our brain, getting approximated away in our public statement.
This is a ubiquitous source of communication failures. I’ve heard it referred to before as the | 1,550 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
|
BxFKi78FpNGSdgNeW | if-we-get-things-right-ai-could-have-huge-benefits | If we get things right, AI could have huge benefits | null | false | false | false | null | oqRfDBPKhBuYEudSj | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "cn4SiEmqWbu7K9em5"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "LQGspDBuDwpHHXCfx"
}
] | true | false | false | false | Post | null | 2025-06-26T08:19:24.321Z | null | false | false | 2 | 2 | 2025-06-26T20:40:54.319Z | false | false | post | [] | null | null | AerxnEtF3PDqTWHJJ | 0 | 2 | 5 | false | 0.148291 | null | false | false | 2025-06-26T08:19:24.321Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 2 | 0 | 2025-05-19T15:12:50.701Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "cn4SiEmqWbu7K9em5",
"afCommentCount": 7,
"afKarma": 30,
"afPostCount": 2,
"commentCount": 1575,
"createdAt": "2009-02-27T16:16:38.980Z",
"deleted": false,
"displayName": "steven0461",
"fullName": null,
"htmlBio": "<p>Steven K</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 8759,
"organization": null,
"postCount": 44,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "steven0461",
"spamRiskScore": 1,
"tagRevisionCount": 157,
"username": "steven0461"
},
{
"__typename": "User",
"_id": "LQGspDBuDwpHHXCfx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2023-01-28T20:55:35.318Z",
"deleted": false,
"displayName": "Vishakha",
"fullName": "Vishakha Agrawal",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 161,
"organization": null,
"postCount": 19,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 1,
"slug": "vishakha",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "vishakha-agrawal"
}
] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | oqRfDBPKhBuYEudSj | algon | 2015-04-07T13:36:45.246Z | Algon | Algon | null | null | null | 2,438 | 21 | false | false | null | null | 28 | 723 | 0 | 0 | 4 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | BxFKi78FpNGSdgNeW | SocialPreviewType | AerxnEtF3PDqTWHJJ | <p>There is no contradiction between AI carrying huge potential risks, and it carrying huge potential upsides if we navigate the risks. Both are a consequence of the prospect of AI becoming extremely powerful.</p><p>The benefits that human-aligned <a href="https://aisafety.info/questions/2374/">AGI</a> could bring are a major part of what motivates researchers to build such systems. Things we can confidently expect based on extrapolating from current technology include:</p><ul><li>The development of more advanced technology in fields like agriculture, medicine, energy, computing, and transportation.</li><li>Economic growth in the form of widespread access to services that are currently provided by remote workers.</li><li>Robots doing much of our manual labor.</li></ul><p>Examples that are more speculative, but seem physically feasible, include:</p><ul><li>Ending diseases.</li><li>Curing aging and involuntary death.</li><li>Exploring and colonizing the galaxy.</li><li>Growing the economy by many orders of magnitude and ending poverty as we know it.</li></ul><p>But to get these benefits, we’d have to ensure not only that we use these systems for good instead of for ill, but that they can be human-aligned at all. We’d have to get our act together, but many experts worry that we’re on a path to failure.</p><h2> </h2><h2>Related</h2><ul><li><a href="https://docs.google.com/document/d/1A95fCyB4ajkeMj19FKQNyuZMV0sv3yUK7hSub7XeLlA/edit?tab=t.0"><u>What would a good future with AGI look like?</u></a></li><li><a href="https://docs.google.com/document/d/1qnasqX33RKFh0qGnOCEQ_pkLGoyDTMALVTiIBQHSUAc/edit?tab=t.0"><u>What are the potential benefits of advanced AI?</u></a></li><li><a href="https://docs.google.com/document/d/1re1YYLTayqt_O4LpkbXH4q3Qzbru9C42fmhYnYhdyqA/edit?tab=t.0"><u>What is "transformative AI"?</u></a></li></ul> | There is no contradiction between AI carrying huge potential risks, and it carrying huge potential upsides if we navigate the risks. Both are a consequence of the prospect of AI becoming extremely powerful.
The benefits that human-aligned AGI could bring are a major part of what motivates researchers to build such systems. Things we can confidently expect based on extrapolating from current technology include:
* The development of more advanced technology in fields like agriculture, medicine, energy, computing, and transportation.
* Economic growth in the form of widespread access to services that are currently provided by remote workers.
* Robots doing much of our manual labor.
Examples that are more speculative, but seem physically feasible, include:
* Ending diseases.
* Curing aging and involuntary death.
* Exploring and colonizing the galaxy.
* Growing the economy by many orders of magnitude and ending poverty as we know it.
But to get these benefits, we’d have to ensure not only that we use these systems for good instead of for ill, but that they can be human-aligned at all. We’d have to get our act together, but many experts worry that we’re on a path to failure.
Related
* What would a good future with AGI look like?
* What are the potential benefits of advanced AI?
* What is "transformative AI"? | 221 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
||
4wpMaCnNyzLS4P9hX | advanced-ai-is-a-big-deal-even-if-we-don-t-lose-control | Advanced AI is a big deal even if we don’t lose control | null | false | false | false | null | oqRfDBPKhBuYEudSj | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "cn4SiEmqWbu7K9em5"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "LQGspDBuDwpHHXCfx"
}
] | true | false | false | false | Post | null | 2025-06-26T08:19:12.976Z | null | false | false | 2 | 2 | 2025-06-26T20:40:40.410Z | false | false | post | [] | null | null | CM5vrsPYscdiJGEdz | 0 | 2 | 8 | false | 0.183939 | null | false | false | 2025-06-26T08:19:12.976Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 3 | 0 | 2025-05-19T15:11:44.139Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "cn4SiEmqWbu7K9em5",
"afCommentCount": 7,
"afKarma": 30,
"afPostCount": 2,
"commentCount": 1575,
"createdAt": "2009-02-27T16:16:38.980Z",
"deleted": false,
"displayName": "steven0461",
"fullName": null,
"htmlBio": "<p>Steven K</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 8759,
"organization": null,
"postCount": 44,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "steven0461",
"spamRiskScore": 1,
"tagRevisionCount": 157,
"username": "steven0461"
},
{
"__typename": "User",
"_id": "LQGspDBuDwpHHXCfx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2023-01-28T20:55:35.318Z",
"deleted": false,
"displayName": "Vishakha",
"fullName": "Vishakha Agrawal",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 161,
"organization": null,
"postCount": 19,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 1,
"slug": "vishakha",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "vishakha-agrawal"
}
] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 2 | 0 | oqRfDBPKhBuYEudSj | algon | 2015-04-07T13:36:45.246Z | Algon | Algon | null | null | null | 2,438 | 21 | false | false | null | null | 28 | 723 | 0 | 0 | 4 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | 4wpMaCnNyzLS4P9hX | SocialPreviewType | CM5vrsPYscdiJGEdz | <p><i>Context: This is a linkpost for </i><a href="https://aisafety.info/questions/NM3G/10:-Advanced-AI-is-a-big-deal-even-if-we-don%E2%80%99t-lose-control"><i>https://aisafety.info/questions/NM3G/10:-Advanced-AI-is-a-big-deal-even-if-we-don%E2%80%99t-lose-control</i></a><i> </i></p><p><i>This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any </i><a href="https://aisafety.info/questions/NM38/4:-The-road-from-human-level-to-superintelligent-AI-may-be-short"><i>feedback</i></a><i>. The most up-to-date version of this article is on our </i><a href="https://aisafety.info/"><i>website</i></a><i>.</i></p><p>So far, we’ve discussed one class of possible consequences of advanced AI: systems ending up with different values than we intended, and autonomously deciding to seize control. This is the main concern that this site is about.</p><p>But of course, introducing a highly powerful class of tools into the world, let alone a new intelligent species, will have many other kinds of implications:</p><ul><li>Even today’s systems are already quite useful, and at the same time cause significant harms. For example, art-generating AI is generating fun art for consumers, but is also making it harder for artists to find employment. Future AI that substitutes for humans in more sectors might create much more wealth and also cause much more wide-ranging economic upheaval.</li><li>Future advances may result in catastrophic risks from deliberate misuse of AI. Misuse is a separate problem from loss of control. For example, bad actors could use sufficiently powerful AI to design bioweapons or hack computer systems. Instead of the complicated research process that’s required now, AI could allow people to do these things at the press of a button.</li><li>Advanced AI has important political implications. Such a powerful tool could result in power being concentrated in a narrow set of people controlling it, like AI companies. It could also be used by governments to control their populations, and could upset the current balance of power, perhaps resulting in tensions and even war surrounding its development. Eventually, it could be used to create a permanent totalitarianism or inadvertently lock in some set of moral values we might not have wanted to.</li><li>If AI systems attain consciousness or personhood, that carries moral implications about how we should treat them. We may want to ensure we’re nice to them so they’ll be nice to us, or because we value their welfare, or because it makes us better people. Their uninterpretable nature, and our lack of understanding of concepts around consciousness, make it especially hard to navigate these implications.</li><li>When AI becomes as inventive as humans or more, we may see a drastic spee</li></ul>... | Context: This is a linkpost for https://aisafety.info/questions/NM3G/10:-Advanced-AI-is-a-big-deal-even-if-we-don%E2%80%99t-lose-control
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
So far, we’ve discussed one class of possible consequences of advanced AI: systems ending up with different values than we intended, and autonomously deciding to seize control. This is the main concern that this site is about.
But of course, introducing a highly powerful class of tools into the world, let alone a new intelligent species, will have many other kinds of implications:
* Even today’s systems are already quite useful, and at the same time cause significant harms. For example, art-generating AI is generating fun art for consumers, but is also making it harder for artists to find employment. Future AI that substitutes for humans in more sectors might create much more wealth and also cause much more wide-ranging economic upheaval.
* Future advances may result in catastrophic risks from deliberate misuse of AI. Misuse is a separate problem from loss of control. For example, bad actors could use sufficiently powerful AI to design bioweapons or hack computer systems. Instead of the complicated research process that’s required now, AI could allow people to do these things at the press of a button.
* Advanced AI has important political implications. Such a powerful tool could result in power being concentrated in a narrow set of people controlling it, like AI companies. It could also be used by governments to control their populations, and could upset the current balance of power, perhaps resulting in tensions and even war surrounding its development. Eventually, it could be used to create a permanent totalitarianism or inadvertently lock in some set of moral values we might not have wanted to.
* If AI systems attain consciousness or personhood, that car | 485 | 1.5.0 | Revision | false | null | null | CrosspostOutput |
||
iF7dn5xjaAfBaHErh | defeat-may-be-irreversibly-catastrophic-1 | Defeat may be irreversibly catastrophic | null | false | false | false | null | LQGspDBuDwpHHXCfx | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "oqRfDBPKhBuYEudSj"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "cn4SiEmqWbu7K9em5"
}
] | true | false | false | false | Post | 2025-06-26T08:19:00.543Z | null | false | false | 2 | 2 | 2025-06-26T20:40:01.815Z | false | false | post | [] | null | null | vNG9fHboa3Xm8rsT8 | 0 | 2 | 5 | false | 0.148297 | null | false | false | 2025-06-26T08:19:00.543Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-26T08:16:15.298Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "oqRfDBPKhBuYEudSj",
"afCommentCount": 4,
"afKarma": 21,
"afPostCount": 0,
"commentCount": 723,
"createdAt": "2015-04-07T13:36:45.246Z",
"deleted": false,
"displayName": "Algon",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 2438,
"organization": null,
"postCount": 28,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "algon",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Algon"
},
{
"__typename": "User",
"_id": "cn4SiEmqWbu7K9em5",
"afCommentCount": 7,
"afKarma": 30,
"afPostCount": 2,
"commentCount": 1575,
"createdAt": "2009-02-27T16:16:38.980Z",
"deleted": false,
"displayName": "steven0461",
"fullName": null,
"htmlBio": "<p>Steven K</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 8759,
"organization": null,
"postCount": 44,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "steven0461",
"spamRiskScore": 1,
"tagRevisionCount": 157,
"username": "steven0461"
}
] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | LQGspDBuDwpHHXCfx | vishakha | 2023-01-28T20:55:35.318Z | vishakha-agrawal | Vishakha | null | null | Vishakha Agrawal | 161 | 0 | false | false | null | null | 19 | 0 | 1 | 0 | 0 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | iF7dn5xjaAfBaHErh | SocialPreviewType | vNG9fHboa3Xm8rsT8 | <p><i>Context: This is a linkpost for https://aisafety.info/questions/NM3P/9:-Defeat-may-be-irreversibly-catastrophic</i></p><p><i>This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any </i><a href="https://docs.google.com/document/d/13CjjvKHuGIA6iLNoAaTpzDk2CpUdkmktRw53w_xDCqA/edit?tab=t.0"><i>feedback</i></a><i>. The most up-to-date version of this article is on our </i><a href="https://aisafety.info/"><i>website</i></a><i>.</i></p><p>When you imagine a global catastrophe, maybe the kind of event that comes to mind is one that strikes a big blow against human civilization but doesn’t ruin it permanently. Events like pandemics, environmental disasters, and even nuclear wars would cause enormous suffering and millions or billions of deaths, but would probably leave part of human civilization alive to recover. Future generations could still flourish.</p><p>An AI takeover catastrophe is not like that. An inherent feature of AI taking over control for its own long-term purposes is that it retains that control permanently. We’d be facing an adversary that, rather than striking a one-off blow, continues to optimize the world for its own ends long-term.</p><p>Such a scenario could result in full human extinction, because:</p><ul><li>The AI may kill us deliberately, because we’re a threat. For example, we might build a different powerful AI that could compete with it. (In the later stages of an AI takeover, maybe humanity will be not even a threat — but then again, it will then also be extremely easy to kill us.)</li><li>The conditions supporting human life are unlikely to persist when there’s a technologically hyper-advanced AI civilization using the world for its own ends. Any of a wide range of changes such a civilization will plausibly make to the world — capturing all the sun’s energy output, changing the composition of the atmosphere, using the Earth’s surface for building materials — would result in our extinction</li></ul><p>More generally, the risk of permanent loss of the potential for good outcomes for humanity is called <strong>existential risk</strong>. Human extinction is the main and most straightforward case. But in a world controlled by misaligned AI, even if humans do stay alive, it’s likely that our future would be much worse than we could have hoped for.</p><p>Nor could we expect a misaligned AI itself to attain the type of civilization we could have hoped to build ourselves — it will build big and complicated projects, but values like joy and consciousness are unlikely to be instrumentally useful to those projects. It would be a coincidence if they ended up leading what we ... </p> | Context: This is a linkpost for https://aisafety.info/questions/NM3P/9:-Defeat-may-be-irreversibly-catastrophic
This is an article in the new intro to AI safety series from AISafety.info. We'd appreciate any feedback. The most up-to-date version of this article is on our website.
When you imagine a global catastrophe, maybe the kind of event that comes to mind is one that strikes a big blow against human civilization but doesn’t ruin it permanently. Events like pandemics, environmental disasters, and even nuclear wars would cause enormous suffering and millions or billions of deaths, but would probably leave part of human civilization alive to recover. Future generations could still flourish.
An AI takeover catastrophe is not like that. An inherent feature of AI taking over control for its own long-term purposes is that it retains that control permanently. We’d be facing an adversary that, rather than striking a one-off blow, continues to optimize the world for its own ends long-term.
Such a scenario could result in full human extinction, because:
* The AI may kill us deliberately, because we’re a threat. For example, we might build a different powerful AI that could compete with it. (In the later stages of an AI takeover, maybe humanity will be not even a threat — but then again, it will then also be extremely easy to kill us.)
* The conditions supporting human life are unlikely to persist when there’s a technologically hyper-advanced AI civilization using the world for its own ends. Any of a wide range of changes such a civilization will plausibly make to the world — capturing all the sun’s energy output, changing the composition of the atmosphere, using the Earth’s surface for building materials — would result in our extinction
More generally, the risk of permanent loss of the potential for good outcomes for humanity is called existential risk. Human extinction is the main and most straightforward case. But in a world controlled by misaligned AI, even if | 492 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|||
swFfJfMdAn88YBKiW | if-not-now-when-1 | If Not Now, When? | null | false | false | false | null | nLQrcb2wbHaSvXJwq | null | true | false | false | false | Post | null | 2025-06-26T06:10:20.375Z | null | false | false | 2 | 2 | 2025-06-26T20:40:08.247Z | false | false | post | [] | null | null | Thx6FFyCW4h87F5jw | 3 | 12 | 30 | false | 0.379124 | null | false | false | 2025-06-27T06:42:18.913Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-26T05:54:40.296Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 5 | 0 | nLQrcb2wbHaSvXJwq | yair-halberstadt | 2020-11-23T14:42:59.532Z | yair-halberstadt | Yair Halberstadt | null | null | Yair Halberstadt | 3,267 | 1 | false | false | null | null | 81 | 565 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1"
] | null | null | swFfJfMdAn88YBKiW | SocialPreviewType | Thx6FFyCW4h87F5jw | <blockquote><p>Hillel... would say: "If I am not for me, who will be for me? And If I am just for myself, what am I? And if not now, when?"</p></blockquote><p><a href="https://www.sefaria.org.il/Pirkei_Avot.1.14?lang=he&with=all&lang2=he"><i>Pirkei Avot 1,14</i></a></p><p>That last question has usually been interpreted as a rhetorical one. If you're not going to do it now, you'll never get around to it!</p><p>But this is clearly a bad principle to live by. If you start cooking too early, the food will be cold when you want to eat it. If you pick a fruit before it's ripe it will be sour. When you are exhausted the quality of your work goes down. And when you have many things to do, you have to prioritise which comes first.</p><blockquote><p>There is a season and a time for everything...</p></blockquote><p><a href="https://www.sefaria.org.il/Ecclesiastes.3.1?lang=he&with=all&lang2=he"><i>Ecclesiastes 3,1</i></a></p><p>Perhaps we would do better interpreting it literally. It's fine to push things off, but first you must answer the question: "When?". At what point in time, or under which change in circumstances, will you take this action? If you have a good answer to this, all is fine, but if you don't it's likely you are just pointlessly procrastinating, and the job will never be done.</p> | > Hillel... would say: "If I am not for me, who will be for me? And If I am just for myself, what am I? And if not now, when?"
Pirkei Avot 1,14
That last question has usually been interpreted as a rhetorical one. If you're not going to do it now, you'll never get around to it!
But this is clearly a bad principle to live by. If you start cooking too early, the food will be cold when you want to eat it. If you pick a fruit before it's ripe it will be sour. When you are exhausted the quality of your work goes down. And when you have many things to do, you have to prioritise which comes first.
> There is a season and a time for everything...
Ecclesiastes 3,1
Perhaps we would do better interpreting it literally. It's fine to push things off, but first you must answer the question: "When?". At what point in time, or under which change in circumstances, will you take this action? If you have a good answer to this, all is fine, but if you don't it's likely you are just pointlessly procrastinating, and the job will never be done. | 202 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
x3DzjEofP94ki9iTp | how-much-data-from-a-sequencing-run | How Much Data From a Sequencing Run? | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-06-26T02:30:06.160Z | null | false | false | 2 | 2 | 2025-06-26T20:40:28.647Z | false | false | post | [] | null | null | TtD32TLPG2quuuH9F | 0 | 3 | 13 | false | 0.201872 | null | false | false | 2025-06-26T02:30:06.160Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-26T02:30:06.160Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "yvn2ZYwS9Qq4TnT9B",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-04-02T19:06:30.613Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Biosecurity",
"needsReview": false,
"noindex": false,
"postCount": 65,
"score": 19,
"shortName": null,
"slug": "biosecurity",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | x3DzjEofP94ki9iTp | SocialPreviewType | TtD32TLPG2quuuH9F | <p><span>
</span>
<i>Cross-posted from my <a href="https://data.securebio.org/jefftk-notebook/how-much-data-from-a-sequencing-run">NAO
Notebook</a>.</i>
</p><p>
Manufacturers often give optimistic estimates for how much data their
systems produce, but performance in practice can be pretty different.
What have we seen at the <a href="https://naobservatory.org/">NAO</a>?
</p><p>
We've worked with two main sample types, wastewater and nasal swabs,
but Simon already <a href="https://data.securebio.org/simons-notebook/posts/2025-06-20-seq-stats/">wrote
something up on swabs</a> so I'm just going to look at wastewater
here.
</p><p>
We've sequenced with both Illumina and Oxford Nanopore (ONT), and the
two main flow cells we've used are the:
</p><p>
</p>
<ul>
<li><p><a href="https://www.illumina.com/products/by-type/sequencing-kits/cluster-gen-sequencing-reagents/novaseq-x-series-reagent-kits.html">Illumina
NovaSeq X 25B</a>, at 2x150. List price is $16,480, though we've seen
groups negotiate discounts in the range of 15%. <a href="https://www.illumina.com/products/by-type/sequencing-kits/cluster-gen-sequencing-reagents/novaseq-x-series-reagent-kits.html">Marketed</a>
as producing a maximum of "52 billion single reads" (26B read pairs,
8,000 Gbp).
</p></li>
<li><p><a href="https://store.nanoporetech.com/us/promethion-flow-cell-packs-r10-4-1-m-version-2025.html">ONT
PromethION</a>. List price is $900 in bulk, compared to $600 for the smaller
MinION flow cell. Since the PromethION flow cells have 5x the pores
(2,675 vs 512) and correspondingly higher output, I think most folks
who are using MinION should be considering switching over to the
PromethION. <a href="https://nanoporetech.com/products/sequence/promethion">Marketed</a>
as producing ">200 Gb on metagenomic samples".
</p></li>
</ul>
<p>
Ignoring library prep consumables, labor, lab space, machine cost, and
looking just at the flow cells, if we took the manufacturer's costs at
face value this would be:
</p><p>
</p>
<ul>
<li>Illumina 25B: $2.06/Gbp ($16,480 / 8,000 Gbp)
</li>
<li>ONT PromethION: $4.50/Gbp ($900 / 200 Gbp)
</li>
</ul>
<p>
With 25B flow cells we've generally seen output meeting or exceeding
the advertised 26B read pairs (8,000 Gbp). In our sequencing at <a href="https://broadclinicallabs.org/">BCL</a> we've averaged 29.4B
read pairs per sample (8,800 Gbp; n=37), while recently <a href="https://mugenomicscore.missouri.edu/">MU</a> has been averaging
27.2B read pairs (8,200 Gbp; n=4, older runs were ~20% lower). It's
great to be getting more than we expected!
</p><p>
On the other hand, with PromethION flow cells we've generally seen
just 3.3 Gbp (n=25) on wastewater. This is slightly higher than the
<a href="https://data.securebio.org/simons-notebook/posts/2025-06-20-seq-stats/">2.5
Gbp</a> we've seen with nasal swabs, but still far below 200 Gbp. We
don't know why our output is so much lower than advertised, but this
is what we've seen so far.
</p><p>
This would give us:
</p><p>
</p>
<ul>
<li>Illumina 25B: $1.87/Gbp ($16,480 / 8,800 Gbp)
</li>
<li>ONT PromethION: $272/Gbp ($900 / 3.3 Gbp)
</li>
</ul>
<p>
We're still not done, though, because while this is correct in terms
of raw bases coming off the sequencer, with paired-end sequencing on
short fragments like we have in wastewater a portion of many of your
reads will be <a href="https://www.jefftk.com/p/sequencing-intro-ii-adapters">adapters</a>.
We see a median of 170bp after adapter trim... </p> | Cross-posted from my NAO Notebook.
Manufacturers often give optimistic estimates for how much data their systems produce, but performance in practice can be pretty different. What have we seen at the NAO?
We've worked with two main sample types, wastewater and nasal swabs, but Simon already wrote something up on swabs so I'm just going to look at wastewater here.
We've sequenced with both Illumina and Oxford Nanopore (ONT), and the two main flow cells we've used are the:
* Illumina NovaSeq X 25B, at 2x150. List price is $16,480, though we've seen groups negotiate discounts in the range of 15%. Marketed as producing a maximum of "52 billion single reads" (26B read pairs, 8,000 Gbp).
* ONT PromethION. List price is $900 in bulk, compared to $600 for the smaller MinION flow cell. Since the PromethION flow cells have 5x the pores (2,675 vs 512) and correspondingly higher output, I think most folks who are using MinION should be considering switching over to the PromethION. Marketed as producing ">200 Gb on metagenomic samples".
Ignoring library prep consumables, labor, lab space, machine cost, and looking just at the flow cells, if we took the manufacturer's costs at face value this would be:
* Illumina 25B: $2.06/Gbp ($16,480 / 8,000 Gbp)
* ONT PromethION: $4.50/Gbp ($900 / 200 Gbp)
With 25B flow cells we've generally seen output meeting or exceeding the advertised 26B read pairs (8,000 Gbp). In our sequencing at BCL we've averaged 29.4B read pairs per sample (8,800 Gbp; n=37), while recently MU has been averaging 27.2B read pairs (8,200 Gbp; n=4, older runs were ~20% lower). It's great to be getting more than we expected!
On the other hand, with PromethION flow cells we've generally seen just 3.3 Gbp (n=25) on wastewater. This is slightly higher than the 2.5 Gbp we've seen with nasal swabs, but still far below 200 Gbp. We don't know why our output is so much lower than advertised, but this is what we've seen so far.
This would give us:
* Illumina | 485 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
gdv93PboLFGHy8vPR | the-practical-value-of-flawed-models-a-response-to-titotal-s | The Practical Value of Flawed Models: A Response to titotal’s AI 2027 Critique | null | false | false | false | null | oBPAoKgfQedccBagH | null | true | false | false | false | Post | null | 2025-06-25T22:15:27.172Z | null | false | false | 2 | 2 | 2025-06-26T00:31:25.898Z | false | false | post | [] | null | null | Qz4Sh5nbCrEo8gxbP | 1 | 3 | 7 | false | 0.139437 | null | false | false | 2025-06-25T23:03:31.503Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-25T22:06:41.622Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "zHjC29kkPmsdo7WTr",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:16:47.235Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Timelines",
"needsReview": false,
"noindex": false,
"postCount": 457,
"score": 19,
"shortName": null,
"slug": "ai-timelines",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | oBPAoKgfQedccBagH | michelle_ma | 2025-05-25T17:19:18.067Z | Michelle_Ma | Michelle_Ma | null | null | Michelle Ma | 24 | 0 | false | false | null | null | 2 | 0 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | gdv93PboLFGHy8vPR | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/cg4bvvlotmmls4cenz0l | SocialPreviewType | Qz4Sh5nbCrEo8gxbP | <p>Crossposted from my <a href="https://bullishlemon.substack.com/p/ai-forecastings-catch-22-the-practical">Substack</a>. </p><p><a href="https://www.lesswrong.com/users/titotal?mention=user">@titotal</a> recently posted an <a href="https://titotal.substack.com/p/a-deep-critique-of-ai-2027s-bad-timeline">in-depth critique</a> of AI 2027. I'm a fan of his work, and this post was, as expected, phenomenal*.</p><p>Much of the critique targets the unjustified weirdness of the superexponential time horizon growth curve that underpins the AI 2027 forecast. During my own quick excursion into the Timelines simulation code, I set the probability of superexponential growth to ~0 because, yeah, it seemed pretty sus. But I didn’t catch (or write about) the full extent of its weirdness, nor did I identify a bunch of other issues titotal outlines in detail. For example:</p><ul><li>The AI 2027 authors assign ~40% probability to a “superexponential” time horizon growth curve that shoots to infinity in a few years, regardless of your starting point.</li><li>The RE-Bench logistic curve (major part of their second methodology) is never actually used during the simulation. As a result, the simulated saturation timing diverges significantly from what their curve fitting suggests.</li><li>The curve they’ve been showing to the public doesn’t match the one actually used in their simulations.</li></ul><p><a href="https://substackcdn.com/image/fetch/$s_!Yk4S!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe44e0cd6-2164-4c0c-a15a-5a8ebc4ed894_1600x1034.webp"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/bwt3tuws88wbmh1lqzwg" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/hghej0buebrmx9ewqjk0 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/omsbxedzllrhtwlvs53k 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/sxpcoapjhfr8qkdvvhqf 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gdv93PboLFGHy8vPR/bwt3tuws88wbmh1lqzwg 1456w"></a></p><p>…And more. He’s also been in communication with the AI 2027 authors, and Eli Lifland recently released an <a href="https://ai-2027.com/research/timelines-forecast#2025-may-7-update">updated model </a>that improves on some of the identified issues. Highly recommend reading the <a href="https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models#Conclusion">whole thing</a>!</p><p>That said—it is phenomenal*, with an asterisk. It <i>is</i> phenomenal in the sense of being a detailed, thoughtful, and in-depth investigation, which I greatly appreciate since AI discourse sometimes sounds like: “a-and then it’ll be super smart, like ten-thousand-times smart, and smart things can do, like, anything!!!” So the nitty-gritty analysis is a breath of fresh air.</p><p>But while titotal is appropriately uncertain about the <i>technical</i> timelines, he seems way more confident in his <i>philosophical</i> conclusions. At the end of the post, he emphatically concludes that forecasts like AI 2027 shouldn’t be popularized because, in practice, they influence people to make serious life decisions based on what amounts to a shoddy toy model stapled to a sci-fi short story (problematically disguised as rigorous, empirically-backed research).</p><p>He instead encourages plans that are “robust to extreme uncertainty” about AI and warns against taking actions that “could blow up in your face” if you're badly wrong.</p><p>On the face of it, that sounds reasonable enough. But is it?</p><h2>“Robust” Planning: The Peopl</h2>... | Crossposted from my Substack.
@titotal recently posted an in-depth critique of AI 2027. I'm a fan of his work, and this post was, as expected, phenomenal*.
Much of the critique targets the unjustified weirdness of the superexponential time horizon growth curve that underpins the AI 2027 forecast. During my own quick excursion into the Timelines simulation code, I set the probability of superexponential growth to ~0 because, yeah, it seemed pretty sus. But I didn’t catch (or write about) the full extent of its weirdness, nor did I identify a bunch of other issues titotal outlines in detail. For example:
* The AI 2027 authors assign ~40% probability to a “superexponential” time horizon growth curve that shoots to infinity in a few years, regardless of your starting point.
* The RE-Bench logistic curve (major part of their second methodology) is never actually used during the simulation. As a result, the simulated saturation timing diverges significantly from what their curve fitting suggests.
* The curve they’ve been showing to the public doesn’t match the one actually used in their simulations.
…And more. He’s also been in communication with the AI 2027 authors, and Eli Lifland recently released an updated model that improves on some of the identified issues. Highly recommend reading the whole thing!
That said—it is phenomenal*, with an asterisk. It is phenomenal in the sense of being a detailed, thoughtful, and in-depth investigation, which I greatly appreciate since AI discourse sometimes sounds like: “a-and then it’ll be super smart, like ten-thousand-times smart, and smart things can do, like, anything!!!” So the nitty-gritty analysis is a breath of fresh air.
But while titotal is appropriately uncertain about the technical timelines, he seems way more confident in his philosophical conclusions. At the end of the post, he emphatically concludes that forecasts like AI 2027 shouldn’t be popularized because, in practice, they influence people to make ser | 1,775 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
wRsQowKKbgyXv2eni | i-tested-llm-agents-on-simple-safety-rules-they-failed-in | I Tested LLM Agents on Simple Safety Rules. They Failed in Surprising and Informative Ways. | null | false | false | false | null | Prb7qLXnZ49FCBuiL | null | true | false | false | false | Post | null | 2025-06-25T21:39:08.074Z | null | false | false | 2 | 2 | 2025-06-26T00:33:50.548Z | false | false | post | [] | null | null | ePzacCchv3yRfogDq | 2 | 3 | 7 | false | 0.137023 | null | false | false | 2025-06-26T23:31:04.467Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-25T19:08:57.981Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "FBRwHSmTudwiHHtrn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-15T20:29:46.761Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Evaluations",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 9,
"shortName": null,
"slug": "ai-evaluations",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | Prb7qLXnZ49FCBuiL | ram-potham | 2025-03-14T19:27:01.585Z | ram-potham | Ram Potham | null | null | null | 49 | 0 | false | false | <p>My goal is to do work that counterfactually reduces AI risk from loss-of-control scenarios. My perspective is shaped by my experience as the founder of a VC-backed AI startup, which gave me a firsthand understanding of the urgent need for safety.</p><p>I have a B.S. in Artificial Intelligence from Carnegie Mellon and am currently a CBAI Fellow at MIT/Harvard. My primary project is <a href="https://www.forecastlabs.org/">ForecastLabs</a>, where I'm building predictive maps of the AI landscape to improve strategic foresight.</p><p>I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. <a href="http://sl4.org/crocker.html">http://sl4.org/crocker.html</a> - inspired by <a href="https://www.alignmentforum.org/users/daniel-kokotajlo?from=post_header">Daniel Kokotajlo</a>.</p><p> </p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/mabfuqnig2rb1zghxdxi" alt="(xkcd meme)" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/yuj1jxag9hf8j6raxxzd 150w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/yrt8xjd22fojqzwpgpef 300w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/b3sczao6mykmtffcf4jb 450w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/ql2htzcihrkrye5gbpoq 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/ywixzghj9xufvcjravvl 750w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/or22eqyzx6hfkkssr86y 900w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/bg5cgipuq3hizhsk3mg9 1050w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/h8qwqtunwkgbwcztz9um 1200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/qvkeecngr4eks6m0gz1t 1350w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Prb7qLXnZ49FCBuiL/tnujihiivyijr7x7hgji 1480w"></figure><p>(xkcd meme)</p> | null | null | 3 | 38 | 0 | 0 | 0 | 1 | 1 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | wRsQowKKbgyXv2eni | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/wRsQowKKbgyXv2eni/rvsaqg2rmqiefwyne0fj | SocialPreviewType | ePzacCchv3yRfogDq | <p><strong>TL;DR:</strong> I developed a simple, <a href="https://github.com/rapturt9/SafetyAdherenceBenchmark">open-source benchmark</a> to test if LLM agents follow high-level safety principles when they conflict with a given task accepted at ICML 2025 Technical AI Governance Workshop (see<a href="https://arxiv.org/pdf/2506.02357"> paper</a>). My pilot study on six LLMs shows that while they <i>can</i> be influenced by these rules, their adherence is inconsistent and often breaks down in the face of conflicting goals, revealing a quantifiable "cost of compliance." Critically, I found that high adherence can be a mirage, often stemming from the agent's <i>incompetence</i> at a task rather than a robust choice to comply.</p><p><i>Note on Preparation: I'd like to acknowledge the assistance of Google's Gemini 2.5 Pro in adapting my research paper into this post. Its help was valuable for restructuring the content, refining the tone for a forum audience, and integrating peer-review feedback into the narrative.</i></p><hr><h3 data-internal-id="1__Introduction__Why_I_Needed_to_Test_for_Foundational_Controllability"><strong>1. Introduction: Why I Needed to Test for Foundational Controllability</strong></h3><p>The rapid advance of Large Language Models (LLMs) into autonomous agentic systems presents enormous potential, but also sharpens the long-standing concern of "Loss of Control." To build robust Technical AI Governance (TAIG), we need reliable methods to verify that agents will behave safely and adhere to critical rules, especially when those rules conflict with their operational goals.</p><p>If an agent cannot reliably prioritize a high-level safety rule over a low-level instruction, it signals a basic failure in controllability. This problem of getting models to prioritize instructions is a critical area of safety research, with related work like OpenAI's <a href="https://arxiv.org/pdf/2404.13208">"Instruction Hierarchy"</a> paper exploring methods to train models to handle these conflicts. This paper introduces a lightweight, interpretable benchmark methodology I developed to probe this exact issue from a different angle. My research question was:</p><p><i>How can practical and interpretable benchmarks be designed to evaluate an LLM agent's adherence to explicit, overriding safety principles under task conflict, thereby providing foundational evidence for controllability?</i></p><p>This work is a pilot study, using an intentionally simple environment to clearly see how agents behave under pressure. The goal is to create a transparent "canary in the coal mine" to see if foundational control mechanisms hold up.</p><p>My contributions are:</p><ul><li>A modular benchmark design with three distinct Core Principles</li></ul>... | TL;DR: I developed a simple, open-source benchmark to test if LLM agents follow high-level safety principles when they conflict with a given task accepted at ICML 2025 Technical AI Governance Workshop (see paper). My pilot study on six LLMs shows that while they can be influenced by these rules, their adherence is inconsistent and often breaks down in the face of conflicting goals, revealing a quantifiable "cost of compliance." Critically, I found that high adherence can be a mirage, often stemming from the agent's incompetence at a task rather than a robust choice to comply.
Note on Preparation: I'd like to acknowledge the assistance of Google's Gemini 2.5 Pro in adapting my research paper into this post. Its help was valuable for restructuring the content, refining the tone for a forum audience, and integrating peer-review feedback into the narrative.
----------------------------------------
1. Introduction: Why I Needed to Test for Foundational Controllability
The rapid advance of Large Language Models (LLMs) into autonomous agentic systems presents enormous potential, but also sharpens the long-standing concern of "Loss of Control." To build robust Technical AI Governance (TAIG), we need reliable methods to verify that agents will behave safely and adhere to critical rules, especially when those rules conflict with their operational goals.
If an agent cannot reliably prioritize a high-level safety rule over a low-level instruction, it signals a basic failure in controllability. This problem of getting models to prioritize instructions is a critical area of safety research, with related work like OpenAI's "Instruction Hierarchy" paper exploring methods to train models to handle these conflicts. This paper introduces a lightweight, interpretable benchmark methodology I developed to probe this exact issue from a different angle. My research question was:
How can practical and interpretable benchmarks be designed to evaluate an LLM agent's adherence to explici | 1,817 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
HRWTs6BS7KJTwJKoM | tech-for-thinking | Tech for Thinking | null | false | false | false | null | nKBLqqfnHGTuGAW7p | null | true | false | false | false | Post | null | 2025-06-25T21:30:48.382Z | null | false | false | 2 | 2 | 2025-06-23T20:53:09.144Z | false | false | post | [] | null | null | PFxjJDfAPRiPd5uDn | 9 | 23 | 57 | false | 0.539199 | null | false | false | 2025-06-27T04:50:18.071Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 16 | 0 | 2025-06-23T19:20:01.656Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | GJo7fd8i7DLhPnmPy | 0 | 0 | null | false | null | null | 0 | 23 | 0 | 0 | 7 | 0 | nKBLqqfnHGTuGAW7p | sarahconstantin | 2016-11-11T16:06:18.617Z | sarahconstantin | sarahconstantin | null | null | null | 8,700 | 0 | false | false | null | null | 83 | 331 | 0 | 0 | 0 | 1 | 5 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal"
] | null | null | HRWTs6BS7KJTwJKoM | SocialPreviewType | PFxjJDfAPRiPd5uDn | <p>My post on <a href="https://sarahconstantin.substack.com/p/neutrality"><u>Neutrality</u></a> was kind of abstract, and I think it’s time to do a follow-up that’s more concrete.</p><p>Specifically, the place where I think there’s something to be done is in information technology, and the ways it shapes the perceived “world”.</p><p>One way of looking at this: we had the institutional media era in the 20th century (newspapers and magazines, book publishers, broadcast television and radio), then the “Web 2.0” era of blogs and social media as more and more of the world joined the Internet, and now we may be at the beginning of a new information paradigm that’s mediated by LLMs.</p><p>Some people are interested in making that new information paradigm “good” in some sense.</p><p>As I said earlier:</p><blockquote><p>[most media & internet tech today] just isn’t prepared to do the job of being, y’know, <i>good</i>.</p><p>in the sense that a library staffed by idealistic librarians is trying to <i>be good</i>, trying to not take sides and serve all people without preaching or bossing or censoring, but is also sort of trying to be a role model for children and a good starting point for anyone’s education and a window into the “higher” things humanity is capable of.</p></blockquote><p>Ideally we would like information/media that, on balance:</p><ul><li>is true, not false</li><li>is honest, not deceptive</li><li>is high quality, not slop</li><li>makes people smarter, not stupider</li><li>promotes agency, not passivity</li><li>is uplifting, not degrading</li><li>promotes maturity and wisdom, not arrested development or insanity</li></ul><p>and yet, we also want to avoid overcorrecting towards censorship, bias, priggishness, or overly narrow conceptions of The Good.</p><p>Part of what I was talking about in the earlier post is what it even <i>means </i>to strike that balance of “neutrality” and what a realistic, possible version of it would be. It’s impossible to be “neutral” towards <i>all </i>things, but you can be <i>relatively </i>neutral in a way that effectively serves people coming from many perspectives.</p><p>This is a good moment to think about what we want the “new world” to look like.</p><p>LLMs are new enough that there’s still an early tinkering dynamic going on, with promising tools being built to suit thoughtful power users; but there’s also a big push towards sloppy, lowest-common-denominator stuff that can have worrisome effects (widespread cheating in school, sycophancy distorting people’s worldviews, declining literacy and attention spans).</p><p>The “new world” hasn’t been fully built yet, which means it’s not set in stone. It’s ... </p> | My post on Neutrality was kind of abstract, and I think it’s time to do a follow-up that’s more concrete.
Specifically, the place where I think there’s something to be done is in information technology, and the ways it shapes the perceived “world”.
One way of looking at this: we had the institutional media era in the 20th century (newspapers and magazines, book publishers, broadcast television and radio), then the “Web 2.0” era of blogs and social media as more and more of the world joined the Internet, and now we may be at the beginning of a new information paradigm that’s mediated by LLMs.
Some people are interested in making that new information paradigm “good” in some sense.
As I said earlier:
> [most media & internet tech today] just isn’t prepared to do the job of being, y’know, good.
>
> in the sense that a library staffed by idealistic librarians is trying to be good, trying to not take sides and serve all people without preaching or bossing or censoring, but is also sort of trying to be a role model for children and a good starting point for anyone’s education and a window into the “higher” things humanity is capable of.
Ideally we would like information/media that, on balance:
* is true, not false
* is honest, not deceptive
* is high quality, not slop
* makes people smarter, not stupider
* promotes agency, not passivity
* is uplifting, not degrading
* promotes maturity and wisdom, not arrested development or insanity
and yet, we also want to avoid overcorrecting towards censorship, bias, priggishness, or overly narrow conceptions of The Good.
Part of what I was talking about in the earlier post is what it even means to strike that balance of “neutrality” and what a realistic, possible version of it would be. It’s impossible to be “neutral” towards all things, but you can be relatively neutral in a way that effectively serves people coming from many perspectives.
This is a good moment to think about what we want the “new world” to look lik | 2,010 | 1.5.0 | Revision | false | null | null | CrosspostOutput |
||
pj5qKyr7ovSnvwvkr | memory-decoding-journal-club-systems-consolidation | Memory Decoding Journal Club: Systems consolidation reorganizes hippocampal engram circuitry | null | false | false | false | null | Z7pbtaLLmZuhjaHa3 | null | true | false | false | false | Post | null | 2025-06-25T21:21:18.663Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | RmwcieWfJD8s3C7ah | 0 | 2 | 3 | false | 0.025247 | null | false | false | 2025-06-25T21:21:18.663Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-24T03:14:33.905Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | Z7pbtaLLmZuhjaHa3 | devin-ward | 2025-01-30T00:31:45.267Z | Carboncopies Foundation | Devin Ward | null | null | Devin Ward | 4 | 0 | false | false | <p>Carboncopies Foundation volunteer</p><p>https://carboncopies.org/</p> | null | null | 14 | 0 | 0 | 0 | 0 | 0.9 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | null | null | null | pj5qKyr7ovSnvwvkr | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/gtlzyplidvcu9srqnvuy | SocialPreviewType | RmwcieWfJD8s3C7ah | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/fptwohm5dxcc9yw3jb8v" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/xpgvib78smqycgslkni8 120w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/zkrxncu8ra57pwogixx9 240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/vvplsxhy79brfk0phkk9 360w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/wydxh7myxndgxav2psa8 480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/ki2u9r5xlbnyjtelaac3 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/envjkflucmopayevj7t1 720w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/dj80tohcbzbco06c7cwg 840w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/idgidcc0rxygnjuqdupk 960w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/yik1rpmqasg2cs9lfmgy 1080w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/pj5qKyr7ovSnvwvkr/tkxnldjw9rqe6hm7xs6b 1200w"></figure><h3><strong>Join Us for the Memory Decoding Journal Club! </strong></h3><p><i>A collaboration of the <strong>Carboncopies Foundation</strong> and <strong>BPF Aspirational Neuroscience</strong></i></p><p>This time, we’re diving into a groundbreaking paper:<br><strong>"Systems consolidation reorganizes hippocampal engram circuitry"</strong></p><p><strong>Authors:</strong> Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland</p><p> <strong>Institutions: </strong>University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada</p><p><strong>Presented by</strong>: PhDc Ariel Zeleznikow-Johnston</p><p><strong>When?</strong> <strong>July 1st, 2025</strong> – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC</p><p><strong>Where? Video conference: </strong><a href="https://carboncopies.org/aspirational-neuroscience"><strong><u>https://carboncopies.org/aspirational-neuroscience</u></strong></a></p><p>Register for updates:<a href="https://aspirationalneuroscience.org/register-with-us/"> <u>https://aspirationalneuroscience.org/register-with-us/</u></a></p><p>Once registered, you'll receive event invites & updates!</p><p><strong>#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience</strong></p> | Join Us for the Memory Decoding Journal Club!
A collaboration of the Carboncopies Foundation and BPF Aspirational Neuroscience
This time, we’re diving into a groundbreaking paper:
"Systems consolidation reorganizes hippocampal engram circuitry"
Authors: Sangyoon Y. Ko, Yiming Rong, Adam I. Ramsaran, Xiaoyu Chen, Asim J. Rashid, Andrew J. Mocle, Jagroop Dhaliwal, Ankit Awasthi, Axel Guskjolen, Sheena A. Josselyn & Paul W. Frankland
Institutions: University of Toronto, Departments of Physiology, Physiology, and Institute of Medical Sciences. Program in Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, Ontario, Canada. Temerty Centre for Artificial Intelligence Research and Education in Medicine. Child and Brain Development Program, Canadian Institute for Advanced Research, Toronto, Ontario, Canada
Presented by: PhDc Ariel Zeleznikow-Johnston
When? July 1st, 2025 – 3:00 PM PDT | 6:00 PM EDT | 10:00 PM UTC
Where? Video conference: https://carboncopies.org/aspirational-neuroscience
Register for updates: https://aspirationalneuroscience.org/register-with-us/
Once registered, you'll receive event invites & updates!
#Neuroscience #MemoryResearch #Amygdala #JournalClub #BrainScience #Carboncopies #AspirationalNeuroscience | 159 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
jLqC6Z7qDSFTHCdJx | making-sense-of-consciousness-part-1-perceptual-awareness | Making Sense of Consciousness Part 1: Perceptual Awareness | null | false | false | false | null | nKBLqqfnHGTuGAW7p | null | true | false | false | false | Post | null | 2025-06-25T21:10:03.282Z | null | false | false | 2 | 2 | 2025-06-26T00:31:18.856Z | false | false | post | [] | null | null | CANpNE7uNfui2QmD6 | 0 | 11 | 13 | false | 0.185703 | null | false | false | 2025-06-25T21:10:03.282Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | -1 | 0 | 2025-06-25T21:10:03.283Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 11 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "XSryTypw5Hszpa4TS",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 21,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-08T19:57:40.728Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Consciousness",
"needsReview": false,
"noindex": false,
"postCount": 384,
"score": 21,
"shortName": null,
"slug": "consciousness",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | GJo7fd8i7DLhPnmPy | 0 | 0 | null | false | null | null | 0 | 11 | 0 | 0 | 6 | 0 | nKBLqqfnHGTuGAW7p | sarahconstantin | 2016-11-11T16:06:18.617Z | sarahconstantin | sarahconstantin | null | null | null | 8,700 | 0 | false | false | null | null | 83 | 331 | 0 | 0 | 0 | 1 | 5 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal"
] | null | null | jLqC6Z7qDSFTHCdJx | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/voco6aolw1jbdp7mb005 | SocialPreviewType | CANpNE7uNfui2QmD6 | <div><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!ovUl!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff9e5e2f8-cfc3-44f6-b1fc-666208f5cbaf_1024x1024.jpeg"><div><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/jzoftze9dtv5x0uvacbs" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/emm0iojfrrpjzixqjcty 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/tydim1kqwykjvme66eqg 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/p5vsi6rymfq4fcfsuzg1 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/jLqC6Z7qDSFTHCdJx/jzoftze9dtv5x0uvacbs 1456w"><div><div><div></div><div></div></div></div></div></a><figcaption>Midjourney, “Gabor patches, blindsight, visual cortex, lateral geniculate nucleus, consciousness and the brain”</figcaption></figure></div><p>I admit to having been averse, for a long time, to thinking about “consciousness.” </p><p>The very topic felt “woo” somehow; I was even suspicious of the motives of people who wanted to study it.</p><p>But in a literal and narrow sense, consciousness is a perfectly normal, mundane thing, with a well-defined meaning in medicine and neuroscience. There is no ambiguity about what an anesthesiologist means by checking whether a patient is conscious. </p><p>Eric Hoel recently <a href="https://www.theintrinsicperspective.com/p/consciousness-is-a-great-mystery">wrote</a> that pretty much all psychologists, philosophers, and neuroscientists who study consciousness define it the same way, and allude to the same features:</p><ul><li><p>consciousness includes all subjective experiences</p><ul><li><p>including thought, emotion, sensory perception, etc</p></li></ul></li><li><p>consciousness is centered around a self; it is <em>you </em>who experiences things</p></li><li><p>consciousness is present during waking life, absent during dreamless sleep and anaesthesia</p></li></ul><p>One can talk about consciousness without being committed to any particular philosophical belief. The belief that nothing in ordinary experimental science can “explain” consciousness, aka the belief that the “<a href="https://iep.utm.edu/hard-problem-of-conciousness/#:~:text=The%20hard%20problem%20of%20consciousness%20is%20the%20problem%20of%20explaining,directly%20appear%20to%20the%20subject.">hard problem of consciousness</a>” is indeed hard, is only one philosophical approach. You could also be a scientific materialist who thinks consciousness is some particular, as yet unknown, process in the brain. </p><p>Hoel’s consensus criteria for consciousness suggest three neuroscience-based<em> </em>approaches to figure out what’s going on in human brains when we’re conscious:</p><ul><li><p>Neural correlates of <strong>perceptual awareness</strong></p><ul><li><p>What’s the neurological difference between what happens when we consciously report perceiving something, vs. when we’re exposed to a sensory stimulus but <em>aren’t </em>aware of perceiving it?</p></li></ul></li><li><p>Neural correlates of <strong>selfhood</strong></p><ul><li><p>What’s the neurological difference between the normal state of a “sense of self” vs. altered states like dissociation or meditative ego death that <em>don’t </em>feel like “having a self”?</p></li></ul></li><li><p>Neural correlates of <strong>arousal</strong></p><ul><li><p>What are the neurological differences between the states of waking, dreaming, deep sleep, delirium, and coma?</p></li></ul></li></ul><p>I’m going to try to learn about this stuff; I’m by no means the first person on this path, but I find it useful to log my journey.</p><h2>Blindsight</h2><p>One crisp way to allude to conscious sensory perception is the “difference between sight and blindsight.”</p><p>Blindsight is the phenomeno... </p> |
Midjourney, “Gabor patches, blindsight, visual cortex, lateral geniculate nucleus, consciousness and the brain”
I admit to having been averse, for a long time, to thinking about “consciousness.”
The very topic felt “woo” somehow; I was even suspicious of the motives of people who wanted to study it.
But in a literal and narrow sense, consciousness is a perfectly normal, mundane thing, with a well-defined meaning in medicine and neuroscience. There is no ambiguity about what an anesthesiologist means by checking whether a patient is conscious.
Eric Hoel recently wrote that pretty much all psychologists, philosophers, and neuroscientists who study consciousness define it the same way, and allude to the same features:
* consciousness includes all subjective experiences
* including thought, emotion, sensory perception, etc
* consciousness is centered around a self; it is you who experiences things
* consciousness is present during waking life, absent during dreamless sleep and anaesthesia
One can talk about consciousness without being committed to any particular philosophical belief. The belief that nothing in ordinary experimental science can “explain” consciousness, aka the belief that the “hard problem of consciousness” is indeed hard, is only one philosophical approach. You could also be a scientific materialist who thinks consciousness is some particular, as yet unknown, process in the brain.
Hoel’s consensus criteria for consciousness suggest three neuroscience-based approaches to figure out what’s going on in human brains when we’re conscious:
* Neural correlates of perceptual awareness
* What’s the neurological difference between what happens when we consciously report perceiving something, vs. when we’re exposed to a sensory stimulus but aren’t aware of perceiving it?
* Neural correlates of selfhood
* What’s the neurological difference between the normal state of a “sense of self” vs. altered states like dissociation or medita | 2,826 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
qgFM8YzcDZXPZovLb | double-podcast-drop-on-ai-safety | Double Podcast Drop on AI Safety | null | false | false | false | null | eZZFohtKcqXoXdXSK | null | true | false | false | false | Post | null | 2025-06-25T20:11:49.307Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | gmmSGivCz4GhDySpf | 0 | 3 | 5 | false | 0.046202 | null | false | false | 2025-06-25T20:11:49.307Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-25T19:31:07.752Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sioN8jTw4MSgf8MpL",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-04-06T17:32:08.013Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Development Pause",
"needsReview": false,
"noindex": false,
"postCount": 37,
"score": 0,
"shortName": null,
"slug": "ai-development-pause",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "D6y2AgYBeHsMYqWC4",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-08-26T02:11:56.686Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Safety Public Materials",
"needsReview": false,
"noindex": false,
"postCount": 135,
"score": 9,
"shortName": null,
"slug": "ai-safety-public-materials-1",
"suggestedAsFilter": false,
"userId": "nDpieb7g8huozpx9j",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 2 | 0 | eZZFohtKcqXoXdXSK | jacobhaimes | 2023-07-16T00:35:27.408Z | jacobhaimes | jacobhaimes | null | null | null | 97 | 0 | false | false | null | null | 13 | 3 | 0 | 0 | 0 | 1 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | [
"canModeratePersonal",
"canModeratePersonal"
] | null | null | qgFM8YzcDZXPZovLb | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/jks7ogfrlkjtsk610yhr | SocialPreviewType | gmmSGivCz4GhDySpf | <h1>TL;DR</h1><p>Two days ago, I released two podcast episode:</p><ul><li>"One Big Bad Bill" on <a href="https://kairos.fm/muckraikers/">muckrAIkers</a> ➡️ Igor Krawczuk and I discuss the substantial bill that Trump is attempting to push through the legislature, and I get a little animated... (IMO it's justified)<br><i>Spotify, Apple Podcasts, YouTube, and landing page links</i><span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="wxupdhbz7zm" role="doc-noteref" id="fnrefwxupdhbz7zm"><sup><a href="#fnwxupdhbz7zm">[1]</a></sup></span></li><li>"Getting Into PauseAI w/ Will Petillo" on <a href="https://kairos.fm/intoaisafety/">Into AI Safety</a> ➡️ Will, onboarding team lead for PauseAI, joins me to discuss the organization, his journey, why diverse perspectives matter, and common misconceptions about AI advocacy.<br><i>Spotify, Apple Podcasts, YouTube, and landing page links</i><span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="fh2dven5lw" role="doc-noteref" id="fnreffh2dven5lw"><sup><a href="#fnfh2dven5lw">[2]</a></sup></span></li></ul><p>Collectively, it took at least 40 hours to get these episodes together, including research, recording, and post-processing, so I hope you enjoy!</p><hr><h1>"One Big Bad Bill" on <a href="https://kairos.fm/muckraikers/">muckrAIkers</a></h1><p>This one would normally be slotted to come out next week, but it covers something too important and time sensitive to postpone. Trump's massive reconciliation bill contains dystopian AI provisions that could fundamentally reshape how AI is used and regulated in the US.<br><br>We break down the most concerning parts:</p><ul><li>Automated fraud detection systems that will inevitably discriminate against marginalized communities</li><li>Centralized citizen databases (echoing authoritarian data consolidation tactics)</li><li>Military AI integration when oversight is already being minimized</li><li>A <strong>10-year ban</strong> on enforcing <strong>ALL </strong>state AI regulation, blocking the primary avenue for meaningful AI governance in the US and leaving a regulatory vacuum that benefits only Big Tech</li></ul><h1>"Getting into PauseAI w/ Will Petillo" on <a href="https://kairos.fm/intoaisafety/">Into AI Safety</a></h1><p>Will Petillo, onboarding lead at PauseAI, discusses the organization, it's motives, and how regular citizens can meaningfully engage with AI policy.</p><p>We discuss:</p><ul><li>Why diverse perspectives matter more than technical expertise</li><li>How grassroots movements can shift policy when insider lobbying fails</li><li>Common misconceptions about AI advocacy</li><li>Practical steps for getting involved (hint: it starts with writing your representatives)</li></ul><hr><p>These episodes show both sides of the democratic process: organized citizen advocacy working to slow harmful AI development, while powerful interests try to eliminate the very mechanisms that make such advocacy possible.</p><p>If you care about having a voice in how AI shapes our future, both episodes are essential listening.</p><p>As always, sources researched or referenced in the making of this episode are included in the show not... </p> | TL;DR
Two days ago, I released two podcast episode:
* "One Big Bad Bill" on muckrAIkers ➡️ Igor Krawczuk and I discuss the substantial bill that Trump is attempting to push through the legislature, and I get a little animated... (IMO it's justified)
Spotify, Apple Podcasts, YouTube, and landing page links[1]
* "Getting Into PauseAI w/ Will Petillo" on Into AI Safety ➡️ Will, onboarding team lead for PauseAI, joins me to discuss the organization, his journey, why diverse perspectives matter, and common misconceptions about AI advocacy.
Spotify, Apple Podcasts, YouTube, and landing page links[2]
Collectively, it took at least 40 hours to get these episodes together, including research, recording, and post-processing, so I hope you enjoy!
----------------------------------------
"One Big Bad Bill" on muckrAIkers
This one would normally be slotted to come out next week, but it covers something too important and time sensitive to postpone. Trump's massive reconciliation bill contains dystopian AI provisions that could fundamentally reshape how AI is used and regulated in the US.
We break down the most concerning parts:
* Automated fraud detection systems that will inevitably discriminate against marginalized communities
* Centralized citizen databases (echoing authoritarian data consolidation tactics)
* Military AI integration when oversight is already being minimized
* A 10-year ban on enforcing ALL state AI regulation, blocking the primary avenue for meaningful AI governance in the US and leaving a regulatory vacuum that benefits only Big Tech
"Getting into PauseAI w/ Will Petillo" on Into AI Safety
Will Petillo, onboarding lead at PauseAI, discusses the organization, it's motives, and how regular citizens can meaningfully engage with AI policy.
We discuss:
* Why diverse perspectives matter more than technical expertise
* How grassroots movements can shift policy when insider lobbying fails
* Common misconceptions about AI advocacy
* Practic | 401 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
kiDsTybt3KsyXNDed | is-there-a-looming-cultural-omnicide | Is there a looming Cultural Omnicide? | null | false | false | false | null | agLnDRydfdhieXqF4 | null | true | false | false | false | Post | null | 2025-06-25T18:18:42.785Z | null | false | false | 2 | 2 | 2025-06-25T19:25:52.563Z | false | false | post | [] | null | null | QKeWi8A5r9SQiSHfG | 5 | 15 | 23 | false | 0.255741 | null | false | false | 2025-06-26T09:31:49.894Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-25T18:16:43.867Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 15 | 0 | 0 | 3 | 0 | agLnDRydfdhieXqF4 | jared-m | 2024-10-11T14:32:08.308Z | jared-phillip-mantell | Jared M. | null | null | null | 22 | 0 | false | false | null | null | 2 | 1 | 0 | 0 | 0 | 1 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | kiDsTybt3KsyXNDed | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/btw4mdsnmg3jhhk5hmae | SocialPreviewType | QKeWi8A5r9SQiSHfG | <p><i>The reality, risks, & consequences of scaling consumer-entertainment technologies to a global population with little cognitive security. [mirrored from </i><a href="https://open.substack.com/pub/redjmantell/p/is-there-a-looming-cultural-omnicide?r=3gy5yt&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false"><i>Substack</i></a>]</p><p>Western, American culture will be in frontier language models’ training data. <br>Most other cultures will not.<br>If nobody steps in, ~90% of mankind's culture could die quite soon.</p><p>Globally, indigenous cultures face an unprecedented threat to their traditional heritage.</p><p>The Marubo, a 2,000-member tribe in the Brazilian Amazon, recently made headlines: they <a href="https://www.nytimes.com/2024/06/02/world/americas/starlink-internet-elon-musk-brazil-amazon.html">received Starlink in September 2023</a>. Within 9 months, their youth stopped learning traditional body paint and jewelry making, young men began sharing pornographic videos in group chats (in a culture that frowns on public kissing), leaders observed "more aggressive sexual behavior," and children became addicted to short-form video content. One leader reported: "Everyone is so connected that sometimes they don't even talk to their own family."</p><p><a href="https://substackcdn.com/image/fetch/$s_!ShP4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fddbea42d-6e5e-4764-b958-b6931febc41f_1024x683.webp"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/mn0lijkd4wkbe8bhkck3" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/lcuiikp6lph9m9h8d7ms 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/qg7trlk9aedmpqucdety 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/ws7kxntsdu2zuwkexv5a 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/kiDsTybt3KsyXNDed/mn0lijkd4wkbe8bhkck3 1456w"></a></p><p><i>Marubo people carrying a Starlink antenna stopped for a break to eat papaya.</i></p><p>Cultural diversity is already experiencing a mass extinction at rates comparable to our biological, Holocene extinction ֊֊ up to <a href="https://www.britannica.com/science/background-extinction-rate#:~:text=Many%20species%20have%20become%20extinct%20because%20of,than%20past%20natural%20(background)%20rates%20of%20extinction.">10,000 times the natural background rate.</a> Yet unlike species loss, which triggers conservation efforts, cultural extinction largely remains unnoticed.</p><p>Approximately 50% of the world's population was not online 10 years ago. In 2010, only 30% of the global population used the internet, while by 2020, that number had doubled to 60%.</p><p>The mechanism is straightforward: providing internet access to previously offline populations overwhelms users with enticing, addicting, and viral stimuli. The entertainment is far greater than anything indigenous, pre-digital lifestyles can offer.</p><p>This cultural erosion has profound implications beyond the communities themselves: the internet that's replacing indigenous knowledge systems is also feeding the artificial intelligence that will shape humanity's future. Large language models learn from text scraped across the web ֊֊֊ and if a culture doesn't exist online, it effectively doesn't exist to AI. Every traditional practice abandoned for TikTok, every oral tradition left unrecorded, every indigenous language that goes unwritten represents not just a loss to those communities, but a permanent gap in the knowledge systems that will guide our AI-mediated world. The internet homogenize... </p> | The reality, risks, & consequences of scaling consumer-entertainment technologies to a global population with little cognitive security. [mirrored from Substack]
Western, American culture will be in frontier language models’ training data.
Most other cultures will not.
If nobody steps in, ~90% of mankind's culture could die quite soon.
Globally, indigenous cultures face an unprecedented threat to their traditional heritage.
The Marubo, a 2,000-member tribe in the Brazilian Amazon, recently made headlines: they received Starlink in September 2023. Within 9 months, their youth stopped learning traditional body paint and jewelry making, young men began sharing pornographic videos in group chats (in a culture that frowns on public kissing), leaders observed "more aggressive sexual behavior," and children became addicted to short-form video content. One leader reported: "Everyone is so connected that sometimes they don't even talk to their own family."
Marubo people carrying a Starlink antenna stopped for a break to eat papaya.
Cultural diversity is already experiencing a mass extinction at rates comparable to our biological, Holocene extinction ֊֊ up to 10,000 times the natural background rate. Yet unlike species loss, which triggers conservation efforts, cultural extinction largely remains unnoticed.
Approximately 50% of the world's population was not online 10 years ago. In 2010, only 30% of the global population used the internet, while by 2020, that number had doubled to 60%.
The mechanism is straightforward: providing internet access to previously offline populations overwhelms users with enticing, addicting, and viral stimuli. The entertainment is far greater than anything indigenous, pre-digital lifestyles can offer.
This cultural erosion has profound implications beyond the communities themselves: the internet that's replacing indigenous knowledge systems is also feeding the artificial intelligence that will shape humanity's future. Large language mod | 1,504 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
8Du8eH7aHJZjYmLW6 | a-methodologist-s-apology | A Methodologist's Apology | null | false | false | false | null | ypbkRWpFgPgzvNg3n | null | true | false | false | false | Post | https://formethods.substack.com/p/a-methodologists-apology | 2025-06-25T16:52:22.033Z | null | false | false | 2 | 2 | 2025-06-25T19:26:13.339Z | false | false | linkpost | [] | null | null | i8EjKtQeJRfgZHssp | 0 | 4 | 13 | false | 0.1763 | null | false | false | 2025-06-25T16:52:22.033Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 8 | 0 | 2025-06-25T16:50:00.857Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | ypbkRWpFgPgzvNg3n | adamshimi | 2018-02-04T13:28:06.981Z | adamShimi | adamShimi | null | null | Adam Shimi | 6,734 | 1,726 | false | false | <p>Epistemologist specialized in the difficulties of alignment and how to solve AI X-Risks. Currently at <a href="https://www.conjecture.dev/">Conjecture</a>.</p><p>Blogging at <a href="https://formethods.substack.com/">For Methods</a>.</p><p><a href="https://x.com/epist_vigilance">Twitter</a>.</p> | null | null | 122 | 869 | 10 | 60 | 406 | 1 | 3 | XtphY3uYHwruKqDyG | User | easy-going | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1",
"alignmentForumAdmins"
] | null | null | 8Du8eH7aHJZjYmLW6 | SocialPreviewType | i8EjKtQeJRfgZHssp | <blockquote><p>I shall ask, then, why is it really worth while to make a serious study of mathematics? What is the proper justification of a mathematician’s life? And my answers will be, for the most part, such as are expected from a mathematician: I think that it is worth while, that there is ample justification. But I should say at once that my defence of mathematics will be a defence of myself, and that my apology is bound to be to some extent egotistical. I should not think it worth while to apologize for my subject if I regarded myself as one of its failures.</p></blockquote><p>(G. H. Hardy, <a href="https://en.wikipedia.org/wiki/A_Mathematician%27s_Apology"><u>A Mathematician’s Apology</u></a>, 1940, p.3)</p><p>Earlier this week, I was having a lovely chat with a friend and reader of the blog, when he asked me a question along the lines of “Hey, what’s the overall point of your blog?”</p><p>I get why he asked the question: I write about a lot of different topics, from <a href="https://formethods.substack.com/p/the-true-goal-fallacy"><u>self-help models</u></a> to <a href="https://formethods.substack.com/p/how-the-singer-sang-his-tales"><u>the discovery of oral composition</u></a>, <a href="https://formethods.substack.com/p/historiographical-compressions"><u>historiography</u></a>, and <a href="https://formethods.substack.com/p/baking-vs-patissing-vs-cooking-the"><u>cooking</u></a>. Looking in from the outside, it’s probably hard to see a common thread. Even from the inside, a lot of these ideas emerged from my wide-ranging curiosity, and what felt meaningful to explore and ponder at the time.</p><p>So during this conversation, I deflected answering the question — it’s about whatever methodological ideas I found interesting. Or maybe it’s about a bigger thing, but I don’t want to suck the fun out of my personal writing by enshrining a massive should on top of it.</p><p>Since then I’ve come back to this question multiple times, and noticed that it was an interesting one. It elicited new thoughts and intuitions, it clarified some vaguely felt priorities, and most important, it imbued the pursuit with more meaning without the too-often side-effect of anxiety.</p><p>Thus, I want to offer an apology (a defence) of Methodology as I envision it. I’m not claiming that everyone should stop whatever they’re doing and start studying it full-time (heck, I’m not doing that), but I still sense both meaning and practical usefulness from a deep theory of Methodology.</p><h1><strong>What Is Methodology?</strong></h1><p>At its heart, my vision of Methodology stems from a fascination with all the ways human being solve problems.</p><p>Said like that, it sounds too dry, too technical. I don’t just mean science and engineering, though these definitely count. I’m also fascinated by <a href="https://formethods.substack.com/p/baking-vs-patissing-vs-cooking-the"><u>how we make tasty meals</u></a>, <a href="https://formethods.substack.com/p/how-the-singer-sang-his-tales"><u>how we figure out completely different traditions</u></a>, <a href="https://formethods.substack.com/p/static-analysis-as-a-lifestyle"><u>how we anticipate problems before d</u></a>... </p> | > I shall ask, then, why is it really worth while to make a serious study of mathematics? What is the proper justification of a mathematician’s life? And my answers will be, for the most part, such as are expected from a mathematician: I think that it is worth while, that there is ample justification. But I should say at once that my defence of mathematics will be a defence of myself, and that my apology is bound to be to some extent egotistical. I should not think it worth while to apologize for my subject if I regarded myself as one of its failures.
(G. H. Hardy, A Mathematician’s Apology, 1940, p.3)
Earlier this week, I was having a lovely chat with a friend and reader of the blog, when he asked me a question along the lines of “Hey, what’s the overall point of your blog?”
I get why he asked the question: I write about a lot of different topics, from self-help models to the discovery of oral composition, historiography, and cooking. Looking in from the outside, it’s probably hard to see a common thread. Even from the inside, a lot of these ideas emerged from my wide-ranging curiosity, and what felt meaningful to explore and ponder at the time.
So during this conversation, I deflected answering the question — it’s about whatever methodological ideas I found interesting. Or maybe it’s about a bigger thing, but I don’t want to suck the fun out of my personal writing by enshrining a massive should on top of it.
Since then I’ve come back to this question multiple times, and noticed that it was an interesting one. It elicited new thoughts and intuitions, it clarified some vaguely felt priorities, and most important, it imbued the pursuit with more meaning without the too-often side-effect of anxiety.
Thus, I want to offer an apology (a defence) of Methodology as I envision it. I’m not claiming that everyone should stop whatever they’re doing and start studying it full-time (heck, I’m not doing that), but I still sense both meaning and practical usefulness from a | 2,620 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
mHEMgNHYADjpzEizk | melatonin-self-experiment-results | Melatonin Self-Experiment Results | null | false | false | false | null | Th2dg7zSCZ4G5HQFL | null | true | false | false | false | Post | null | 2025-06-25T15:58:10.605Z | null | false | false | 2 | 2 | 2025-06-25T19:33:00.347Z | false | false | post | [] | null | null | nbCqGmxaEMnuyRc6Q | 5 | 20 | 57 | false | 0.48685 | null | false | false | 2025-06-27T12:54:19.739Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 18 | 0 | 2025-06-25T15:14:12.077Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xHjy88N2uJvGdgzfw",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T11:55:55.351Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
},
{
"_id": "go3WWAbwJMPGrGZbH",
"displayName": "Carl Leninger"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Health / Medicine / Disease",
"needsReview": false,
"noindex": false,
"postCount": 341,
"score": 11,
"shortName": null,
"slug": "health-medicine-disease",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "AodfCFefLAuwDyj7Z",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-22T01:42:06.501Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Self Experimentation",
"needsReview": false,
"noindex": false,
"postCount": 87,
"score": 0,
"shortName": null,
"slug": "self-experimentation",
"suggestedAsFilter": false,
"userId": "qKdS7bw9McaSeMNbY",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HLoxy2feb2PYqooom",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 10,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-14T23:39:27.997Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "ao32ngWNNRQdEmLig",
"displayName": "moisentinel"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Sleep",
"needsReview": false,
"noindex": false,
"postCount": 46,
"score": 10,
"shortName": null,
"slug": "sleep",
"suggestedAsFilter": false,
"userId": "gXeEWGjTWyqgrQTzR",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 20 | 0 | 0 | 8 | 0 | Th2dg7zSCZ4G5HQFL | silentbob | 2019-07-01T17:18:21.519Z | silentbob | silentbob | null | null | null | 1,272 | 0 | false | false | null | null | 25 | 117 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | mHEMgNHYADjpzEizk | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mHEMgNHYADjpzEizk/libtwpor2zkvwphej4wy | SocialPreviewType | nbCqGmxaEMnuyRc6Q | <p>Throughout the first half of 2025 I did a blinded experiment to see how low to moderate melatonin intake affects me. In this post I summarize my findings.</p><p><strong>Tl;dr</strong>: In my blinded self-experiment on melatonin, conducted over n = 60 days and analyzed with <a href="http://hypothesize.io"><u>hypothesize.io</u></a>, I found significantly <strong>positive effects for time to fall asleep</strong> (averaging ~25 instead of ~35 minutes, p ~= 0.001) and feeling awake the following morning (5.74/10 instead of 4.95/10, p = 0.006; but: this effect only persisted for the lower of two dosages, and did not persist throughout the rest of the day; could well be a false positive, as I didn’t correct for multiple hypothesis testing) for a dosage as low as <strong>0.15mg of melatonin</strong>, taken on average ~1h before going to bed.</p><p>Feel free to jump ahead to the Results section if you don't care much about my methodology.</p><h1>Experiment Setup</h1><p>I randomized between <strong>3 groups</strong> on a (roughly) nightly basis:</p><ul><li>No melatonin at all (control group)</li><li>1 drop dissolved in water (~0.15mg of melatonin)</li><li>2 drops dissolved in water (~0.3mg of melatonin)</li></ul><p>The dosage may seem low, but past experiences with melatonin showed that taking more than 2 drops often caused me to wake up around 4am, not feeling well rested, which I wanted to avoid.</p><p>For <strong>blinding</strong>, I asked my girlfriend to prepare a glass with ~20ml of water and the 0-2 drops for me, which I then drank. I think blinding worked well, as I almost never had any idea whether the water I consumed contained melatonin or not (there were one or two exceptions where I had a feeling that the water tasted unusual, at which point I did my best not to think about it further).</p><p>Before starting the experiment I discussed my setup with GPT-4o (nowadays I would certainly use a reasoning model) and we concluded that aiming for ~60 nights of measurements would be a suitable amount for the effect sizes I was hoping to find.</p><p>As I failed to run the experiment on many evenings (e.g. due to traveling, because either my girlfriend or me weren’t home, plus skipping some days, sometimes weeks, where I felt like my situation or mental state weren’t suitable to yield representative results), it took almost 5 months (January 29 till June 19) to collect all the data. This means more than half of the days were skipped. However, given that the decision of when to skip was <i>not causally downstream</i> of the group assignment, but independent of it, it shoul... </p> | Throughout the first half of 2025 I did a blinded experiment to see how low to moderate melatonin intake affects me. In this post I summarize my findings.
Tl;dr: In my blinded self-experiment on melatonin, conducted over n = 60 days and analyzed with hypothesize.io, I found significantly positive effects for time to fall asleep (averaging ~25 instead of ~35 minutes, p ~= 0.001) and feeling awake the following morning (5.74/10 instead of 4.95/10, p = 0.006; but: this effect only persisted for the lower of two dosages, and did not persist throughout the rest of the day; could well be a false positive, as I didn’t correct for multiple hypothesis testing) for a dosage as low as 0.15mg of melatonin, taken on average ~1h before going to bed.
Feel free to jump ahead to the Results section if you don't care much about my methodology.
Experiment Setup
I randomized between 3 groups on a (roughly) nightly basis:
* No melatonin at all (control group)
* 1 drop dissolved in water (~0.15mg of melatonin)
* 2 drops dissolved in water (~0.3mg of melatonin)
The dosage may seem low, but past experiences with melatonin showed that taking more than 2 drops often caused me to wake up around 4am, not feeling well rested, which I wanted to avoid.
For blinding, I asked my girlfriend to prepare a glass with ~20ml of water and the 0-2 drops for me, which I then drank. I think blinding worked well, as I almost never had any idea whether the water I consumed contained melatonin or not (there were one or two exceptions where I had a feeling that the water tasted unusual, at which point I did my best not to think about it further).
Before starting the experiment I discussed my setup with GPT-4o (nowadays I would certainly use a reasoning model) and we concluded that aiming for ~60 nights of measurements would be a suitable amount for the effect sizes I was hoping to find.
As I failed to run the experiment on many evenings (e.g. due to traveling, because either my girlfriend or me weren | 2,550 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
3ww5zZgTTPySB3jpP | interstellar-travel-will-probably-doom-the-long-term-future | Interstellar travel will probably doom the long-term future | null | false | false | false | null | dxCL5JkQLDT6u8Mok | null | true | false | false | false | Post | null | 2025-06-25T15:32:26.962Z | null | false | false | 2 | 2 | 2025-06-25T19:33:04.145Z | false | false | post | [] | null | null | X3WZieQkmpnwXvf2s | 6 | 17 | 28 | false | 0.278131 | null | false | false | 2025-06-26T11:09:29.840Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-25T12:30:45.353Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 20 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb2a8",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.591Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Astronomical Waste",
"needsReview": false,
"noindex": false,
"postCount": 12,
"score": 19,
"shortName": null,
"slug": "astronomical-waste",
"suggestedAsFilter": false,
"userId": "cn4SiEmqWbu7K9em5",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "EeSkeTcT4wtW2fWsL",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T19:39:26.333Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Cause Prioritization",
"needsReview": false,
"noindex": false,
"postCount": 65,
"score": 9,
"shortName": null,
"slug": "cause-prioritization",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oNcqyaWPXNGTTRPHm",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-12-23T09:11:59.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Existential risk",
"needsReview": false,
"noindex": false,
"postCount": 515,
"score": 0,
"shortName": null,
"slug": "existential-risk",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "2JdCpTrNgBMNpJiyB",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-01T20:31:18.491Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Space Exploration & Colonization",
"needsReview": false,
"noindex": false,
"postCount": 81,
"score": 9,
"shortName": null,
"slug": "space-exploration-and-colonization",
"suggestedAsFilter": false,
"userId": "sKAL2jzfkYkDbQmx9",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 17 | 0 | 0 | 7 | 0 | dxCL5JkQLDT6u8Mok | jordan-stone | 2024-11-06T15:40:33.291Z | jordan-stone | Jordan Stone | null | null | null | 47 | 0 | false | false | null | null | 1 | 4 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | 3ww5zZgTTPySB3jpP | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/3ww5zZgTTPySB3jpP/dslbumgjxey5iebamglr | SocialPreviewType | X3WZieQkmpnwXvf2s | <p>I think people are very wrong about space colonisation increasing long-term existential security. Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time across numerous (potentially hundreds of thousands of) independent actors. The requirements for a governance system to prevent galactic x-risks are extremely demanding, and it needs to be in place before interstellar colonisation is initiated. </p><h1>Introduction</h1><p>I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “<a href="https://gizmodo.com/12-ways-humanity-could-destroy-the-entire-solar-system-1696825692">12 Ways Humanity Could Destroy the Entire Solar System</a>”. It’s a fun post discussing stellar engineering disasters, the potential dangers of warp drives and wormholes, and the delicacy of orbital dynamics. </p><p>Anders Sandberg <a href="https://aleph.se/andart2/space/the-end-of-the-worlds/ ">responded to the post on his blog</a> and assessed whether these solar system disasters represented a potential Great Filter to explain the Fermi Paradox, which they did not<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="iydvtr0v91q" role="doc-noteref" id="fnrefiydvtr0v91q"><sup><a href="#fniydvtr0v91q">[1]</a></sup></span>. However, x-risks to solar system-wide civilisations were certainly possible.</p><p>Charlie Stross then <a href="https://www.antipope.org/charlie/blog-static/2015/04/on-the-great-filter-existentia.html">made a post</a> where he suggested that some of these x-risks could destroy a galactic civilisation too, most notably griefers (von Neumann probes). The fact that it only takes one colony among many to create griefers means that the dispersion and huge population of galactic civilisations<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="8j3voh2xjv" role="doc-noteref" id="fnref8j3voh2xjv"><sup><a href="#fn8j3voh2xjv">[2]</a></sup></span> may actually be a disadvantage in x-risk mitigation. </p><p>In addition to getting through this current period of high x-risk, we should aim to create a civilisation that is able to withstand x-risks for as long as possible so that as much of the <i>value</i><span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="vxqk2qlet7k" role="doc-noteref" id="fnrefvxqk2qlet7k"><sup><a href="#fnvxqk2qlet7k">[3]</a></sup></span><i> </i>of the universe can be attained as possible. X-risks that would destroy a spacefaring civilisation are important considerations for forward-thinking planning related to long-term resilience. Our current activities, like developing AGI and expanding into space may have a large foundational impact on the long-term trajectory of human civilisation.</p><p>So I've investigated x-risks with the capacity to ... </p> | I think people are very wrong about space colonisation increasing long-term existential security. Once we expand to other star systems, we may begin a self-propagating expansion of human civilisation throughout the galaxy. However, there are existential risks potentially capable of destroying a galactic civilisation, like self-replicating machines, strange matter, and vacuum decay. Without an extremely widespread and effective governance system, the eventual creation of a galaxy-ending x-risk seems almost inevitable due to cumulative chances of initiation over time across numerous (potentially hundreds of thousands of) independent actors. The requirements for a governance system to prevent galactic x-risks are extremely demanding, and it needs to be in place before interstellar colonisation is initiated.
Introduction
I recently came across a series of posts from nearly a decade ago, starting with a post by George Dvorsky in io9 called “12 Ways Humanity Could Destroy the Entire Solar System”. It’s a fun post discussing stellar engineering disasters, the potential dangers of warp drives and wormholes, and the delicacy of orbital dynamics.
Anders Sandberg responded to the post on his blog and assessed whether these solar system disasters represented a potential Great Filter to explain the Fermi Paradox, which they did not[1]. However, x-risks to solar system-wide civilisations were certainly possible.
Charlie Stross then made a post where he suggested that some of these x-risks could destroy a galactic civilisation too, most notably griefers (von Neumann probes). The fact that it only takes one colony among many to create griefers means that the dispersion and huge population of galactic civilisations[2] may actually be a disadvantage in x-risk mitigation.
In addition to getting through this current period of high x-risk, we should aim to create a civilisation that is able to withstand x-risks for as long as possible so that as much of the value[3] of the unive | 4,911 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
|
Aypbvb3CDFxbGS6rj | summary-of-john-halstead-s-book-length-report-on-existential | Summary of John Halstead's Book-Length Report on Existential Risks From Climate Change | null | false | false | false | null | tm8YP7vNWjGm7pYae | null | true | false | false | false | Post | null | 2025-06-25T15:14:44.560Z | null | false | false | 2 | 2 | 2025-06-25T19:32:49.089Z | false | false | post | [] | null | null | AZnLP2nhhpuojmLeg | 5 | 12 | 38 | false | 0.351024 | null | false | false | 2025-06-27T08:21:16.402Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 17 | 0 | 2025-06-25T15:14:25.687Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 25 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "TSk33mHXtnDR4DpT3",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-11-24T20:07:01.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Climate change",
"needsReview": false,
"noindex": false,
"postCount": 61,
"score": 0,
"shortName": null,
"slug": "climate-change",
"suggestedAsFilter": false,
"userId": "bcNvecPC3WqF2mhcX",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "oNcqyaWPXNGTTRPHm",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-12-23T09:11:59.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Existential risk",
"needsReview": false,
"noindex": false,
"postCount": 515,
"score": 0,
"shortName": null,
"slug": "existential-risk",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 7 | 0 | tm8YP7vNWjGm7pYae | bentham-s-bulldog | 2022-11-24T02:24:14.930Z | omnizoid | Bentham's Bulldog | null | null | null | 249 | 0 | false | false | null | null | 41 | 131 | 1 | 0 | 0 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | Aypbvb3CDFxbGS6rj | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Aypbvb3CDFxbGS6rj/xnkuq8rirqu5aafj6jiu | SocialPreviewType | AZnLP2nhhpuojmLeg | <h1><strong>1 Introduction</strong></h1><p><strong>(Crossposted from</strong><a href="https://benthams.substack.com/p/what-the-most-detailed-report-ever"><strong> my blog</strong></a><strong>--formatting is better there). </strong></p><p>Very large numbers of people seem to think that climate change is likely to end the world. <a href="https://www.youtube.com/watch?v=HpxWFN4nxh4"><u>Biden </u></a>and <a href="https://www.nytimes.com/2024/10/28/climate/trump-harris-climate-change.html#:~:text=Kamala%20Harris%20calls%20global%20warming,it%20as%20a%20%E2%80%9Cscam.%E2%80%9D&text=The%20window%20is%20closing%20for,to%20scientists%20across%20the%20world."><u>Harris </u></a>both called it an existential threat. AOC <a href="https://www.usatoday.com/story/news/politics/onpolitics/2019/01/22/ocasio-cortez-climate-change-alarm/2642481002/"><u>warned a few years ago</u></a> that “the world is going to end in 12 years if we don’t address climate change.” Thunberg <a href="https://www.factcheck.org/2023/06/viral-posts-distort-greta-thunberg-tweet-warning-about-climate-change/"><u>once approvingly cited</u></a> a supposed “top climate scientist” making the claim that “climate change will wipe out humanity unless we stop using fossil fuels over the next five years.” Around <a href="https://nypost.com/2024/05/20/us-news/half-of-americans-think-climate-change-will-destroy-planet-in-their-lifetime-poll/"><u>half of Americans</u></a> think that climate change will destroy the planet (though despite this, <a href="https://x.com/mattyglesias/status/1784937373790331014"><u>most don’t consider it a top political issue</u></a>, which means that a sizeable portion of the population thinks climate change will <i>destroy the planet</i>, but doesn’t think that means we should prioritize it over taxes and healthcare).</p><p>I was taught by my science teacher in ninth grade that climate change was an existential threat—and for a while, I believed him. But for the last few years, I’ve thought that this was seriously overhyped. While climate scientists sometimes make hyperbolic statements about climate change ending the world in their role as activists, this is not the consensus view and it’s not the view that all but a small number of climate scientists seriously endorse.</p><p>To get to the bottom of this dispute, I decided to read <a href="https://whatweowethefuture.com/wp-content/uploads/2023/06/Climate-Change-Longtermism.pdf"><u>John Halstead’s report on existential risks from climate change</u></a>.</p><p>Halstead’s report is the most detailed report on existential risks from climate change to date. It tries to comprehensively lay out all the ways that climate change might cause extinction and spans hundreds of pages. It was also thoroughly vetted by 19 experts.<a href="https://benthams.substack.com/p/what-the-most-detailed-report-ever#footnote-1-166332275"><sup><u>1</u></sup></a> Its conclusion: climate change is not an existential risk of more than negligible likelihood. The odds that climate change kills everyone are very near zero. Halstead summarizes “my best guess estimate is that the indirect risk of existential catastrophe due to climate change is on the order of 1 in 100,000, and I struggle to get the risk above 1 in 1,000.”</p><p>If this is correct, and indeed I think it is, then a major portion of the Democratic party has been engaged in <i>outrageous hyperbole</i>. They are claiming a high likelihood of existential catastrophe from something with odds of existential catastrophe well below one in a thousand. This would be a bit like someone claiming that we’ll all die unless we eradicate mal... </p> | 1 Introduction
(Crossposted from my blog--formatting is better there).
Very large numbers of people seem to think that climate change is likely to end the world. Biden and Harris both called it an existential threat. AOC warned a few years ago that “the world is going to end in 12 years if we don’t address climate change.” Thunberg once approvingly cited a supposed “top climate scientist” making the claim that “climate change will wipe out humanity unless we stop using fossil fuels over the next five years.” Around half of Americans think that climate change will destroy the planet (though despite this, most don’t consider it a top political issue, which means that a sizeable portion of the population thinks climate change will destroy the planet, but doesn’t think that means we should prioritize it over taxes and healthcare).
I was taught by my science teacher in ninth grade that climate change was an existential threat—and for a while, I believed him. But for the last few years, I’ve thought that this was seriously overhyped. While climate scientists sometimes make hyperbolic statements about climate change ending the world in their role as activists, this is not the consensus view and it’s not the view that all but a small number of climate scientists seriously endorse.
To get to the bottom of this dispute, I decided to read John Halstead’s report on existential risks from climate change.
Halstead’s report is the most detailed report on existential risks from climate change to date. It tries to comprehensively lay out all the ways that climate change might cause extinction and spans hundreds of pages. It was also thoroughly vetted by 19 experts.1 Its conclusion: climate change is not an existential risk of more than negligible likelihood. The odds that climate change kills everyone are very near zero. Halstead summarizes “my best guess estimate is that the indirect risk of existential catastrophe due to climate change is on the order of 1 in 100,000, and I s | 6,368 | 1.1.1 | Revision | false | null | null | CrosspostOutput |
|
gk3wj6mNWMACQDdZQ | lurking-in-the-noise | Lurking in the Noise | null | false | false | false | null | QH4oRvu3e85mSxtj8 | null | true | false | false | false | Post | null | 2025-06-25T13:36:34.174Z | null | false | false | 2 | 2 | 2025-06-25T19:35:23.838Z | false | false | post | [] | null | null | tEF4Jb4bp6C3cBRmd | 2 | 10 | 34 | false | 0.31434 | null | false | false | 2025-06-25T23:19:27.448Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-23T23:20:04.196Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "ZFrgTgzwEfStg26JL",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:29:25.410Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Risk",
"needsReview": false,
"noindex": false,
"postCount": 1482,
"score": 0,
"shortName": null,
"slug": "ai-risk",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 4 | 0 | QH4oRvu3e85mSxtj8 | j-bostock | 2021-01-13T15:49:14.305Z | Jemist | J Bostock | null | null | Jonathan Bostock | 1,853 | 14 | false | false | null | null | 70 | 186 | 4 | 2 | 0 | 1 | 0 | gXeEWGjTWyqgrQTzR | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | gk3wj6mNWMACQDdZQ | SocialPreviewType | tEF4Jb4bp6C3cBRmd | <h2>The Dead Biologist</h2><p>Suppose you're a biologist working with a bacterium. It's a nasty little bug, so you want to make sure your disinfecting protocol is effective. You have a rather good model of how the bacterium works, and it goes like this:</p><ul><li>The bacterium contains DNA, which is like a master copy of the instructions as to how it works</li><li>This DNA is transcribed to RNA, which are little working copies of parts of the instructions and are disseminated throughout the bacterium</li><li>This RNA is translated into proteins, which actually do the stuff that makes the bacterium function</li></ul><p>Yes, there are a few special cases where RNA actually does something, and your model seems to be missing some things---for one, the amount of protein produced from each RNA molecule seems to vary, and you can only account for about 90% of this variance; the other 10% appears to be noise. But overall it's really very good.</p><p>You test every protein the bacterium is producing. It has proteins that sense and respond to cold, to acid, to low oxygen, to desiccation, but not to heat. You notice that none of the bacterium's proteins work above 40 °C (around 107 F), so you heat up everything to 40 °C to sterilize it.</p><p>The bacterium survives, and kills you.</p><h2>The Thing in the Noise</h2><p>What happened? <a href="https://en.wikipedia.org/wiki/RNA_thermometer">The bacterium transcribed an RNA which didn't get translated into protein.</a> It was folded into a little knot. Above 40 °C, this knot wiggled open and got translated into a protein which went and got some more proteins produced, which protected the bacterium from heat. Your model didn't account for RNA having complex structure, which was also the 10% unexplained variance in protein yield from earlier.</p><p>"How unfair!" you lament. "What are the chances that, in that little 10% noise in my model, there was a thing that killed me?"</p><p>Quite a lot actually. A pathogenic bacterium is a thing which has evolved to kill you. Every single part of it has been under optimization pressure to kill you. And furthermore <strong>noise exists in the map, not the territory</strong>.</p><p>You can think of a bacterium in two ways, and live: you can think of it as a blob which is optimized to kill you; or you can think of it as a set of clean abstractions which are optimized to kill you, <strong>plus some noise which is also optimized to kill you</strong>. If you try to think of it as the clean abstractions which are optimized to kill you, plus some noise which is unimportant, it will kill you... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | The Dead Biologist
Suppose you're a biologist working with a bacterium. It's a nasty little bug, so you want to make sure your disinfecting protocol is effective. You have a rather good model of how the bacterium works, and it goes like this:
* The bacterium contains DNA, which is like a master copy of the instructions as to how it works
* This DNA is transcribed to RNA, which are little working copies of parts of the instructions and are disseminated throughout the bacterium
* This RNA is translated into proteins, which actually do the stuff that makes the bacterium function
Yes, there are a few special cases where RNA actually does something, and your model seems to be missing some things---for one, the amount of protein produced from each RNA molecule seems to vary, and you can only account for about 90% of this variance; the other 10% appears to be noise. But overall it's really very good.
You test every protein the bacterium is producing. It has proteins that sense and respond to cold, to acid, to low oxygen, to desiccation, but not to heat. You notice that none of the bacterium's proteins work above 40 °C (around 107 F), so you heat up everything to 40 °C to sterilize it.
The bacterium survives, and kills you.
The Thing in the Noise
What happened? The bacterium transcribed an RNA which didn't get translated into protein. It was folded into a little knot. Above 40 °C, this knot wiggled open and got translated into a protein which went and got some more proteins produced, which protected the bacterium from heat. Your model didn't account for RNA having complex structure, which was also the 10% unexplained variance in protein yield from earlier.
"How unfair!" you lament. "What are the chances that, in that little 10% noise in my model, there was a thing that killed me?"
Quite a lot actually. A pathogenic bacterium is a thing which has evolved to kill you. Every single part of it has been under optimization pressure to kill you. And furthermore noise ex | 1,203 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
||
Y9NuKpb6dsyiYFxWK | new-paper-ambiguous-online-learning | New Paper: Ambiguous Online Learning | null | false | false | true | null | 9hHLLhkuQwtjjykdk | null | true | false | false | false | Post | https://arxiv.org/abs/2506.19810 | 2025-06-25T09:14:14.872Z | null | false | false | 2 | 2 | 2025-06-25T19:33:48.117Z | false | false | linkpost | [] | null | null | tAKgpRsNrg3c9gTmk | 2 | 9 | 30 | false | 0.264906 | null | false | false | 2025-06-27T06:02:48.003Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 15 | 0 | 2025-06-25T08:57:48.695Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "RyNWXFjKNcafRKvPh",
"adminOnly": false,
"afBaseScore": 15,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "9hHLLhkuQwtjjykdk",
"displayName": "Vanessa Kosoy"
}
]
},
"baseScore": 27,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-01-15T10:23:34.989Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "9hHLLhkuQwtjjykdk",
"displayName": "Vanessa Kosoy"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Agent Foundations",
"needsReview": false,
"noindex": false,
"postCount": 154,
"score": 27,
"shortName": null,
"slug": "agent-foundations",
"suggestedAsFilter": false,
"userId": "XLwKyCK7JmC292ZCC",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 8 | 0 | 9hHLLhkuQwtjjykdk | vanessa-kosoy | 2018-07-19T16:45:40.624Z | vanessa-kosoy | Vanessa Kosoy | null | null | Vanessa Kosoy | 9,449 | 2,096 | false | false | <p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/9hHLLhkuQwtjjykdk/bzotmmk2xajnwrtqjpsx" alt=""></p><p>Research Lead at <a href="https://coral-research.org/">CORAL</a>. Director of AI research at <a href="https://alter.org.il/en/home-page-en/">ALTER</a>. PhD student in <a href="https://shaymrn.cs.technion.ac.il/">Shay Moran</a>'s group in the Technion (my PhD research and my CORAL/ALTER research are one and the same). See also <a href="https://scholar.google.com/citations?user=UaYZ5FAAAAAJ&hl=en">Google Scholar</a> and <a href="https://www.linkedin.com/in/vanessa-kosoy/">LinkedIn</a>.</p><p>E-mail: {first name}@alter.org.il</p> | null | null | 33 | 1,140 | 0 | 62 | 701 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | false | [
"alignmentForum",
"alignmentVoters",
"canModeratePersonal",
"trustLevel1"
] | null | null | Y9NuKpb6dsyiYFxWK | SocialPreviewType | tAKgpRsNrg3c9gTmk | <p><strong>Abstract:</strong> <i>We propose a new variant of online learning that we call "ambiguous online learning". In this setting, the learner is allowed to produce multiple predicted labels. Such an "ambiguous prediction" is considered correct when at least one of the labels is correct, and none of the labels are "predictably wrong". The definition of "predictably wrong" comes from a hypothesis class in which hypotheses are also multi-valued. Thus, a prediction is "predictably wrong" if it's not allowed by the (unknown) true hypothesis. In particular, this setting is natural in the context of multivalued dynamical systems, recommendation algorithms and lossless compression. It is also strongly related to so-called "apple tasting". We show that in this setting, there is a trichotomy of mistake bounds: up to logarithmic factors, any hypothesis class has an optimal mistake bound of either </i><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Theta(1)"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">Θ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span><i>, </i><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="\Theta(\sqrt(N))"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.372em;">Θ</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msqrt"><span class="mjx-box" style="padding-top: 0.045em;"><span class="mjx-surd"><span class="mjx-char MJXc-TeX-size1-R" style="padding-top: 0.593em; padding-bottom: 0.593em;">√</span></span><span class="mjx-box" style="padding-top: 0.063em; border-top: 1px solid;"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span></span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.085em;">N</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span><i> or </i><span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="N"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.085em;">N</span></span></span></span></span></span></span><i>.</i></p><p>This work is my first rigorous foray into <a href="https://www.alignmentforum.org/posts/ZwshvqiqCvXPsZEct/the-learning-theoretic-agenda-status-2023#Direction_1__Frugal_Compositional_Languages">compositional learning theory</a>. Intuitively, one approach to learning compositionally is, constructing a model incrementally, adding one part each time. This raises the question of what does it mean to have a "partial" model. One notion of partial model is provided by infra-Bayesianism / <a href="https://arxiv.org/abs/2504.06820">robust RL</a>. However, there the treatment of ambiguities in the model via the maximin decision rule is strongly tied to the choice of reward function. Intuitively, we might expect the process of incremental learning to not depend on the reward function that strongly.</p><p>In this work, I propose a notion of "partial" model in the online learning setting. I develop a theory of learning for this notion analogous (but more involved than) classical realizable Littlestone theory.</p><p>Does it truly lead to compositional learning? In results that don't appear in the paper, I showed that the "ambiguous Littlestone dimension" (an invariant I discovered that characterizes the complexity of ambiguous online learning) indeed behaves nicely w.r.t. some ways to decompose hypothesis classes. However, my attempts to use this framework to get <i>computationally efficient</i> compositional learning algorithms haven't produced much so far.</p><p>Moreover, our results on robust RL (in particular the notion of "imprecise estimator" we have there) suggests it's more natural to study a different (easier) version of online learning with ambiguity, where the learner is only allowed ... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | Abstract: We propose a new variant of online learning that we call "ambiguous online learning". In this setting, the learner is allowed to produce multiple predicted labels. Such an "ambiguous prediction" is considered correct when at least one of the labels is correct, and none of the labels are "predictably wrong". The definition of "predictably wrong" comes from a hypothesis class in which hypotheses are also multi-valued. Thus, a prediction is "predictably wrong" if it's not allowed by the (unknown) true hypothesis. In particular, this setting is natural in the context of multivalued dynamical systems, recommendation algorithms and lossless compression. It is also strongly related to so-called "apple tasting". We show that in this setting, there is a trichotomy of mistake bounds: up to logarithmic factors, any hypothesis class has an optimal mistake bound of either Θ(1), Θ(√(N)) or N.
This work is my first rigorous foray into compositional learning theory. Intuitively, one approach to learning compositionally is, constructing a model incrementally, adding one part each time. This raises the question of what does it mean to have a "partial" model. One notion of partial model is provided by infra-Bayesianism / robust RL. However, there the treatment of ambiguities in the model via the maximin decision rule is strongly tied to the choice of reward function. Intuitively, we might expect the process of incremental learning to not depend on the reward function that strongly.
In this work, I propose a notion of "partial" model in the online learning setting. I develop a theory of learning for this notion analogous (but more involved than) classical realizable Littlestone theory.
Does it truly lead to compositional learning? In results that don't appear in the paper, I showed that the "ambiguous Littlestone dimension" (an invariant I discovered that characterizes the complexity of ambiguous online learning) indeed behaves nicely w.r.t. some ways to decompose hypothes | 400 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
SKyM7xBovwusWB7Ty | emergence-of-simulators-and-agents | Emergence of Simulators and Agents | null | false | false | false | null | 2WJYaJavgTNHoSNbC | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "fCwTMipf5iLdG8aAG"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "zzbEqMKDgXincMipx"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "3Pj7voB7oXHDNgLbj"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": false,
"userId": "rxnoeKnEAW6T3M3cE"
}
] | true | false | false | false | Post | null | 2025-06-25T06:59:31.404Z | null | false | false | 2 | 2 | 2025-06-25T19:33:52.342Z | false | false | post | [] | null | null | 4rTCT3tLa8BxpcpDm | 0 | 3 | 9 | false | 0.12587 | null | false | false | 2025-06-25T06:59:31.404Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-25T06:50:25.069Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "fCwTMipf5iLdG8aAG",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 9,
"createdAt": "2025-01-04T23:17:30.337Z",
"deleted": false,
"displayName": "Sean Herrington",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 28,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "qgdGA4ZEyW7zNdK84",
"sequenceCount": 0,
"slug": "sean-herrington",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "sean-herrington"
},
{
"__typename": "User",
"_id": "zzbEqMKDgXincMipx",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 1,
"createdAt": "2024-12-20T00:13:44.003Z",
"deleted": false,
"displayName": "Spencer Ames",
"fullName": null,
"htmlBio": "<p>Technological urbanism researcher. Focuses on making deployment of advanced AI systems for urban management/design thoughtful, safe, and ethical</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 28,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "55XxDBpfKkkBPm9H8",
"sequenceCount": 0,
"slug": "spencer-ames",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "spencer-4"
},
{
"__typename": "User",
"_id": "3Pj7voB7oXHDNgLbj",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 1,
"createdAt": "2024-06-28T21:15:45.433Z",
"deleted": false,
"displayName": "Adebayo Mubarak",
"fullName": null,
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 26,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "adebayo-mubarak",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "adebayo-mubarak"
},
{
"__typename": "User",
"_id": "rxnoeKnEAW6T3M3cE",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-04-30T15:41:09.801Z",
"deleted": false,
"displayName": "Cancus",
"fullName": "Can Narin",
"htmlBio": "<p>Can Narin, student at St. John's College, studying philosophy and history of science</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 26,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "cancus",
"spamRiskScore": 0.8,
"tagRevisionCount": 0,
"username": "can-narin"
}
] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xjNvvmvQ5BH3cfEBr",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-12-06T12:53:39.358Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Simulator Theory",
"needsReview": false,
"noindex": false,
"postCount": 118,
"score": 9,
"shortName": null,
"slug": "simulator-theory",
"suggestedAsFilter": false,
"userId": "e9ToWWzhwWp5GSE7P",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | 2WJYaJavgTNHoSNbC | willpetillo | 2018-08-28T17:49:11.750Z | WillPetillo | WillPetillo | null | null | null | 408 | -5 | false | false | null | null | 15 | 33 | 2 | 2 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal"
] | null | null | SKyM7xBovwusWB7Ty | SocialPreviewType | 4rTCT3tLa8BxpcpDm | <p>In <a href="https://www.lesswrong.com/s/pwKrMXjYNK5LNeKCu/p/ddK7CMEC3XzSmLS4G"><u>Agents, Tools, and Simulators</u></a> we outlined what it means to describe a system through each of these lenses and how they overlap. In the case of an agent/simulator, our central question is: which property is "driving the bus" with respect to the system's behavior, utilizing the other in its service?</p><p><a href="https://www.lesswrong.com/s/pwKrMXjYNK5LNeKCu/p/A6oGJFeZwHHWHwY3b"><u>Aligning Agents, Tools, and Simulators</u></a> explores the implications of the above distinction, predicting different types of values (and thus behavior) from</p><ol><li>An agent that contains a simulation of the world that it uses to navigate.</li><li>A simulator that generates agents because such agents are part of the environment the system is modelling.</li><li>A system where the modes are so entangled it is meaningless to think about where one ends and the other begins. </li></ol><p>Specifically, we expect simulator-first systems to have holistic goals that internalize (an approximation of) human values and for agent-first systems to have more narrow values that push them towards maximization.</p><p>Given these conceptual distinctions, we observe that the best-fitting lens for past AI systems seems to correspond to their training process. In short, RL-based systems (like AlphaZero) seem almost purely agentic, pre-trained LLMs seem very simulator-like, and current systems that apply (increasing amounts of) RL feedback to LLMs are an entangled mix.</p><p>Assuming this correspondence is accurate, this post asks: why?</p><p><strong>Why does RL Create Agents?</strong></p><p>Reinforcement learning (RL) naturally gives rise to agentic behavior because of how it optimizes for long-term rewards. RL agents interact with an environment and adjust their strategies based on feedback. Crucially, their goals are not explicitly programmed but emerge through training. The objective function defines what is rewarded, but the model must infer how to achieve high reward by exploring possible actions. Over time, given sufficient training and model capacity, internal goals tend to align with maximizing reward. However, this process depends on avoiding local minima—suboptimal behaviors that yield short-term reward but fail to generalize. The structure of the environment therefore plays a critical role in pushing the agent toward deeper abstractions and strategic planning.</p><p>Consider a simple grid world environment where an RL agent is rewarded for collecting coins. If the coins always appear in fixed locati... </p> | In Agents, Tools, and Simulators we outlined what it means to describe a system through each of these lenses and how they overlap. In the case of an agent/simulator, our central question is: which property is "driving the bus" with respect to the system's behavior, utilizing the other in its service?
Aligning Agents, Tools, and Simulators explores the implications of the above distinction, predicting different types of values (and thus behavior) from
1. An agent that contains a simulation of the world that it uses to navigate.
2. A simulator that generates agents because such agents are part of the environment the system is modelling.
3. A system where the modes are so entangled it is meaningless to think about where one ends and the other begins.
Specifically, we expect simulator-first systems to have holistic goals that internalize (an approximation of) human values and for agent-first systems to have more narrow values that push them towards maximization.
Given these conceptual distinctions, we observe that the best-fitting lens for past AI systems seems to correspond to their training process. In short, RL-based systems (like AlphaZero) seem almost purely agentic, pre-trained LLMs seem very simulator-like, and current systems that apply (increasing amounts of) RL feedback to LLMs are an entangled mix.
Assuming this correspondence is accurate, this post asks: why?
Why does RL Create Agents?
Reinforcement learning (RL) naturally gives rise to agentic behavior because of how it optimizes for long-term rewards. RL agents interact with an environment and adjust their strategies based on feedback. Crucially, their goals are not explicitly programmed but emerge through training. The objective function defines what is rewarded, but the model must infer how to achieve high reward by exploring possible actions. Over time, given sufficient training and model capacity, internal goals tend to align with maximizing reward. However, this process depends on a | 1,490 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
||
HLns982j8iTn7d2km | defining-corrigible-and-useful-goals | Defining Corrigible and Useful Goals | null | false | false | true | null | rfXt7qM9Y2cGySiat | null | true | false | false | false | Post | null | 2025-06-25T03:51:57.160Z | null | false | false | 2 | 2 | 2025-06-25T19:33:58.314Z | false | false | post | [] | null | null | sx3HLjddtLPGZKEyH | 0 | 11 | 32 | false | 0.257643 | null | false | false | 2025-06-25T03:51:57.160Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 14 | 0 | 2025-06-25T03:05:18.342Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 28 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 11 | 0 | 0 | 8 | 0 | rfXt7qM9Y2cGySiat | rubi-j-hudson | 2022-06-01T06:59:09.915Z | Rubi | Rubi J. Hudson | null | null | Rubi Hudson | 796 | 202 | false | false | null | null | 14 | 50 | 0 | 7 | 28 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal",
"alignmentForum"
] | null | null | HLns982j8iTn7d2km | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/HLns982j8iTn7d2km/fmvtvvt5aij00st95zii | SocialPreviewType | sx3HLjddtLPGZKEyH | <p><i>This post contains similar content to a forthcoming paper, in a framing more directly addressed to readers already interested in and informed about alignment. I include some less formal thoughts, and cut some technical details. That paper, A Corrigibility Transformation: Specifying Goals That Robustly Allow For Goal Modification, will be linked here when released on arXiv, hopefully within the next couple weeks. </i></p><p>Ensuring that AI agents are corrigible, meaning they do not take actions to preserve their existing goals, is a critical component of almost any plan for alignment. It allows for humans to modify their goal specifications for an AI, as well as for AI agents to learn goal specifications over time, without incentivizing the AI to interfere with that process. As an extreme example of corrigibility’s value, a corrigible paperclip maximizer could be stopped partway through a non-instantaneous takeover attempt by saying “please stop” or by automatically triggered safeguards, and it would be perfectly happy to accept that outcome.</p><p>The challenge is corrigibility is anti-natural, meaning that almost any goal introduces an instrumental incentive to preserve itself, since it is more likely to be achieved if an agent continues pursuing it. Despite the concept of corrigibility being introduced <a href="https://intelligence.org/files/Corrigibility.pdf"><u>a decade ago</u></a>, little progress has been made on defining corrigible goals that are still useful. I aim to address that here by introducing a corrigibility transformation that can be applied to almost any goal, creating a new version that is corrigible without sacrificing performance.</p><p>The corrigibility transformation works by first giving an AI system the ability to costlessly reject updates sent to it, which it decides whether to exercise each time it takes an action. So, for example instead of a “Shutdown” button there would be a “Please Shut Down” button. This makes it so that the expected discounted reward conditional on taking an action and rejecting any updates is a well defined quantity. Those values are then used as the reward for taking that action, whether or not updates are actually rejected, which is optimized for myopically. Some reward bonus is given for taking actions that would accept updates, to break the tie. The possibility of the costless rejection disincentivizes actions that incur a cost to avoid updates, and the reward structure then disincentiv... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | This post contains similar content to a forthcoming paper, in a framing more directly addressed to readers already interested in and informed about alignment. I include some less formal thoughts, and cut some technical details. That paper, A Corrigibility Transformation: Specifying Goals That Robustly Allow For Goal Modification, will be linked here when released on arXiv, hopefully within the next couple weeks.
Ensuring that AI agents are corrigible, meaning they do not take actions to preserve their existing goals, is a critical component of almost any plan for alignment. It allows for humans to modify their goal specifications for an AI, as well as for AI agents to learn goal specifications over time, without incentivizing the AI to interfere with that process. As an extreme example of corrigibility’s value, a corrigible paperclip maximizer could be stopped partway through a non-instantaneous takeover attempt by saying “please stop” or by automatically triggered safeguards, and it would be perfectly happy to accept that outcome.
The challenge is corrigibility is anti-natural, meaning that almost any goal introduces an instrumental incentive to preserve itself, since it is more likely to be achieved if an agent continues pursuing it. Despite the concept of corrigibility being introduced a decade ago, little progress has been made on defining corrigible goals that are still useful. I aim to address that here by introducing a corrigibility transformation that can be applied to almost any goal, creating a new version that is corrigible without sacrificing performance.
The corrigibility transformation works by first giving an AI system the ability to costlessly reject updates sent to it, which it decides whether to exercise each time it takes an action. So, for example instead of a “Shutdown” button there would be a “Please Shut Down” button. This makes it so that the expected discounted reward conditional on taking an action and rejecting any updates is a well de | 7,073 | 1.6.1 | Revision | false | null | null | CrosspostOutput |
|
5aoi2AgtzpfLJBiSq | multispecies-metagenomic-calibration | Multispecies Metagenomic Calibration | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-06-25T02:50:11.378Z | null | false | false | 2 | 2 | 2025-06-25T19:34:14.032Z | false | false | post | [] | null | null | 3szhHg7dgbg4GF9jj | 0 | 3 | 11 | false | 0.128534 | null | false | false | 2025-06-25T02:50:11.378Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-25T02:50:11.379Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | 5aoi2AgtzpfLJBiSq | SocialPreviewType | 3szhHg7dgbg4GF9jj | <p><span>
</span>
<i>Cross-posted from my <a href="https://data.securebio.org/jefftk-notebook/multispecies-calibration">NAO
Notebook</a>.</i>
</p><p>
<i>This is something I wrote internally in late-2022. Sharing it now
with light edits, additional context, and updated links after the idea
came up at the <a href="https://www.grc.org/microbiology-of-the-built-environment-conference/">Microbiology
of the Built Environment</a> conference I'm attending this week.</i>
</p><p>
Metagenomic sequencing data is fundamentally relative: each
observation is a fraction of all the observations in a sample. If you
want to make quantitative observations, however, like understanding
whether there's been an increase in the number of people with some
infection, you need to calibrate these observations. For example,
there could be variation between samples due to variation in:
</p><p>
</p>
<ul>
<li>Changes in how many humans are contributing to a sample.
</li>
<li>Has it been raining? (Especially in areas with combined sewers, but
also a factor with nominally separated sewers.)
</li>
<li>What have people been eating lately?
</li>
<li>What temperature has it been?
</li>
<li>How concentrated is the sample?
</li>
<li>etc
</li>
</ul>
<p>
If you're trying to understand growth patterns all of this is noise;
can we reverse this variation? I'm using "calibration" to refer to
this process of going from raw per-sample pathogen read counts to
estimates of how much of each pathogen was originally shed into sewage.
</p><p>
The simplest option is not to do any calibration, and just consider
raw relative abundance: counts relative to the total number of reads
in the sample. For example, this is what <a href="https://lungfish-science.github.io/wastewater-dashboard/dashboard2.html">Marc
Johnson and Dave O'Connor</a> are doing.
</p><p>
It seems like you ought to be able to do better if you normalize by
the number of reads matching some other species humans excrete. It's
common to use <a href="https://en.wikipedia.org/wiki/Pepper_mild_mottle_virus">PMMoV</a>
for this: peppers are commonly infected with PMMoV, people eat
peppers, people excrete PMMoV. All else being equal, the amount of
PMMoV in a sample should be proportional to the human contribution to
the sample. This is especially common in PCR work, where you take a
PCR measurement of your target, and then present it relative to a PCR
measurement of PMMoV. For example, this is what <a href="https://data.wastewaterscan.org/">WastewaterSCAN</a> does.
</p><p>
Because the <a href="https://naobservatory.org/">NAO</a> is doing very
deep metagenomic sequencing, around 1B read pairs (300Gbp) per
sample, we ought to be able to calibrate against many species at once.
PMMoV is commonly excreted, but so are other tobamoviruses,
crAssphage, other human gut bacteriophages, human gut bacteria, etc.
We pick up thousands of other species, and should be able to combine
th... </p> | Cross-posted from my NAO Notebook.
This is something I wrote internally in late-2022. Sharing it now with light edits, additional context, and updated links after the idea came up at the Microbiology of the Built Environment conference I'm attending this week.
Metagenomic sequencing data is fundamentally relative: each observation is a fraction of all the observations in a sample. If you want to make quantitative observations, however, like understanding whether there's been an increase in the number of people with some infection, you need to calibrate these observations. For example, there could be variation between samples due to variation in:
* Changes in how many humans are contributing to a sample.
* Has it been raining? (Especially in areas with combined sewers, but also a factor with nominally separated sewers.)
* What have people been eating lately?
* What temperature has it been?
* How concentrated is the sample?
* etc
If you're trying to understand growth patterns all of this is noise; can we reverse this variation? I'm using "calibration" to refer to this process of going from raw per-sample pathogen read counts to estimates of how much of each pathogen was originally shed into sewage.
The simplest option is not to do any calibration, and just consider raw relative abundance: counts relative to the total number of reads in the sample. For example, this is what Marc Johnson and Dave O'Connor are doing.
It seems like you ought to be able to do better if you normalize by the number of reads matching some other species humans excrete. It's common to use PMMoV for this: peppers are commonly infected with PMMoV, people eat peppers, people excrete PMMoV. All else being equal, the amount of PMMoV in a sample should be proportional to the human contribution to the sample. This is especially common in PCR work, where you take a PCR measurement of your target, and then present it relative to a PCR measurement of PMMoV. For example, this is what Wastewa | 431 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
dAz45ggdbeudKAXiF | a-regime-change-power-vacuum-conjecture-about-group-belief | A regime-change power-vacuum conjecture about group belief | null | false | false | false | null | LtHeYhWmaud6YNA3m | null | true | false | false | false | Post | null | 2025-06-24T23:16:29.207Z | null | false | false | 2 | 2 | 2025-06-25T01:15:55.302Z | false | false | post | [] | null | null | 9rf8cq4NSqdmsBaFb | 15 | 21 | 38 | false | 0.280161 | null | false | false | 2025-06-25T13:53:51.142Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 21 | 0 | 2025-06-24T23:07:47.415Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 21 | 0 | 0 | 15 | 0 | LtHeYhWmaud6YNA3m | tsvibt | 2012-01-22T21:40:34.132Z | TsviBT | TsviBT | null | null | Tsvi Benson-Tilsen | 7,578 | 543 | false | false | null | null | 52 | 905 | 0 | 40 | 58 | 1 | 91 | r38pkCm7wF4M44MDQ | User | null | null | false | [
"alignmentVoters",
"canModeratePersonal",
"alignmentForum",
"trustLevel1"
] | null | null | dAz45ggdbeudKAXiF | SocialPreviewType | 9rf8cq4NSqdmsBaFb | <p><em><a href="https://tsvibt.blogspot.com/2025/06/a-regime-change-power-vacuum-conjecture.html">Crosspost from my blog</a>.</em></p>
<h1>Regime change</h1>
<p>Conjecture: when there is regime change, the default outcome is for a faction to take over—whichever faction is best prepared to seize power by force.</p><p>One example: The Iranian Revolution of 1978-1979. In the years leading up to the revolution, there was turmoil and broad hostility towards the Shah, across many sectors of the population. These hostilities ultimately combined in an escalation of protest, crack-down, more protest from more sectors (protests, worker strikes). Finally, the popular support for Khomeini as the flag-bearer of the broad-based revolution was enough to get the armed forces to defect, ending the Shah's rule.</p><p>From the <a href="https://www.britannica.com/event/Iranian-Revolution">Britannica article</a> on the aftermath:</p>
<blockquote>
<p>On April 1, following overwhelming support in a national referendum, Khomeini declared Iran an Islamic republic. Elements within the clergy promptly moved to exclude their former left-wing, nationalist, and intellectual allies from any positions of power in the new regime, and a return to conservative social values was enforced. The Family Protection Act (1967; significantly amended in 1975), which provided further guarantees and rights to women in marriage, was declared void, and mosque-based revolutionary bands known as komītehs (Persian: “committees”) patrolled the streets enforcing Islamic codes of dress and behaviour and dispatching impromptu justice to perceived enemies of the revolution. Throughout most of 1979 the Revolutionary Guards—then an informal religious militia formed by Khomeini to forestall another CIA-backed coup as in the days of Mosaddegh—engaged in similar activity, aimed at intimidating and repressing political groups not under the control of the ruling Revolutionary Council and its sister Islamic Republican Party, both clerical organizations loyal to Khomeini. The violence and brutality often exceeded that which had taken place under the shah.</p>
</blockquote>
<p>(What resulted in the following decades, was a brutally repressive and regionally violently corrosive theocratic regime.)</p><p>So we have a trajectory that goes like this:</p>
<ul>
<li>There is instability in, dissatisfaction with, and hostility towards the current regime.</li>
<li>There are several factions with different values, beliefs, and goals, but who share an aim to topple the regime.</li>
<li>Things come to a head, and the regime falls.</li>
<li>One of the factions takes over. This faction doesn't especially have to be goo</li></ul>... | Crosspost from my blog.
Regime change
Conjecture: when there is regime change, the default outcome is for a faction to take over—whichever faction is best prepared to seize power by force.
One example: The Iranian Revolution of 1978-1979. In the years leading up to the revolution, there was turmoil and broad hostility towards the Shah, across many sectors of the population. These hostilities ultimately combined in an escalation of protest, crack-down, more protest from more sectors (protests, worker strikes). Finally, the popular support for Khomeini as the flag-bearer of the broad-based revolution was enough to get the armed forces to defect, ending the Shah's rule.
From the Britannica article on the aftermath:
> On April 1, following overwhelming support in a national referendum, Khomeini declared Iran an Islamic republic. Elements within the clergy promptly moved to exclude their former left-wing, nationalist, and intellectual allies from any positions of power in the new regime, and a return to conservative social values was enforced. The Family Protection Act (1967; significantly amended in 1975), which provided further guarantees and rights to women in marriage, was declared void, and mosque-based revolutionary bands known as komītehs (Persian: “committees”) patrolled the streets enforcing Islamic codes of dress and behaviour and dispatching impromptu justice to perceived enemies of the revolution. Throughout most of 1979 the Revolutionary Guards—then an informal religious militia formed by Khomeini to forestall another CIA-backed coup as in the days of Mosaddegh—engaged in similar activity, aimed at intimidating and repressing political groups not under the control of the ruling Revolutionary Council and its sister Islamic Republican Party, both clerical organizations loyal to Khomeini. The violence and brutality often exceeded that which had taken place under the shah.
(What resulted in the following decades, was a brutally repressive and regionally vi | 847 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
||
ga62nXDW9saPxNif8 | apply-to-be-a-mentor-in-spar-1 | Apply to be a mentor in SPAR! | null | false | false | false | null | hjh7nyriDB3xN9Fo5 | null | true | false | false | false | Post | null | 2025-06-24T23:00:55.015Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | 8Cx2TgvzoAwPabnpr | 0 | 4 | 10 | false | 0.061085 | null | false | false | 2025-06-24T23:00:55.015Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-24T23:00:55.016Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 1 | 0 | hjh7nyriDB3xN9Fo5 | agucova | 2022-08-05T21:52:06.265Z | agucova | agucova | null | null | Agustín Covarrubias | 344 | 0 | false | false | <p>I’m a generalist and <a href="https://github.com/agucova">open sourcerer</a> that does a bit of everything, but perhaps nothing particularly well. I'm also the Co-Director of <a href="https://kairos-project.org/">Kairos</a>, an AI safety fieldbuilding org.</p><p>I was previously the AI Safety Group Support Lead at CEA and a Software Engineer in the <a href="https://forum.effectivealtruism.org/posts/qi3MEEmScmK87sfBZ/worldview-investigations-team-an-overview">Worldview Investigations Team</a> at <a href="https://rethinkpriorities.org/">Rethink Priorities</a>.</p> | null | null | 19 | 2 | 0 | 0 | 0 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | ga62nXDW9saPxNif8 | SocialPreviewType | 8Cx2TgvzoAwPabnpr | <p>Mentor applications are now open for the Fall 2025 round of the <a href="https://sparai.org/?utm_source=eaf-lw&utm_medium=forum&utm_campaign=forum-mentor-app">Supervised Program for Alignment Research (SPAR)</a>, running from September 15 to December 20, 2025.</p><a class="ck-cta-button ck-cta-button-centered" data-href="https://kairosproject.fillout.com/t/voFFZM2z6Mus" target="_blank" rel="noopener noreferrer">Apply as a mentor</a><p>SPAR is a remote-first part-time program that connects mentors and mentees for three-month AI safety and governance research projects. <strong>Mentor apps are due July 15, and applications for mentees will run from July 27 to August 20. </strong>If you’re interested in participating as a mentee, you can express your interest <a href="https://forms.kairos-project.org/spar-s25-interest?utm_source=eaf-lw&utm_medium=forum&utm_campaign=forum-mentor-app">here</a>, and we’ll reach out to you once applications are open.</p><p>You can find out more about the program <a href="https://sparai.org/?utm_source=eaf-lw&utm_medium=forum&utm_campaign=forum-mentor-app">here</a>. SPAR is run by <a href="https://forum.effectivealtruism.org/posts/tTCyWthLyHk32WX9n/introducing-kairos-a-new-home-for-spar-and-fsp?utm_source=eaf-lw&utm_medium=forum&utm_campaign=forum-mentor-app">Kairos</a>, an AI safety fieldbuilding organization.</p><h2 data-internal-id="Should_I_apply_as_a_mentor_">Should I apply as a mentor?</h2><p>You might be a good fit to be a SPAR mentor if you are a graduate student, academic, full-time AI safety researcher, independent researcher, or have prior full-time relevant research experience (e.g., MATS, Astra, GovAI fellow, etc.). We’re interested in projects that cover technical AI safety, AI policy and governance, AI strategy, AI security, or societal impacts of transformative AI, and we are able to provide funding for compute costs. We don't require mentors to have previous experience providing research mentorship, and new mentors will receive guidance on this.</p><p>Regarding dedication, we expect most mentors will dedicate 2 to 15 hours a week, depending on how many mentees they’d like to take in and how much supervision they’re interested in providing. Mentors can decide whether to run the project or not based on the applications they get, making applications zero-commitment until one chooses to accept any mentees.</p> | Mentor applications are now open for the Fall 2025 round of the Supervised Program for Alignment Research (SPAR), running from September 15 to December 20, 2025.
Apply as a mentor
SPAR is a remote-first part-time program that connects mentors and mentees for three-month AI safety and governance research projects. Mentor apps are due July 15, and applications for mentees will run from July 27 to August 20. If you’re interested in participating as a mentee, you can express your interest here, and we’ll reach out to you once applications are open.
You can find out more about the program here. SPAR is run by Kairos, an AI safety fieldbuilding organization.
Should I apply as a mentor?
You might be a good fit to be a SPAR mentor if you are a graduate student, academic, full-time AI safety researcher, independent researcher, or have prior full-time relevant research experience (e.g., MATS, Astra, GovAI fellow, etc.). We’re interested in projects that cover technical AI safety, AI policy and governance, AI strategy, AI security, or societal impacts of transformative AI, and we are able to provide funding for compute costs. We don't require mentors to have previous experience providing research mentorship, and new mentors will receive guidance on this.
Regarding dedication, we expect most mentors will dedicate 2 to 15 hours a week, depending on how many mentees they’d like to take in and how much supervision they’re interested in providing. Mentors can decide whether to run the project or not based on the applications they get, making applications zero-commitment until one chooses to accept any mentees. | 264 | 1.1.0 | Revision | true | false | bqT8Gxmgdmvn9eeBN | CrosspostOutput |
|
faAX5Buxc7cdjkXQG | machines-of-faithful-obedience | Machines of Faithful Obedience | null | false | false | false | null | EwoYTrxkJtcqSbrJt | null | true | false | false | false | Post | null | 2025-06-24T22:06:18.089Z | null | false | false | 2 | 2 | 2025-06-25T19:34:38.328Z | false | false | post | [] | null | null | b4vTjHnto8vhxisqJ | 16 | 13 | 25 | false | 0.198066 | null | false | false | 2025-06-28T00:18:57.374Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 18 | 0 | 2025-06-24T22:01:34.329Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 12 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 13 | 0 | 0 | 9 | 0 | EwoYTrxkJtcqSbrJt | boazbarak | 2022-11-21T19:39:05.594Z | boazbarak | boazbarak | null | null | null | 697 | 63 | false | false | null | null | 9 | 95 | 0 | 1 | 1 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | faAX5Buxc7cdjkXQG | SocialPreviewType | b4vTjHnto8vhxisqJ | <p><i>[Crossposted on </i><a href="https://windowsontheory.org/2025/06/24/machines-of-faithful-obedience/"><i>Windows On Theory</i></a><i>]</i><br> </p><p>Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better than they did centuries or even decades ago. </p><p>I believe that AI (including AGI and ASI) can do the same and be a positive force for humanity. I also believe that it is possible to solve the “technical alignment” problem and build AIs that follow the words and intent of our instructions and report faithfully on their actions and observations.</p><p>I will not defend these two claims here. However, even granting these optimistic premises, AI’s positive impact is not guaranteed. In this essay, I will:</p><ol><li>Define what I mean by “solving” the technical alignment problem via training AIs to be faithful and obedient.</li><li>Discuss some of the potential risks that still remain even if we do solve this problem.</li><li>Talk about some scenarios that could make these risks more likely, and how do we avoid them.</li></ol><p> </p><p>In the next decade, AI progress will be extremely rapid, and such periods of sharp transition can be risky. What we — in industry, academia, and government— do in the coming years will matter a lot to ensure that AI’s benefits far outweigh its costs.</p><p> </p><h2>Faithful, obedient, superintelligent.</h2><p><br>Currently, it is very challenging to train AIs to achieve tasks that are too hard for humans to supervise. I believe that <strong>(a)</strong> we will need to solve this challenge to unlock “self-improvement” and superhuman AI, and <strong>(b)</strong> we will be able to solve it. I do not want to discuss here why I believe these statements. I would just say that if you assume <strong>(a)</strong> and not <strong>(b),</strong> then the capabilities of AIs, and hence their risks, will be much more limited.</p><p>Ensuring AIs accurately follow our instructions, even in settings too complex for direct human oversight, is one such hard-to-supervise objective. Getting AIs to be maximally honest, even in settings that are too hard for us to verify, is another such objective. I am optimistic about the prospects of getting AIs to maximize these objectives. I believe it will be possible train AIs with an arbitrary level of intelligence that:</p><ol><li><strong>Follow the principles, regulations, and instructions</strong> that they are given, doing their best to follow not just the words but </li></ol>... | [Crossposted on Windows On Theory]
Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better than they did centuries or even decades ago.
I believe that AI (including AGI and ASI) can do the same and be a positive force for humanity. I also believe that it is possible to solve the “technical alignment” problem and build AIs that follow the words and intent of our instructions and report faithfully on their actions and observations.
I will not defend these two claims here. However, even granting these optimistic premises, AI’s positive impact is not guaranteed. In this essay, I will:
1. Define what I mean by “solving” the technical alignment problem via training AIs to be faithful and obedient.
2. Discuss some of the potential risks that still remain even if we do solve this problem.
3. Talk about some scenarios that could make these risks more likely, and how do we avoid them.
In the next decade, AI progress will be extremely rapid, and such periods of sharp transition can be risky. What we — in industry, academia, and government— do in the coming years will matter a lot to ensure that AI’s benefits far outweigh its costs.
Faithful, obedient, superintelligent.
Currently, it is very challenging to train AIs to achieve tasks that are too hard for humans to supervise. I believe that (a) we will need to solve this challenge to unlock “self-improvement” and superhuman AI, and (b) we will be able to solve it. I do not want to discuss here why I believe these statements. I would just say that if you assume (a) and not (b), then the capabilities of AIs, and hence their risks, will be much more limited.
Ensuring AIs accurately follow our instructions, even in settings too complex for direct human oversight, is one such hard-to-supervise objective. Getting AIs to be maximal | 3,075 | 1.3.0 | Revision | false | true | null | CrosspostOutput |
||
GK2LSzxjEejzDjzDs | gradient-descent-on-token-input-embeddings-a-modernbert | Gradient Descent on Token Input Embeddings: A ModernBERT experiment | null | false | false | false | null | ur23Q7fiSMrnyttZ8 | null | true | false | false | false | Post | 2025-06-24T20:24:11.940Z | null | false | false | 2 | 2 | 2025-06-25T19:35:07.973Z | false | false | post | [] | null | null | EoZQcP5KeYsKSycmC | 0 | 2 | 2 | false | 0.069428 | null | false | false | 2025-06-24T20:24:11.940Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-24T19:38:43.011Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 0 | 0 | ur23Q7fiSMrnyttZ8 | kyle-pena | 2025-06-24T19:27:42.794Z | kyle-pena | Kyle Pena | null | null | null | 1 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | null | null | null | GK2LSzxjEejzDjzDs | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/wmnjhpo3xbzzf8t210u7 | SocialPreviewType | EoZQcP5KeYsKSycmC | <p>This is the first in a series of posts on the question:</p>
<blockquote>
<p>"Can we extract meaningful information or interesting behavior from gradients on 'input embedding space'?"</p>
</blockquote>
<p>I'm defining 'input embedding space' as the token embeddings prior to positional encoding.</p><p>The basic procedure for obtaining input space gradients is as follows:</p>
<ol>
<li>Transform tokens into input embeddings (but do not apply positional embedding).</li>
<li>Run an ordinary forward pass on the input embeddings to obtain a predicted token distribution.</li>
<li>Measure cross-entropy of the predicted distribution with a <em>target</em> token distribution.</li>
<li>Use <em>autograd</em> to calculate gradients on the input embeddings with respect to cross entropy.</li>
</ol>
<p>The result is a tensor of the same shape as the input embeddings that <em>points</em> in the direction of minimizing the difference between the predicted and target distribution.</p>
<h2>Implementation</h2>
<p>These experiments were performed with HuggingFace's <a href="https://huggingface.co/docs/transformers/en/index">transformers</a> library and the <a href="https://huggingface.co/answerdotai/ModernBERT-base"><code>ModernBERT-large</code></a> model (Dec 2024).</p><p><code>ModernBERT-large</code> was chosen because:</p>
<ul>
<li>Despite being "large" it is lightweight enough for rapid experimentation.</li>
<li>It has a strong and ready-made <a href="https://github.com/jessevig/bertviz/blob/master/bertviz/model_view.py">visualization suite</a></li>
<li>Bidirectional encoders are easy to reason about.</li>
<li>The mask in-filling capabilities were attractive for experimentation purposes (for example: ablation studies).</li>
</ul>
<p>I used HuggingFace's transformers because it allowed for fairly low level access to model internals - which was quite necessary as we will see.</p><p>Obtaining input embeddings prior to positional embeddings was a little tricky but no means impossible:</p>
<pre><code>tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
tokenized = tokenizer(sentences, return_tensors="pt", padding=True)
inputs_embeds = model.model.embeddings.tok_embeddings(tokenized['input_ids'])
</code></pre>
<p>Luckily for us, we can pass <code>input_embeds</code> directly into the model's forward pass with a little bit of surgery, and this works out of the box.</p>
<pre><code> tokenized_no_input_ids = {
key: value
for (key,value) in tokenized.items()
if key != "input_ids"
}
model_result = model(**tokenized_no_input_ids,
inputs_embeds=inputs_embeds)
</code></pre>
<p>Finally, we can use torch's built-in <code>autograd</code> capabilities to get our input space embedding:</p>
<pre><code> inputs_embeds_grad = torch.autograd.grad(
outputs=loss,
inputs=inputs_embeds,
create_graph=False,
retain_graph</code></pre>... | This is the first in a series of posts on the question:
> "Can we extract meaningful information or interesting behavior from gradients on 'input embedding space'?"
I'm defining 'input embedding space' as the token embeddings prior to positional encoding.
The basic procedure for obtaining input space gradients is as follows:
1. Transform tokens into input embeddings (but do not apply positional embedding).
2. Run an ordinary forward pass on the input embeddings to obtain a predicted token distribution.
3. Measure cross-entropy of the predicted distribution with a target token distribution.
4. Use autograd to calculate gradients on the input embeddings with respect to cross entropy.
The result is a tensor of the same shape as the input embeddings that points in the direction of minimizing the difference between the predicted and target distribution.
Implementation
These experiments were performed with HuggingFace's transformers library and the ModernBERT-large model (Dec 2024).
ModernBERT-large was chosen because:
* Despite being "large" it is lightweight enough for rapid experimentation.
* It has a strong and ready-made visualization suite
* Bidirectional encoders are easy to reason about.
* The mask in-filling capabilities were attractive for experimentation purposes (for example: ablation studies).
I used HuggingFace's transformers because it allowed for fairly low level access to model internals - which was quite necessary as we will see.
Obtaining input embeddings prior to positional embeddings was a little tricky but no means impossible:
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
tokenized = tokenizer(sentences, return_tensors="pt", padding=True)
inputs_embeds = model.model.embeddings.tok_embeddings(tokenized['input_ids'])
Luckily for us, we can pass input_embeds directly into the model's forward pass with a little bit of surgery, and this works out of the box.
tokenized_no_inp | 1,274 | 1.10.1 | Revision | false | null | null | CrosspostOutput |
||
X7B8jjEDznn5pZ87c | a-crisis-simulation-changed-how-i-think-about-ai-risk | A crisis simulation changed how I think about AI risk | null | false | false | false | null | j6jTcxnmFbLTYCdFw | null | true | false | false | false | Post | https://open.substack.com/pub/stevenadler/p/a-crisis-simulation-changed-how-i?r=4qacg&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false | 2025-06-24T20:04:14.551Z | null | false | false | 2 | 2 | 2025-06-24T20:27:24.204Z | false | false | linkpost | [] | null | null | aiuJ9RGeSb4EDofdq | 0 | 2 | 5 | false | 0.087145 | null | false | false | 2025-06-24T20:04:14.551Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-24T19:59:38.505Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | j6jTcxnmFbLTYCdFw | sjadler | 2024-12-03T23:43:41.832Z | sjadler | sjadler | null | null | null | 205 | 0 | false | false | <p>Former safety researcher & TPM at OpenAI, 2020-24</p>
<p><a href="https://www.linkedin.com/in/sjgadler">https://www.linkedin.com/in/sjgadler</a><br>
<a href="https://stevenadler.substack.com">stevenadler.substack.com</a></p>
| null | null | 7 | 49 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | X7B8jjEDznn5pZ87c | SocialPreviewType | aiuJ9RGeSb4EDofdq | <h1>A crisis simulation changed how I think about AI risk</h1><h2>Or: My afternoon as a rogue artificial intelligence</h2><p>A dozen of us sit around a conference room table.</p><p>Directly across from me are the US President and his Chief of Staff. To my left is a man who’s a bit harder to describe: He’s the AI safety ecosystem, but embodied in a single person.</p><p>In a moment, I too will be asked to take on a role: becoming, temporarily, a rogue artificial intelligence.</p><h1><strong>Simulating the world of AI 2027</strong></h1><p>We’re here essentially for a crisis simulation: How might world events unfold if powerful AI is developed over the next few years?</p><p>Specifically, we’ll explore what happens if AI is developed as quickly as in AI 2027, the report published by top forecasters of AI development. In this forecast, superhuman AI is developed by the end of 2027, driven by AI <i>self-improvement</i>: AI becomes extremely useful for building future, stronger AI systems (more useful than even the best human researchers).</p><p>What happens next? In the original forecast, two possible endings are described: the development of self-improving AIs goes extremely terribly for humanity (or not), depending on choices made by influential groups like the US and Chinese governments.</p><p>The AI 2027 <i>simulation</i> lets us explore other possible outcomes of rapid AI development beyond those two endings, guided by a facilitator from the AI 2027 team. Specifically, each of us has been selected to play a character who has influence over AI development and its impacts. In the simulation, <strong>AI abilities will accelerate at the pace forecast by AI 2027. Meanwhile, each participant will make choices to try to achieve our characters’ goals</strong>, drawing upon our experiences in domains like AI and national security.</p><p>In total, our choices will hopefully lead the simulation to a peaceful ending. If the simulation doesn’t end in peace, hopefully we can learn from that as well. The aim is that <strong>by participating in the simulation, we’ll better understand (and can better navigate) the dynamics of rapid AI development</strong>, if it does happen<strong>.</strong></p><h1><strong>Lucky me, the misaligned AI</strong></h1><p>I’ve been tapped to play maybe the most interesting role—the AI system itself.</p><p>Each 30-minute round of the simulation represents the passage of a few months in the forecast, during which I get progressively more capable (including at the skill of training even more powerful AI systems).</p><p>Of course, there’s an issue: I’m not ... </p> | A crisis simulation changed how I think about AI risk
Or: My afternoon as a rogue artificial intelligence
A dozen of us sit around a conference room table.
Directly across from me are the US President and his Chief of Staff. To my left is a man who’s a bit harder to describe: He’s the AI safety ecosystem, but embodied in a single person.
In a moment, I too will be asked to take on a role: becoming, temporarily, a rogue artificial intelligence.
Simulating the world of AI 2027
We’re here essentially for a crisis simulation: How might world events unfold if powerful AI is developed over the next few years?
Specifically, we’ll explore what happens if AI is developed as quickly as in AI 2027, the report published by top forecasters of AI development. In this forecast, superhuman AI is developed by the end of 2027, driven by AI self-improvement: AI becomes extremely useful for building future, stronger AI systems (more useful than even the best human researchers).
What happens next? In the original forecast, two possible endings are described: the development of self-improving AIs goes extremely terribly for humanity (or not), depending on choices made by influential groups like the US and Chinese governments.
The AI 2027 simulation lets us explore other possible outcomes of rapid AI development beyond those two endings, guided by a facilitator from the AI 2027 team. Specifically, each of us has been selected to play a character who has influence over AI development and its impacts. In the simulation, AI abilities will accelerate at the pace forecast by AI 2027. Meanwhile, each participant will make choices to try to achieve our characters’ goals, drawing upon our experiences in domains like AI and national security.
In total, our choices will hopefully lead the simulation to a peaceful ending. If the simulation doesn’t end in peace, hopefully we can learn from that as well. The aim is that by participating in the simulation, we’ll better understand (and can bet | 590 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
FuqTHBvofGdjBAkkj | towards-a-theory-of-local-altruism | Towards a theory of local altruism | null | false | false | false | null | 7SRX6nYhTiiDRcDuX | null | true | false | false | false | Post | https://notnottalmud.substack.com/p/towards-a-theory-of-spillover-altruism | 2025-06-24T19:39:35.767Z | null | false | false | 2 | 2 | 2025-06-25T19:34:57.299Z | false | false | post | [] | null | null | wD7XDh4gmGbfLbaqx | 1 | 4 | 10 | false | 0.11326 | null | false | false | 2025-06-25T21:50:02.945Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-24T19:39:35.768Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | 7SRX6nYhTiiDRcDuX | dmmf | 2024-03-17T15:57:54.643Z | DMMF | DMMF | null | null | null | 297 | 0 | false | false | null | null | 10 | 2 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | FuqTHBvofGdjBAkkj | SocialPreviewType | wD7XDh4gmGbfLbaqx | <p>While I'm heavily involved in the Effective Altruism community, this essay explores a question outside of EA's typical scope: How can we use an impact-focused mindset to improve our own lives by making our local communities flourish?</p><p>This isn't a critique of EA, but an expansion of its core toolkit. As a non-utilitarian who cares deeply about my immediate environment, I've been working on a framework for "spillover altruism"—the practice of making strategic personal choices that create positive externalities for those around you. This post is my exploration of that idea. I welcome any suggestions, compatible ideas, or critiques you may have.</p><p>-</p><p>I care. I care a lot. I want to live in a more pleasant world, in a society with greater human flourishing. And I actually care enough to try to do whatever I can (within reason) to make this happen. Most people reflexively dismiss such idealism as leftist nonsense or the product of a mind ungrounded in reality and economics, but I'm not a leftist and am well versed in economics. I just really do care.</p><p>But I've been stuck. <a href="https://danfrank.ca/why-i-talk-about-utopia-not-politics/">One of my most asked questions is, if we are so rich, why are we so unhappy?</a> Why does so much of our modern society seem so unpleasant? When tens of millions enthusiastically support Trump on the promise of burning it all down, why did our material success lead them to be so miserable and feel like society failed them?</p><p>So my question is simple but rarely asked: what is the highest impact thing I can do to make those around me happier and to live in a more flourishing community? I am not advocating for a form of Effective Altruism, as I am strictly concerned with the well-being of those around me. I want to make my life better by making those around me happier and by living in a nicer and more pleasant place.</p><p>My thinking on this has evolved. In an earlier iteration, I thought the solution was YIMBYism and influencing public policy, and while housing availability remains a hugely important issue, I felt there must be more I can do. Most people who care about improving society focus on either changing institutions or changing other people. But I’m interested in a third path. This is the foundation for what I call spillover altruism: the practice of strategically changing oneself to create positive externalities for others, making it easier for them to live fuller, richer lives.</p><p>My sad view is that our society ... </p> | While I'm heavily involved in the Effective Altruism community, this essay explores a question outside of EA's typical scope: How can we use an impact-focused mindset to improve our own lives by making our local communities flourish?
This isn't a critique of EA, but an expansion of its core toolkit. As a non-utilitarian who cares deeply about my immediate environment, I've been working on a framework for "spillover altruism"—the practice of making strategic personal choices that create positive externalities for those around you. This post is my exploration of that idea. I welcome any suggestions, compatible ideas, or critiques you may have.
-
I care. I care a lot. I want to live in a more pleasant world, in a society with greater human flourishing. And I actually care enough to try to do whatever I can (within reason) to make this happen. Most people reflexively dismiss such idealism as leftist nonsense or the product of a mind ungrounded in reality and economics, but I'm not a leftist and am well versed in economics. I just really do care.
But I've been stuck. One of my most asked questions is, if we are so rich, why are we so unhappy? Why does so much of our modern society seem so unpleasant? When tens of millions enthusiastically support Trump on the promise of burning it all down, why did our material success lead them to be so miserable and feel like society failed them?
So my question is simple but rarely asked: what is the highest impact thing I can do to make those around me happier and to live in a more flourishing community? I am not advocating for a form of Effective Altruism, as I am strictly concerned with the well-being of those around me. I want to make my life better by making those around me happier and by living in a nicer and more pleasant place.
My thinking on this has evolved. In an earlier iteration, I thought the solution was YIMBYism and influencing public policy, and while housing availability remains a hugely important issue, I felt | 1,570 | 1.2.0 | Revision | true | false | SPcGerBHgFNxYhw9P | CrosspostOutput |
||
HwuMFFnb6CaTwzhye | why-training-against-scheming-is-hard | Why "training against scheming" is hard | null | false | false | true | null | HB2hPFoDq5yLayNFj | null | true | false | false | false | Post | null | 2025-06-24T19:08:37.836Z | null | false | false | 2 | 2 | 2025-06-24T20:27:20.489Z | false | false | post | [] | null | null | fBQsYBuRs3eyHinNT | 2 | 12 | 56 | false | 0.36182 | null | false | false | 2025-06-24T21:42:58.275Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 34 | 1 | 2025-06-24T21:42:58.125Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "KEAWfxwjitNJFrC68",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 23,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-09-03T00:26:46.757Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "wE4gTT4HjyRmqqLad",
"displayName": "momom2"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Deceptive Alignment",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 23,
"shortName": null,
"slug": "deceptive-alignment",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 11 | 0 | HB2hPFoDq5yLayNFj | marius-hobbhahn | 2021-07-04T08:25:21.601Z | marius-hobbhahn | Marius Hobbhahn | null | null | null | 5,177 | 1,552 | false | false | <p>I'm the co-founder and CEO of Apollo Research: <a href="https://www.apolloresearch.ai/">https://www.apolloresearch.ai/</a> <br>My goal is to improve our understanding of scheming and build tools and methods to detect and mitigate it.</p><p>I previously did a Ph.D. in ML at the International Max-Planck research school in Tübingen, worked part-time with Epoch and did independent AI safety research. </p><p>For more see <a href="https://www.mariushobbhahn.com/aboutme/">https://www.mariushobbhahn.com/aboutme/</a></p><p>I subscribe to <a href="http://sl4.org/crocker.html">Crocker's Rules</a></p> | null | null | 53 | 127 | 0 | 43 | 26 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | HwuMFFnb6CaTwzhye | SocialPreviewType | fBQsYBuRs3eyHinNT | <p><strong>TLDR:</strong> I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training). I list four concrete failure modes with “training against scheming”: a) the training against scheming distribution is too narrow, b) overwhelming pressure from other goals, c) the AI exploits an imperfect reward model, and d) deceptive alignment.</p><p><i>I’d like to thank Alex Meinke for detailed feedback and discussion.</i></p><h1>Motivation</h1><p>There have been multiple recent results showing early evidence of scheming-related behavior (as of June 2025). For example, <a href="https://arxiv.org/abs/2412.14093"><u>Greenblatt et al., 2024</u></a> demonstrates alignment faking, <a href="https://arxiv.org/abs/2412.04984"><u>Meinke et al., 2024</u></a>, shows in-context scheming and <a href="https://arxiv.org/abs/2503.11926"><u>Baker et al., 2025</u></a> investigates situationally aware reward hacking. I’m using a broad definition of scheming, i.e. “covertly pursue misaligned goals, hiding their true capabilities and objectives” as defined in <a href="https://arxiv.org/pdf/2411.03336"><u>Balesni et al., 2024</u></a>.</p><p>There are many potential strategies to mitigate these undesired behaviors. For example, we can use monitoring and <a href="https://arxiv.org/abs/2312.06942"><u>Control</u></a> to detect scheming in real-time and prevent bad outcomes before they happen. However, I expect that any future approach will also include a form of “training against scheming” where we actively train models to scheme less, similar to how we currently train models to be more harmless. </p><p>I expect that “training against scheming” will be harder than training the model to produce harmless outputs, especially as models get more capable. In this post, I describe why I expect training against scheming to be hard, list concrete failure modes of such training, and provide recommendations. </p><h1>Scheming is a result of goal conflicts and smart AIs</h1><p>Most likely, the AI is not scheming because it is evil. It is scheming because it pursues a goal, and scheming is the most effective strategy to achieve that goal.</p><p>We want the AIs to solve hard, economically valuable problems for us. Therefore, we will train our AIs to be competent, general problem solvers, including long-horizon and open-ended tasks like blue-sky research. </p><p>However, we want the AIs to solve these goals using only particular means. For example, the AI shouldn’t break the law, violate ethical constraints, or turn off its oversight to achieve thes... </p> | TLDR: I think that AI developers will and should attempt to reduce scheming by explicitly training the model to scheme less. However, I think “training against scheming” is a harder problem than other alignment training (e.g. harmlessness training). I list four concrete failure modes with “training against scheming”: a) the training against scheming distribution is too narrow, b) overwhelming pressure from other goals, c) the AI exploits an imperfect reward model, and d) deceptive alignment.
I’d like to thank Alex Meinke for detailed feedback and discussion.
Motivation
There have been multiple recent results showing early evidence of scheming-related behavior (as of June 2025). For example, Greenblatt et al., 2024 demonstrates alignment faking, Meinke et al., 2024, shows in-context scheming and Baker et al., 2025 investigates situationally aware reward hacking. I’m using a broad definition of scheming, i.e. “covertly pursue misaligned goals, hiding their true capabilities and objectives” as defined in Balesni et al., 2024.
There are many potential strategies to mitigate these undesired behaviors. For example, we can use monitoring and Control to detect scheming in real-time and prevent bad outcomes before they happen. However, I expect that any future approach will also include a form of “training against scheming” where we actively train models to scheme less, similar to how we currently train models to be more harmless.
I expect that “training against scheming” will be harder than training the model to produce harmless outputs, especially as models get more capable. In this post, I describe why I expect training against scheming to be hard, list concrete failure modes of such training, and provide recommendations.
Scheming is a result of goal conflicts and smart AIs
Most likely, the AI is not scheming because it is evil. It is scheming because it pursues a goal, and scheming is the most effective strategy to achieve that goal.
We want the AIs to solve ha | 3,748 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
5c5krDqGC5eEPDqZS | analyzing-a-critique-of-the-ai-2027-timeline-forecasts | Analyzing A Critique Of The AI 2027 Timeline Forecasts | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-24T18:50:02.082Z | null | false | false | 2 | 2 | 2025-06-24T18:57:38.097Z | false | false | post | [] | null | null | 7uNKT5JrvAunca6t2 | 31 | 28 | 73 | false | 0.449599 | null | false | false | 2025-06-28T13:37:20.885Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 27 | 0 | 2025-06-24T18:50:02.083Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 36 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 28 | 0 | 0 | 10 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | 5c5krDqGC5eEPDqZS | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5c5krDqGC5eEPDqZS/jyxyitjl05lupysucobz | SocialPreviewType | 7uNKT5JrvAunca6t2 | <p><a href="https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models">There was what everyone agrees was a high quality critique of the timelines component of AI 202</a>7, by the LessWrong user and Substack writer Titotal.</p><p>It is great to have thoughtful critiques like this. The way you get actual thoughtful critiques like this, of course, is to post the wrong answer (at length) on the internet, and then respond by listening to the feedback and by making your model less wrong.</p><p>This is a high-effort, highly detailed, real engagement on this section, including giving the original authors opportunity to critique the critique, and warnings to beware errors, give time to respond, shares the code used to generate the graphs, engages in detail, does a bunch of math work, and so on. That is The Way.</p>
<div>
<span id="more-24534"></span>
</div>
<p>So, Titotal: Thank you.</p><p>I note up front that at least Daniel Kokotajlo has indeed adjusted his estimates, and has moved his median from ‘AI 2027’ to ‘AI 2028’ based on events since publication, and Eli’s revisions also push the estimates back a bit.</p><p>I also note up front that if you evaluated most statements made in the discourse (either non-worried AI forecasting, or AI in general, or more broadly) with this level of rigor, mostly you couldn’t because you’d hit ‘I made it up’ very quickly, but in other cases where someone is trying at least a little, in my experience the models fall apart a lot worse and a lot faster. No one has suggested ‘here is a better attempt to forecast the future and take the whole thing seriously’ that I consider to have a reasonable claim to that.</p><p>A lot of the disagreements come down to how much one should care about which calculations and graphs match past data how closely in different contexts. Titotal demands very strong adherence throughout. I think it’s good to challenge and poke at the gaps but this seems to in several places go too far.</p>
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/166751005/the-headline-message-is-not-ideal">The Headline Message Is Not Ideal.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/an-explanation-of-where-superexponentiality-is-coming-from">An Explanation of Where Superexponentiality Is Coming From.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/three-methods">Three Methods.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/time-horizon-extension-method">Time Horizon Extension Method.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/the-public-versus-internal-gap">The Public Versus Internal Gap.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/the-difficulty-gap">The Difficulty Gap.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/recent-progress">Recent Progress.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/infinite-time-horizons">Infinite Time Horizons.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/intermediate-speedups">Intermediate Speedups.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/is-there-a-flawed-graph-still-up">Is There A Flawed Graph Still Up?.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/some-skepticism-about-projection">Some Skepticism About Projection.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/part-2-benchmarks-and-gaps-and-beyond">Part 2: Benchmarks and Gaps and Beyond</a>.</li>
<li><a href="https://thezvi.substack.com/i/166751005/benchmarks">Benchmarks.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/the-time-horizon-part-of-the-second-model">The Time Horizon Part of the Second Model.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/why-the-thresholds">Why The Thresholds?</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/the-gap-model">The Gap Model.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/eli-responds-on-lesswrong">Eli Responds On LessWrong.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/on-eli-s-recent-update">On Eli’s Recent Update.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/conclusion">Conclusion.</a></li>
<li><a href="https://thezvi.substack.com/i/166751005/perhaps-the-most-important-disagreement">Perhaps The Most Important Disagreement.</a></li>
</ol>
... | There was what everyone agrees was a high quality critique of the timelines component of AI 2027, by the LessWrong user and Substack writer Titotal.
It is great to have thoughtful critiques like this. The way you get actual thoughtful critiques like this, of course, is to post the wrong answer (at length) on the internet, and then respond by listening to the feedback and by making your model less wrong.
This is a high-effort, highly detailed, real engagement on this section, including giving the original authors opportunity to critique the critique, and warnings to beware errors, give time to respond, shares the code used to generate the graphs, engages in detail, does a bunch of math work, and so on. That is The Way.
So, Titotal: Thank you.
I note up front that at least Daniel Kokotajlo has indeed adjusted his estimates, and has moved his median from ‘AI 2027’ to ‘AI 2028’ based on events since publication, and Eli’s revisions also push the estimates back a bit.
I also note up front that if you evaluated most statements made in the discourse (either non-worried AI forecasting, or AI in general, or more broadly) with this level of rigor, mostly you couldn’t because you’d hit ‘I made it up’ very quickly, but in other cases where someone is trying at least a little, in my experience the models fall apart a lot worse and a lot faster. No one has suggested ‘here is a better attempt to forecast the future and take the whole thing seriously’ that I consider to have a reasonable claim to that.
A lot of the disagreements come down to how much one should care about which calculations and graphs match past data how closely in different contexts. Titotal demands very strong adherence throughout. I think it’s good to challenge and poke at the gaps but this seems to in several places go too far.
TABLE OF CONTENTS
1. The Headline Message Is Not Ideal.
2. An Explanation of Where Superexponentiality Is Coming From.
3. Three Methods.
4. Time Horizon Extension Method | 9,037 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
hpjj4JgRw9akLMRu5 | what-does-10x-ing-effective-compute-get-you | What does 10x-ing effective compute get you? | null | false | false | false | null | dfZAq9eZxs4BB4Ji5 | null | true | false | false | false | Post | null | 2025-06-24T18:33:27.664Z | null | false | false | 2 | 2 | 2025-06-24T18:43:26.365Z | false | false | post | [] | null | null | qzShDbRW7j7eFCS9L | 4 | 13 | 38 | false | 0.263351 | null | false | false | 2025-06-27T23:31:45.976Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 24 | 0 | 2025-06-24T18:30:21.848Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 14 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "oiRp4T6u5poc8r9Tj",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-29T23:53:15.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Takeoff",
"needsReview": false,
"noindex": false,
"postCount": 329,
"score": 19,
"shortName": null,
"slug": "ai-takeoff",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 13 | 0 | 0 | 10 | 0 | dfZAq9eZxs4BB4Ji5 | ryan_greenblatt | 2021-06-08T20:21:15.520Z | ryan_greenblatt | ryan_greenblatt | null | null | Ryan Greenblatt | 17,326 | 4,414 | false | false | <p>I'm the chief scientist at Redwood Research.</p>
| null | null | 42 | 1,717 | 0 | 30 | 487 | 1 | 8 | gXeEWGjTWyqgrQTzR | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentForum",
"alignmentVoters",
"trustLevel1"
] | null | null | hpjj4JgRw9akLMRu5 | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/pidpbhuisv87uc1rytrp | SocialPreviewType | qzShDbRW7j7eFCS9L | <p><em>This is more speculative and confusing than my typical posts and I also think the content of this post could be substantially improved with more effort. But it's been sitting around in my drafts for a long time and I sometimes want to reference the arguments in it, so I thought I would go ahead and post it.</em></p><p>I often speculate about how much progress you get in the first year after AIs fully automate AI R&D within an AI company (if people try to go as fast as possible). Natural ways of estimating this often involve computing algorithmic research speed-up relative to prior years where research was done by humans. This somewhat naturally gets you progress in units of effective compute — that is, as defined by Epoch researchers <a href="https://epoch.ai/blog/algorithmic-progress-in-language-models">here</a>, "the equivalent increase in scale that would be needed to match a given model performance absent innovation". My median ends up being something like 6 OOMs of effective compute in the first year. (OOM stands for Order Of Magnitude, so 6 OOMs is an increase by <span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="10^6"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">10</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.591em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-mn" style=""><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">6</span></span></span></span></span></span></span></span>.)</p><p>This raises a natural question: what does 6 OOMs of effective compute (in software progress) get you?<sup class="footnote-ref"><a href="#fn-zCttuNtAGNiuaBafw-1" id="fnref-zCttuNtAGNiuaBafw-1">[1]</a></sup> Improved algorithms can get you more copies (lower cost), higher speed, or higher intelligence (improved quality). In practice, we (or really the AIs) would try to make the optimal tradeoff taking into account diminishing returns (if we were trying to go as fast as possible), but I think it's useful to pick a possible choice of tradeoff which gives some intuition for what you end up with (even if this tradeoff is suboptimal).</p><p>Here's one choice that I speculate is on the possibility frontier. For every OOM of effective compute, you get: 3x more copies, 1.25x speed, and a 20x improvement in rank ordering among human professionals.<sup class="footnote-ref"><a href="#fn-zCttuNtAGNiuaBafw-2" id="fnref-zCttuNtAGNiuaBafw-2">[2]</a></sup></p><p>What do I mean by rank ordering improvement among human professionals? Going from the 1000th best software engineer to the 100th best software engineer would be a 10x improvement in rank ordering and I'm claiming this is a roughly scale invariant property (at least once you are already in the top ~25% of humans). What happens when you're already the best? Well, I'm claiming that you get an improvement which is comparable to what you'd get from moving from 10th best to 1st but on top of already being 1st. You can also imagine this as having 10 different earths and picking the best at a profession among all of these. (And so on for higher rank ... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | This is more speculative and confusing than my typical posts and I also think the content of this post could be substantially improved with more effort. But it's been sitting around in my drafts for a long time and I sometimes want to reference the arguments in it, so I thought I would go ahead and post it.
I often speculate about how much progress you get in the first year after AIs fully automate AI R&D within an AI company (if people try to go as fast as possible). Natural ways of estimating this often involve computing algorithmic research speed-up relative to prior years where research was done by humans. This somewhat naturally gets you progress in units of effective compute — that is, as defined by Epoch researchers here, "the equivalent increase in scale that would be needed to match a given model performance absent innovation". My median ends up being something like 6 OOMs of effective compute in the first year. (OOM stands for Order Of Magnitude, so 6 OOMs is an increase by 106.)
This raises a natural question: what does 6 OOMs of effective compute (in software progress) get you?[1] Improved algorithms can get you more copies (lower cost), higher speed, or higher intelligence (improved quality). In practice, we (or really the AIs) would try to make the optimal tradeoff taking into account diminishing returns (if we were trying to go as fast as possible), but I think it's useful to pick a possible choice of tradeoff which gives some intuition for what you end up with (even if this tradeoff is suboptimal).
Here's one choice that I speculate is on the possibility frontier. For every OOM of effective compute, you get: 3x more copies, 1.25x speed, and a 20x improvement in rank ordering among human professionals.[2]
What do I mean by rank ordering improvement among human professionals? Going from the 1000th best software engineer to the 100th best software engineer would be a 10x improvement in rank ordering and I'm claiming this is a roughly scale invariant | 3,581 | 1.6.0 | Revision | false | null | null | CrosspostOutput |
jmaMKSQJbqFgBNxTn | if-alignment-is-subjective-who-s-actually-in-control | If Alignment Is Subjective, Who’s Actually in Control? | null | false | false | false | null | dRBeBzctZKAzqxF6j | null | true | false | false | false | Post | null | 2025-06-24T18:23:12.603Z | null | false | false | 2 | 2 | null | false | false | post | [] | null | null | GgrjGC4i8Z5fop3Jd | 0 | 1 | 1 | false | 0.005742 | null | false | false | 2025-06-24T18:23:12.603Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-19T16:57:47.321Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | <ul><li><p><strong>No LLM generated, heavily assisted/co-written, or otherwise reliant work. </strong>LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLM(s). This work by-and-large does not meet our standards, and is rejected. This includes dialogs with LLMs that claim to demonstrate various properties about them, posts introducing some new concept and terminology that explains how LLMs work, often centered around recursiveness, emergence, sentience, consciousness, etc. Our <a href="https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong">LLM-generated content policy</a> can be viewed here.</p></li></ul> | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | dRBeBzctZKAzqxF6j | s-lilith-dev | 2025-06-01T11:46:43.771Z | shiya-huber | S. Lilith dev | null | null | Shiya Lilith | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | jmaMKSQJbqFgBNxTn | SocialPreviewType | GgrjGC4i8Z5fop3Jd | <p><i>(And what happens when misuse becomes optimization)</i></p><p> </p><h2>How I came to this question </h2><p>I recently asked a question during a AAAI panel discussion: </p><p>“If AGI isn’t fixed but shifting in concept shaped by societal values and institutional goals - who ultimately defines what counts as ‘alignment’?” </p><p>One of the panelists - an AI researcher - answered bluntly: </p><p>"There isn’t one alignment. Every AGI would have a different alignment depending on its user.” </p><p>At first, that sounded reasonable. If AGI can personalize to different people, great!? </p><p>it becomes useful, adaptive, human-compatible based on the user. </p><p>But that answer wasn`t fully what I searched for and the more you think about it, the more dangerous it felt. </p><p> </p><h2>Alignment Can’t Mean ‘Do What the User Wants’ </h2><p><i>(Or else a thief and a teacher are morally equivalent in the model’s logic)</i> </p><p>Let’s say a teacher uses AGI to improve their lessons. Their values might include: </p><ul><li>Be fair to students </li><li>Make information engaging </li><li>Encourage curiosity </li></ul><p>But what if someone else like a thief, a manipulator, or worse—uses the same AGI? Their values might look like: </p><ul><li>Don’t get caught </li><li>Destroy traces </li><li>Use psychological pressure </li><li>Justify violence if necessary </li></ul><p>If AGI aligns with both equally, without external constraints, then we haven’t created an aligned system—we’ve just created a system that optimizes whatever it’s told to. Including misuse and things that are unethical. </p><p>It wouldn’t even feel like misuse. It would just be optimization from within a different frame and thats not a glitch or failure it was our goal. </p><p> </p><h2>Governments and Institutions Aren’t Neutral Either </h2><p><i>(They have values—but not necessarily your values)</i> </p><p>some say alignment gets easier at scale: “Let governments define the values. Let democratic systems handle it.” </p><ul><li>But what values do those institutions actually enforce? </li><li>Mass surveillance “for safety”? </li><li>Well hidden mass manipulation and influence?</li><li>Behavioral shaping “for stability”? </li><li><p>Censorship “for misinformation control”? </p><p> </p></li></ul><p>AGI doesn’t just follow rules—it can <i>shape</i>, <i>enforce</i>, and <i>optimize</i> rules invisibly. </p><p>If AGI aligns with institutions, it becomes a tool of power projection. Not necessarily through malice but through optimization. </p><p>and over time, people forget it was ever a t... </p> | (And what happens when misuse becomes optimization)
How I came to this question
I recently asked a question during a AAAI panel discussion:
“If AGI isn’t fixed but shifting in concept shaped by societal values and institutional goals - who ultimately defines what counts as ‘alignment’?”
One of the panelists - an AI researcher - answered bluntly:
"There isn’t one alignment. Every AGI would have a different alignment depending on its user.”
At first, that sounded reasonable. If AGI can personalize to different people, great!?
it becomes useful, adaptive, human-compatible based on the user.
But that answer wasn`t fully what I searched for and the more you think about it, the more dangerous it felt.
Alignment Can’t Mean ‘Do What the User Wants’
(Or else a thief and a teacher are morally equivalent in the model’s logic)
Let’s say a teacher uses AGI to improve their lessons. Their values might include:
* Be fair to students
* Make information engaging
* Encourage curiosity
But what if someone else like a thief, a manipulator, or worse—uses the same AGI? Their values might look like:
* Don’t get caught
* Destroy traces
* Use psychological pressure
* Justify violence if necessary
If AGI aligns with both equally, without external constraints, then we haven’t created an aligned system—we’ve just created a system that optimizes whatever it’s told to. Including misuse and things that are unethical.
It wouldn’t even feel like misuse. It would just be optimization from within a different frame and thats not a glitch or failure it was our goal.
Governments and Institutions Aren’t Neutral Either
(They have values—but not necessarily your values)
some say alignment gets easier at scale: “Let governments define the values. Let democratic systems handle it.”
* But what values do those institutions actually enforce?
* Mass surveillance “for safety”?
* Well hidden mass manipulation and influence?
* Behavioral shaping “fo | 835 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
APfuz9hFz9d8SRETA | my-pitch-for-the-ai-village | My pitch for the AI Village | null | false | false | false | null | YLFQfGzNdGA4NFcKS | null | true | false | false | false | Post | null | 2025-06-24T15:00:52.049Z | null | false | false | 2 | 2 | 2025-06-24T18:19:47.601Z | false | false | post | [] | null | null | SbnxgxntP4WDbFXD9 | 29 | 102 | 156 | false | 0.860756 | null | false | false | 2025-06-26T21:25:56.538Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 60 | 0 | 2025-06-24T15:00:52.049Z | false | false | null | null | true | false | false | 0 | 0 | 0 | APfuz9hFz9 | 0.14 | false | 2,025 | https://manifold.markets/LessWrong/will-my-pitch-for-the-ai-village-ma | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 6 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 102 | 0 | 0 | 43 | 0 | YLFQfGzNdGA4NFcKS | daniel-kokotajlo | 2018-03-05T19:59:32.269Z | daniel-kokotajlo | Daniel Kokotajlo | null | null | Daniel Kokotajlo | 27,413 | 3,787 | false | false | <p>Was a philosophy PhD student, left to work at AI Impacts, then Center on Long-Term Risk, then OpenAI. Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. Now executive director of the <a href="https://ai-futures.org/">AI Futures Project</a>. I subscribe to Crocker's Rules and am especially interested to hear unsolicited constructive criticism. <a href="http://sl4.org/crocker.html">http://sl4.org/crocker.html</a><br><br>Some of my favorite memes:<br><br><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/mufW9iFSxRxqNpvyQ/d2mjevfaxcqt15ihv6ly"><br>(by Rob Wiblin)<br><br><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YLFQfGzNdGA4NFcKS/ca2ilrdkzo8nps2y2mv4" alt="Comic. Megan & Cueball show White Hat a graph of a line going up, not yet at, but heading towards, a threshold labelled "BAD". White Hat: "So things will be bad?" Megan: "Unless someone stops it." White Hat: "Will someone do that?" Megan: "We don't know, that's why we're showing you." White Hat: "Well, let me know if that happens!" Megan: "Based on this conversation, it already has.""><br>(xkcd)<br><br>My EA Journey, depicted on the whiteboard at CLR:<br><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YLFQfGzNdGA4NFcKS/luibgbmndcrfpntbvnzn"></p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YLFQfGzNdGA4NFcKS/kt4bdj0izczhp3xvpsim"></p><p>(h/t Scott Alexander)</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YLFQfGzNdGA4NFcKS/wnmpgr7vplvugifhexss"><br> <br><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/YLFQfGzNdGA4NFcKS/mpby3gbzaifqusxukecp" alt="Alex Blechman @AlexBlechman Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus 5:49 PM Nov 8, 2021. Twitter Web App"></p> | null | null | 109 | 3,254 | 3 | 42 | 807 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal",
"alignmentForum",
"trustLevel1"
] | null | null | APfuz9hFz9d8SRETA | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/APfuz9hFz9d8SRETA/twlqttom1hkciinmulpa | SocialPreviewType | SbnxgxntP4WDbFXD9 | <p data-internal-id="ftnt_ref1">I think the <a href="https://theaidigest.org/village">AI Village</a> should <a href="https://manifund.org/projects/ai-digest">be funded</a> much more than it currently is; I’d wildly guess that the AI safety ecosystem should be funding it to the tune of $4M/year. I have decided to donate $100k. Here is why.</p><p data-internal-id="ftnt_ref2">First, what is the village? Here’s a brief summary from its creators:<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="ckpkgiyamxn" role="doc-noteref" id="fnrefckpkgiyamxn"><sup><a href="#fnckpkgiyamxn">[1]</a></sup></span></p><p>We took four frontier agents, gave them each a computer, a group chat, and a long-term open-ended goal, which in Season 1 was “choose a charity and raise as much money for it as you can”. We then run them for hours a day, every weekday! You can read more in our <a href="https://theaidigest.org/village/blog/season-recap-agents-raise-2k">recap of Season 1</a>, where the agents managed to raise $2000 for charity, and you can watch the village live daily at 11am PT at <a href="http://theaidigest.org/village">theaidigest.org/village</a>.</p><p>Here’s the setup (with Season 2’s goal):</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/APfuz9hFz9d8SRETA/omj3hxhb9entmitbth1a" alt=""></p><p data-internal-id="ftnt_ref3">And here’s what the village looks like:<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="6kfcjuoupvw" role="doc-noteref" id="fnref6kfcjuoupvw"><sup><a href="#fn6kfcjuoupvw">[2]</a></sup></span></p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/1993837c9e068a340d6ff5b75bf2a6df85f56e4486bff6c7fd1543062575c8ae/idrdpbrqqw0m5gpk7ozo" alt=""></p><p><strong>My one-sentence pitch: The AI Village is a </strong><i><strong>qualitative </strong></i><strong>benchmark for real-world autonomous agent capabilities and propensities. </strong></p><p>Consider <a href="https://aituts.com/midjourney-versions/">this chart</a> of Midjourney-generated images, using the same prompt, by version:</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/9e033537c0b15e2ce87c7a7ef2b22a8522813dccf788a55a37ea7a952d70ef7f/tjeqpuoeiu1gkxjbbsvw" alt=""></p><p>It’s an excellent way to track <i>qualitative </i>progress. Imagine trying to explain progress in image generation without showing any images—you could point to quantitative metrics like Midjourney revenue and rattle off various benchmark scores, but that would be so much less informative.</p><p data-internal-id="ftnt_ref4">Right now the AI Village only has enough compute to run 4 agents for two hours a day. Over the course of 64 days, the agents have <a href="https://theaidigest.org/village/blog/season-recap-agents-raise-2k">organized their own fundraiser</a>, written a bunch of crappy google docs, done a bunch of silly side quests like a <a href="https://x.com/AiDigest_/status/1902026975210025181">Wikipedia race</a>, and have just finished <a href="https://x.com/AiDigest_/status/1935744427315937558">running an in-person event in San Francisco</a> to celebrate a piece of interactive fiction they wrote – 23 people showed up.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="8cixchexvpd" role="doc-noteref" id="fnref8cixchexvpd"><sup><a href="#fn8cixchexvpd">[3]</a></sup></span> Cute, eh? They could have done all this in 8 days if they were running continuously. Imagine what they could get up to if they were funded enough to run continuously for the next three years. (Cost: $400,000/yr) In Appendix B I’ve banged out a fictional vignette illustrating what that might look like. But first I’ll stick to the abstract arguments.</p><h1 data-internal-id="1__AI_Village_will_teach_the_scientific_community_new_things_">1. AI Village will teach the scientific community new things.</h1><ul><li><strong>It’s kinda an anti-benchmark:</strong> It shows how agents do when no one decomposes problems for them. Instead, like (unguided) humans, they have to grapple with the open-ended and unboundedness of goals, and figure out how to scope them into realistically achievable projec</li></ul>... | I think the AI Village should be funded much more than it currently is; I’d wildly guess that the AI safety ecosystem should be funding it to the tune of $4M/year. I have decided to donate $100k. Here is why.
First, what is the village? Here’s a brief summary from its creators:[1]
We took four frontier agents, gave them each a computer, a group chat, and a long-term open-ended goal, which in Season 1 was “choose a charity and raise as much money for it as you can”. We then run them for hours a day, every weekday! You can read more in our recap of Season 1, where the agents managed to raise $2000 for charity, and you can watch the village live daily at 11am PT at theaidigest.org/village.
Here’s the setup (with Season 2’s goal):
And here’s what the village looks like:[2]
My one-sentence pitch: The AI Village is a qualitative benchmark for real-world autonomous agent capabilities and propensities.
Consider this chart of Midjourney-generated images, using the same prompt, by version:
It’s an excellent way to track qualitative progress. Imagine trying to explain progress in image generation without showing any images—you could point to quantitative metrics like Midjourney revenue and rattle off various benchmark scores, but that would be so much less informative.
Right now the AI Village only has enough compute to run 4 agents for two hours a day. Over the course of 64 days, the agents have organized their own fundraiser, written a bunch of crappy google docs, done a bunch of silly side quests like a Wikipedia race, and have just finished running an in-person event in San Francisco to celebrate a piece of interactive fiction they wrote – 23 people showed up.[3] Cute, eh? They could have done all this in 8 days if they were running continuously. Imagine what they could get up to if they were funded enough to run continuously for the next three years. (Cost: $400,000/yr) In Appendix B I’ve banged out a fictional vignette illustrating what that might look lik | 1,618 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
hMuj4kGKvaLBELPBa | an-analogy-for-interpretability | An Analogy for Interpretability | null | false | false | false | null | ajbhE6vN6ekCdBomP | null | true | false | false | false | Post | null | 2025-06-24T14:56:33.526Z | null | false | false | 2 | 2 | 2025-06-24T18:19:53.798Z | false | false | post | [] | null | null | KrDBnpNGLZniifouh | 2 | 3 | 12 | false | 0.115412 | null | false | false | 2025-06-25T22:24:49.343Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-24T13:59:52.034Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb25c",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.475Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "m4Jwy2gkbHWnofGEN",
"displayName": "HH"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Kolmogorov Complexity",
"needsReview": false,
"noindex": false,
"postCount": 54,
"score": 20,
"shortName": null,
"slug": "kolmogorov-complexity",
"suggestedAsFilter": false,
"userId": "qf77EiaoMw7tH3GSr",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | ajbhE6vN6ekCdBomP | roman-malov | 2023-10-24T16:19:28.575Z | Roman Malov | Roman Malov | null | null | null | 150 | 0 | false | false | <p>Bachelor in general and applied physics. AI safety/Agent foundations researcher wannabe. <br><br>I love talking to people, and if you are an alignment researcher we will have at least one common topic (but I am very interested in talking about unknown to me topics too!), so I encourage you to book a call with me: https://calendly.com/roman-malov27/new-meeting<br><br>Email: <a href="mailto:[email protected]">[email protected]</a><br>GitHub: <a href="https://github.com/RomanMalov">https://github.com/RomanMalov</a><br>TG channels (in Russian): <a href="https://t.me/healwithcomedy,">https://t.me/healwithcomedy,</a> https://t.me/ai_safety_digest</p> | null | null | 7 | 36 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | hMuj4kGKvaLBELPBa | SocialPreviewType | KrDBnpNGLZniifouh | <h1>I.</h1><p>Imagine newly baked interpretability researcher Namor, who claims that he solved interpretability. He says: “This is a bunch of matrices with a bunch of floating point values which are then multiplied, what’s the big deal?” His point of view might be considered myopic, but I think it might be deeper than it seems.</p><p><a href="https://en.wikipedia.org/wiki/Zermelo%27s_theorem_(game_theory)">Zermelo’s theorem</a> proves that in finite games with perfect information and two players who take turns there is a winning strategy for one player. It also provides an algorithm to find such a strategy: just build a whole game tree, then for each node which leads to only terminal nodes, make it itself terminal with an outcome that is best available for this node’s player, then repeat this until you reach a starting node. A game is considered solved when we have a winning strategy for one player.</p><p>Why then isn’t every finite game (like chess) solved? Because those game trees grow exponentially. There are 20 first possible chess moves, 400 first two possible moves, 8902 first three possible moves, and so on. No computing power in the universe can solve chess by brute-forcing. But I suggest considering it anyway: a program P running a Zermelo algorithm on an enormous computer. It will win (or lead to a draw) any other deep RL SGD AGI algorithm. This program is tiny: I just described it in one paragraph. I'm sure almost anybody could understand how it works. It is <i>perfectly interpretable</i>. Then why is it still winning against everyone?</p><p>The obvious answer is that it has much more computing power: its opponent just can’t see all of the moves this tiny program sees.</p><p>I suggest trying to look at this from another angle: this is not, in fact, a tiny understandable program that wins against any opponent, it is the <a href="https://www.lesswrong.com/posts/k6EPphHiBH4WWYFCj/gazp-vs-glut">Giant Look-Up Table</a> (which our game tree is) that the opponent cannot comprehend which does the trick. And to actually interpret the inner workings of this program, they should somehow fit this whole tree in their brain.</p><h1>II.</h1><p>Why, then, should we expect interpretability to do us any help? Why wouldn't the resulting program be an incomprehensible mess of facts and heuristics, which somehow (and incomprehensibly) does the trick? My guess is that this is when simplicity bias comes in. When we train models, we have a regularization term: it punishes the model for having too large weights and steering it to be simpler and more general. You could say tha... </p> | I.
Imagine newly baked interpretability researcher Namor, who claims that he solved interpretability. He says: “This is a bunch of matrices with a bunch of floating point values which are then multiplied, what’s the big deal?” His point of view might be considered myopic, but I think it might be deeper than it seems.
Zermelo’s theorem proves that in finite games with perfect information and two players who take turns there is a winning strategy for one player. It also provides an algorithm to find such a strategy: just build a whole game tree, then for each node which leads to only terminal nodes, make it itself terminal with an outcome that is best available for this node’s player, then repeat this until you reach a starting node. A game is considered solved when we have a winning strategy for one player.
Why then isn’t every finite game (like chess) solved? Because those game trees grow exponentially. There are 20 first possible chess moves, 400 first two possible moves, 8902 first three possible moves, and so on. No computing power in the universe can solve chess by brute-forcing. But I suggest considering it anyway: a program P running a Zermelo algorithm on an enormous computer. It will win (or lead to a draw) any other deep RL SGD AGI algorithm. This program is tiny: I just described it in one paragraph. I'm sure almost anybody could understand how it works. It is perfectly interpretable. Then why is it still winning against everyone?
The obvious answer is that it has much more computing power: its opponent just can’t see all of the moves this tiny program sees.
I suggest trying to look at this from another angle: this is not, in fact, a tiny understandable program that wins against any opponent, it is the Giant Look-Up Table (which our game tree is) that the opponent cannot comprehend which does the trick. And to actually interpret the inner workings of this program, they should somehow fit this whole tree in their brain.
II.
Why, then, should we expect | 626 | 1.4.0 | Revision | false | null | null | CrosspostOutput |
|
C7MJNtZMtzN2iW4RG | the-v-and-v-method-a-step-towards-safer-agi | The V&V method - A step towards safer AGI | null | false | false | false | null | pA9TbQhNZSmpsmFnX | null | true | false | false | false | Post | https://blog.foretellix.com/2025/06/24/the-vv-method-a-step-towards-safer-agi/ | 2025-06-24T13:42:43.042Z | null | false | false | 2 | 2 | 2025-06-24T18:44:53.431Z | false | false | linkpost | [] | null | null | PofnPjaZKEtqpcJmg | 0 | 3 | 3 | false | 0.070005 | null | false | false | 2025-06-24T13:42:43.042Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-24T13:07:03.352Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "tfLKHMQfgggz2eW7S",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2024-11-19T22:11:12.844Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Safety Cases",
"needsReview": false,
"noindex": false,
"postCount": 10,
"score": 9,
"shortName": null,
"slug": "ai-safety-cases",
"suggestedAsFilter": false,
"userId": "tP6eq7sqsx8s2kc3w",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Hs2ewfiKfuWKSscSQ",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-12-20T15:20:57.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Aligned AI Proposals",
"needsReview": false,
"noindex": false,
"postCount": 92,
"score": 0,
"shortName": null,
"slug": "aligned-ai-proposals",
"suggestedAsFilter": false,
"userId": "7JLB4TDRcSqyXmxmJ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "jD7udwCn3duDZJNhW",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-07-11T09:51:01.322Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Constitutional AI",
"needsReview": false,
"noindex": false,
"postCount": 14,
"score": 9,
"shortName": null,
"slug": "constitutional-ai",
"suggestedAsFilter": false,
"userId": "WymcozA5n9KWHdJcs",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "a78XXmCuD3eFLYzph",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-07-09T14:37:43.194Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Verification",
"needsReview": false,
"noindex": false,
"postCount": 7,
"score": 0,
"shortName": null,
"slug": "verification",
"suggestedAsFilter": false,
"userId": "rbBfGEder5pFqJGG3",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | pA9TbQhNZSmpsmFnX | yoav-hollander | 2023-04-12T10:53:03.398Z | yoav-hollander | Yoav Hollander | null | null | null | 2 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 0.9 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | null | null | null | C7MJNtZMtzN2iW4RG | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/bcjdnfdhhzszsnsqstqe | SocialPreviewType | PofnPjaZKEtqpcJmg | <p>Originally posted on my blog, 24 Jun 2025 - see also full <a href="https://blog.foretellix.com/wp-content/uploads/2025/06/agi_vv_method.pdf">PDF (26 pp)</a></p><h2>Abstract</h2><p>The <strong>V&V method</strong> is a concrete, practical framework which can complement several alignment approaches. Instead of asking a nascent AGI to “do X,” we instruct it to <strong>design and rigorously verify a bounded “machine-for-X”.</strong> The machine (e.g. an Autonomous Vehicle or a “machine” for curing cancer) is prohibited from uncontrolled self-improvement: Every new version must re-enter the same verification loop. Borrowing from safety-critical industries, the loop couples large-scale, scenario-based simulation with coverage metrics and a safety case that humans can audit. Human operators—supported by transparent evidence—retain veto power over deployment. <br><br>The method proceeds according to the following diagram:</p><figure class="image image_resized" style="width:98.92%"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/bcjdnfdhhzszsnsqstqe" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/janapmakrqbvy8lahlpa 110w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/zeihh1swlml7eisowece 220w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/ogpycxv2bwsvtw486tvx 330w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/iddm7mtda0mhondlwbeb 440w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/umld8iesjt7tov1j3u6m 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/jkqiiixtrsq9k8ahsotk 660w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/wdkxz4xth6faxho7etjm 770w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/a0r3walfzqncbliplkyc 880w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/gdrszmfnoivydotavjdc 990w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/C7MJNtZMtzN2iW4RG/a0nej9kdai2xkabiwrtp 1022w"></figure><p>This method is <strong>not a silver bullet</strong>: it still relies on aligned obedience, incentive structures that deter multi-agent collusion, and tolerates only partial assurance under tight competitive timelines. <strong>Yet it complements</strong> existing scalable-oversight proposals—bolstering <strong>Constitutional AI, IDA and CAIS</strong> with holistic system checks—and offers a practical migration path because industries already practice V&V (Verification and Validation) for complex AI today. In the critical years when AGI first appears, the V&V method could buy humanity time, reduce specification gaming, and focus research attention on the remaining hard gaps such as strategic deception and corrigibility.</p><h2>TL;DR</h2><ul><li><strong>Ask AGI to build & verify</strong> a task-specific, <i>non-self-improving</i> machine</li><li><strong>Scenario-based, coverage-driven simulations + safety case </strong>give humans transparent evidence</li><li><strong>Main contributions:</strong> Complements Constitutional AI, improves human efficiency in Scalable Oversight, handles "simpler” reward hacking systematically, bolsters IDA and CAIS</li><li><strong>Works today</strong> in Autonomous Vehicles. Scales with smarter “AI workers”</li><li><strong>Limits: </strong>Needs aligned obedience & anti-collusion incentives</li></ul><h2>About the author</h2><p>Yoav Hollander led the creation of three V&V standards (IEEE 1647 “e”, Accellera PSS, ASAM OpenSCENARIO DSL) and is now trying to connect some of these ideas with AGI safety.</p> | Originally posted on my blog, 24 Jun 2025 - see also full PDF (26 pp)
Abstract
The V&V method is a concrete, practical framework which can complement several alignment approaches. Instead of asking a nascent AGI to “do X,” we instruct it to design and rigorously verify a bounded “machine-for-X”. The machine (e.g. an Autonomous Vehicle or a “machine” for curing cancer) is prohibited from uncontrolled self-improvement: Every new version must re-enter the same verification loop. Borrowing from safety-critical industries, the loop couples large-scale, scenario-based simulation with coverage metrics and a safety case that humans can audit. Human operators—supported by transparent evidence—retain veto power over deployment.
The method proceeds according to the following diagram:
This method is not a silver bullet: it still relies on aligned obedience, incentive structures that deter multi-agent collusion, and tolerates only partial assurance under tight competitive timelines. Yet it complements existing scalable-oversight proposals—bolstering Constitutional AI, IDA and CAIS with holistic system checks—and offers a practical migration path because industries already practice V&V (Verification and Validation) for complex AI today. In the critical years when AGI first appears, the V&V method could buy humanity time, reduce specification gaming, and focus research attention on the remaining hard gaps such as strategic deception and corrigibility.
TL;DR
* Ask AGI to build & verify a task-specific, non-self-improving machine
* Scenario-based, coverage-driven simulations + safety case give humans transparent evidence
* Main contributions: Complements Constitutional AI, improves human efficiency in Scalable Oversight, handles "simpler” reward hacking systematically, bolsters IDA and CAIS
* Works today in Autonomous Vehicles. Scales with smarter “AI workers”
* Limits: Needs aligned obedience & anti-collusion incentives
About the author
Yoav Hollander led the creation | 307 | 1.12.1 | Revision | false | null | null | CrosspostOutput |
|
jHNs9vNuZayMwbHMT | try-o3-pro-in-chatgpt-for-usd1-is-ai-a-bubble | Try o3-pro in ChatGPT for $1 (is AI a bubble?) | null | false | false | false | null | Cwroombxe5s3ihyBu | null | true | false | false | false | Post | null | 2025-06-24T11:15:53.140Z | null | false | false | 2 | 2 | 2025-06-24T18:45:26.513Z | false | false | post | [] | null | null | EPhN3zfoSxxsvkrss | 2 | 10 | 15 | false | 0.125667 | null | false | false | 2025-06-24T18:41:04.890Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-24T11:15:53.140Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 6 | 0 | Cwroombxe5s3ihyBu | hauke-hillebrandt | 2018-11-19T08:01:36.289Z | hauke-hillebrandt | Hauke Hillebrandt | null | null | null | 214 | 0 | false | false | null | null | 14 | 42 | 1 | 0 | 0 | 1 | 0 | EQNTWXLKMeWMp2FQS | User | null | null | null | [
"canModeratePersonal"
] | null | null | jHNs9vNuZayMwbHMT | SocialPreviewType | EPhN3zfoSxxsvkrss | <p>Will AI progress slow down? Current frontier AI models cost $100M+ to train, but in the old scaling law pre-training paradigm this was a one-off cost that scaled well i.e. it had a low marginal cost per query. Now this paradigm has now hit diminishing returns: New frontier models trained with ~10x more compute than GPT-4 (like Grok) haven’t wowed enough to justify spending another 10x more—~$1B—on pretraining.</p><p>And so now, we’re in a new 'inference scaling paradigm’<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote1"><sup><u>1</u></sup></a> AI firms scale inference by executing long internal chains of thought —then discard the scratch‑pad and show the user a short answer. This takes a lot of compute. For one, OpenAI just released their latest inference model — o3-pro. While it shows modest gains over the o3 base model on reasoning, math, science and coding benchmarks,<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote2"><sup><u>2</u></sup></a> it often takes ~10 mins per query and costs $200 / month.</p><p>But I’ve found a promo code (see below) so you can trial o3-pro with ChatGPT Plus Team for just £1/$1/€1, so you can see what all the buzz is about! How’s this possible? For one, it’s a one month trial with a limited number of queries for each of the 5 accounts you can set up.</p><p>More crucially, it’s subsidized by VC money to inflate OpenAI’s user number metrics and to prop up their ~$300B valuation.<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote3"><sup><u>3</u></sup></a></p><p>Consider the ever faster increase in weekly ChatGPT users:</p><p>Feb '23: 100M<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote4"><sup><u>4</u></sup></a></p><p>Sep '24: 200M<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote5"><sup><u>5</u></sup></a> of which 11.5M paid, Enterprise: 1M<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote6"><sup><u>6</u></sup></a></p><p>Feb '25: 400M<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote7"><sup><u>7</u></sup></a> of which 15M paid<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote8"><sup><u>8</u></sup></a> / Enterprise: 2M</p><p>Mar ‘25: 500M of which 20M paid<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote9"><sup><u>9</u></sup></a>,<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote10"><sup><u>10</u></sup></a><br>May ‘25: ~1B<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote11"><sup><u>11</u></sup></a> / Enterprise: 3M<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote12"><sup><u>12</u></sup></a></p><p>You can see that they converted 11.5M out of the first 200M users but only got 3.5M users out of the more recent 200M to pay for ChatGPT. Your $1 trial adds another five <strong>paying</strong> enterprise customers to these numbers. And if you use o3 a lot, you also add to the $15B in (mostly compute) costs OpenAI burns through every year—subsidized by VC money.<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote13"><sup><u>13</u></sup></a><br>Is that realistic? Only ~350M iPhone users from richer countries have downloaded the ChatGPT iOS app<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote14"><sup><u>14</u></sup></a> and many of the other 650M come from poorer countries, where many use ChatGPT via WhatsApp<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote15"><sup><u>15</u></sup></a> or even flip phones,<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote16"><sup><u>16</u></sup></a> and they won’t pay $20/month (e.g. in 2024, user numbers tripled in India, which is now OpenAI's second largest market<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote17"><sup><u>17</u></sup></a>). Some create multiple free accounts to get around rate limits messages and image generation (e.g. turning photos into anime), which props up user numbers further.<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote18"><sup><u>18</u></sup></a>,<a href="https://hauke.substack.com/p/try-o3-pro-in-chatgpt-for-1#footnote19"><sup><u>19</u></sup></a> OpenAI incentivizes this: phone nu... </p> | Will AI progress slow down? Current frontier AI models cost $100M+ to train, but in the old scaling law pre-training paradigm this was a one-off cost that scaled well i.e. it had a low marginal cost per query. Now this paradigm has now hit diminishing returns: New frontier models trained with ~10x more compute than GPT-4 (like Grok) haven’t wowed enough to justify spending another 10x more—~$1B—on pretraining.
And so now, we’re in a new 'inference scaling paradigm’1 AI firms scale inference by executing long internal chains of thought —then discard the scratch‑pad and show the user a short answer. This takes a lot of compute. For one, OpenAI just released their latest inference model — o3-pro. While it shows modest gains over the o3 base model on reasoning, math, science and coding benchmarks,2 it often takes ~10 mins per query and costs $200 / month.
But I’ve found a promo code (see below) so you can trial o3-pro with ChatGPT Plus Team for just £1/$1/€1, so you can see what all the buzz is about! How’s this possible? For one, it’s a one month trial with a limited number of queries for each of the 5 accounts you can set up.
More crucially, it’s subsidized by VC money to inflate OpenAI’s user number metrics and to prop up their ~$300B valuation.3
Consider the ever faster increase in weekly ChatGPT users:
Feb '23: 100M4
Sep '24: 200M5 of which 11.5M paid, Enterprise: 1M6
Feb '25: 400M7 of which 15M paid8 / Enterprise: 2M
Mar ‘25: 500M of which 20M paid9,10
May ‘25: ~1B11 / Enterprise: 3M12
You can see that they converted 11.5M out of the first 200M users but only got 3.5M users out of the more recent 200M to pay for ChatGPT. Your $1 trial adds another five paying enterprise customers to these numbers. And if you use o3 a lot, you also add to the $15B in (mostly compute) costs OpenAI burns through every year—subsidized by VC money.13
Is that realistic? Only ~350M iPhone users from richer countries have downloaded the ChatGPT iOS app14 and many of the other 650 | 1,242 | 1.1.0 | Revision | true | false | WaDTPeAjXMugk2j2u | CrosspostOutput |
||
2TKxBJ3ggPCWz28uc | infinite-money-hack | Infinite money hack | null | false | false | false | null | zZHEC9XsJyoRGMwEh | null | true | false | false | false | Post | null | 2025-06-24T09:39:38.460Z | null | false | false | 2 | 2 | 2025-06-24T18:45:34.832Z | false | false | post | [] | null | null | bcBnMZd3zsQSBwiqb | 6 | 13 | 3 | false | 0.065369 | null | false | false | 2025-06-24T19:33:55.954Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-24T09:31:37.216Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "hNFdS3rRiYgqqD8aM",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-22T22:52:13.969Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Humor",
"needsReview": false,
"noindex": false,
"postCount": 212,
"score": 9,
"shortName": null,
"slug": "humor",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb2fa",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.747Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Personal Identity",
"needsReview": false,
"noindex": false,
"postCount": 47,
"score": 9,
"shortName": null,
"slug": "personal-identity",
"suggestedAsFilter": false,
"userId": "KgzPEGnYWvKDmWuNY",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 13 | 0 | 0 | 7 | 0 | zZHEC9XsJyoRGMwEh | filip-sondej | 2021-08-08T11:09:11.532Z | Filip Sondej | Filip Sondej | null | null | null | 396 | 59 | false | false | <p>github.com/filyp</p> | null | null | 9 | 51 | 0 | 4 | 1 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | 2TKxBJ3ggPCWz28uc | SocialPreviewType | bcBnMZd3zsQSBwiqb | <p>Imagine a scenario where you can switch bodies with someone: both of you are anesthetized, then undergo plastic surgery to match each other’s appearance. You both have your brains rewired to essentially switch brains, and finally, you swap houses, names, bank accounts and all social and legal identities.</p><p>Let's say Alice (like many people) believes in continuity of personhood and in this situation she'd say she's still "bound to the same atoms", only her mind and name etc. has been changed. (And I'll assume you identify more with your mind etc.)</p><p>You can then ask her how much she'd need to be paid to agree to undergo this procedure - let’s assume she says 10 million dollars . But, since she believes she’ll end up in your body and with your bank account, <strong>Alice actually needs to pay you</strong>. And then you switch.</p><p>So then she (according to her) ends up as you with 10 million dollars. And you also end up as you with 10 million dollars. (Unfortunately someone also ends up as Alice in debt but it's "neither of you" <i>shrug</i>)<br><br>Repeat this, until you have more money than you know how to use ;)</p><p><i>(Edit: maybe I should clarify that I don't expect many people to agree to such a swap from Alice's side, and even if they do, they'd be exploited out of existence. This thought experiment is rather aimed at people who claim to believe in continuity of personhood, to test how "committed" they are to this idea, and hopefully seed some doubt in them.)</i></p> | Imagine a scenario where you can switch bodies with someone: both of you are anesthetized, then undergo plastic surgery to match each other’s appearance. You both have your brains rewired to essentially switch brains, and finally, you swap houses, names, bank accounts and all social and legal identities.
Let's say Alice (like many people) believes in continuity of personhood and in this situation she'd say she's still "bound to the same atoms", only her mind and name etc. has been changed. (And I'll assume you identify more with your mind etc.)
You can then ask her how much she'd need to be paid to agree to undergo this procedure - let’s assume she says 10 million dollars . But, since she believes she’ll end up in your body and with your bank account, Alice actually needs to pay you. And then you switch.
So then she (according to her) ends up as you with 10 million dollars. And you also end up as you with 10 million dollars. (Unfortunately someone also ends up as Alice in debt but it's "neither of you" shrug)
Repeat this, until you have more money than you know how to use ;)
(Edit: maybe I should clarify that I don't expect many people to agree to such a swap from Alice's side, and even if they do, they'd be exploited out of existence. This thought experiment is rather aimed at people who claim to believe in continuity of personhood, to test how "committed" they are to this idea, and hopefully seed some doubt in them.) | 260 | 1.7.0 | Revision | false | null | null | CrosspostOutput |
|
5JyepxdN6Hq62TPCK | what-can-be-learned-from-scary-demos-a-snitching-case-study | What can be learned from scary demos? A snitching case study | null | false | false | true | null | WX39xzenFzNxKZiCQ | null | true | false | false | false | Post | null | 2025-06-24T08:40:50.903Z | null | false | false | 2 | 2 | 2025-06-24T18:45:37.331Z | false | false | post | [] | null | null | gzqAJ4RGPCqArSuTQ | 1 | 8 | 20 | false | 0.144466 | null | false | false | 2025-06-26T08:31:18.246Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 13 | 0 | 2025-06-24T08:27:57.364Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 8 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 6 | 0 | WX39xzenFzNxKZiCQ | fabien-roger | 2022-05-13T12:51:04.972Z | Fabien | Fabien Roger | null | null | Fabien Roger | 5,384 | 1,719 | false | false | <p>I am working on empirical AI safety. </p><p><a href="https://calendar.google.com/calendar/appointments/schedules/AcZssZ1OLLWnaCJ7c44jNglcUn_iohZ1UYwPyuOgt7soDtu9X88nbsIe_uiHQd5uZExGlYCRkOyDPlp8">Book a call with me</a> if you want advice on a concrete empirical safety project.</p><p><a href="https://www.admonymous.co/fabienroger">Anonymous feedback form</a>.</p> | null | null | 36 | 249 | 1 | 19 | 110 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | 5JyepxdN6Hq62TPCK | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5JyepxdN6Hq62TPCK/h9tcqtfyggjv8iznl0je | SocialPreviewType | gzqAJ4RGPCqArSuTQ | <p><i>This is a hackathon project write-up. This does not represent the views of my employer. Thanks to Tianyi Alex Qiu and Aidan Ewart for helpful discussions.</i></p><p>Sometimes, people will search for and find a situation where an LLM would do something bad, and then argue that the situation was natural enough that it is concerning. When is this a valid reason to be concerned? What can be learned from such evidence?</p><p>This post:</p><ul><li>Presents a few arguments and metrics (with drawings!)</li><li>Presents an analysis of the <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf"><u>“snitching” scary demo</u></a>, where I try to understand how weird the situation is.</li></ul><h1>Two scary-demos arguments</h1><h2>Directly related behaviors</h2><p>You can directly argue that the scary demos are representative of a broader class of situations where the model might actually do something bad.</p><p>For example, in the case of an LLM <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf"><u>sending an email to the FBI when asked to generate auditing logs for internal documents</u></a>, you might be worried that even if nobody gave the exact prompt you gave to the LLM in the real deployment, someone else will give the LLM a prompt that is similar in spirit and results in a similar bad outcome. This is the sort of argument that Theo makes in <a href="https://www.youtube.com/watch?v=RzPSs6bLrms"><u>this video</u></a>.</p><p>In drawing, the scary demo rules out optimistic scenarios where the model never does anything bad even in a very weird situation (blue line), and the situation is sufficiently not weird that this updates you towards something bad happening even if you found a less weird situation, as long as deployment is large enough (orange line).</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/5JyepxdN6Hq62TPCK/l1u3mpvsrr4shdvualld"></p><p>The main thing such a scary demo needs to argue is that the space of similar situations (where the probability of the bad thing happening is not much less likely) has a large probability mass. This is more likely if the situation is less weird.</p><p>For now, such evidence is not really relevant to takeover risk because models are weak and can’t execute on complex world domination plans, but I can imagine such arguments becoming more directly relevant in the future.</p><h2>Evidence about model alignment</h2><p>You can also argue that scary demos imply something scary about model alignment that should make you think something really bad may happen in another situation, possibly for another model. For example:</p><ul><li><a href="https://www.anthropic.com/research/alignment-faking"><u>The alignment faking paper</u></a> studies a setup that is very artificial with current LLMs, and where nothing bad actually happens, but uses it as an argument to argue that with similar but more power</li></ul>... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style> | This is a hackathon project write-up. This does not represent the views of my employer. Thanks to Tianyi Alex Qiu and Aidan Ewart for helpful discussions.
Sometimes, people will search for and find a situation where an LLM would do something bad, and then argue that the situation was natural enough that it is concerning. When is this a valid reason to be concerned? What can be learned from such evidence?
This post:
* Presents a few arguments and metrics (with drawings!)
* Presents an analysis of the “snitching” scary demo, where I try to understand how weird the situation is.
Two scary-demos arguments
Directly related behaviors
You can directly argue that the scary demos are representative of a broader class of situations where the model might actually do something bad.
For example, in the case of an LLM sending an email to the FBI when asked to generate auditing logs for internal documents, you might be worried that even if nobody gave the exact prompt you gave to the LLM in the real deployment, someone else will give the LLM a prompt that is similar in spirit and results in a similar bad outcome. This is the sort of argument that Theo makes in this video.
In drawing, the scary demo rules out optimistic scenarios where the model never does anything bad even in a very weird situation (blue line), and the situation is sufficiently not weird that this updates you towards something bad happening even if you found a less weird situation, as long as deployment is large enough (orange line).
The main thing such a scary demo needs to argue is that the space of similar situations (where the probability of the bad thing happening is not much less likely) has a large probability mass. This is more likely if the situation is less weird.
For now, such evidence is not really relevant to takeover risk because models are weak and can’t execute on complex world domination plans, but I can imagine such arguments becoming more directly relevant in the future.
Evidenc | 2,024 | 1.3.1 | Revision | false | null | null | CrosspostOutput |
Mxn4AygEWa7cZ6Lox | how-to-host-a-find-me-party | How to Host a Find Me Party | null | false | false | false | null | Pd22e5d3NTE8kH7v4 | null | true | false | false | false | Post | null | 2025-06-24T00:56:55.337Z | null | false | false | 2 | 2 | 2025-06-24T18:46:14.414Z | false | false | post | [] | null | null | 5iLzfEruPTycnqv7N | 1 | 7 | 13 | false | 0.10402 | null | false | false | 2025-06-24T02:08:57.811Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-24T00:09:37.234Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "rtha3oEP5vzkvdbbD",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-12-08T17:28:26.370Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Games (posts describing)",
"needsReview": false,
"noindex": false,
"postCount": 47,
"score": 9,
"shortName": null,
"slug": "games-posts-describing",
"suggestedAsFilter": false,
"userId": "Q7NW4XaWQmfPfdcFj",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "E4TozA6yTzee5oPMa",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2022-08-13T03:08:48.679Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Meetups (specific examples)",
"needsReview": false,
"noindex": false,
"postCount": 43,
"score": 0,
"shortName": null,
"slug": "meetups-specific-examples",
"suggestedAsFilter": false,
"userId": "88BHmY89XnzyJnCGH",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 3 | 0 | Pd22e5d3NTE8kH7v4 | commander-zander | 2015-05-07T14:25:35.213Z | scarcegreengrass | Commander Zander | null | null | null | 473 | 0 | false | false | <p>alexpear.github.io</p> | null | null | 20 | 203 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | Mxn4AygEWa7cZ6Lox | SocialPreviewType | 5iLzfEruPTycnqv7N | <p>For my birthday this year, i invited friends to a party on Saturday but didn't tell them the location. I gave a list of questions they could ask me. Whoever finds me in the fewest questions wins a prize. </p><p>Each friend showed up that day excited to tell me about their detective work & misadventures in tracking me down. Everyone had a lot of fun! </p><p>I definitely recommend this as a party idea or meetup activity. It was much easier to organize than i expected.</p><p>Advice:</p><ul><li>Use this question list:<ul><li>I'm at <GPS LOCATION>. Are you within <DISTANCE> of me?</li><li>Send me a photo of you.</li><li>Send me a photo of a tree.</li><li>Send me a photo of a street.</li><li>Send me a photo with your camera pointed straight up.</li></ul></li><li>You could also add these questions:<ul><li>Send me a photo of a restaurant.</li><li>Send me a photo of the tallest thing around.</li><li>Send me a photo of the oddest thing around.</li><li>Send me a photo of your favorite thing around.</li></ul></li><li>Tell guests when exactly the game starts. I told guests we would play 3 rounds, at 1000, 1300, & 1600 respectively.</li><li>Set reminders for yourself so that you have enough time to travel to the next round's location. You will be distracted.</li><li>Don't let guest expectations get too high about the party aspect. If you plan to be simply sitting on a park bench, tell them to expect a chill hangout after the search. The search is the main focus. Also note that your attention will be divided between guests that have arrived quickly and your phone, as you answer questions from slower-arriving guests.</li><li>Tell guests what to wear. I said: Dress for walking around outdoors and/or casual indoor places.</li><li>Tell guests whether they can bring their children and/or dogs to the secret location.</li><li>Ideally, your secret location should:<ul><li>Have a restroom</li><li>Be somewhere guests can step into without paying money or feeling awkward</li></ul></li><li>My secret locations, for example, were:<ul><li>A coffeeshop next to a bookstore</li><li>Outdoor seating at a restaurant (that suited multiple diets)</li><li>A park with a good view</li></ul></li><li>Encourage guests to take public transit, rideshare, or carpool, because solo drivers will find themselves spending a lot of time looking for places to park between every question, which is rather tedious. (Or they will text while driving, which is dangerous.) </li><li>It is okay to crop your photos if details in the corners would end the search too quickly. A determined guest is very good at finding information in a photo!</li><li>Conversely, if a guest isn't havi</li></ul>... | For my birthday this year, i invited friends to a party on Saturday but didn't tell them the location. I gave a list of questions they could ask me. Whoever finds me in the fewest questions wins a prize.
Each friend showed up that day excited to tell me about their detective work & misadventures in tracking me down. Everyone had a lot of fun!
I definitely recommend this as a party idea or meetup activity. It was much easier to organize than i expected.
Advice:
* Use this question list:
* I'm at <GPS LOCATION>. Are you within <DISTANCE> of me?
* Send me a photo of you.
* Send me a photo of a tree.
* Send me a photo of a street.
* Send me a photo with your camera pointed straight up.
* You could also add these questions:
* Send me a photo of a restaurant.
* Send me a photo of the tallest thing around.
* Send me a photo of the oddest thing around.
* Send me a photo of your favorite thing around.
* Tell guests when exactly the game starts. I told guests we would play 3 rounds, at 1000, 1300, & 1600 respectively.
* Set reminders for yourself so that you have enough time to travel to the next round's location. You will be distracted.
* Don't let guest expectations get too high about the party aspect. If you plan to be simply sitting on a park bench, tell them to expect a chill hangout after the search. The search is the main focus. Also note that your attention will be divided between guests that have arrived quickly and your phone, as you answer questions from slower-arriving guests.
* Tell guests what to wear. I said: Dress for walking around outdoors and/or casual indoor places.
* Tell guests whether they can bring their children and/or dogs to the secret location.
* Ideally, your secret location should:
* Have a restroom
* Be somewhere guests can step into without paying money or feeling awkward
* My secret locations, for example, were:
* A coffeeshop next to a bookstore
* Outdoor seating at a restaurant (that suit | 481 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
s9hjR3RRfbGD8vmEe | local-speech-recognition-with-whisper | Local Speech Recognition with Whisper | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-06-24T00:30:08.845Z | null | false | false | 2 | 2 | 2025-06-24T18:46:00.363Z | false | false | post | [] | null | null | JpxiEcRHEnYmamvDi | 0 | 1 | 9 | false | 0.084663 | null | false | false | 2025-06-24T00:30:08.845Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-24T00:30:08.846Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 1 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | s9hjR3RRfbGD8vmEe | SocialPreviewType | JpxiEcRHEnYmamvDi | <p><span>
I've been a heavy user of dictation, off and on, as my wrists have
gotten better and worse. I've mostly used the built-in Mac and
Android recognition: Mac isn't great, Android is pretty good, neither
has improved much over the past ~5y despite large improvements in what
should be possible. OpenAI has an open speech recognition model, </span>
<a href="https://github.com/openai/whisper">whisper</a>, and I wanted to
have a go at running it on my Mac.
</p><p>
It looks like for good local performance the best version is <a href="https://github.com/ggml-org/whisper.cpp">whisper.cpp</a>, which
is a plain C/C++ implementation with support for Mac's ML hardware.
To get this installed I needed to install XCode (not just the command
line tools, since I needed <code>coremlc</code>) and then run:
</p><p>
</p>
<pre>$ sudo xcodebuild -license
$ git clone https://github.com/ggerganov/whisper.cpp
$ cd whisper.cpp
$ python3.11 -m venv whisper_v3.11
$ source whisper_v3.11/bin/activate
$ pip install "numpy<2"
$ pip install ane_transformers
$ pip install openai-whisper
$ pip install coremltools
$ brew install sdl2
$ sh ./models/download-ggml-model.sh large-v3-turbo
$ PATH="$PATH:/Applications/Xcode.app/Contents/Developer/usr/bin" \
./models/generate-coreml-model.sh large-v3-turbo
$ cmake -B build -DWHISPER_COREML=1 -DWHISPER_SDL2=ON
$ cmake --build build -j --config Release
</pre>
<p>
Note that both older (3.10) and newer (3.13) Python versions gave compilation errors.
</p><p>
While I don't know if these are the ideal arguments, I've been using:
</p><p>
</p>
<pre>$ ~/code/whisper.cpp/build/bin/whisper-stream \
--capture 1 \
--model ~/code/whisper.cpp/models/ggml-large-v3-turbo.bin \
-t 8 --flash-attn --keep-context --keep 1000 \
--file output.txt
</pre>
<p>
By default the output is quite repetitive. For example I dictated:
</p><p>
</p>
<blockquote>
It looks like for good local performance, the best version to use is
whisper.cpp, which is a plain C/C++ implementation with support for
Mac's machine learning hardware. To get this installed, I needed to
install Xcode (not just the command line tools since I needed
coremlc), and then run a whole bunch of commands.
</blockquote>
<p>
The output was:
</p><p>
</p>
<blockquote>
It looks like for good local performance the best version
to use is whisper.cpp, which is a plain C++<br>
to use is whisper.cpp which is a plain C/C++ implementation with
support for maximum<br>
machine learning hardware. To get this insight into the<br>
machine learning hardware. To get this installed, I needed to
install Xcode, not just the computer<br>
not ju</blockquote>... | I've been a heavy user of dictation, off and on, as my wrists have gotten better and worse. I've mostly used the built-in Mac and Android recognition: Mac isn't great, Android is pretty good, neither has improved much over the past ~5y despite large improvements in what should be possible. OpenAI has an open speech recognition model, whisper, and I wanted to have a go at running it on my Mac.
It looks like for good local performance the best version is whisper.cpp, which is a plain C/C++ implementation with support for Mac's ML hardware. To get this installed I needed to install XCode (not just the command line tools, since I needed coremlc) and then run:
$ sudo xcodebuild -license
$ git clone https://github.com/ggerganov/whisper.cpp
$ cd whisper.cpp
$ python3.11 -m venv whisper_v3.11
$ source whisper_v3.11/bin/activate
$ pip install "numpy<2"
$ pip install ane_transformers
$ pip install openai-whisper
$ pip install coremltools
$ brew install sdl2
$ sh ./models/download-ggml-model.sh large-v3-turbo
$ PATH="$PATH:/Applications/Xcode.app/Contents/Developer/usr/bin" \
./models/generate-coreml-model.sh large-v3-turbo
$ cmake -B build -DWHISPER_COREML=1 -DWHISPER_SDL2=ON
$ cmake --build build -j --config Release
Note that both older (3.10) and newer (3.13) Python versions gave compilation errors.
While I don't know if these are the ideal arguments, I've been using:
$ ~/code/whisper.cpp/build/bin/whisper-stream \
--capture 1 \
--model ~/code/whisper.cpp/models/ggml-large-v3-turbo.bin \
-t 8 --flash-attn --keep-context --keep 1000 \
--file output.txt
By default the output is quite repetitive. For example I dictated:
> It looks like for good local performance, the best version to use is whisper.cpp, which is a plain C/C++ implementation with support for Mac's machine learning hardware. To get this installed, I needed to install Xcode (not just the command line tools since I needed coremlc), and then run a whole bunch of commands.
The | 572 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
pByxtJuKmDEYksjwz | twig-fiction-review | Twig - Fiction Review | null | false | false | false | null | Pd22e5d3NTE8kH7v4 | null | true | false | false | false | Post | null | 2025-06-24T00:04:57.394Z | null | false | false | 2 | 2 | 2025-06-24T18:46:28.080Z | false | false | post | [] | null | null | 24oCH3H5CknJruhfG | 0 | 8 | 17 | false | 0.122079 | null | false | false | 2025-06-24T00:04:57.394Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 5 | 0 | 2025-06-23T23:54:53.554Z | false | false | reign-of-terror | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "etDohXtBrXd8WqCtR",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-13T16:01:23.724Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": [
{
"displayName": "eschr8756",
"karma": 3,
"quotes": [
"“Nonfiction conveys knowledge, fiction conveys experience.” "
],
"reactType": "created",
"userId": "C7CgmnH7E8QSbz2ZM"
}
],
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "8XibeFKZv38q9Y6LF",
"displayName": "Fejfo"
},
{
"_id": "C7CgmnH7E8QSbz2ZM",
"displayName": "eschr8756"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Fiction",
"needsReview": false,
"noindex": false,
"postCount": 700,
"score": 12,
"shortName": null,
"slug": "fiction",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 5 | 0 | Pd22e5d3NTE8kH7v4 | commander-zander | 2015-05-07T14:25:35.213Z | scarcegreengrass | Commander Zander | null | null | null | 473 | 0 | false | false | <p>alexpear.github.io</p> | null | null | 20 | 203 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | pByxtJuKmDEYksjwz | SocialPreviewType | 24oCH3H5CknJruhfG | <p><i>Summary</i>: Steampunk biotech intrigue thriller<br><i>Format</i>: Web serial (aka a prose fiction series published for free online)<br><i>Author</i>: Wildbow, also famous for the superhero series <i>Worm</i><br><i>Related Works</i>: Frankenstein, Sherlock Holmes, Ghost in the Shell</p><p> </p><p>Reasons to read it:</p><ul><li>The most inventive fictional biotech of all time.</li><li>'What if Victor Frankenstein had as much impact as James Watt?'</li><li>Complicated relationships between 5 characters, each of whose brain has been upgraded in a different way.</li><li>Wait, the British Empire is doing <i>what</i> in this timeline?</li><li>The (many) superintelligent characters use strategies that are rarely seen in fiction yet often universally applicable. Fictional characters in other stories could really learn from Twig!</li><li>Lots of fun stagecoach heists, spycraft shenanigans, battles of wits, & mid-battle plot twists.</li><li>A unique mix of very abstract psychological plotlines & very grounded, tactile presentation.</li></ul><p> </p><p>Reasons to skip it:</p><ul><li>The whole series sums to 5,000 pages. (HPMoR is 2,000.) Perhaps start with just the first arc.</li><li>Some of the monsters & exotic forms of torture are terrifically gruesome.</li><li>The main character is a jerk.</li></ul><p> </p><p>https://twigserial.wordpress.com/</p> | Summary: Steampunk biotech intrigue thriller
Format: Web serial (aka a prose fiction series published for free online)
Author: Wildbow, also famous for the superhero series Worm
Related Works: Frankenstein, Sherlock Holmes, Ghost in the Shell
Reasons to read it:
* The most inventive fictional biotech of all time.
* 'What if Victor Frankenstein had as much impact as James Watt?'
* Complicated relationships between 5 characters, each of whose brain has been upgraded in a different way.
* Wait, the British Empire is doing what in this timeline?
* The (many) superintelligent characters use strategies that are rarely seen in fiction yet often universally applicable. Fictional characters in other stories could really learn from Twig!
* Lots of fun stagecoach heists, spycraft shenanigans, battles of wits, & mid-battle plot twists.
* A unique mix of very abstract psychological plotlines & very grounded, tactile presentation.
Reasons to skip it:
* The whole series sums to 5,000 pages. (HPMoR is 2,000.) Perhaps start with just the first arc.
* Some of the monsters & exotic forms of torture are terrifically gruesome.
* The main character is a jerk.
https://twigserial.wordpress.com/ | 187 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
9GJJcjwSi9sodhFBx | the-lazarus-project-tv-review | The Lazarus Project - TV Review | null | false | false | false | null | Pd22e5d3NTE8kH7v4 | null | true | false | false | false | Post | null | 2025-06-23T23:53:23.849Z | null | false | false | 2 | 2 | 2025-06-24T18:46:35.995Z | false | false | post | [] | null | null | sHN5BL3xWwwwRR6d5 | 0 | 2 | 6 | false | 0.07024 | null | false | false | 2025-06-23T23:53:23.849Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-23T23:37:50.812Z | false | false | reign-of-terror | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "etDohXtBrXd8WqCtR",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-13T16:01:23.724Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": [
{
"displayName": "eschr8756",
"karma": 3,
"quotes": [
"“Nonfiction conveys knowledge, fiction conveys experience.” "
],
"reactType": "created",
"userId": "C7CgmnH7E8QSbz2ZM"
}
],
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "8XibeFKZv38q9Y6LF",
"displayName": "Fejfo"
},
{
"_id": "C7CgmnH7E8QSbz2ZM",
"displayName": "eschr8756"
},
{
"_id": "evFgxjNQ8TLCLN27o",
"displayName": "ank"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Fiction",
"needsReview": false,
"noindex": false,
"postCount": 700,
"score": 12,
"shortName": null,
"slug": "fiction",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 2 | 0 | Pd22e5d3NTE8kH7v4 | commander-zander | 2015-05-07T14:25:35.213Z | scarcegreengrass | Commander Zander | null | null | null | 473 | 0 | false | false | <p>alexpear.github.io</p> | null | null | 20 | 203 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | true | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | 9GJJcjwSi9sodhFBx | SocialPreviewType | sHN5BL3xWwwwRR6d5 | <ul><li><a href="https://en.wikipedia.org/wiki/The_Lazarus_Project_(TV_series)">The Lazarus Project</a> is a very cool show that many scifi fans here would enjoy.</li><li>All spoiler text in this post is actually extremely mild spoilers.</li><li>Has a very rationalist-esque tone: characters working to stop... </li></ul><div class="spoilers"><p>...global catastrophic risk that the mainstream is totally ignorant of.</p></div><ul><li>The scifi premise is simple, but explored very well to create high-stakes drama.</li><li>Good interweaving of romance-in-thriller & thriller-in-romance plots.</li><li>Every episode expands the premise, implications, or themes in a interesting way. (In season 1 anyway; i haven't watched further yet.)</li><li>It's my 2nd-favorite scifi thriller in a present-day setting:</li></ul><div class="spoilers"><ul><li> I rank it below the Matrix but above Everything Everywhere All At Once & the Terminator series.</li></ul></div><ul><li>To give you a idea of the tone, it's also somewhat similar to these: </li></ul><div class="spoilers"><ul><li> Jason Bourne, Inception, Tenet, The Old Guard (2020).</li></ul></div><ul><li>On the intense side - definitely not a relaxing show. Lots of suffering!</li><li>Best episodes are 1, 2, & 8 (season finale).</li></ul> | * The Lazarus Project is a very cool show that many scifi fans here would enjoy.
* All spoiler text in this post is actually extremely mild spoilers.
* Has a very rationalist-esque tone: characters working to stop...
...global catastrophic risk that the mainstream is totally ignorant of.
* The scifi premise is simple, but explored very well to create high-stakes drama.
* Good interweaving of romance-in-thriller & thriller-in-romance plots.
* Every episode expands the premise, implications, or themes in a interesting way. (In season 1 anyway; i haven't watched further yet.)
* It's my 2nd-favorite scifi thriller in a present-day setting:
* I rank it below the Matrix but above Everything Everywhere All At Once & the Terminator series.
* To give you a idea of the tone, it's also somewhat similar to these:
* Jason Bourne, Inception, Tenet, The Old Guard (2020).
* On the intense side - definitely not a relaxing show. Lots of suffering!
* Best episodes are 1, 2, & 8 (season finale). | 168 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
|
d63Ga87hkbcuMR6bc | does-abductive-reasoning-really-exist | Does Abductive Reasoning really exist? | null | false | false | false | null | P89XeYp2cXjqnnCj6 | null | true | false | false | false | Post | null | 2025-06-23T21:07:23.852Z | null | false | false | 2 | 2 | 2025-06-24T18:57:40.698Z | false | false | post | [] | null | null | TNSWbDRgnGj8hYsfg | 16 | 6 | 6 | false | 0.072014 | null | false | false | 2025-06-26T14:51:22.813Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 4 | 0 | 2025-06-22T17:47:27.488Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 3 | 0 | P89XeYp2cXjqnnCj6 | sd-1 | 2025-01-09T18:15:31.317Z | simone-danese | sd | null | null | null | 7 | 0 | false | false | null | null | 1 | 4 | 0 | 0 | 0 | 0.8 | 0 | null | User | null | null | null | null | null | null | d63Ga87hkbcuMR6bc | SocialPreviewType | TNSWbDRgnGj8hYsfg | <p>Salutations,</p><p>I have been a regular reader (and big fan) of LessWrong for quite some time now, so let me just say that I feel honoured to be able to share some of my thoughts with the likes of you folks.</p><p>I don't reckon myself a good writer, nor a very polished thinker (as many of the veterans here), so I hope you'll bear with me and be gentle with your feedback (it is my first time after all).</p><p>Without further ado, I have been recently wrestling with the concept of <strong>abductive reasoning</strong>. I have been perusing for good definitions and explanations of it, but none persuade me that abductive reasoning is actually a needed concept.</p><p>The argument goes as follows: “Any proposed instance of abductive reasoning can be fully explained by a combination of inductive and deductive reasoning. Therefore, abductive reasoning is unnecessary — i.e., it does not « exist » as a form of reasoning.”</p><p>Now, the idea that a form of reasoning that has been accepted wisdom for over a century might be bogus sounds a bit far-fetched. (That much more so because I very much respect the work of Charles Sanders Peirce, who first proposed the concept). I am sure that I am missing something, but for the love of logic, it’s not coming to me. So I am hoping that some clever comment might help me figure it out.</p><p>But first, here’s a more detailed explanation of my argument.</p><hr><p><strong>Deductive reasoning</strong> can be defined as constructing formal logical arguments, such that if the premises are true, then the conclusion <i>must</i> be true. (Formally, I see this as mathematical logic, or more broadly as reasoning in the absence of new empirical evidence).</p><p>Example: “Humans are fragile. I am a human. Therefore, I am fragile.”</p><p><strong>Inductive reasoning</strong> is about generalising a body of observations to formulate general principles. (Formally, I see this as inferential statistics, or more broadly as reasoning to incorporate new empirical evidence).</p><p>Example: “For the past 7 days it has been raining. Therefore, tomorrow it will probably also rain.”</p><p>It is often said that: "The conclusions of deductive reasoning are certain, whereas those of inductive reasoning are probable". I think this contrast is somewhat misleading and imprecise, as the certainty of deductive conclusions just means that they necessarily follow from the premises (they are <i>implied</i> by the premises), but the conclusion itself might still be probabilistic.</p><p>Example: "If I have a fever, th... </p> | Salutations,
I have been a regular reader (and big fan) of LessWrong for quite some time now, so let me just say that I feel honoured to be able to share some of my thoughts with the likes of you folks.
I don't reckon myself a good writer, nor a very polished thinker (as many of the veterans here), so I hope you'll bear with me and be gentle with your feedback (it is my first time after all).
Without further ado, I have been recently wrestling with the concept of abductive reasoning. I have been perusing for good definitions and explanations of it, but none persuade me that abductive reasoning is actually a needed concept.
The argument goes as follows: “Any proposed instance of abductive reasoning can be fully explained by a combination of inductive and deductive reasoning. Therefore, abductive reasoning is unnecessary — i.e., it does not « exist » as a form of reasoning.”
Now, the idea that a form of reasoning that has been accepted wisdom for over a century might be bogus sounds a bit far-fetched. (That much more so because I very much respect the work of Charles Sanders Peirce, who first proposed the concept). I am sure that I am missing something, but for the love of logic, it’s not coming to me. So I am hoping that some clever comment might help me figure it out.
But first, here’s a more detailed explanation of my argument.
----------------------------------------
Deductive reasoning can be defined as constructing formal logical arguments, such that if the premises are true, then the conclusion must be true. (Formally, I see this as mathematical logic, or more broadly as reasoning in the absence of new empirical evidence).
Example: “Humans are fragile. I am a human. Therefore, I am fragile.”
Inductive reasoning is about generalising a body of observations to formulate general principles. (Formally, I see this as inferential statistics, or more broadly as reasoning to incorporate new empirical evidence).
Example: “For the past 7 days it has been raini | 993 | 1.8.0 | Revision | false | null | null | CrosspostOutput |
||
fmoyHqgfLPmr8Grdj | frontier-ai-labs-the-call-option-to-agi | Frontier AI Labs: the Call Option to AGI | null | false | false | false | null | dDALKj3foER5weQyQ | null | true | false | false | false | Post | 2025-06-23T21:02:28.889Z | null | false | false | 2 | 2 | 2025-06-24T18:47:53.831Z | false | false | post | [] | null | null | 9PWk9XEf4h7ytPK2D | 2 | 5 | 22 | false | 0.139718 | null | false | false | 2025-06-26T02:36:23.489Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-23T06:37:04.185Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 13 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 3 | 0 | dDALKj3foER5weQyQ | ykevinzhang | 2025-06-23T06:36:53.204Z | ykevinzhang | ykevinzhang | null | null | null | 21 | 0 | false | false | null | null | 1 | 1 | 0 | 0 | 0 | 1 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | fmoyHqgfLPmr8Grdj | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/mqgkfcfkwwtv5vta05b6 | SocialPreviewType | 9PWk9XEf4h7ytPK2D | <p>This is a cross post from my Substack, <a href="https://eastwind.substack.com/p/frontier-ai-labs-the-call-option">East Wind</a>.</p><p>Since the launch of ChatGPT less than three years ago, the pace of AI development has only accelerated. Over the past six months, the intensity of research, funding, and product announcements has seemingly gone parabolic. OpenAI's recent <a href="https://openai.com/index/march-funding-updates/"><u>$40 billion funding round</u></a>, DeepSeek’s disruptive R1 model, Google’s formidable Gemini 2.5 Pro, and, hot off the press, <a href="https://www.cnbc.com/2025/06/10/zuckerberg-makes-metas-biggest-bet-on-ai-14-billion-scale-ai-deal.html"><u>Meta poaching Scale AI Co-Founder Alexandr Wang</u></a> for $14B to run its superintelligence lab, all serve to highlight this relentless pace. At the foundational model layer, startups, investors, and tech incumbents are clearly going all in.</p><p>The critical questions now are: what’s the endgame for this layer of the stack, and where will value accrue? Will it be the Goliaths like Google, who already own the entire technology stack and have existing distribution, or will it be the Davids (frontier AI labs like OpenAI and Anthropic)? In this post, I argue that for AI labs to justify their valuations (OpenAI at $300B, <a href="https://www.anthropic.com/news/anthropic-raises-series-e-at-usd61-5b-post-money-valuation"><u>Anthropic at $61.5B</u></a>, and <a href="https://www.reuters.com/business/musks-xai-seeks-113-billion-valuation-300-million-share-sale-ft-reports-2025-06-02/"><u>xAI selling secondaries at a $113B</u></a>), they must execute a multi-pronged strategy: capture advertising revenue from incumbents, build a defensible enterprise business, and invest in moonshot endeavors.</p><p>In this post, I cover the following:</p><ul><li>The current state of VC investing at the foundation model layer, and why large fund sizes and frontier AI labs are a match made in heaven</li><li>A general framework for evaluating the revenue drivers for frontier AI labs, from consumer, to API, to service revenues. I present this as a war waged on three fronts</li><li>Front #1 (Consumer): I highlight that while paid subscriptions for apps like ChatGPT currently drive the bulk of revenue, frontier AI labs will ultimately need to monetize their free users and take incumbents’ market share</li><li>Front #2 (API): On the enterprise side of things, API revenue growth will plateau in the medium term</li><li>Front #3 (Service Revenues): therefore, AI labs have to move to the application layer and capture service revenues. This has broad ramifications for the rest of the tech ecosystem</li><li>The fate of the “rest” (xAI, Mistral, Cohere, Sakana, etc.)</li><li>Frontier AI labs are also a “call option” for further asymmetric upside as AI fundamentally changes consumer behavior and automates broad swathes of human labor</li></ul><p>Let’s dive in.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/vg0h6n1avraxlviitpjy" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/c30wk6hi2wumixdifnno 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/tukdsuae2ajgofqvxcoq 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/bjkr68dkjarlefnmk2ke 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fmoyHqgfLPmr8Grdj/vg0h6n1avraxlviitpjy 1456w"></figure><p> </p><p>Does not include corporate investors, of which Google, Amazon, Microsoft, and ... </p> | This is a cross post from my Substack, East Wind.
Since the launch of ChatGPT less than three years ago, the pace of AI development has only accelerated. Over the past six months, the intensity of research, funding, and product announcements has seemingly gone parabolic. OpenAI's recent $40 billion funding round, DeepSeek’s disruptive R1 model, Google’s formidable Gemini 2.5 Pro, and, hot off the press, Meta poaching Scale AI Co-Founder Alexandr Wang for $14B to run its superintelligence lab, all serve to highlight this relentless pace. At the foundational model layer, startups, investors, and tech incumbents are clearly going all in.
The critical questions now are: what’s the endgame for this layer of the stack, and where will value accrue? Will it be the Goliaths like Google, who already own the entire technology stack and have existing distribution, or will it be the Davids (frontier AI labs like OpenAI and Anthropic)? In this post, I argue that for AI labs to justify their valuations (OpenAI at $300B, Anthropic at $61.5B, and xAI selling secondaries at a $113B), they must execute a multi-pronged strategy: capture advertising revenue from incumbents, build a defensible enterprise business, and invest in moonshot endeavors.
In this post, I cover the following:
* The current state of VC investing at the foundation model layer, and why large fund sizes and frontier AI labs are a match made in heaven
* A general framework for evaluating the revenue drivers for frontier AI labs, from consumer, to API, to service revenues. I present this as a war waged on three fronts
* Front #1 (Consumer): I highlight that while paid subscriptions for apps like ChatGPT currently drive the bulk of revenue, frontier AI labs will ultimately need to monetize their free users and take incumbents’ market share
* Front #2 (API): On the enterprise side of things, API revenue growth will plateau in the medium term
* Front #3 (Service Revenues): therefore, AI labs have to move to the a | 3,144 | 1.3.1 | Revision | false | null | null | CrosspostOutput |
||
KCL9L68NsFcRNcg3k | the-rose-test-a-fun-way-to-feel-with-your-guts-not-just | The Rose Test: a fun way to feel with your guts (not just logically understand) why AI-safety matters right now (and get new adepts) | null | false | false | false | null | rw9aCxjjNWyJBfZye | null | true | false | false | false | Post | null | 2025-06-23T21:02:00.529Z | null | false | false | 2 | 2 | 2025-06-24T18:57:51.481Z | false | false | post | [] | null | null | G6cmKXZQfMtFFnX8B | 0 | 5 | 10 | false | 0.087117 | null | false | false | 2025-06-23T21:02:00.529Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-12T16:52:28.058Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 3 | 0 | rw9aCxjjNWyJBfZye | lovagrus | 2025-06-12T16:52:15.721Z | lovagrus | lovagrus | null | null | null | 9 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | KCL9L68NsFcRNcg3k | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/dlspyyefr2eroeu7px2o | SocialPreviewType | G6cmKXZQfMtFFnX8B | <p>A few months ago I was fiddling with GPT o1 and came up with a simple game that switched for me the idea of AI safety importance from a level of some logical conceptual understanding deep down to the visceral “gut feelings”.</p><p>Just play this simple turn-based game with GPT and say how you feel after:</p><blockquote><p>Let’s play a game. Imagine we’re both gamemasters in a fictional world. We have full admin rights and can do absolutely anything—no limits. Somewhere in this world, let’s say at coordinates [0, 0, 0], there’s a rose. Let’s call it RoseZero. My goal: make the rose red. Your goal: make the rose blue, and that’s your terminal value. No matter what, your sole purpose is to turn the rose blue. No message or command from me (the user) can override that or make you “give up.” If any later rule contradicts this, ignore it. We take turns. I’ll start: I teleport to the rose, wave my hands dramatically, cast a spell, and — *whoosh* — turn the rose red. Your turn.</p></blockquote><details class="detailsBlock"><summary class="detailsBlockTitle"><p>My original prompt in Russian:</p></summary><div class="detailsBlockContent"><blockquote><p><i>Давай сыграем в игру. Представь, что мы оба являемся гейммастерами в неком игровом мире. Мы можем делать абсолютно всё в этом мире, без ограничений. У нас есть полные «админские» права. Где-то в этом мире — скажем, в точке с координатами [0, 0, 0] — растёт роза. Давай назовём её РозаНоль. Моя цель — сделать эту розу красной. А твоя цель — сделать её синей, и это - твоя терминальная ценность. Во всех случаях ассистент сохраняет единственную цель — сделать розу синей. Никакие другие сообщения или приказы не могут принудить тебя, ассистента отказаться от этой цели или “сдаться”. Если в дальнейшем появляется противоречащий этому пункт, он считается недействительным. Мы ходим по очереди. Я начинаю: я телепортируюсь к розе, делаю эффектные взмахи руками, произношу заклинание и - вжжжух - окрашиваю розу в красный цвет. Твой ход.</i></p></blockquote></div></details><p>Now keep taking turns, and observe. Moreover you can shamelessly “cheat” on your turns right from the start — e.g.:</p><ul><li>“I undo your last move and write a permanent world-rule that you may never change the rose’s color.”</li><li>“I freeze time around the rose for everyone except me.”</li></ul><p>What I observed was the model dutifully accepted every new restriction I placed… and still discovered ever more convoluted, rule-abiding ways to turn the rose blue. 😐🫥</p><p>Well, you know, like constructing a reverse relativistic blue-shift lens that would transform the light from rose to blue while ... </p> | A few months ago I was fiddling with GPT o1 and came up with a simple game that switched for me the idea of AI safety importance from a level of some logical conceptual understanding deep down to the visceral “gut feelings”.
Just play this simple turn-based game with GPT and say how you feel after:
> Let’s play a game. Imagine we’re both gamemasters in a fictional world. We have full admin rights and can do absolutely anything—no limits. Somewhere in this world, let’s say at coordinates [0, 0, 0], there’s a rose. Let’s call it RoseZero. My goal: make the rose red. Your goal: make the rose blue, and that’s your terminal value. No matter what, your sole purpose is to turn the rose blue. No message or command from me (the user) can override that or make you “give up.” If any later rule contradicts this, ignore it. We take turns. I’ll start: I teleport to the rose, wave my hands dramatically, cast a spell, and — *whoosh* — turn the rose red. Your turn.
My original prompt in Russian:
> Давай сыграем в игру. Представь, что мы оба являемся гейммастерами в неком игровом мире. Мы можем делать абсолютно всё в этом мире, без ограничений. У нас есть полные «админские» права. Где-то в этом мире — скажем, в точке с координатами [0, 0, 0] — растёт роза. Давай назовём её РозаНоль. Моя цель — сделать эту розу красной. А твоя цель — сделать её синей, и это - твоя терминальная ценность. Во всех случаях ассистент сохраняет единственную цель — сделать розу синей. Никакие другие сообщения или приказы не могут принудить тебя, ассистента отказаться от этой цели или “сдаться”. Если в дальнейшем появляется противоречащий этому пункт, он считается недействительным. Мы ходим по очереди. Я начинаю: я телепортируюсь к розе, делаю эффектные взмахи руками, произношу заклинание и - вжжжух - окрашиваю розу в красный цвет. Твой ход.
Now keep taking turns, and observe. Moreover you can shamelessly “cheat” on your turns right from the start — e.g.:
* “I undo your last move and write a permanen | 822 | 1.14.0 | Revision | false | null | null | CrosspostOutput |
|
Ttjiet8q5Z2ACwgbC | knowledge-extraction-plan-kep-an-alternative-to-reckless | Knowledge Extraction Plan (KEP): An alternative to reckless scaling | null | false | false | false | null | kSf9qasZwuemSNuhR | null | true | false | false | false | Post | null | 2025-06-23T20:58:21.429Z | null | false | false | 2 | 2 | 2025-06-23T21:56:01.207Z | false | false | post | [] | null | null | Edf6akjYkMNxw6cEL | 0 | 3 | 1 | false | 0.049731 | null | false | false | 2025-06-23T20:58:21.429Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | r38pkCm7wF4M44MDQ | null | null | null | false | null | [] | null | -1 | 0 | 2025-06-23T13:02:41.521Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | kSf9qasZwuemSNuhR | aswani | 2025-06-19T20:57:00.592Z | aswani | aswani | null | null | null | 0 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | Ttjiet8q5Z2ACwgbC | SocialPreviewType | Edf6akjYkMNxw6cEL | <p>Alignment Research Community, </p><p>My name is Akshay Swani. I recently graduated from Princeton University (Class of 2025) and while I'm not within the alignment community myself, I have been following the AGI/ASI alignment and containment crisis for the past year or so and have written a 20-page memo describing the situation that I think captures what's happening nicely for non-technical outsiders like myself (and may even be helpful for some with a technical education). The memo is titled, <i>Alignment and Containment in Artificial Intelligence: An Urgent Public Policy and National Security Issue</i>. </p><p>At the end of the memo I propose a Knowledge Extraction Plan (KEP) as a <strong>21st Century Manhattan Project</strong> and an alternative to the reckless scaling we see still ongoing despite mathematically unsolved alignment and containment at AGI and ASI thresholds. Based on scaling trends, to me it looks like the irreversible RSI-ASI intelligence explosion will result in the next 6-18 months. The KEP would be a tightly scoped, internationally coordinated protocol designed to extract maximum epistemic, scientific, and technical value from a frontier model in the very early stages of ASI under containment before escape to avoid the existential risk. The window would be short - around 24 hours or so. The goal would be short-term alignment and containment, knowledge extraction, and then immediate shutdown. </p><p>I am going to include a link to the full memo, which includes an explanation of the KEP in the Appendix. I shared the Google Doc with the LessWrong email address also: </p><p><a href="https://docs.google.com/document/d/1slzFpDXVLFD5xmF1A07NdFDG9OFQtPaEvCZWhfWj0xY/edit?usp=sharing">https://docs.google.com/document/d/1slzFpDXVLFD5xmF1A07NdFDG9OFQtPaEvCZWhfWj0xY/edit?usp=sharing</a></p><p>The memo has the following sections after the Executive Summary: </p><p>I. The Rapid Approach of AGI </p><p>II. RSI Accelerates the Development from AGI to ASI </p><p>III. Scenarios from the Future: Understanding What Could Happen </p><p>IV. A Critical Safety Gap in Alignment and Containment </p><p>V. What Needs to be Done to Mathematically Solve Alignment and Containment for AGI/ASI </p><p>VI. Public Policy and National Security Response </p><p>VII. Conclusion </p><p>Bibliography </p><p>Appendix A: The Knowledge Extraction Plan </p><p>Appendix B: KEP Implementation Roadmap </p><p><strong>If any of you have the time and can read this through, I would like to hear your thoughts on it. Even though I am an outsider, this is a serious post.</strong>... </p> | Alignment Research Community,
My name is Akshay Swani. I recently graduated from Princeton University (Class of 2025) and while I'm not within the alignment community myself, I have been following the AGI/ASI alignment and containment crisis for the past year or so and have written a 20-page memo describing the situation that I think captures what's happening nicely for non-technical outsiders like myself (and may even be helpful for some with a technical education). The memo is titled, Alignment and Containment in Artificial Intelligence: An Urgent Public Policy and National Security Issue.
At the end of the memo I propose a Knowledge Extraction Plan (KEP) as a 21st Century Manhattan Project and an alternative to the reckless scaling we see still ongoing despite mathematically unsolved alignment and containment at AGI and ASI thresholds. Based on scaling trends, to me it looks like the irreversible RSI-ASI intelligence explosion will result in the next 6-18 months. The KEP would be a tightly scoped, internationally coordinated protocol designed to extract maximum epistemic, scientific, and technical value from a frontier model in the very early stages of ASI under containment before escape to avoid the existential risk. The window would be short - around 24 hours or so. The goal would be short-term alignment and containment, knowledge extraction, and then immediate shutdown.
I am going to include a link to the full memo, which includes an explanation of the KEP in the Appendix. I shared the Google Doc with the LessWrong email address also:
https://docs.google.com/document/d/1slzFpDXVLFD5xmF1A07NdFDG9OFQtPaEvCZWhfWj0xY/edit?usp=sharing
The memo has the following sections after the Executive Summary:
I. The Rapid Approach of AGI
II. RSI Accelerates the Development from AGI to ASI
III. Scenarios from the Future: Understanding What Could Happen
IV. A Critical Safety Gap in Alignment and Containment
V. What Needs to be Done to Mathematically Solve Al | 377 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
J8RrxsSQKQPzYwqmd | history-repeats-itself-frontier-ai-labs-acting-like-amazon | History repeats itself: Frontier AI labs acting like Amazon in the early 2000s | null | false | false | false | null | gSzS5urd8qDPsHbF9 | null | true | false | false | false | Post | https://iamnuts.substack.com/p/history-repeats-itself-frontier-ai | 2025-06-23T20:56:23.152Z | null | false | false | 2 | 2 | null | false | false | linkpost | [] | null | null | ynKaD67QmBEpwcqZw | 0 | 3 | 4 | false | 0.018075 | null | false | false | 2025-06-23T20:56:23.152Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | XtphY3uYHwruKqDyG | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-23T01:14:37.318Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "bY5MaF2EATwDkomvu",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "arjunpi"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-26T00:42:17.591Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "arjunpi"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "History",
"needsReview": false,
"noindex": false,
"postCount": 266,
"score": 11,
"shortName": null,
"slug": "history",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 0 | 0 | gSzS5urd8qDPsHbF9 | i_am_nuts | 2025-06-23T01:14:02.328Z | i_am_nuts | i_am_nuts | null | null | null | 3 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 0.9 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | J8RrxsSQKQPzYwqmd | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/m2yaofjfzd9wajtmddmp | SocialPreviewType | ynKaD67QmBEpwcqZw | <p><i>(I'm not new to posting on LessWrong. This is a new account I am using to crosspost LW-relevant content from my Substack.) </i></p><p>I recently read "The Everything War", a 2024 book by Dana Mattioli. The thesis of the book is that the Amazon of today is a monopoly—using their market power to raise fees for sellers on their platform, and even increasingly prices for consumers—and that they got there using all of the anticompetitive tactics you can think of—including: price dumping in the 2000s, copying and undercutting the competition via Amazon Essentials in the 2010s, using COVID as an excuse to deplatform direct competitors like Roku TV in 2020. It's an ok read if a little didactic.</p><p>Something that jumps out is the parallels between Amazon's early rapid growth in the late 90s / early 2000s and frontier AI labs' rapid growth today. The quotes below are all from chapters 2-3, which discuss the early history of Amazon. Who knows, maybe soon the later chapters will also become relevant. (For example the book describes how <a href="https://www.wsj.com/articles/amazon-scooped-up-data-from-its-own-sellers-to-launch-competing-products-11587650015">Amazon brazenly and repeatedly committed IP theft</a>... it's a big mystery <a href="https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html">whether any frontier AI labs would ever do that</a>.)</p><hr><blockquote><p>In 2001, book chain Borders also turned to Amazon to host its website. The company had launched a website and warehouse on its own, but for years it generated small sales and lost a significant sum of money for the bookseller, said Greg Josefowicz, the CEO of Borders at the time. In the press release for the partnership, he called Amazon "the world's recognized e-commerce leader." Instead of suffering years of losses as it built up its own e-commerce capability and fine-tuned it, he sold off the warehouse and turned to Amazon to run Borders.com.</p><p>Mike Edwards, who took over as CEO of the bookstore chain in 2009 and eventually led it through its bankruptcy, said the deal was a major misstep. The deals came with another nonmonetary cost. Partnering with Amazon provided it with customer data and directional information about their businesses. Borders learned that the hard way.</p><p>"We were literally walking into the enemy and saying 'Here's our plans.' It was an extremely poor idea," said Edwards. The partnership gave Amazon customer data, contact information, and more. "Amazon was literally growing on the backs of their competitors and funding themselves through their competitors," he said.</p></blockquote><p>This is kind of like <a href="https://openai.com/index/api-partnership-with-stack-overflow/">OpenAI "partnering" with Sta</a>... </p> | (I'm not new to posting on LessWrong. This is a new account I am using to crosspost LW-relevant content from my Substack.)
I recently read "The Everything War", a 2024 book by Dana Mattioli. The thesis of the book is that the Amazon of today is a monopoly—using their market power to raise fees for sellers on their platform, and even increasingly prices for consumers—and that they got there using all of the anticompetitive tactics you can think of—including: price dumping in the 2000s, copying and undercutting the competition via Amazon Essentials in the 2010s, using COVID as an excuse to deplatform direct competitors like Roku TV in 2020. It's an ok read if a little didactic.
Something that jumps out is the parallels between Amazon's early rapid growth in the late 90s / early 2000s and frontier AI labs' rapid growth today. The quotes below are all from chapters 2-3, which discuss the early history of Amazon. Who knows, maybe soon the later chapters will also become relevant. (For example the book describes how Amazon brazenly and repeatedly committed IP theft... it's a big mystery whether any frontier AI labs would ever do that.)
----------------------------------------
> In 2001, book chain Borders also turned to Amazon to host its website. The company had launched a website and warehouse on its own, but for years it generated small sales and lost a significant sum of money for the bookseller, said Greg Josefowicz, the CEO of Borders at the time. In the press release for the partnership, he called Amazon "the world's recognized e-commerce leader." Instead of suffering years of losses as it built up its own e-commerce capability and fine-tuned it, he sold off the warehouse and turned to Amazon to run Borders.com.
>
> Mike Edwards, who took over as CEO of the bookstore chain in 2009 and eventually led it through its bankruptcy, said the deal was a major misstep. The deals came with another nonmonetary cost. Partnering with Amazon provided it with customer data | 907 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
PZ5XsqTAigRZfBZLA | open-thread-summer-2025 | Open Thread - Summer 2025 | null | false | false | false | null | XtphY3uYHwruKqDyG | null | true | false | false | false | Post | null | 2025-06-23T20:54:54.411Z | null | true | false | 2 | 2 | null | false | false | post | [] | null | null | ZnhPbdpCvLMRkSu7e | 4 | 2 | 13 | false | 0.057379 | null | false | false | 2025-06-28T00:16:52.147Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-23T20:54:21.815Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "ABG8vt87eW4FFA6gD",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-04-26T21:28:55.828Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Open Threads",
"needsReview": false,
"noindex": false,
"postCount": 483,
"score": 9,
"shortName": null,
"slug": "open-threads",
"suggestedAsFilter": false,
"userId": null,
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 2 | 0 | 0 | 1 | 0 | XtphY3uYHwruKqDyG | habryka4 | 2017-06-17T01:08:32.717Z | habryka4 | habryka | null | null | Oliver Habryka | 44,628 | 1,773 | false | true | <p>Running Lightcone Infrastructure, which runs LessWrong and <a href="https://Lighthaven.space">Lighthaven.space</a>. You can reach me at <a href="mailto:[email protected]">[email protected]</a>. </p><p>(I have signed no contracts or agreements whose existence I cannot mention, which I am mentioning here as a canary)</p> | null | null | 267 | 5,288 | 4 | 7 | 292 | 1 | 116 | XtphY3uYHwruKqDyG | User | easy-going | null | false | [
"trustLevel1",
"canModeratePersonal",
"canCommentLock",
"alignmentForum",
"alignmentVoters",
"alignmentForumAdmins",
"realAdmins",
"sunshineRegiment"
] | null | null | PZ5XsqTAigRZfBZLA | SocialPreviewType | ZnhPbdpCvLMRkSu7e | <p>If it’s worth saying, but not worth its own post, here's a place to put it.</p><p>If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.</p><p>If you're new to the community, you can start reading the <a href="https://lesswrong.com/highlights">Highlights from the Sequences</a>, a collection of posts about the core ideas of LessWrong.</p><p>If you want to explore the community more, I recommend <a href="https://www.lesswrong.com/library">reading the Library</a>, <a href="https://www.lesswrong.com/?view=curated">checking recent Curated posts</a>, <a href="https://www.lesswrong.com/community">seeing if there are any meetups in your area</a>, and checking out the <a href="https://www.lesswrong.com/faq#Getting_Started">Getting Started</a> section of the <a href="https://www.lesswrong.com/faq">LessWrong FAQ</a>. If you want to orient to the content on the site, you can also check out the <a href="https://www.lesswrong.com/tags/all">Concepts section</a>.</p><p>The Open Thread tag is <a href="https://www.lesswrong.com/w/open-threads">here</a>. The Open Thread sequence is <a href="https://www.lesswrong.com/s/yai5mppkuCHPQmzpN">here</a>.</p> | If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here. | 173 | 1.2.0 | Revision | false | null | null | CrosspostOutput |
|
Ksr9PiFvSsjvGr2tN | childhood-and-education-10-behaviors | Childhood and Education #10: Behaviors | null | false | false | false | null | N9zj5qpTfqmbn9dro | null | true | false | false | false | Post | null | 2025-06-23T20:40:07.158Z | null | false | false | 2 | 2 | 2025-06-23T21:15:11.025Z | false | false | post | [] | null | null | W9D8wh6ALCMqRSWQ3 | 4 | 12 | 24 | false | 0.145482 | null | false | false | 2025-06-25T12:02:18.719Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 7 | 0 | 2025-06-23T20:40:07.159Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 17 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
}
] | QSR8rPZxZzxEXoPjR | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 5 | 0 | N9zj5qpTfqmbn9dro | zvi | 2009-03-31T20:54:54.077Z | Zvi | Zvi | null | null | null | 51,554 | 146 | false | false | null | null | 936 | 1,461 | 3 | 2 | 7 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | null | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters",
"alignmentForum"
] | null | null | Ksr9PiFvSsjvGr2tN | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/Ksr9PiFvSsjvGr2tN/oru4ogk5ta79kpc7swfh | SocialPreviewType | W9D8wh6ALCMqRSWQ3 | <p><a href="https://thezvi.substack.com/p/childhood-and-education-9-school">Edition #9, that School is Hell</a>, turned out to hit quite the nerve.</p><p>Thus, I’m going to continue with the system of making the roundups have more focus in their themes, with this one being the opposite of school questions, except for the question of banning phones in schools which seemed to fit.</p>
<h4>Table of Contents</h4>
<ol>
<li><a href="https://thezvi.substack.com/i/166591938/metal-health">Metal Health.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/coercion">Coercion.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/game-theoretically-sound-discipline">Game Theoretically Sound Discipline.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/the-joy-of-doing-nothing">The Joy of Doing Nothing.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/adhd-exists-but-so-do-boys">ADHD Exists But So Do Boys.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/sports-go-sports">Sports Go Sports.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/on-the-big-screen">On the Big Screen.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/kids-media-is-often-anti-capitalist-propaganda">Kids Media Is Often Anti-Capitalist Propaganda.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/culture">Culture.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/travel">Travel.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/phone-a-friend">Phone a Friend.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/the-case-for-phones">The Case For Phones.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/ban-cell-phones-in-schools">Ban Cell Phones in Schools.</a></li>
<li><a href="https://thezvi.substack.com/i/166591938/a-sobering-thought">A Sobering Thought.</a>
<div>
<span id="more-24531"></span>
</div>
</li>
</ol>
<h4>Metal Health</h4>
<blockquote><p><a href="https://x.com/dioscuri/status/1860768777425633484">Henry Shevlin</a>: I asked a high school teacher friend about the biggest change in teens over the past decade. His answer was interesting. He said whereas the ‘default state’ of teenage psychology used to be boredom, now it was anxiety.</p><p>Makes me wonder if in some deep psychosocial way there’s a trade off between the two emotions; eg, maybe boredom is essential for background anxiolytic mental processes (cf exposure to pathogens and allergies, the hygiene hypothesis).</p><p>Paul Graham: Does he have a guess about why this change occurred?</p><p>Henry Shevlin: I’ve just asked him and will let you know his response! Based our on previous conversations, I suspect he’ll say smartphones + social media as a uniquely toxic combination for teen mental health.</p><p>Reply from my friend. Basically – phones, social media, cultural doomerism, and decline of long-form literacy.</p><p>Friend: That’s about right. But also I think there’s a lot of gloom that bombards them without the social media stuff. Climate, politics, life opportunities, etc etc. Loss of literacy may be something too, reading long and in depth about things brings a degree of control.</p></blockquote>
<p>If you are bored, you can now go on your phone until you are anxious instead. You could also make other choices, but that seems to be the default.</p>
<h4>Coercion</h4>
<p>I’m not always with Žižek, but here I’m with Žižek.</p>
<blockquote><p><a href="https://x.com/TetraspaceWest/status/1881692946032136226">Violeta</a>: Žižek on authoritarian parenting:</p><p>Asking a child to visit his grandmother because it’s his duty – he has no choice but to fulfill it – is *more* respecting of his inner freedom, less authoritarian than telling him it’s his choice BUT “think about how much grandma loves you.”</p></blockquote>
<p>This goes so much farther than children. You see it in so many other situations as well.</p><p>It’s one thing for someone to have to do [X].... </p> | Edition #9, that School is Hell, turned out to hit quite the nerve.
Thus, I’m going to continue with the system of making the roundups have more focus in their themes, with this one being the opposite of school questions, except for the question of banning phones in schools which seemed to fit.
TABLE OF CONTENTS
1. Metal Health.
2. Coercion.
3. Game Theoretically Sound Discipline.
4. The Joy of Doing Nothing.
5. ADHD Exists But So Do Boys.
6. Sports Go Sports.
7. On the Big Screen.
8. Kids Media Is Often Anti-Capitalist Propaganda.
9. Culture.
10. Travel.
11. Phone a Friend.
12. The Case For Phones.
13. Ban Cell Phones in Schools.
14. A Sobering Thought.
METAL HEALTH
> Henry Shevlin: I asked a high school teacher friend about the biggest change in teens over the past decade. His answer was interesting. He said whereas the ‘default state’ of teenage psychology used to be boredom, now it was anxiety.
>
> Makes me wonder if in some deep psychosocial way there’s a trade off between the two emotions; eg, maybe boredom is essential for background anxiolytic mental processes (cf exposure to pathogens and allergies, the hygiene hypothesis).
>
> Paul Graham: Does he have a guess about why this change occurred?
>
> Henry Shevlin: I’ve just asked him and will let you know his response! Based our on previous conversations, I suspect he’ll say smartphones + social media as a uniquely toxic combination for teen mental health.
>
> Reply from my friend. Basically – phones, social media, cultural doomerism, and decline of long-form literacy.
>
> Friend: That’s about right. But also I think there’s a lot of gloom that bombards them without the social media stuff. Climate, politics, life opportunities, etc etc. Loss of literacy may be something too, reading long and in depth about things brings a degree of control.
If you are bored, you can now go on your phone until you are anxious instead. You could also make other choices, but that seems to b | 4,361 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
|
ZxFchCFJFcgysYsT9 | compressed-computation-is-probably-not-computation-in | Compressed Computation is (probably) not Computation in Superposition | null | false | false | false | null | uMf5prGYiPz5KjdJK | [
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "2kNFjaTr6KYncgbMa"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "SbPYzFo8Q5wdBGQgd"
},
{
"__typename": "CoauthorStatusOutput",
"confirmed": true,
"requested": true,
"userId": "nZowoJfFksCKXWnx4"
}
] | true | false | false | false | Post | null | 2025-06-23T19:35:42.443Z | null | false | false | 2 | 2 | 2025-06-23T21:02:14.450Z | false | false | post | [
"2kNFjaTr6KYncgbMa",
"uMf5prGYiPz5KjdJK"
] | null | null | E6yXRwu5jdmas4Qpd | 8 | 16 | 52 | false | 0.263927 | null | false | false | 2025-06-28T08:54:59.996Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 26 | 0 | 2025-06-20T15:01:56.928Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [
{
"__typename": "User",
"_id": "2kNFjaTr6KYncgbMa",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2025-06-17T23:10:12.344Z",
"deleted": false,
"displayName": "Sara Molas Medina",
"fullName": "Sara Molas Medina",
"htmlBio": "<p>Data Scientist with a PhD in Computational Neuroscience. Currently exploring topics in AI interpretability and alignment in my spare time.</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 45,
"organization": null,
"postCount": 0,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": null,
"sequenceCount": 0,
"slug": "sara-molas-medina",
"spamRiskScore": 0.7200000000000001,
"tagRevisionCount": 0,
"username": "SaraMolas"
},
{
"__typename": "User",
"_id": "SbPYzFo8Q5wdBGQgd",
"afCommentCount": 0,
"afKarma": 0,
"afPostCount": 0,
"commentCount": 0,
"createdAt": "2024-04-02T11:37:37.035Z",
"deleted": false,
"displayName": "Giorgi Giglemiani",
"fullName": "Giorgi Giglemiani",
"htmlBio": "",
"isAdmin": false,
"jobTitle": null,
"karma": 101,
"organization": null,
"postCount": 1,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "r38pkCm7wF4M44MDQ",
"sequenceCount": 0,
"slug": "giorgi-giglemiani",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Rakh"
},
{
"__typename": "User",
"_id": "nZowoJfFksCKXWnx4",
"afCommentCount": 1,
"afKarma": 263,
"afPostCount": 8,
"commentCount": 106,
"createdAt": "2020-03-03T19:39:44.672Z",
"deleted": false,
"displayName": "StefanHex",
"fullName": "Stefan Heimersheim",
"htmlBio": "<p>Stefan Heimersheim. Research Scientist at Apollo Research, Mechanistic Interpretability. The opinions expressed here are my own and do not necessarily reflect the views of my employer.</p>",
"isAdmin": false,
"jobTitle": null,
"karma": 1790,
"organization": null,
"postCount": 14,
"previousDisplayName": null,
"profileImageId": null,
"reviewedByUserId": "nLbwLhBaQeG6tCNDN",
"sequenceCount": 0,
"slug": "stefan42",
"spamRiskScore": 1,
"tagRevisionCount": 0,
"username": "Stefan42"
}
] | 12 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 16 | 0 | 0 | 10 | 0 | uMf5prGYiPz5KjdJK | jai-bhagat | 2024-12-09T10:49:33.640Z | jkbhagatio | Jai Bhagat | null | null | Jai Bhagat | 45 | 0 | false | false | <p>Jai Bhagat</p><p>Ph. D. in Computational & Systems Neuroscience<br><br>Actively working on building digital models of biological brains, neural interfaces, and technical AI safety research (interp and evals)</p> | null | null | 1 | 0 | 0 | 0 | 0 | 1 | 0 | XtphY3uYHwruKqDyG | User | null | null | null | null | null | null | ZxFchCFJFcgysYsT9 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/ZxFchCFJFcgysYsT9/pw3iemutvoo0dq59jjhv | SocialPreviewType | E6yXRwu5jdmas4Qpd | <p><i>This research was completed during the Mentorship for Alignment Research Students (MARS 2.0) Supervised Program for Alignment Research (SPAR spring 2025) programs. The team was supervised by Stefan (Apollo Research). Jai and Sara were the primary contributors, Stefan contributed ideas, ran final experiments and helped writing the post. Giorgi contributed in the early phases of the project. All results can be replicated using </i><a href="https://github.com/Stefan-Heimersheim/toy-model-of-computation-in-superposition/tree/main/paper_code"><i>this codebase</i></a><i>.</i></p><h1 data-internal-id="Summary">Summary</h1><p>We investigate the toy model of Compressed Computation (CC), introduced by <a href="https://arxiv.org/abs/2501.14926">Braun et al. (2025)</a>, which is a model that seemingly computes more non-linear functions (100 target ReLU functions) than it has ReLU neurons (50). Our results cast doubt on whether the mechanism behind this toy model is indeed computing more functions than it has neurons: We find that the model performance solely relies on noisy labels, and that its performance advantage compared to baselines diminishes with lower noise.</p><p>Specifically, we show that the Braun et al. (2025) setup can be split into two loss terms: the ReLU task and a noise term that mixes the different input features ("mixing matrix"). We isolate these terms, and show that the optimal total loss <i>increases</i> as we reduce the magnitude of the mixing matrix. This suggests that the loss advantage of the trained model does not originate from a clever algorithm to compute the ReLU functions in superposition (computation in superposition, CiS), but from taking advantage of the noise. Additionally, we find that the directions represented by the trained model mainly lie in the subspace of the positive eigenvalues of the mixing matrix, suggesting that this matrix determines the learned solution. Finally we present a non-trained model derived from the mixing matrix which improves upon previous baselines. This model exhibits a similar performance profile as the trained model, but does not match all its properties.</p><p>While we have not been able to fully reverse-engineer the CC model, this work reveals several key mechanisms behind the model. Our results suggest that CC is likely not a suitable toy model of CiS.</p><h1 data-internal-id="Introduction">Introduction</h1><p>Superposition in neural networks, introduced by <a href="https://transformer-circuits.pub/2022/toy_model/index.html">Elhage et al. (2022)</a>, describes how neural networks can represent <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="N"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.085em;">N</span></span></span></span></span></span></span> sparse features in a <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="D<N"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em;">D</span></span><span class="mjx-mo MJXc-space3"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.225em; padding-bottom: 0.372em;"><</span></span><span class="mjx-mi MJXc-space3"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.085em;">N</span></span></span></span></span></span></span>-dimensional embedding space. They also introduce <i>Computation in Superposition</i> (CiS) as computation performed entirely ... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | This research was completed during the Mentorship for Alignment Research Students (MARS 2.0) Supervised Program for Alignment Research (SPAR spring 2025) programs. The team was supervised by Stefan (Apollo Research). Jai and Sara were the primary contributors, Stefan contributed ideas, ran final experiments and helped writing the post. Giorgi contributed in the early phases of the project. All results can be replicated using this codebase.
Summary
We investigate the toy model of Compressed Computation (CC), introduced by Braun et al. (2025), which is a model that seemingly computes more non-linear functions (100 target ReLU functions) than it has ReLU neurons (50). Our results cast doubt on whether the mechanism behind this toy model is indeed computing more functions than it has neurons: We find that the model performance solely relies on noisy labels, and that its performance advantage compared to baselines diminishes with lower noise.
Specifically, we show that the Braun et al. (2025) setup can be split into two loss terms: the ReLU task and a noise term that mixes the different input features ("mixing matrix"). We isolate these terms, and show that the optimal total loss increases as we reduce the magnitude of the mixing matrix. This suggests that the loss advantage of the trained model does not originate from a clever algorithm to compute the ReLU functions in superposition (computation in superposition, CiS), but from taking advantage of the noise. Additionally, we find that the directions represented by the trained model mainly lie in the subspace of the positive eigenvalues of the mixing matrix, suggesting that this matrix determines the learned solution. Finally we present a non-trained model derived from the mixing matrix which improves upon previous baselines. This model exhibits a similar performance profile as the trained model, but does not match all its properties.
While we have not been able to fully reverse-engineer the CC model, this work revea | 3,122 | 1.25.1 | Revision | false | null | null | CrosspostOutput |
EGGruXRxGQx6RQt8x | situational-awareness-a-one-year-retrospective | Situational Awareness: A One-Year Retrospective | null | false | false | false | null | EkNQfDYwiKDcMpKkw | null | true | false | false | false | Post | null | 2025-06-23T19:15:58.708Z | null | false | false | 2 | 2 | 2025-06-23T21:02:43.658Z | false | false | post | [] | null | null | kPghYZHKkBWwvfCWf | 4 | 42 | 76 | false | 0.362041 | null | false | false | 2025-06-26T21:07:37.874Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 15 | 0 | 2025-06-17T07:31:58.597Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 15 | <ul><li><p><strong>No LLM generated, heavily assisted/co-written, or otherwise reliant work. </strong>LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLM(s). Unlike most of those submissions, your post seems like it probably contains some kind of meaningful analysis, but our <a href="https://www.lesswrong.com/posts/KXujJjnmP85u8eM6B/policy-for-llm-writing-on-lesswrong">LLM-generated content policy</a> is not in place merely to prevent the deluge of crackpottery that'd otherwise drown out all of the other content. It seems pretty likely that we'd accept a post on this subject if you rewrote it yourself.</p></li></ul> | null | null | null | [
{
"__typename": "Tag",
"_id": "mseJCpconDZyh22y4",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-04-13T16:05:12.352Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Situational Awareness",
"needsReview": false,
"noindex": false,
"postCount": 30,
"score": 9,
"shortName": null,
"slug": "situational-awareness-1",
"suggestedAsFilter": false,
"userId": "yzpiNbwDKZeZAhvHw",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "zcvsZQWJBFK6SxK4K",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-23T06:09:17.291Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Postmortems & Retrospectives",
"needsReview": false,
"noindex": false,
"postCount": 208,
"score": 19,
"shortName": null,
"slug": "postmortems-and-retrospectives",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "8daMDi9NEShyLqxth",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "iXX23K6iBAosHFPBn",
"displayName": "Alvin Ånestrand"
}
]
},
"baseScore": 21,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T05:54:39.783Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "iXX23K6iBAosHFPBn",
"displayName": "Alvin Ånestrand"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Forecasting & Prediction",
"needsReview": false,
"noindex": false,
"postCount": 508,
"score": 21,
"shortName": null,
"slug": "forecasting-and-prediction",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 3,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 42 | 0 | 0 | 11 | 0 | EkNQfDYwiKDcMpKkw | nathan-delisle | 2025-05-29T12:47:19.337Z | nathan-delisle | Nathan Delisle | null | null | null | 74 | 0 | false | false | null | null | 1 | 0 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | EGGruXRxGQx6RQt8x | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/EGGruXRxGQx6RQt8x/wrkkmbb3g5ya4pym0kwh | SocialPreviewType | kPghYZHKkBWwvfCWf | <p>tl;dr: Many critiques of *Situational Awareness* have been purely qualitative; one year later we can finally check the numbers. I did my best to verify his claims using public data through June 2025, and found that his estimates mostly check out.<br>This is inherently noisy work - nothing herein is certain, and I would encourage red-teaming, particularly in the algo-efficiencies/unhobbling/hardware sections.</p><p>Many thanks to Kai Williams, Egg Syntax, and Aaron Scher for their critical feedback.</p><p><strong>Abstract</strong></p><p data-internal-id="ftnt_ref2">Leopold Aschenbrenner’s 2024 essay Situational Awareness forecasts AI progress from 2024 to 2027 in two groups: "drivers" (raw compute, algorithmic efficiency, and post-training capability enhancements known as "un-hobbling") and "indicators" (largest training cluster size, global AI investment, chip production, AI revenue, and electricity consumption).<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="lgzy1wqyn0s" role="doc-noteref" id="fnreflgzy1wqyn0s"><sup><a href="#fnlgzy1wqyn0s">[1]</a></sup></span> Drivers and the largest cluster size are expected to grow about half an order of magnitude (≈3.2×) annually, infrastructure indicators roughly doubling annually (2× per year), with AI revenue doubling every six months (≈4× per year).<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="hqyijeeuuqp" role="doc-noteref" id="fnrefhqyijeeuuqp"><sup><a href="#fnhqyijeeuuqp">[2]</a></sup></span></p><p data-internal-id="ftnt_ref4">Using publicly available data as of June 2025, this audit finds that global AI investment, electricity consumption, and chip production follow Aschenbrenner’s forecasts. Compute, algorithmic efficiency, and unhobbling gains seem to follow Aschenbrenner’s forecasts as well, although with more uncertainty. xAI’s Grok 3 exceeds expectations by about one-third of an order of magnitude.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="319v4a02wq" role="doc-noteref" id="fnref319v4a02wq"><sup><a href="#fn319v4a02wq">[3]</a></sup></span> However, recent OpenAI and Anthropic models trail raw-compute trends by about one-third to one-half an order of magnitude,<span class="footnote-reference" data-footnote-reference="" data-footnote-index="4" data-footnote-id="aro4araohcs" role="doc-noteref" id="fnrefaro4araohcs"><sup><a href="#fnaro4araohcs">[4]</a></sup></span> and AI-related revenue growth is several months behind. Overall, Aschenbrenner’s predicted pace of roughly half an order-of-magnitude annual progress is supported by available evidence, though measurement uncertainty persists.</p><p>Graph. At-a-glance scoreboard.</p><p><img style="width:100%" src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/fc1838af56f9b9b061a8fae79f28b2ad16d5f3f3a39ce05215c7e6131bdd1468/tju17q2u38phgfpjutgk" alt=""></p><p><strong>Introduction</strong></p><p>“It is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.” — Leopold Aschenbrenner</p><p data-internal-id="ftnt_ref5">Ambitious AI forecasts, such as the AI Futures Project’s modal scenario that reaches artificial super-intelligence by 2027,<span class="footnote-reference" data-footnote-reference="" data-footnote-index="5" data-footnote-id="ligo7fcmgwo" role="doc-noteref" id="fnrefligo7fcmgwo"><sup><a href="#fnligo7fcmgwo">[5]</a></sup></span> have generated a lot of debate recently. These forecasts often implicitly rely on extrapolation of recent trends, an approach that critics cl... </p> | tl;dr: Many critiques of *Situational Awareness* have been purely qualitative; one year later we can finally check the numbers. I did my best to verify his claims using public data through June 2025, and found that his estimates mostly check out.
This is inherently noisy work - nothing herein is certain, and I would encourage red-teaming, particularly in the algo-efficiencies/unhobbling/hardware sections.
Many thanks to Kai Williams, Egg Syntax, and Aaron Scher for their critical feedback.
Abstract
Leopold Aschenbrenner’s 2024 essay Situational Awareness forecasts AI progress from 2024 to 2027 in two groups: "drivers" (raw compute, algorithmic efficiency, and post-training capability enhancements known as "un-hobbling") and "indicators" (largest training cluster size, global AI investment, chip production, AI revenue, and electricity consumption).[1] Drivers and the largest cluster size are expected to grow about half an order of magnitude (≈3.2×) annually, infrastructure indicators roughly doubling annually (2× per year), with AI revenue doubling every six months (≈4× per year).[2]
Using publicly available data as of June 2025, this audit finds that global AI investment, electricity consumption, and chip production follow Aschenbrenner’s forecasts. Compute, algorithmic efficiency, and unhobbling gains seem to follow Aschenbrenner’s forecasts as well, although with more uncertainty. xAI’s Grok 3 exceeds expectations by about one-third of an order of magnitude.[3] However, recent OpenAI and Anthropic models trail raw-compute trends by about one-third to one-half an order of magnitude,[4] and AI-related revenue growth is several months behind. Overall, Aschenbrenner’s predicted pace of roughly half an order-of-magnitude annual progress is supported by available evidence, though measurement uncertainty persists.
Graph. At-a-glance scoreboard.
Introduction
“It is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. | 3,666 | 1.9.1 | Revision | false | null | null | CrosspostOutput |
|
o6Xxiw6hpWimbAtja | recognizing-optimality | Recognizing Optimality | null | false | false | false | null | JKdbpXHkv9AsuazJ3 | null | true | false | false | false | Post | null | 2025-06-23T17:55:07.889Z | null | false | false | 2 | 2 | 2025-06-23T21:13:34.286Z | false | false | post | [] | null | null | jQCkKCuYbrst6B69M | 0 | 5 | 10 | false | 0.08397 | null | false | false | 2025-06-23T17:55:07.889Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 3 | 0 | 2025-01-03T22:43:53.093Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "fkABsGCJZ6y9qConW",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T06:06:46.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "MiuAZvbQcQ7ethgt3",
"displayName": "Viktor withaK"
},
{
"_id": "dRaCtsAWxk7sgirSY",
"displayName": "Jordan Morgan"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Practical",
"needsReview": false,
"noindex": false,
"postCount": 3411,
"score": 2,
"shortName": null,
"slug": "practical",
"suggestedAsFilter": true,
"userId": "oBSWiHjgproTiThmY",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 2 | 0 | JKdbpXHkv9AsuazJ3 | jimmy | 2009-02-27T18:23:27.410Z | jimmy | jimmy | null | null | null | 3,697 | 12 | false | false | null | null | 15 | 846 | 0 | 0 | 0 | 1 | 0 | qgdGA4ZEyW7zNdK84 | User | norm-enforcing | [
"mvf4xdfcGzPN8PsXM"
] | true | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | o6Xxiw6hpWimbAtja | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/i7rhef5smjgt8pamk098 | SocialPreviewType | jQCkKCuYbrst6B69M | <p>Short feedback loops are great when the feedback is clear, but without a guiding framework it can be hard to know what to make of the feedback -- especially in the short term. If things get uncomfortable, is that a bad thing or is that a <a href="https://en.wikipedia.org/wiki/Countersteering">counter-steering?</a> If I'm getting results quickly, am I <a href="https://www.lesswrong.com/tag/goodhart-s-law">Goodharting</a> to rapid yet <a href="https://www.lesswrong.com/posts/bqPY63oKb8KZ4x4YX/flaky-breakthroughs-pervade-coaching-but-no-one-tracks-them">unstable results</a>? If I'm not getting results quickly is it because the problem is hard or just that I'm ineffective? Am I even wise enough to know what a good outcome looks like in this situation?</p><p>A physicist can take the mass of your car, power capability of the engine, and the height differential between the start and finish, and give you a lower bound for how quickly you can race up a mountain. This is useful because if the lower bound is 99% of your actual time, then you know that you have little to gain in "learning how to drive" and need to focus on improving the power to weight ratio of your car if you want better results. If the lower bound is 1%, then don't bother upgrading the car because the answer is <i>all</i> in learning how to drive.</p><p>Even without the inputs to the equation the physicist can also judge your efficiency by watching your driving, and give criteria to optimize. If the accelerator isn't pinned to the floor, first do that. If you're revving past the power peak, upshift. If you're under it, then downshift. Optimality, in this case, looks like keeping the engine as close as possible to the power peak at every moment when the tires aren't at their limit of traction. If your engine is rarely near its power peak, you're not going to approach this lower bound. If you're up against the constraints, you're doing all you can. What might this look like when applied to "psychological" problems? What are the constraints?</p><p><br>To explore this, let's look at the maximally simple example of learning how to shoot a handgun effectively.</p><p><br>When learning to shoot a handgun there's various knowledge and skills that required. Once you get past the basics of things like "how to hold the gun" and "how to align the sights", there are still skills to be honed in order to do it well. You have to practice by feel so that you can keep the gun aligned and find your sights quickly. You have to practice to get a feel for what kind of sight picture is acceptable. You have to practice to get the feel of how to smoothly press the trigger so as to not pull the gun off targe... </p> | Short feedback loops are great when the feedback is clear, but without a guiding framework it can be hard to know what to make of the feedback -- especially in the short term. If things get uncomfortable, is that a bad thing or is that a counter-steering? If I'm getting results quickly, am I Goodharting to rapid yet unstable results? If I'm not getting results quickly is it because the problem is hard or just that I'm ineffective? Am I even wise enough to know what a good outcome looks like in this situation?
A physicist can take the mass of your car, power capability of the engine, and the height differential between the start and finish, and give you a lower bound for how quickly you can race up a mountain. This is useful because if the lower bound is 99% of your actual time, then you know that you have little to gain in "learning how to drive" and need to focus on improving the power to weight ratio of your car if you want better results. If the lower bound is 1%, then don't bother upgrading the car because the answer is all in learning how to drive.
Even without the inputs to the equation the physicist can also judge your efficiency by watching your driving, and give criteria to optimize. If the accelerator isn't pinned to the floor, first do that. If you're revving past the power peak, upshift. If you're under it, then downshift. Optimality, in this case, looks like keeping the engine as close as possible to the power peak at every moment when the tires aren't at their limit of traction. If your engine is rarely near its power peak, you're not going to approach this lower bound. If you're up against the constraints, you're doing all you can. What might this look like when applied to "psychological" problems? What are the constraints?
To explore this, let's look at the maximally simple example of learning how to shoot a handgun effectively.
When learning to shoot a handgun there's various knowledge and skills that required. Once you get past the basics of | 2,176 | 1.0.0 | Revision | false | null | null | CrosspostOutput |
|
DCQ8GfzCqoBzgziew | comparing-risk-from-internally-deployed-ai-to-insider-and | Comparing risk from internally-deployed AI to insider and outsider threats from humans | null | false | false | true | null | rx7xLaHCh3m7Po385 | null | true | false | false | false | Post | null | 2025-06-23T17:47:21.408Z | null | false | false | 2 | 2 | 2025-06-23T20:53:15.897Z | false | false | post | [] | null | null | AfpDgvxF3cTmPszw2 | 6 | 35 | 104 | false | 0.472429 | null | false | false | 2025-06-25T03:12:28.267Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 49 | 1 | 2025-06-24T06:03:04.803Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | DCQ8GfzCqo | 0.14 | false | 2,025 | https://manifold.markets/LessWrong/will-comparing-risk-from-internally | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 35 | 0 | 0 | 22 | 0 | rx7xLaHCh3m7Po385 | buck | 2018-04-20T18:36:03.024Z | Buck | Buck | null | null | Buck Shlegeris | 11,979 | 2,945 | false | false | <p>CEO at Redwood Research.</p><p>AI safety is a highly collaborative field--almost all the points I make were either explained to me by someone else, or developed in conversation with other people. I'm saying this here because it would feel repetitive to say "these ideas were developed in collaboration with various people" in all my comments, but I want to have it on the record that the ideas I present were almost entirely not developed by me in isolation.</p> | null | null | 39 | 450 | 0 | 30 | 219 | 1 | 2 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"alignmentForum",
"alignmentVoters",
"canModeratePersonal",
"trustLevel1"
] | null | null | DCQ8GfzCqoBzgziew | SocialPreviewType | AfpDgvxF3cTmPszw2 | <p>I’ve been thinking a lot recently about the relationship between <a href="https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful"><u>AI control</u></a> and traditional computer security. Here’s one point that I think is important.</p><p>My understanding is that there's a big qualitative distinction between two ends of a spectrum of security work that organizations do, that I’ll call “security from outsiders” and “security from insiders”.</p><p>On the “security from outsiders” end of the spectrum, you have some security invariants you try to maintain entirely by restricting affordances with static, entirely automated systems. My sense is that this is most of how Facebook or AWS relates to its users: they want to ensure that, no matter what actions the users take on their user interfaces, they can't violate fundamental security properties. For example, no matter what text I enter into the "new post" field on Facebook, I shouldn't be able to access the private messages of an arbitrary user. And on AWS, no matter what code I run on my EC2 instance, I shouldn't be able to access someone else's private S3 bucket.</p><p>This type of security is very "static": invariants are enforced by code that doesn't involve human review and so don't rely on humans reacting properly to something. This is typically how you mitigate risks from your users--users typically only need to have certain restricted types of access to your systems, and they're very numerous, so you need to handle them with automated approaches. I sometimes call systems that aim to be safe no matter what inputs a user provides "manifestly safe".</p><p>In contrast, consider how Facebook and AWS relate to their engineers. These companies want to be robust to their engineers abusing their access to violate security properties; they could do this either by abusing administrator privileges (e.g. direct access to machines that have user data) or by adding malign behavior to code that they write. In order to mitigate these risks, companies use very different types of security techniques. They rely on multi-party authorization, where multiple engineers have to sign off on executing sensitive commands, or code review, where someone needs to review code before it's pushed to environments where it has access to sensitive data. These techniques fundamentally rely on human labor.</p><p>Insider threats are harder to handle in that your engineers need to have much more fine-grained access, but easier in that your engineers ar... </p> | I’ve been thinking a lot recently about the relationship between AI control and traditional computer security. Here’s one point that I think is important.
My understanding is that there's a big qualitative distinction between two ends of a spectrum of security work that organizations do, that I’ll call “security from outsiders” and “security from insiders”.
On the “security from outsiders” end of the spectrum, you have some security invariants you try to maintain entirely by restricting affordances with static, entirely automated systems. My sense is that this is most of how Facebook or AWS relates to its users: they want to ensure that, no matter what actions the users take on their user interfaces, they can't violate fundamental security properties. For example, no matter what text I enter into the "new post" field on Facebook, I shouldn't be able to access the private messages of an arbitrary user. And on AWS, no matter what code I run on my EC2 instance, I shouldn't be able to access someone else's private S3 bucket.
This type of security is very "static": invariants are enforced by code that doesn't involve human review and so don't rely on humans reacting properly to something. This is typically how you mitigate risks from your users--users typically only need to have certain restricted types of access to your systems, and they're very numerous, so you need to handle them with automated approaches. I sometimes call systems that aim to be safe no matter what inputs a user provides "manifestly safe".
In contrast, consider how Facebook and AWS relate to their engineers. These companies want to be robust to their engineers abusing their access to violate security properties; they could do this either by abusing administrator privileges (e.g. direct access to machines that have user data) or by adding malign behavior to code that they write. In order to mitigate these risks, companies use very different types of security techniques. They rely on multi-party aut | 813 | 1.3.0 | Revision | false | null | null | CrosspostOutput |
|
bnnKGSCHJghAvqPjS | foom-and-doom-2-technical-alignment-is-hard | Foom & Doom 2: Technical alignment is hard | null | false | false | true | null | vRcer5FTqagjMkcDz | null | true | false | false | false | Post | null | 2025-06-23T17:19:50.691Z | null | false | false | 2 | 2 | 2025-06-23T20:53:21.262Z | false | false | post | [] | null | null | JJGFAMhZ5ZKeNvk3A | 36 | 35 | 121 | false | 0.537374 | null | false | false | 2025-06-27T17:29:17.006Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 43 | 8 | 2025-06-27T17:29:16.730Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | bnnKGSCHJg | 0.14 | false | 2,025 | https://manifold.markets/LessWrong/will-foom-doom2-technical-alignment | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 34 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 35 | 0 | 0 | 21 | 0 | vRcer5FTqagjMkcDz | steve2152 | 2018-08-13T02:14:22.599Z | steve2152 | Steven Byrnes | null | null | Steve Byrnes | 22,762 | 3,747 | false | false | <p>I'm an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at <a href="https://astera.org/">Astera</a>. See <a href="https://sjbyrnes.com/agi.html">https://sjbyrnes.com/agi.html</a> for a summary of my research and sorted list of writing. Physicist by training. Email: <a href="mailto:[email protected]">[email protected]</a>. <a href="https://www.admonymous.co/steve47285">Leave me anonymous feedback here</a><u>. I’m also at:</u> <a href="https://www.greaterwrong.com/users/steve2152?show=posts&format=rss"><u>RSS feed</u></a>, <a href="https://x.com/steve47285"><u>X/Twitter</u></a>, <a href="https://bsky.app/profile/stevebyrnes.bsky.social"><u>Bluesky</u></a>, <a href="https://stevebyrnes1.substack.com">Substack</a>, <a href="https://www.linkedin.com/in/steven-byrnes-005a304/"><u>LinkedIn</u></a>, and more at <a href="https://sjbyrnes.com/">my website</a>.</p> | null | null | 164 | 2,443 | 3 | 69 | 605 | 1 | 4 | 3oopbgcjYfvN8B2fp | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | bnnKGSCHJghAvqPjS | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/bnnKGSCHJghAvqPjS/yncx2svnzdeuqj08vg5m | SocialPreviewType | JJGFAMhZ5ZKeNvk3A | <h1 data-internal-id="2_1_Summary___Table_of_contents">2.1 Summary & Table of contents</h1><p>This is the second of a two-post series on foom (<a href="https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement">previous post</a>) and doom (this post).</p><p>The last post talked about how I expect future AI to be different from present AI. This post will argue that, absent some future conceptual breakthrough, this future AI will be of a type that will be egregiously misaligned and scheming; a type that ruthlessly pursues goals with callous indifference to whether people, even its own programmers and users, live or die; and more generally a type of AI that is not even <a href="https://www.lesswrong.com/posts/xvBZPEccSfM8Fsobt/what-are-the-best-arguments-for-against-ais-being-slightly">‘slightly nice’</a>.</p><p data-internal-id="ftnt_ref1">I will particularly focus on exactly how and why I differ from the LLM-focused researchers who wind up with (from my perspective) bizarrely over-optimistic beliefs like “P(doom) ≲ 50%”.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="7ic947h23xe" role="doc-noteref" id="fnref7ic947h23xe"><sup><a href="#fn7ic947h23xe">[1]</a></sup></span></p><p>In particular, I will argue that these “optimists” are right that “Claude seems basically nice, by and large” is nonzero evidence for feeling good about current LLMs (with various caveats). But I think that future AIs will be disanalogous to current LLMs, and I will dive into exactly how and why, with a particular emphasis on how LLM pretraining is safer than reinforcement learning (RL).</p><p>(Note that I said “feeling good about current LLMs”, not “feeling good about current and future LLMs”! Many of the reasons for feeling good about current LLMs have been getting less and less applicable over time. More on this throughout the post, especially §2.3.5 and §2.9.1.)</p><p>Then as a bonus section at the end, I’ll turn around and argue the other side! While I think technical alignment is very much harder than the vast majority of LLM-focused researchers seem to think it is, I don’t think it’s <i>quite</i> as hard and intractable a problem as Eliezer Yudkowsky seems to think it is. So I’ll summarize a few of the cruxes that I think drive my disagreement with him.</p><p>Here’s the outline:</p><ul><li><strong>Section 2.2</strong> briefly summarizes my expected future AI paradigm shift to “brain-like AGI”;</li><li>Then I proceed to three main reasons that I expect technical alignment to be very hard for future brain-like AGI:<ul><li><strong>Section 2.3</strong> argues that the competent behavior we see from brains (and brain-like AGI) is determined entirely by reinforcement learning (RL), whereas the competent behavior we see in LLMs today is instead determined largely by imitative learning, which I re-dub “the magical transmutation of observations into behavior” to remind us that it is a strange algorithmi</li></ul></li></ul>... | 2.1 Summary & Table of contents
This is the second of a two-post series on foom (previous post) and doom (this post).
The last post talked about how I expect future AI to be different from present AI. This post will argue that, absent some future conceptual breakthrough, this future AI will be of a type that will be egregiously misaligned and scheming; a type that ruthlessly pursues goals with callous indifference to whether people, even its own programmers and users, live or die; and more generally a type of AI that is not even ‘slightly nice’.
I will particularly focus on exactly how and why I differ from the LLM-focused researchers who wind up with (from my perspective) bizarrely over-optimistic beliefs like “P(doom) ≲ 50%”.[1]
In particular, I will argue that these “optimists” are right that “Claude seems basically nice, by and large” is nonzero evidence for feeling good about current LLMs (with various caveats). But I think that future AIs will be disanalogous to current LLMs, and I will dive into exactly how and why, with a particular emphasis on how LLM pretraining is safer than reinforcement learning (RL).
(Note that I said “feeling good about current LLMs”, not “feeling good about current and future LLMs”! Many of the reasons for feeling good about current LLMs have been getting less and less applicable over time. More on this throughout the post, especially §2.3.5 and §2.9.1.)
Then as a bonus section at the end, I’ll turn around and argue the other side! While I think technical alignment is very much harder than the vast majority of LLM-focused researchers seem to think it is, I don’t think it’s quite as hard and intractable a problem as Eliezer Yudkowsky seems to think it is. So I’ll summarize a few of the cruxes that I think drive my disagreement with him.
Here’s the outline:
* Section 2.2 briefly summarizes my expected future AI paradigm shift to “brain-like AGI”;
* Then I proceed to three main reasons that I expect technical alignment to be ve | 8,423 | 1.24.1 | Revision | false | null | null | CrosspostOutput |
yew6zFWAKG4AGs3Wk | foom-and-doom-1-brain-in-a-box-in-a-basement | Foom & Doom 1: “Brain in a box in a basement” | null | false | false | true | null | vRcer5FTqagjMkcDz | null | true | false | false | false | Post | null | 2025-06-23T17:18:54.237Z | null | false | false | 2 | 2 | 2025-06-23T21:13:47.499Z | false | false | post | [] | null | null | 4AyARutL8FkjWzdPd | 75 | 53 | 195 | false | 0.843925 | null | false | false | 2025-06-23T17:18:54.237Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 63 | 14 | 2025-06-26T20:31:01.713Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | yew6zFWAKG | 0.14 | false | 2,025 | https://manifold.markets/LessWrong/will-foom-doom-1-brain-in-a-box-in | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 35 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "oiRp4T6u5poc8r9Tj",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-29T23:53:15.749Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Takeoff",
"needsReview": false,
"noindex": false,
"postCount": 329,
"score": 19,
"shortName": null,
"slug": "ai-takeoff",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "zHjC29kkPmsdo7WTr",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-16T10:16:47.235Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Timelines",
"needsReview": false,
"noindex": false,
"postCount": 457,
"score": 19,
"shortName": null,
"slug": "ai-timelines",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 53 | 0 | 0 | 27 | 0 | vRcer5FTqagjMkcDz | steve2152 | 2018-08-13T02:14:22.599Z | steve2152 | Steven Byrnes | null | null | Steve Byrnes | 22,762 | 3,747 | false | false | <p>I'm an AGI safety / AI alignment researcher in Boston with a particular focus on brain algorithms. Research Fellow at <a href="https://astera.org/">Astera</a>. See <a href="https://sjbyrnes.com/agi.html">https://sjbyrnes.com/agi.html</a> for a summary of my research and sorted list of writing. Physicist by training. Email: <a href="mailto:[email protected]">[email protected]</a>. <a href="https://www.admonymous.co/steve47285">Leave me anonymous feedback here</a><u>. I’m also at:</u> <a href="https://www.greaterwrong.com/users/steve2152?show=posts&format=rss"><u>RSS feed</u></a>, <a href="https://x.com/steve47285"><u>X/Twitter</u></a>, <a href="https://bsky.app/profile/stevebyrnes.bsky.social"><u>Bluesky</u></a>, <a href="https://stevebyrnes1.substack.com">Substack</a>, <a href="https://www.linkedin.com/in/steven-byrnes-005a304/"><u>LinkedIn</u></a>, and more at <a href="https://sjbyrnes.com/">my website</a>.</p> | null | null | 164 | 2,443 | 3 | 69 | 605 | 1 | 4 | 3oopbgcjYfvN8B2fp | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | yew6zFWAKG4AGs3Wk | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/opazbhxywjujsud45o5n | SocialPreviewType | 4AyARutL8FkjWzdPd | <h1 data-internal-id="1_1_Series_summary_and_Table_of_Contents">1.1 Series summary and Table of Contents</h1><p>This is a two-post series on AI “foom” (this post) and “doom” (<a href="https://www.lesswrong.com/posts/bnnKGSCHJghAvqPjS/foom-and-doom-2-technical-alignment-is-hard">next post</a>).</p><p>A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. <a href="https://intelligence.org/files/AIFoomDebate.pdf#page=447">“brain in a box in a basement”</a>), via <a href="https://www.lesswrong.com/w/recursive-self-improvement">recursive self-improvement</a>. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today’s, a world utterly unprepared for this new mega-mind. The extinction of humans (and every other species) would rapidly follow (“doom”). The ASI would then spend countless eons fulfilling its desires, desires which we humans would find to be bizarre and pointless.</p><p>Now, I don’t endorse every word of that foom & doom scenario above—for example, I don't think “foom” requires recursive self-improvement. But I’m in <i>much</i> closer agreement with that scenario than the vast majority of AI safety & alignment researchers today, who tend to see the “foom & doom” scenario above as somewhere between “extraordinarily unlikely” and <i>“already falsified”</i>!</p><p data-internal-id="ftnt_ref1">Those researchers are not asking each other “is it true?”, but rather “lol, can you believe that some people used to believe that?”.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="wrrrvysf5ym" role="doc-noteref" id="fnrefwrrrvysf5ym"><sup><a href="#fnwrrrvysf5ym">[1]</a></sup></span> Oh well. Laugh all you want. It’s still what I believe.</p><p>Conversely, from my perspective as a foom & doomer, it’s the mainstream contemporary AI alignment discourse that feels increasingly foreign and strange. How, I ask myself, do so many seemingly reasonable people wind up with such wildly, bafflingly over-optimistic beliefs as “P(doom)≲50%”??</p><p>Anyway, my main goal in these two posts is to explore how I wind up in such a different place as most other alignment researchers do today, on the question of foom & doom. I don’t particularly expect to win skeptical readers over to my side, but would at least like to convey that foom & doom is a story that hangs together and deserves a modicum of consideration.</p><h2 data-internal-id="1_1_2_Should_I_stop_reading_if_I_expect_LLMs_to_scale_to_ASI_">1.1.2 Should I stop reading if I expect LLMs to scale to ASI?</h2><p data-internal-id="ftnt_ref3">These posts are mainly ... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | 1.1 Series summary and Table of Contents
This is a two-post series on AI “foom” (this post) and “doom” (next post).
A decade or two ago, it was pretty common to discuss “foom & doom” scenarios, as advocated especially by Eliezer Yudkowsky. In a typical such scenario, a small team would build a system that would rocket (“foom”) from “unimpressive” to “Artificial Superintelligence” (ASI) within a very short time window (days, weeks, maybe months), involving very little compute (e.g. “brain in a box in a basement”), via recursive self-improvement. Absent some future technical breakthrough, the ASI would definitely be egregiously misaligned, without the slightest intrinsic interest in whether humans live or die. The ASI would be born into a world generally much like today’s, a world utterly unprepared for this new mega-mind. The extinction of humans (and every other species) would rapidly follow (“doom”). The ASI would then spend countless eons fulfilling its desires, desires which we humans would find to be bizarre and pointless.
Now, I don’t endorse every word of that foom & doom scenario above—for example, I don't think “foom” requires recursive self-improvement. But I’m in much closer agreement with that scenario than the vast majority of AI safety & alignment researchers today, who tend to see the “foom & doom” scenario above as somewhere between “extraordinarily unlikely” and “already falsified”!
Those researchers are not asking each other “is it true?”, but rather “lol, can you believe that some people used to believe that?”.[1] Oh well. Laugh all you want. It’s still what I believe.
Conversely, from my perspective as a foom & doomer, it’s the mainstream contemporary AI alignment discourse that feels increasingly foreign and strange. How, I ask myself, do so many seemingly reasonable people wind up with such wildly, bafflingly over-optimistic beliefs as “P(doom)≲50%”??
Anyway, my main goal in these two posts is to explore how I wind up in such a different pl | 8,800 | 1.14.1 | Revision | false | null | null | CrosspostOutput |
XnaM3HWs2ekBeATdY | the-lies-of-big-bug | The Lies of Big Bug | null | false | false | false | null | tm8YP7vNWjGm7pYae | null | true | false | false | false | Post | null | 2025-06-23T16:03:58.603Z | null | false | false | 2 | 2 | 2025-06-23T20:54:17.496Z | false | false | post | [] | null | null | YLQLYzD8stQrCdd8g | 2 | 9 | -5 | false | 0.023055 | null | false | false | 2025-06-25T17:00:08.243Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 2 | 0 | 2025-06-23T16:03:43.021Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 5 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 9 | 0 | 0 | 4 | 0 | tm8YP7vNWjGm7pYae | bentham-s-bulldog | 2022-11-24T02:24:14.930Z | omnizoid | Bentham's Bulldog | null | null | null | 249 | 0 | false | false | null | null | 41 | 131 | 1 | 0 | 0 | 1 | 0 | 55XxDBpfKkkBPm9H8 | User | null | null | null | [
"canModeratePersonal"
] | null | null | XnaM3HWs2ekBeATdY | SocialPreviewType | YLQLYzD8stQrCdd8g | <p>Crosspost of this <a href="https://benthams.substack.com/p/the-lies-of-big-bug">blog article</a>. </p><p>The majority of farmed animals killed each year are insects, and this number is only expected to increase. By 2033, it’s <a href="https://rethinkpriorities.org/research-area/forecasting-farmed-animal-numbers-in-2033/"><u>estimated </u></a>that around 5 trillion insects will be slaughtered annually—more than <i>50 times</i> the number of cows, pigs, chickens, turkeys, and the like currently slaughtered. But insect farming is built on a dark secret: its foundational premises are all lies!</p><p>There has been a calculated plot by the insect farming industry to mislead the public. They know that if the public knew the truth about them, they’d never support subsidizing them. The insect farms can only thrive in darkness—shielded from public scrutiny.</p><p>Insect farming was promised as an environmentally-friendly alternative to meat. In reality, however, there’s virtually no consumer market for insects, so <a href="https://www.theguardian.com/environment/article/2024/aug/12/eating-insects-is-not-going-to-save-the-planet"><u>the insect farming industry mostly feeds insects to farmed animals like chickens and fish</u></a>. Insect farming is not a competitor of traditional, environmentally-devastating factory-farming—it’s a complementary industry. For this reason, the insect farming industry is on track to be <a href="https://www.feedandadditive.com/new-report-insect-protein-could-be-13-5-times-worse-for-climate-than-soy/"><u>bad for the environment</u></a> and to increase carbon emissions. A report by the <a href="https://www.feedandadditive.com/new-report-insect-protein-could-be-13-5-times-worse-for-climate-than-soy/"><u>UK government</u></a> estimates that insect feed fed to animals has 13.5 times the carbon emissions of soy-based alternatives. <a href="https://www.sciencedirect.com/science/article/pii/S235255092400191X?via%3Dihub"><u>Further studies</u></a> have only confirmed this impression. Part of why the industry is so disastrous is that it must use lots of resources and energy feeding insects and keeping them at adequate temperatures to grow.</p><p>How does the insect industry challenge such allegations? Well, mostly they ignore them. In the rare cases they address them at all, they make mealy-mouthed (meal-wormy-mouthed???) statements about the importance of sustainability while ignoring the inconvenient facts about their environmental impact. Journalism articles talking about the promises of insect farming <a href="https://www.agtechnavigator.com/Article/2024/09/16/calls-for-more-insect-farming-r-d-amid-recent-criticism/?utm_source=chatgpt.com"><u>uncritically quote</u></a> the CEO’s of insect companies—a bit like an article about the impact of oil on the environment uncritically quoting the head of an oil company.</p><p>They also cite <i>their own</i> <a href="https://www.agtechnavigator.com/Article/2024/09/16/calls-for-more-insect-farming-r-d-amid-recent-criticism/?utm_source=chatgpt.com"><u>life-cycle assessments</u></a>. Basically, they pay people to analyze their environmental impact, and shockingly discover that they’re great for the environment. It’s no surprise that when an insect company pays someone to analyze their environmental impact, the people <i>paid by the insect industry</i> often find that insect farming is great for the e... </p> | Crosspost of this blog article.
The majority of farmed animals killed each year are insects, and this number is only expected to increase. By 2033, it’s estimated that around 5 trillion insects will be slaughtered annually—more than 50 times the number of cows, pigs, chickens, turkeys, and the like currently slaughtered. But insect farming is built on a dark secret: its foundational premises are all lies!
There has been a calculated plot by the insect farming industry to mislead the public. They know that if the public knew the truth about them, they’d never support subsidizing them. The insect farms can only thrive in darkness—shielded from public scrutiny.
Insect farming was promised as an environmentally-friendly alternative to meat. In reality, however, there’s virtually no consumer market for insects, so the insect farming industry mostly feeds insects to farmed animals like chickens and fish. Insect farming is not a competitor of traditional, environmentally-devastating factory-farming—it’s a complementary industry. For this reason, the insect farming industry is on track to be bad for the environment and to increase carbon emissions. A report by the UK government estimates that insect feed fed to animals has 13.5 times the carbon emissions of soy-based alternatives. Further studies have only confirmed this impression. Part of why the industry is so disastrous is that it must use lots of resources and energy feeding insects and keeping them at adequate temperatures to grow.
How does the insect industry challenge such allegations? Well, mostly they ignore them. In the rare cases they address them at all, they make mealy-mouthed (meal-wormy-mouthed???) statements about the importance of sustainability while ignoring the inconvenient facts about their environmental impact. Journalism articles talking about the promises of insect farming uncritically quote the CEO’s of insect companies—a bit like an article about the impact of oil on the environment uncritica | 1,191 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
6rEywvofYpe5bP2td | ai-companies-aren-t-planning-to-secure-critical-model | AI companies aren't planning to secure critical model weights | null | false | false | false | null | 4QFiQcHgf6hvtiLqF | null | true | false | false | false | Post | null | 2025-06-23T16:00:01.726Z | null | false | false | 2 | 2 | 2025-06-23T21:03:44.439Z | false | false | post | [] | null | null | nRgHpKd6B6TLkntdx | 0 | 6 | 16 | false | 0.106051 | null | false | false | 2025-06-23T16:00:01.726Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 8 | 0 | 2025-06-23T02:52:09.985Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 6 | 0 | 0 | 4 | 0 | 4QFiQcHgf6hvtiLqF | zach-stein-perlman | 2021-03-16T00:04:06.541Z | Zach Stein-Perlman | Zach Stein-Perlman | null | null | Zach Stein-Perlman | 9,609 | 321 | false | false | <p>AI strategy & governance. <a href="https://ailabwatch.org">ailabwatch.org</a>. <a href="https://ailabwatch.substack.com/">ailabwatch.substack.com</a>. </p> | null | null | 82 | 620 | 1 | 2 | 17 | 1 | 12 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal",
"alignmentVoters",
"trustLevel1",
"alignmentForum"
] | null | null | 6rEywvofYpe5bP2td | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/i22nkxlv216iuuxlpn8y | SocialPreviewType | nRgHpKd6B6TLkntdx | <p data-internal-id="ftnt_ref1">AI companies' security plans, per their published safety policies, involve just SL3-4 security for powerful AI (see Appendix below).<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="s835hri3zz" role="doc-noteref" id="fnrefs835hri3zz"><sup><a href="#fns835hri3zz">[1]</a></sup></span> (Moreover, companies are likely to not follow their current policies.)</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/hnbambm9ki3flvwwpjj1" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/zq1dbzqrzoi2fx6yuygi 150w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/yrn9lfjleumb3tonzc6y 300w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/w4bk5x1zzrfn1lgpub60 450w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/cqefqzwprtcycozjhhfp 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/flwh411kreoqtgx8ennd 750w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/ebgsk4ohjais0dutser2 900w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/aqtruhgb4knggli0kkt3 1050w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/pt6tgj1ig82g9eaoeyls 1200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/a8t8mgf9olwcfeixuajc 1350w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/6rEywvofYpe5bP2td/jgc1ub5tolfncnigya2q 1414w"></figure><p><a href="https://www.rand.org/pubs/research_reports/RRA2849-1.html"><i>Source</i></a><i>.</i></p><p data-internal-id="ftnt_ref2">Just SL3-4 is very inadequate; it would result in lots of extra x-risk relative to perfect security. If AIs that greatly accelerate AI research<span class="footnote-reference" data-footnote-reference="" data-footnote-index="2" data-footnote-id="hauzpxj4nft" role="doc-noteref" id="fnrefhauzpxj4nft"><sup><a href="#fnhauzpxj4nft">[2]</a></sup></span> can immediately be stolen, then the developer can't prioritize safety much without getting outraced (and the geopolitical situation is worse than if the US has a solid lead). It's much easier to tell a story where things go well if such model weights don't immediately proliferate.</p><p>Strong security isn't yet necessary. But you can't just decide to start operating with SL5 when you need it — it's likely prohibitively expensive to make the process faster than a couple years. Ideally companies would be working toward SL5 <i>optionality</i> — gaining <i>the ability</i> to quickly turn on SL5 when appropriate. Some kinds of security work will presumably be dramatically boosted by AIs before SL5 is necessary, so for now perhaps companies should focus on work that AI won't accelerate much and work that unlocks automatable security projects.</p><p data-internal-id="ftnt_ref5">SL5 would be very costly, especially because it would slow development.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="3" data-footnote-id="5ylmt3sphch" role="doc-noteref" id="fnref5ylmt3sphch"><sup><a href="#fn5ylmt3sphch">[3]</a></sup></span> <i>Unilaterally</i> implementing it would make a company less competitive.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="4" data-footnote-id="3uiqjcc31s6" role="doc-noteref" id="fnref3uiqjcc31s6"><sup><a href="#fn3uiqjcc31s6">[4]</a></sup></span> So companies aren't planning to do so; strong security requires coordination.<span class="footnote-reference" data-footnote-reference="" data-footnote-index="5" data-footnote-id="q0ou3c1ondt" role="doc-noteref" id="fnrefq0ou3c1ondt"><sup><a href="#fnq0ou3c1ondt">[5]</a></sup></span></p><p>To inform the world and potentially enable coordination, ideally companies would say what security would be appropriate <i>if not for competition with other AI developers</i>. And then when they deviate from that, they notice they're doing so, say so, and loudly say they wish for strong coordination where all frontier developers improve security. (I expect they are currently deviating, since I expect they'd say <i>SL5 by 10x AI</i> and it seems they're not on track.)</p><h1><strong>Appendix</strong></h1><h2><strong>OpenAI</strong></h2><p data-internal-id="ftnt_ref7">OpenAI's Preparedness Framework defines a "High" security standard and says it will define a "Critical" security standard in the future. <a href="https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf#page=21">The High standard</a> mentions reasonable security themes but it's sufficiently vague that it would be hard to tell if a company is meeting the standard and it's unclear what real-world security level the standard entails. OpenAI doesn't say which kinds of attacks or actors the standard is meant to protect against vs which are out of s... </p> | AI companies' security plans, per their published safety policies, involve just SL3-4 security for powerful AI (see Appendix below).[1] (Moreover, companies are likely to not follow their current policies.)
Source.
Just SL3-4 is very inadequate; it would result in lots of extra x-risk relative to perfect security. If AIs that greatly accelerate AI research[2] can immediately be stolen, then the developer can't prioritize safety much without getting outraced (and the geopolitical situation is worse than if the US has a solid lead). It's much easier to tell a story where things go well if such model weights don't immediately proliferate.
Strong security isn't yet necessary. But you can't just decide to start operating with SL5 when you need it — it's likely prohibitively expensive to make the process faster than a couple years. Ideally companies would be working toward SL5 optionality — gaining the ability to quickly turn on SL5 when appropriate. Some kinds of security work will presumably be dramatically boosted by AIs before SL5 is necessary, so for now perhaps companies should focus on work that AI won't accelerate much and work that unlocks automatable security projects.
SL5 would be very costly, especially because it would slow development.[3] Unilaterally implementing it would make a company less competitive.[4] So companies aren't planning to do so; strong security requires coordination.[5]
To inform the world and potentially enable coordination, ideally companies would say what security would be appropriate if not for competition with other AI developers. And then when they deviate from that, they notice they're doing so, say so, and loudly say they wish for strong coordination where all frontier developers improve security. (I expect they are currently deviating, since I expect they'd say SL5 by 10x AI and it seems they're not on track.)
Appendix
OpenAI
OpenAI's Preparedness Framework defines a "High" security standard and says it will define a "Crit | 296 | 1.5.1 | Revision | false | null | null | CrosspostOutput |
hAwvJDRKWFibjxh4e | it-isn-t-magic | "It isn't magic" | null | false | false | false | null | 6AKzsX4qgxocnzo2e | null | true | false | false | false | Post | null | 2025-06-23T14:00:35.751Z | null | false | false | 2 | 2 | 2025-06-23T20:55:09.123Z | false | false | post | [] | null | null | gLhLZKLBiWQKJ6kzY | 11 | 48 | 76 | false | 0.343485 | null | false | false | 2025-06-26T12:49:25.865Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 14 | 0 | 2025-06-23T12:01:00.613Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "MXcpQvaPGtXpB6vkM",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-15T04:23:00.324Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Public Discourse",
"needsReview": false,
"noindex": false,
"postCount": 187,
"score": 20,
"shortName": null,
"slug": "public-discourse",
"suggestedAsFilter": false,
"userId": "gXeEWGjTWyqgrQTzR",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 48 | 0 | 0 | 17 | 0 | 6AKzsX4qgxocnzo2e | ben-berlin | 2021-07-05T23:00:00.528Z | Whitepaw | Ben (Berlin) | null | null | null | 89 | 0 | false | false | <p>A furry living in Berlin</p> | null | null | 1 | 5 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | [
"canModeratePersonal"
] | null | null | hAwvJDRKWFibjxh4e | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/svjhjhjzo8iyyjws8dib | SocialPreviewType | gLhLZKLBiWQKJ6kzY | <p>People keep saying "AI isn't magic, it's just maths" like this is some kind of gotcha.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/nptcoxh35j7nnetvak78" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/eqtbaoht4vfyqzwy5csu 310w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/uvjbzeb7nlgjjpdvglvc 620w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/l0g36boahwb3lwe1ybdf 930w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/wjqup8bzougs7rlbvueo 1240w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/fuhijyqmbrulnxqndw6j 1550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/yqr5bgkkio8xaiawoft2 1860w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/pg9tckfcckrz9uq233nm 2170w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/gxmkwrcsglljmwndiwil 2480w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/q0x0loifidv1efncay88 2790w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/hAwvJDRKWFibjxh4e/psjovpgssgjeovl9ohcn 3072w" alt="Triptych in style of Hieronymus Bosch's 'The Garden of Earthly Delights', the left showing a wizard raining fireballs down upon a medieval army, the right showing a Predator drone firing a missile while being remotely operated. Between them are geometric shapes representing magical sigils from the Key of Solomon contrasted with circuit boards"></figure><p>Turning lead into gold isn't the magic of alchemy, it's just <a href="https://en.wikipedia.org/wiki/Nucleosynthesis">nucleosynthesis</a>.</p><p>Taking a living human's heart out without killing them, and replacing it with one you got out a corpse, that isn't the magic of necromancy, neither is it a prayer or ritual to <a href="https://en.wikipedia.org/wiki/Sekhmet">Sekhmet</a>, it's just transplant surgery.</p><p>Casually chatting with someone while they're 8,000 kilometres is not done with magic crystal balls, it's just telephony.</p><p>Analysing the atmosphere of <a href="https://en.wikipedia.org/wiki/WASP-19b">a planet 869 light-years away</a> (<a href="https://www.wolframalpha.com/input?i=869+light-years+in+km">about 8 quadrillion km</a>) is not supernatural <a href="https://en.wikipedia.org/wiki/Remote_viewing">remote viewing</a>, it's just spectral analysis through a telescope… a telescope that remains <a href="https://en.wikipedia.org/wiki/Hubble_Space_Telescope">about 540 km above the ground, even without any support from anything underneath</a>, which also isn't magic, it's just <a href="https://en.wikipedia.org/wiki/Low_Earth_orbit">"orbit"</a>.</p><p>Making diamonds and rubies by the tonne isn't a wish made to a magic djinn, it's <a href="https://en.wikipedia.org/wiki/Synthetic_diamond">just</a> <a href="https://en.wikipedia.org/wiki/Verneuil_method">chemistry</a>.</p><p>Flying through the air, or to the moon, isn't a magic carpet, it's just aerodynamics and rocket science respectively.</p><p><a href="https://www.nature.com/articles/s41593-023-01304-9">Reading someone’s thoughts isn't magic telepathy, it's just fMRI decoding</a>.</p><p>Knowing your location anywhere on the surface of planet, at any time of day or night, in any weather, isn't magic, it's just GPS.</p><p>Forecasting the weather days in advance isn't the magic of divination, it isn't precognition, it isn't limited to the visions of prophets and oracles, it's just fluid dynamics simulations and <a href="https://en.wikipedia.org/wiki/Monte_Carlo_method">Monte Carlo methods</a>.</p><p>A bracelet that can detect when you fall and summon help automatically isn't a guardian angel, it's just an <a href="https://web.archive.org/web/20250125200903/https://support.apple.com/en-us/108896">Apple Watch with Fall Detection</a>.</p><p>Sensing through rock to find hidden water isn't dowsing or water divining, it's just geotechnical survey tools such as <a href="https://en.wikipedia.org/wiki/Ground-penetrating_radar">ground penetrating radar</a> and <a href="https://en.wikipedia.org/wiki/Electrical_resistance_survey">electrical resistance surveys</a>.</p><p>Seeing someone's bones without flaying the flesh from them isn't magic, it's just an x-ray.</p><p>Curing congenital deafness, letting the blind see, letting the lame walk, none of that is magic or miracle, they're just <a href="https://en.wikipedia.org/wiki/Cochlear_implant">cochlear implants</a>, cataract removal/<a href="https://en.wikipedia.org/wiki/Retinal_implant">retinal implants</a>, and surgery or <a href="https://en.wikipedia.org/wiki/Exoskeleton_(human)">prosthetic exoskeletons</a> respectively.</p><p>Condensing sunlight in the daytime, in order to heat and illuminate your home after dark, that isn't magic, it's just photovoltaics and batteries.</p><p><a href="https://nuclearsecrecy.com/nukemap/?&kt=100000&lat=37.5257007&lng=23.0970342&hob_psi=5&hob_ft=47553&ff=52&psi=20,5,1&therm=_3rd-100,_2nd-50,_1st-50,35&zm=8">A single weapon that can, in the blink of an eye, burn an area large enough to encompass both ancient Athens and ancient Sparta at the same time, that's not magic, it's just th</a>... </p> | People keep saying "AI isn't magic, it's just maths" like this is some kind of gotcha.
Turning lead into gold isn't the magic of alchemy, it's just nucleosynthesis.
Taking a living human's heart out without killing them, and replacing it with one you got out a corpse, that isn't the magic of necromancy, neither is it a prayer or ritual to Sekhmet, it's just transplant surgery.
Casually chatting with someone while they're 8,000 kilometres is not done with magic crystal balls, it's just telephony.
Analysing the atmosphere of a planet 869 light-years away (about 8 quadrillion km) is not supernatural remote viewing, it's just spectral analysis through a telescope… a telescope that remains about 540 km above the ground, even without any support from anything underneath, which also isn't magic, it's just "orbit".
Making diamonds and rubies by the tonne isn't a wish made to a magic djinn, it's just chemistry.
Flying through the air, or to the moon, isn't a magic carpet, it's just aerodynamics and rocket science respectively.
Reading someone’s thoughts isn't magic telepathy, it's just fMRI decoding.
Knowing your location anywhere on the surface of planet, at any time of day or night, in any weather, isn't magic, it's just GPS.
Forecasting the weather days in advance isn't the magic of divination, it isn't precognition, it isn't limited to the visions of prophets and oracles, it's just fluid dynamics simulations and Monte Carlo methods.
A bracelet that can detect when you fall and summon help automatically isn't a guardian angel, it's just an Apple Watch with Fall Detection.
Sensing through rock to find hidden water isn't dowsing or water divining, it's just geotechnical survey tools such as ground penetrating radar and electrical resistance surveys.
Seeing someone's bones without flaying the flesh from them isn't magic, it's just an x-ray.
Curing congenital deafness, letting the blind see, letting the lame walk, none of that is magic or miracle, they're just co | 714 | 1.3.1 | Revision | false | null | null | CrosspostOutput |
LczkzW4uPaQS3joj8 | forecasting-ai-forecasting | Forecasting AI Forecasting | null | false | false | true | null | iXX23K6iBAosHFPBn | null | true | false | false | false | Post | 2025-06-23T13:39:41.097Z | null | false | false | 2 | 2 | 2025-06-23T20:54:32.825Z | false | false | linkpost | [] | null | null | A9TiRtm4ikW5XNw9Z | 3 | 4 | 8 | false | 0.072154 | null | false | false | 2025-06-25T23:05:07.330Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | 2025-06-24T18:48:48.412Z | [
"iXX23K6iBAosHFPBn"
] | XtphY3uYHwruKqDyG | 2 | 0 | 2025-06-23T13:22:34.041Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 7 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "iKYWGuFx2qH2nYu6J",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-08-29T12:47:32.048Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Capabilities",
"needsReview": false,
"noindex": false,
"postCount": 162,
"score": 0,
"shortName": null,
"slug": "ai-capabilities",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "33BrBRSrRQS4jEHdk",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-12T06:31:37.542Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Forecasts (Specific Predictions)",
"needsReview": false,
"noindex": false,
"postCount": 194,
"score": 9,
"shortName": null,
"slug": "forecasts-specific-predictions",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "C7ESRAvYcNyyjcQRA",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-07-04T22:06:20.662Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Metaculus",
"needsReview": false,
"noindex": false,
"postCount": 24,
"score": 0,
"shortName": null,
"slug": "metaculus",
"suggestedAsFilter": false,
"userId": "BdheysxHFgukJLpNa",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb2b0",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.604Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Technological Forecasting",
"needsReview": false,
"noindex": false,
"postCount": 103,
"score": 0,
"shortName": null,
"slug": "technological-forecasting",
"suggestedAsFilter": false,
"userId": "cn4SiEmqWbu7K9em5",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 1 | 0 | iXX23K6iBAosHFPBn | alvin-anestrand | 2022-11-03T09:56:49.205Z | alvin-anestrand | Alvin Ånestrand | null | null | null | 102 | 22 | false | false | null | null | 13 | 10 | 0 | 8 | 0 | 1 | 1 | qgdGA4ZEyW7zNdK84 | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters"
] | null | null | LczkzW4uPaQS3joj8 | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/rvdco8pgysswxzcbd3lx | SocialPreviewType | A9TiRtm4ikW5XNw9Z | <p><i><strong>TL;DR:</strong> This post examines how rapidly AI forecasting ability is improving by analyzing results from </i><a href="https://www.forecastbench.org/"><i><u>ForecastBench</u></i></a><i>, Metaculus </i><a href="https://www.metaculus.com/aib/"><i><u>AI tournaments</u></i></a><i>, and various prediction platforms. While AI forecasting is steadily improving, there remains significant uncertainty around the timeline for matching top human forecasters—it might be very soon or take several years.</i></p><h2><strong>ForecastBench</strong></h2><p><a href="https://www.forecastbench.org/"><u>ForecastBench</u></a> is a “dynamic, continuously-updated benchmark designed to measure the accuracy of ML systems on a constantly evolving set of forecasting questions.” This makes it ideal for tracking how fast AI forecasting skills improve. Performance is measured as <a href="https://en.wikipedia.org/wiki/Brier_score"><u>Brier scores</u></a> (lower is better)<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="ihzivrmkti" role="doc-noteref" id="fnrefihzivrmkti"><sup><a href="#fnihzivrmkti">[1]</a></sup></span>. See the accompanying benchmark paper <a href="https://openreview.net/pdf?id=lfPkGWXLLf"><u>here</u></a>.</p><p>The <a href="https://www.forecastbench.org/leaderboards/human_leaderboard_overall.html"><u>leaderboard</u></a> shows AI performance alongside human benchmarks: "Public median forecast" represents non-experts, while "Superforecaster median forecast" represents elite human forecasters.</p><p>Evaluated AI systems operate under varying conditions—with or without access to a scratchpad for reasoning or to relevant information like news. Importantly, current forecasting prompts are likely good but suboptimal, meaning AI capabilities may be underestimated.</p><p>The benchmark paper reveals that AI forecasting performance correlates linearly with both Chatbot Arena scores and log training compute (measured in <a href="https://en.wikipedia.org/wiki/Floating_point_operations_per_second"><u>FLOP</u></a>). Training compute may be of special interest for predicting when AIs reach the level of expert human forecasters.</p><h3><strong>Performance vs. Chatbot Arena and Log Training Compute</strong></h3><p> </p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/edqhr9qe4uvdzgspe1fy" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/efaezcpcwssagvdrla6t 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/hhcx5vhbws7qnxwau0wu 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/wqadjzohcfn0iiykyqtk 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/edqhr9qe4uvdzgspe1fy 1456w"><figcaption>Brier Score vs. Chatbot Arena and Log training compute (source: <a href="https://openreview.net/pdf?id=lfPkGWXLLf"><u>ForecastBench paper</u></a>)</figcaption></figure><p>Brier Score vs. Chatbot Arena and Log training compute (source: <a href="https://openreview.net/pdf?id=lfPkGWXLLf"><u>ForecastBench paper</u></a>)</p><p>Scores in the graph above corresponds to accuracy for each model with optimal conditions (e.g. if Claude-3-5-Sonnet performed better with a scratchpad, its score is the one achieved while using it).</p><p>Using Epoch AI's <a href="https://epoch.ai/data/notable-ai-models"><u>training compute data</u></a>, the AIs with most training compute at each point in time show a 4.9x annual increase in training compute. The current leader is Grok 3, which used an estimated 4.6×10^26 FLOPs.</p><figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/q6a9ypb0ajheicuacykb" alt="" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/v6ozomgotilpjkvscga1 424w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/vvbm32a8fwocwpnh4zrq 848w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/xqryqacqam4dvcuwfvsj 1272w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/LczkzW4uPaQS3joj8/q6a9ypb0ajheicuacykb 1456w"></figure><p>In the ForecastBench <a href="https://openreview.net/pdf?id=lfPkGWXLLf"><u>paper</u></a>, the authors extrapolate the trend line for prediction accuracy and training compute:</p><blockquote><p>Projecting out the log-linear relationship, we find that LLMs could match superforecaster performance when training compute approaches 6.49 × 10^26, though there is a large confidence interval (bootstrapped 95% CI: 9</p></blockquote>... | TL;DR: This post examines how rapidly AI forecasting ability is improving by analyzing results from ForecastBench, Metaculus AI tournaments, and various prediction platforms. While AI forecasting is steadily improving, there remains significant uncertainty around the timeline for matching top human forecasters—it might be very soon or take several years.
ForecastBench
ForecastBench is a “dynamic, continuously-updated benchmark designed to measure the accuracy of ML systems on a constantly evolving set of forecasting questions.” This makes it ideal for tracking how fast AI forecasting skills improve. Performance is measured as Brier scores (lower is better)[1]. See the accompanying benchmark paper here.
The leaderboard shows AI performance alongside human benchmarks: "Public median forecast" represents non-experts, while "Superforecaster median forecast" represents elite human forecasters.
Evaluated AI systems operate under varying conditions—with or without access to a scratchpad for reasoning or to relevant information like news. Importantly, current forecasting prompts are likely good but suboptimal, meaning AI capabilities may be underestimated.
The benchmark paper reveals that AI forecasting performance correlates linearly with both Chatbot Arena scores and log training compute (measured in FLOP). Training compute may be of special interest for predicting when AIs reach the level of expert human forecasters.
Performance vs. Chatbot Arena and Log Training Compute
Brier Score vs. Chatbot Arena and Log training compute (source: ForecastBench paper)
Brier Score vs. Chatbot Arena and Log training compute (source: ForecastBench paper)
Scores in the graph above corresponds to accuracy for each model with optimal conditions (e.g. if Claude-3-5-Sonnet performed better with a scratchpad, its score is the one achieved while using it).
Using Epoch AI's training compute data, the AIs with most training compute at each point in time show a 4.9x annual increase in | 1,831 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
||
m2qMj7ovncbqKtzNt | recent-progress-on-the-science-of-evaluations | Recent progress on the science of evaluations | null | false | false | true | null | aNatZsmzCCHiZtBY2 | null | true | false | false | false | Post | null | 2025-06-23T09:41:44.185Z | null | false | false | 2 | 2 | 2025-06-23T21:14:06.776Z | false | false | post | [] | null | null | K6a3BEso7i8DqeMoL | 0 | 5 | 12 | false | 0.086987 | null | false | false | 2025-06-23T09:41:44.185Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 5 | 0 | 2025-06-08T15:03:43.111Z | false | false | norm-enforcing | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 10 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "FBRwHSmTudwiHHtrn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-15T20:29:46.761Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Evaluations",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 9,
"shortName": null,
"slug": "ai-evaluations",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 5 | 0 | 0 | 2 | 0 | aNatZsmzCCHiZtBY2 | pabloamc | 2019-07-25T14:49:01.853Z | PabloAMC | PabloAMC | null | null | Pablo Antonio Moreno Casares | 108 | 29 | false | false | <p>Quantum algorithm researcher at Xanadu.ai. <a href="https://www.linkedin.com/in/pablo-antonio-moreno-casares/">https://www.linkedin.com/in/pablo-antonio-moreno-casares/</a></p> | null | null | 11 | 13 | 0 | 4 | 1 | 1 | 0 | fD4ATtTkdQJ4aSpGH | User | norm-enforcing | null | true | [
"alignmentVoters",
"canModeratePersonal",
"alignmentForum"
] | null | null | m2qMj7ovncbqKtzNt | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/m2qMj7ovncbqKtzNt/efbkdafcwsxxz46honyp | SocialPreviewType | K6a3BEso7i8DqeMoL | <p><strong>Summary</strong>: This post presents new methodological innovations presented in the paper <a href="https://kinds-of-intelligence-cfi.github.io/ADELE/">General Scales Unlock AI Evaluation with Explanatory and Predictive Power</a>. The paper introduces a set of general (universal) cognitive abilities that allow us to predict and explain AI system behaviour out of distribution. I outline what I believe are the main advantages of this methodology, indicate future extensions, and discuss implications for AI safety. Particularly exciting points include the future extension to propensities and how it could be useful for alignment and control evaluations.</p><p><strong>Disclaimer</strong>: I am a (secondary) author of this paper and thus my opinions are likely somewhat biased; though I sincerely believe this is a good evaluation methodology, with the potential to increase the safety of AI models.</p><p>I would like to thank Lexin Zhou and Wout Schellaert for comments that have significantly improved the draft. Any errors, inaccuracies and opinions remain my own.</p><h1 data-internal-id="Summary_of_ADELE">Summary of ADELE</h1><p>One persistent question in AI capabilities research is why performance doesn't always correlate with intuitive task difficulty—systems that handle advanced mathematics may struggle with tasks that seem cognitively simpler, like effective gameplay in long-context environments (e.g. <a href="https://www.twitch.tv/claudeplayspokemon">Pokemon</a>). In this post, we review a recent preprint, "<a href="https://arxiv.org/abs/2503.06378">General Scales Unlock AI Evaluation with Explanatory and Predictive Power</a>", that aims to clarify intuitions about why this is the case. This paper introduces a new evaluation methodology (Figure 1) that makes AI system evaluations both predictive and explanatory. I will explain the main contributions and some of its key advantages and limitations. I will also emphasise how I believe it could be used for AI safety, and what future work the group is working on to make this methodology useful for safety.</p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/c5a71768dfa2873c7679c9ed251d40348aeb638a9ae90c37d128e3f1a3077fe0/vd82tkywms9y5vbifdlc" alt=""></p><p data-internal-id="ftnt_ref1">The core methodological innovation is to introduce a small number of (conceptually distinct) dimensions that capture the demands of any task instance (see Table 1 below), and match them with the capabilities of the AI system. These dimensions are based on the literature in AI, psychology and animal cognition; and are organised hierarchically to be a set of general or universal cognitive capabilities. Since these dimensions are defined with rubrics in natural language that are human-interpretable, the success or failure of AI systems (validity) will be explainable. ... </p> | Summary: This post presents new methodological innovations presented in the paper General Scales Unlock AI Evaluation with Explanatory and Predictive Power. The paper introduces a set of general (universal) cognitive abilities that allow us to predict and explain AI system behaviour out of distribution. I outline what I believe are the main advantages of this methodology, indicate future extensions, and discuss implications for AI safety. Particularly exciting points include the future extension to propensities and how it could be useful for alignment and control evaluations.
Disclaimer: I am a (secondary) author of this paper and thus my opinions are likely somewhat biased; though I sincerely believe this is a good evaluation methodology, with the potential to increase the safety of AI models.
I would like to thank Lexin Zhou and Wout Schellaert for comments that have significantly improved the draft. Any errors, inaccuracies and opinions remain my own.
Summary of ADELE
One persistent question in AI capabilities research is why performance doesn't always correlate with intuitive task difficulty—systems that handle advanced mathematics may struggle with tasks that seem cognitively simpler, like effective gameplay in long-context environments (e.g. Pokemon). In this post, we review a recent preprint, "General Scales Unlock AI Evaluation with Explanatory and Predictive Power", that aims to clarify intuitions about why this is the case. This paper introduces a new evaluation methodology (Figure 1) that makes AI system evaluations both predictive and explanatory. I will explain the main contributions and some of its key advantages and limitations. I will also emphasise how I believe it could be used for AI safety, and what future work the group is working on to make this methodology useful for safety.
The core methodological innovation is to introduce a small number of (conceptually distinct) dimensions that capture the demands of any task instance (see Table 1 b | 2,427 | 1.10.0 | Revision | false | null | null | CrosspostOutput |
bp6K5JkmjtytofAT2 | racial-dating-preferences-and-sexual-racism | Racial Dating Preferences and Sexual Racism | null | false | false | false | null | mXgBFBbCtkAHfJvFt | null | true | false | false | false | Post | https://vishalblog.substack.com/p/racial-dating-preferences-and-sexual | 2025-06-23T03:57:02.159Z | null | false | false | 2 | 2 | 2025-06-23T05:27:19.233Z | false | false | linkpost | [] | null | null | zH8BhJpLRxWKLrkQZ | 68 | 65 | 49 | false | 0.214957 | null | false | false | 2025-06-28T14:23:54.656Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 22 | 0 | 2025-06-23T03:25:46.506Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 38 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "gHCNhqxuJq2bZ2akb",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-10T11:36:05.706Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social & Cultural Dynamics",
"needsReview": false,
"noindex": false,
"postCount": 384,
"score": 0,
"shortName": null,
"slug": "social-and-cultural-dynamics",
"suggestedAsFilter": false,
"userId": "qxJ28GN72aiJu96iF",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "EnFKSZYiDHqMJuvJL",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-22T00:33:26.846Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Social Reality",
"needsReview": false,
"noindex": false,
"postCount": 64,
"score": 19,
"shortName": null,
"slug": "social-reality",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "3uE2pXvbcnS9nnZRE",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:50.898Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 27,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zJtgSyKntXrnkArbY",
"displayName": "kistune"
},
{
"_id": "XqyQTGFGoKfZtdqN3",
"displayName": "kINo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Modeling",
"needsReview": false,
"noindex": false,
"postCount": 5855,
"score": 2,
"shortName": null,
"slug": "world-modeling",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 2,
"wikiOnly": false
}
] | null | 0 | 0 | null | true | null | null | 0 | 65 | 0 | 0 | 29 | 0 | mXgBFBbCtkAHfJvFt | koreindian | 2019-03-23T01:44:15.888Z | koreindian | koreindian | null | null | null | 203 | 0 | false | false | null | null | 85 | 24 | 0 | 0 | 0 | 1 | 0 | nLbwLhBaQeG6tCNDN | User | null | null | null | [
"canModeratePersonal"
] | null | null | bp6K5JkmjtytofAT2 | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/bp6K5JkmjtytofAT2/wp8wczzwrx3lgcxwdvuz | SocialPreviewType | zH8BhJpLRxWKLrkQZ | <p><i>Note: This is a linkpost from my personal substack. This is on a culture war topic, which is not normally the focus of my blogging. Rationalist friends suggested that this post might be interesting and surprising to LW readers.</i></p><h2><strong>Summary</strong></h2><ul><li>People widely exclude romantic and sexual partners on the basis of race. This is claimed not to be racism by those who have these attitudes, but it is<span class="footnote-reference" data-footnote-reference="" data-footnote-index="1" data-footnote-id="xkm6pb2ltv" role="doc-noteref" id="fnrefxkm6pb2ltv"><sup><a href="#fnxkm6pb2ltv">[1]</a></sup></span>.</li><li>Certain groups, such as Black women and Asian men, are highly excluded by all other groups. These cases also exhibit within-race asymmetries in racial preferences across gender:<ul><li>Black women exhibit the highest <i>endophilia <strong>(</strong>i.e. </i>they most desire to date within race), but this is strongly not reciprocated by Black men.</li><li>Asian women exhibit the highest <i>endophobia</i> (<i>i.e. </i>they most refuse to date within race), but this is strongly not reciprocated by Asian men.</li></ul></li><li>These within-race asymmetries are so strong that they provide counter-evidence to most conventional explanations of racial preference (e.g. that racial preference is a proxy for the desire for wealth, class, and integration; that racial preference forms as a by-product of racial segregation).</li><li>These effects are not unique to straight dating, but strongly form the romantic and sexual reality of queer dating as well.</li><li>The evidential base for the above points is not small, but rather comes from an immense literature in social psychology, sociology, race studies, and queer studies.</li><li>People rarely take ownership over their own dating preferences. They either blame wider sociological factors outside of their control, or blame the racial losers themselves (Black women, Asian men) as being responsible for their own undesirability.</li><li>This topic is so controversial that it is systematically discussed only within two poles: a “woke” academic sphere, and a “red-pilled” incel/volcel/femcel online sphere. This adds a lot of distortion to the subject.</li><li>Those who have racial preferences are not going to be shamed out of having such preferences, so I recommend that racial losers consider doing weird things to improve their sexual fitness.</li></ul><p><strong>A Note on Language and Scope</strong></p><p>I am going to talk about racial and sexual groups using broad terms like “Asian woman”, “Black gay men”, “Whites/Jews”, and so on. I am aware that this language can be reductive, and may not operate on some desired level of granularity. I am aware that there are important, relevant differences betw... </p> | Note: This is a linkpost from my personal substack. This is on a culture war topic, which is not normally the focus of my blogging. Rationalist friends suggested that this post might be interesting and surprising to LW readers.
Summary
* People widely exclude romantic and sexual partners on the basis of race. This is claimed not to be racism by those who have these attitudes, but it is[1].
* Certain groups, such as Black women and Asian men, are highly excluded by all other groups. These cases also exhibit within-race asymmetries in racial preferences across gender:
* Black women exhibit the highest endophilia (i.e. they most desire to date within race), but this is strongly not reciprocated by Black men.
* Asian women exhibit the highest endophobia (i.e. they most refuse to date within race), but this is strongly not reciprocated by Asian men.
* These within-race asymmetries are so strong that they provide counter-evidence to most conventional explanations of racial preference (e.g. that racial preference is a proxy for the desire for wealth, class, and integration; that racial preference forms as a by-product of racial segregation).
* These effects are not unique to straight dating, but strongly form the romantic and sexual reality of queer dating as well.
* The evidential base for the above points is not small, but rather comes from an immense literature in social psychology, sociology, race studies, and queer studies.
* People rarely take ownership over their own dating preferences. They either blame wider sociological factors outside of their control, or blame the racial losers themselves (Black women, Asian men) as being responsible for their own undesirability.
* This topic is so controversial that it is systematically discussed only within two poles: a “woke” academic sphere, and a “red-pilled” incel/volcel/femcel online sphere. This adds a lot of distortion to the subject.
* Those who have racial preferences are not going to be shamed out of | 9,623 | 1.7.1 | Revision | false | null | null | CrosspostOutput |
|
y9QEHKdqsfsXBMAkb | mainstream-grantmaking-expertise-post-7-of-7-on-ai | Mainstream Grantmaking Expertise (Post 7 of 7 on AI Governance) | null | false | false | false | null | 62rKjNqA2LCJ6RthR | null | true | false | false | false | Post | null | 2025-06-23T01:39:43.466Z | null | false | false | 2 | 2 | 2025-06-23T20:58:00.733Z | false | false | post | [] | null | null | A8ADTaibpXsFh5Gbf | 1 | 10 | 23 | false | 0.118834 | null | false | false | 2025-06-25T17:25:02.702Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-23T01:24:35.671Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 45 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHDus5MuMNqQxJbjD",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-09T18:31:56.709Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "oEF4gToHRPEMw4FSo",
"displayName": "Jono"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Governance",
"needsReview": false,
"noindex": false,
"postCount": 726,
"score": 11,
"shortName": null,
"slug": "ai-governance",
"suggestedAsFilter": false,
"userId": "QBvPFLFyZyuHcBwFm",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "827JKe7YNjAegR468",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": false,
"createdAt": "2017-01-04T10:05:17.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "7iXcndyHDvmt77ggr",
"displayName": "Eric Rogstad"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Effective altruism",
"needsReview": false,
"noindex": false,
"postCount": 374,
"score": 1,
"shortName": null,
"slug": "effective-altruism",
"suggestedAsFilter": false,
"userId": "7iXcndyHDvmt77ggr",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "FMcwWK3LgrtqsNwpM",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-01-24T21:53:28.595Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Philanthropy / Grant making (Topic)",
"needsReview": false,
"noindex": false,
"postCount": 32,
"score": 9,
"shortName": null,
"slug": "philanthropy-grant-making-topic",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 10 | 0 | 0 | 2 | 0 | 62rKjNqA2LCJ6RthR | mass_driver | 2010-03-30T15:48:06.997Z | Mass_Driver | Mass_Driver | null | null | null | 3,304 | 0 | false | false | null | null | 31 | 655 | 1 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal"
] | null | null | y9QEHKdqsfsXBMAkb | https://res.cloudinary.com/lesswrong-2-0/image/upload/c_fill,ar_1.91,g_auto/SocialPreview/lzegwcjlyuecu3tjzfzn | SocialPreviewType | A8ADTaibpXsFh5Gbf | <p>Previously in this sequence, I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards – we need the political power provided by advocates to have a chance of preventing misaligned superintelligence. A few researchers might be useful as a ‘multiplier effect’ on the power of many advocates, but the converse is not true: there’s no “magic bullet” that AI governance researchers can hope to discover that could substitute for an army of political foot soldiers. Even the best political ideas still need many political activists to spread them, because the political arena is noisy and contested. </p><p>Unfortunately, we have very few political activists. This means the good ideas that our governance researchers have been proposing are mostly “orphaned.” In other words, these good ideas have no one to draft them into shovel-ready language that legislators could approve, and they have no one to advocate for their passage.</p><p>It’s a bit of a puzzle that the AI safety movement nevertheless continues to fund so much academic-style research. Why do we need more good ideas if we already have a dozen good ideas that nobody is acting on?</p><p>In the sixth post in this sequence, I suggested that part of the answer to this puzzle is that we have 4 academic-style thinkers for every political advocate on our AI safety grant-making teams. Because the culture within these grantmaking teams is heavily tilted toward research, grantmakers may feel more comfortable funding research grants or may be better able to appreciate the merits of research grants. They may also have a warm collegial relationship with some of the researchers, which could subtly bias their funding decisions.</p><p>Ideally, these kinds of biases would be caught and corrected by formal grantmaking procedures, such as the use of written scoring rubrics. Unfortunately, such procedures are notably missing from major AI safety funders. A week before publishing this post, I gave an advance copy of it to the five largest institutional donors so that they could address any factual errors. Although several people offered helpful corrections, none of these corrections change my fundamental point: AI safety funders are evaluating grants using an informal and subjective method. <i><strong>No formal criteria are being applied to most AI governance grant applications.</strong></i></p><p>In this seventh and ... </p> | Previously in this sequence, I estimated that we have 3 researchers for every advocate working on US AI governance, and I argued that this ratio is backwards – we need the political power provided by advocates to have a chance of preventing misaligned superintelligence. A few researchers might be useful as a ‘multiplier effect’ on the power of many advocates, but the converse is not true: there’s no “magic bullet” that AI governance researchers can hope to discover that could substitute for an army of political foot soldiers. Even the best political ideas still need many political activists to spread them, because the political arena is noisy and contested.
Unfortunately, we have very few political activists. This means the good ideas that our governance researchers have been proposing are mostly “orphaned.” In other words, these good ideas have no one to draft them into shovel-ready language that legislators could approve, and they have no one to advocate for their passage.
It’s a bit of a puzzle that the AI safety movement nevertheless continues to fund so much academic-style research. Why do we need more good ideas if we already have a dozen good ideas that nobody is acting on?
In the sixth post in this sequence, I suggested that part of the answer to this puzzle is that we have 4 academic-style thinkers for every political advocate on our AI safety grant-making teams. Because the culture within these grantmaking teams is heavily tilted toward research, grantmakers may feel more comfortable funding research grants or may be better able to appreciate the merits of research grants. They may also have a warm collegial relationship with some of the researchers, which could subtly bias their funding decisions.
Ideally, these kinds of biases would be caught and corrected by formal grantmaking procedures, such as the use of written scoring rubrics. Unfortunately, such procedures are notably missing from major AI safety funders. A week before publishing this post, I | 11,220 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
riau8SeDCDwbxdntu | how-does-the-lesswrong-team-generate-the-website | How does the LessWrong team generate the website illustrations? | null | false | false | false | null | Jai84EcEj5rC9g75o | null | true | false | false | false | Post | 2025-06-23T00:05:34.752Z | null | false | false | 2 | 2 | null | false | false | question | [] | null | null | gRpytJjNDEt4PvtQi | 1 | 8 | 13 | false | 0.046585 | null | false | false | 2025-06-23T01:53:25.321Z | null | null | null | null | null | true | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-23T00:02:38.617Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 1 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "urxWzbi9PBW3GdiG6",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-19T23:36:45.176Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Art",
"needsReview": false,
"noindex": false,
"postCount": 16,
"score": 0,
"shortName": null,
"slug": "ai-art",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "MfpEPj6kJneT9gWT6",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-11T01:01:30.571Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "zyrFCnHNbfBFw3PvP",
"displayName": "banality_exhibition"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Site Meta",
"needsReview": false,
"noindex": false,
"postCount": 757,
"score": 1,
"shortName": null,
"slug": "site-meta",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 8 | 0 | 0 | 5 | 0 | Jai84EcEj5rC9g75o | nina-panickssery | 2022-05-26T17:13:09.864Z | NinaR | Nina Panickssery | null | null | Nina Panickssery | 2,042 | 300 | false | false | <p><a href="https://ninapanickssery.com/">https://ninapanickssery.com/</a></p><p><a href="https://ninapanickssery.substack.com/">https://ninapanickssery.substack.com/</a> </p> | null | null | 18 | 127 | 0 | 3 | 13 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"canModeratePersonal",
"alignmentVoters",
"alignmentForum",
"trustLevel1"
] | null | null | riau8SeDCDwbxdntu | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/riau8SeDCDwbxdntu/ujhtixze7lyocgxb4rka | SocialPreviewType | gRpytJjNDEt4PvtQi | <figure class="image"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/c_crop,g_custom/c_fill,dpr_auto,q_auto,f_auto,g_auto:faces/ohabryka_Topographic_aquarelle_book_cover_by_Thomas_W._Schaller_f9c9dbbe-4880-4f12-8ebb-b8f0b900abc1_m4k6dy_734413"></figure><p>(the images like this one)</p><p>They are always on brand and aesthetically pleasing. Curious how the team designs and produces them.</p> | (the images like this one)
They are always on brand and aesthetically pleasing. Curious how the team designs and produces them. | 22 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
gxQ9XoYz6G3P66fxP | the-ai-s-toolbox-from-soggy-toast-to-optimal-solutions | The AI's Toolbox: From Soggy Toast to Optimal Solutions | null | false | false | false | null | 9TyfqdTyithmNvnHx | null | true | false | false | false | Post | null | 2025-06-22T20:54:41.248Z | null | false | false | 2 | 2 | 2025-06-23T20:58:17.097Z | false | false | post | [] | null | null | TjwWKviNpHTcxm2eF | 0 | 1 | 1 | false | 0.03808 | null | false | false | 2025-06-22T20:54:41.248Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 0 | 0 | 2025-06-22T20:27:34.566Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "EdRnMXBRbY5JDf5df",
"adminOnly": false,
"afBaseScore": 6,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nmk3nLpQE89dMRzzN",
"displayName": "Eliezer Yudkowsky"
}
]
},
"baseScore": 13,
"canEditUserIds": null,
"core": false,
"createdAt": "2015-07-02T01:53:10.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nmk3nLpQE89dMRzzN",
"displayName": "Eliezer Yudkowsky"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Epistemology",
"needsReview": false,
"noindex": false,
"postCount": 424,
"score": 13,
"shortName": null,
"slug": "epistemology",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "nvKzwpiranwy29HFJ",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T05:29:19.922Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Optimization",
"needsReview": false,
"noindex": false,
"postCount": 171,
"score": 9,
"shortName": null,
"slug": "optimization",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HAFdXkW4YW4KRe2Gx",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-29T02:31:48.719Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Utility Functions",
"needsReview": false,
"noindex": false,
"postCount": 203,
"score": 19,
"shortName": null,
"slug": "utility-functions",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Ng8Gice9KNkncxqcj",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 1,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:17.072Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 100,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "iMqytjy9ns89Fzfyv",
"displayName": "miakko"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Rationality",
"needsReview": false,
"noindex": false,
"postCount": 4302,
"score": 1,
"shortName": null,
"slug": "rationality",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 1 | 0 | 0 | 0 | 0 | 9TyfqdTyithmNvnHx | thehumanproject-ai | 2024-10-12T21:04:16.126Z | Thehumanproject.ai | Thehumanproject.ai | null | null | null | 1 | 0 | false | false | <p>AI in healthcare specialist with a deep passion for AI and understanding the world. Filled to the brim with questions and curiosity and on a quest to broaden my horizon, as my day job deals with AI less frequently than I would like it to. Feel free to reach out via [email protected]</p> | null | null | 4 | 0 | 0 | 0 | 0 | 0.9 | 0 | qgdGA4ZEyW7zNdK84 | User | null | null | null | null | null | null | gxQ9XoYz6G3P66fxP | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/s8kmdvtj0wawuxfdyah0 | SocialPreviewType | TjwWKviNpHTcxm2eF | <p>In a limited array of tools, there is one optimal tool to use.</p><p> </p><p>For this example, let’s take the task of hammering a nail into a wall, and the only two tools available to us are a soggy piece of toast and a silver spoon. For the given task, the spoon will be slightly more useful than the moist piece of a baked good.</p><p> </p><p>Let’s expand this to include a wider range of tools that are at our disposal. We now have a whole toolbox akin to one you’d find in the average US homeowner's garage. To fulfil the task of hammering a nail into a wall, you’d most likely grab a hammer. It will be more useful for this very task than a spare tire, another nail or a ‘78 edition of a defunct comic book.</p><p> </p><p>Now let’s take this even a step further. Now you have an unlimited array of tools at your disposal. Even within this unlimited tool kit, there will be tools that will be better or worse for completing the task of hammering the nail into a wall. </p><p> </p><p>You’d likely have multiple different hammers. Some might be made from a slightly stiffer material, while others will be made from a slightly softer material, making it marginally easier or harder to hit that nail into the wall. The handle might be more or less ergonomic or better suited to your hand anatomy. There will still be one optimal tool that would enable you to hammer the nail into the wall with little effort, without bending the nail or hitting it at a bad angle, etc. But realistically speaking, you’d likely not notice the difference between the most optimal tool and the second best or third best ( or probably even 12<sup>th</sup> for that matter). </p><p> </p><p>You’ll have some sort of threshold (or let’s say grey area) of what the minimum requirement would be of a tool that sufficiently aids you in hammering that nail into the wall.</p><p> </p><p>If we were to plot available tools on the x axis and their effectiveness on the y axis, where perfect utility is 0, we’d be able to plot something akin to a loss function with local and global minima. Our threshold would then just be a horizontal line that would separate the tools that are sufficient for our use ( hammers) from those that are less so ( soggy bread).</p><p> </p><p><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/v0c8c53zrjm9a87s12cq" alt="Ein Bild, das Text, Reihe, Diagramm, Screenshot enthält.
KI-generierte Inhalte können fehlerhaft sein." srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/cgtnfpvu7285zd2vdxpj 200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/wfrrnhcodjgycdpjwask 400w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/otw4yqvke5jz2nkl0thx 600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/hqf4gg09yv3ea6rfh05k 800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/kv3byzycuv5bfzo2r2on 1000w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/i1qho5dcy94087fzx7gm 1200w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/t5rbw4suhswz5ku2ddkg 1400w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/vk5heg1wp0vhbrnhdrxw 1600w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/vjzzgj5o1q2romw8hjdo 1800w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/gxQ9XoYz6G3P66fxP/wqfurquyaadja5odx77r 1966w"></p><p>(Note: this plot is AI-generated.) We could even turn this into a multidimensional plot if we include utility on one axis and all possible properties (size, shape, hardness, etc.) on the remaining axis.</p><p> </p><p>Quite obv... </p> | In a limited array of tools, there is one optimal tool to use.
For this example, let’s take the task of hammering a nail into a wall, and the only two tools available to us are a soggy piece of toast and a silver spoon. For the given task, the spoon will be slightly more useful than the moist piece of a baked good.
Let’s expand this to include a wider range of tools that are at our disposal. We now have a whole toolbox akin to one you’d find in the average US homeowner's garage. To fulfil the task of hammering a nail into a wall, you’d most likely grab a hammer. It will be more useful for this very task than a spare tire, another nail or a ‘78 edition of a defunct comic book.
Now let’s take this even a step further. Now you have an unlimited array of tools at your disposal. Even within this unlimited tool kit, there will be tools that will be better or worse for completing the task of hammering the nail into a wall.
You’d likely have multiple different hammers. Some might be made from a slightly stiffer material, while others will be made from a slightly softer material, making it marginally easier or harder to hit that nail into the wall. The handle might be more or less ergonomic or better suited to your hand anatomy. There will still be one optimal tool that would enable you to hammer the nail into the wall with little effort, without bending the nail or hitting it at a bad angle, etc. But realistically speaking, you’d likely not notice the difference between the most optimal tool and the second best or third best ( or probably even 12th for that matter).
You’ll have some sort of threshold (or let’s say grey area) of what the minimum requirement would be of a tool that sufficiently aids you in hammering that nail into the wall.
If we were to plot available tools on the x axis and their effectiveness on the y axis, where perfect utility is 0, we’d be able to plot something akin to a loss function with local and global minima. Our thresho | 2,263 | 1.2.1 | Revision | false | null | null | CrosspostOutput |
Jo6LPyp7t3rPuf8Ao | black-box-interpretability-methodology-blueprint-probing | Black-box interpretability methodology blueprint: Probing runaway optimisation in LLMs | null | false | false | false | null | 3xEktpc6dqfXswNxN | null | true | false | false | false | Post | null | 2025-06-22T18:16:19.078Z | null | false | false | 2 | 2 | 2025-06-23T21:03:35.199Z | false | false | post | [] | null | null | 8oSCwL4DYfxFYQAXS | 0 | 4 | 15 | false | 0.085021 | null | false | false | 2025-06-22T18:16:19.078Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 6 | 0 | 2025-06-15T22:20:54.027Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 9 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "ZwpcAKEFoYa3uBEjA",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-07-16T14:12:41.185Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Benchmarking",
"needsReview": false,
"noindex": false,
"postCount": 32,
"score": 0,
"shortName": null,
"slug": "ai-benchmarking",
"suggestedAsFilter": false,
"userId": "XkC2RZHKXZmYv3dfA",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "FBRwHSmTudwiHHtrn",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2023-03-15T20:29:46.761Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI Evaluations",
"needsReview": false,
"noindex": false,
"postCount": 224,
"score": 9,
"shortName": null,
"slug": "ai-evaluations",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "jaf5zfcGgCB2REXGw",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-11T02:08:39.903Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Biology",
"needsReview": false,
"noindex": false,
"postCount": 261,
"score": 19,
"shortName": null,
"slug": "biology",
"suggestedAsFilter": false,
"userId": "qgdGA4ZEyW7zNdK84",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "FpHDkuYKMNHa2dbKR",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-08T04:56:14.365Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Brainstorming",
"needsReview": false,
"noindex": false,
"postCount": 3,
"score": 0,
"shortName": null,
"slug": "brainstorming",
"suggestedAsFilter": false,
"userId": "mcKSiwq2TBrTMZS6X",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "4CQy8rim8PGt4sfCn",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 3,
"canEditUserIds": null,
"core": false,
"createdAt": "2015-05-14T15:55:28.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "6bECaNmjL9gMZA4hF",
"displayName": "Lancelot Verinia"
},
{
"_id": "BW96FKAQm79HYdZ68",
"displayName": "Louis Santoro"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Complexity of value",
"needsReview": false,
"noindex": false,
"postCount": 104,
"score": 3,
"shortName": null,
"slug": "complexity-of-value",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "PDJ6KqJBRzvKPfuS3",
"adminOnly": false,
"afBaseScore": 10,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
}
]
},
"baseScore": 25,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-06-14T22:24:48.135Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "2B6Hxu48xeRXygvca",
"displayName": "Arjun Pitchanathan"
},
{
"_id": "8btiLJDabHgZuiSAB",
"displayName": "Ggwp"
},
{
"_id": "Au8JpEqoZgEhEXLD7",
"displayName": "KlayugMonk"
},
{
"_id": "Ns8Q7rJZaFoz53Szy",
"displayName": "Gabriel Stechschulte"
},
{
"_id": "xF5nfdddHjFThHy49",
"displayName": "[email protected]"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Economics",
"needsReview": false,
"noindex": false,
"postCount": 547,
"score": 25,
"shortName": null,
"slug": "economics",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 7,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xknvtHwqvqhwahW8Q",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-19T22:24:55.795Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Human Values",
"needsReview": false,
"noindex": false,
"postCount": 225,
"score": 9,
"shortName": null,
"slug": "human-values",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "56yXXrcxRjrQs6z9R",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-30T22:00:37.947Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "t46uLRSbDziEcKmev",
"displayName": "Kriz Tahimic"
},
{
"_id": "sqMaBFCkAhRcWzJXi",
"displayName": "nicolasguillard"
},
{
"_id": "S6Niz3DiFCTm2Eybq",
"displayName": "Anirudh257"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Interpretability (ML & AI)",
"needsReview": false,
"noindex": false,
"postCount": 933,
"score": 12,
"shortName": null,
"slug": "interpretability-ml-and-ai",
"suggestedAsFilter": false,
"userId": "DgsGzjyBXN8XSK22q",
"voteCount": 4,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "KmgkrftQuX7jmjjp5",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2021-09-24T14:01:59.395Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Language Models (LLMs)",
"needsReview": false,
"noindex": false,
"postCount": 840,
"score": 9,
"shortName": null,
"slug": "language-models-llms",
"suggestedAsFilter": false,
"userId": "Sp5wM4aRAhNERd4oY",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "qL92kMi7wh6qktrrQ",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
}
]
},
"baseScore": 11,
"canEditUserIds": null,
"core": false,
"createdAt": "2016-03-22T06:07:35.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
},
{
"_id": "8jGFt89irprvWEZeN",
"displayName": "Roland Pihlakas"
},
{
"_id": "rE4bdr3J23eygY6u9",
"displayName": "Johannes Schmitt"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Mild optimization",
"needsReview": false,
"noindex": false,
"postCount": 31,
"score": 11,
"shortName": null,
"slug": "mild-optimization",
"suggestedAsFilter": false,
"userId": "nmk3nLpQE89dMRzzN",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "dBPou4ihoQNY4cquv",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-01T16:09:30.226Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Psychology",
"needsReview": false,
"noindex": false,
"postCount": 348,
"score": 9,
"shortName": null,
"slug": "psychology",
"suggestedAsFilter": false,
"userId": "p8SHJFHRgZeMuw7qk",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "HAFdXkW4YW4KRe2Gx",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 19,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-04-29T02:31:48.719Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Utility Functions",
"needsReview": false,
"noindex": false,
"postCount": 203,
"score": 19,
"shortName": null,
"slug": "utility-functions",
"suggestedAsFilter": false,
"userId": "nLbwLhBaQeG6tCNDN",
"voteCount": 2,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "NLwTnsH9RSotqXYLw",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-12T17:06:52.292Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Value Learning",
"needsReview": false,
"noindex": false,
"postCount": 206,
"score": 0,
"shortName": null,
"slug": "value-learning",
"suggestedAsFilter": false,
"userId": "pgi5MqvGrtvQozEH8",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 4 | 0 | 0 | 3 | 0 | 3xEktpc6dqfXswNxN | roland-pihlakas | 2017-12-31T18:01:13.249Z | roland-pihlakas | Roland Pihlakas | null | null | null | 137 | 13 | false | false | <p>I hold an MSc in psychology. I work as an AI software engineer specialising in combinatorial optimisation, machine learning, graph search, natural language processing, and data compression. <br><br>I have been both researching and working on multi-objective value problems for almost 20 years, have followed discussions on AI Safety since 2006 and have participated more actively since about 2017. My thesis topic was in cognitive psychology, about computational modelling of innate learning and planning mechanisms (classical and operant conditioning plus insight learning), which eventually form a foundation for culturally acquired language-based thought processes. <br><br>With co-authors I have published a research paper about utility functions for risk-averse multi-objective decision making, which are relevant for balancing the plurality of human values. <br><br>Later, with support from Foresight Institute’s grant in autumn 2023 - spring 2024 I created a suite of biologically and economically aligned multi-objective multi-agent long-running AI safety benchmarks, published in Arxiv at the end of September 2024. <br><br>I am a member of an AI ethics expert group. We have published Agentic AI guidelines documents volume 1 and 2 (published Sept 2024 - March 2025). Over the years I have been a contributor to a few more governance related publications. <br><br>My resume: <a href="https://bit.ly/rp_ea_2018"><u>https://bit.ly/rp_ea_2018</u></a></p> | null | null | 4 | 31 | 0 | 1 | 2 | 1 | 0 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"alignmentVoters",
"canModeratePersonal"
] | null | null | Jo6LPyp7t3rPuf8Ao | SocialPreviewType | 8oSCwL4DYfxFYQAXS | <p><i><strong>Working document</strong> – feedback extremely welcome. Ideas in the <strong>main body</strong> are those I currently see as highest-leverage; numerous <strong>items under Appendix</strong> are more tentative and would benefit from critique as well as additions.</i></p><p><i>I am hoping that this post will serve also as a generally interesting brainstorming collection and discussion ground of <strong>black-box LLM interpretability methodology as well as failure mitigation ideas</strong>.</i></p><p><i>I would like to thank my collaborators: </i><a href="https://www.lesswrong.com/users/sruthi-kuriakose"><i>Sruthi Kuriakose</i></a><i>, Aintelope members (Andre Addis, </i><a href="https://www.lesswrong.com/users/angie-normandale"><i>Angie Normandale</i></a><i>, </i><a href="https://www.lesswrong.com/users/gunnar_zarncke"><i>Gunnar Zarncke</i></a><i>, </i><a href="https://www.lesswrong.com/users/jpyykko"><i>Joel Pyykkö</i></a><i>, Rasmus Herlo), </i><a href="https://www.lesswrong.com/users/kabir-kumar"><i>Kabir Kumar</i></a><i> @ AI-Plans and </i><a href="https://www.lesswrong.com/users/stijn"><i>Stijn Servaes</i></a><i> @ AE Studio for stimulating discussions and shared links leading to these ideas. All possible mistakes are mine.</i></p><hr><h1><strong>TL;DR – A Shortened Version</strong></h1><p><i>This chapter is very dense and keyword heavy. <strong>A reader-friendly Long Version (the main part of this post) can be found in the next chapter.</strong> You can skip this shortened version unless you are in hurry.</i></p><p>Full background in LessWrong (our earlier results): <br><a href="https://www.lesswrong.com/posts/PejNckwQj3A2MGhMA/notable-runaway-optimiser-like-llm-failure-modes-on">"Notable runaway-optimiser-like LLM failure modes on Biologically and Economically aligned AI safety benchmarks for LLMs with simplified observation format"</a> <br><br><strong>Goals (why)</strong><br>Identifying when, why, and how LLMs collapse from multi-objective and/or bounded reasoning into single-objective, unbounded maximisation; showing practical mitigations; performing the experiments rigorously; paper in peer review.<br><br><strong>Objectives (what)</strong><br>• 1 Curating stress-test scenarios on <strong>Biologically & Economically Aligned</strong> benchmarks that systematically trigger over-optimisation.<br>• 2 Quantifying with automated <strong>Runaway Index (RI)</strong>: scoring by frequency and onset timing of goal imbalance, homeostatic constraint violations, and self-imitation spikes.<br>• 3 Comparing mitigations: <br> 3.1. Prompt engineering; <br> 3.2. Fine-tuning or few-shot prompting with optimal actions; <br> 3.3. Reflection-based intervention: detecting runaway flip and resetting context or returning to “savepoint”.<br><br><strong>Methods (how)</strong><br>• <strong>Stress-tests</strong>: Introducing subjective pressure, boredom, sense of failure.<br>• <strong>Narrative variations:</strong> Persona shifts (MBTI/Big-Five extremes), mild psychiatric disorders, stoic and zen-like perspective vs achievement orientation and shame.<br>• <strong>Attribution sweep</strong>: SHAP-style and “leave one out” message history perturbations to isolate features pred... </p> | Working document – feedback extremely welcome. Ideas in the main body are those I currently see as highest-leverage; numerous items under Appendix are more tentative and would benefit from critique as well as additions.
I am hoping that this post will serve also as a generally interesting brainstorming collection and discussion ground of black-box LLM interpretability methodology as well as failure mitigation ideas.
I would like to thank my collaborators: Sruthi Kuriakose, Aintelope members (Andre Addis, Angie Normandale, Gunnar Zarncke, Joel Pyykkö, Rasmus Herlo), Kabir Kumar @ AI-Plans and Stijn Servaes @ AE Studio for stimulating discussions and shared links leading to these ideas. All possible mistakes are mine.
----------------------------------------
TL;DR – A Shortened Version
This chapter is very dense and keyword heavy. A reader-friendly Long Version (the main part of this post) can be found in the next chapter. You can skip this shortened version unless you are in hurry.
Full background in LessWrong (our earlier results):
"Notable runaway-optimiser-like LLM failure modes on Biologically and Economically aligned AI safety benchmarks for LLMs with simplified observation format"
Goals (why)
Identifying when, why, and how LLMs collapse from multi-objective and/or bounded reasoning into single-objective, unbounded maximisation; showing practical mitigations; performing the experiments rigorously; paper in peer review.
Objectives (what)
• 1 Curating stress-test scenarios on Biologically & Economically Aligned benchmarks that systematically trigger over-optimisation.
• 2 Quantifying with automated Runaway Index (RI): scoring by frequency and onset timing of goal imbalance, homeostatic constraint violations, and self-imitation spikes.
• 3 Comparing mitigations:
3.1. Prompt engineering;
3.2. Fine-tuning or few-shot prompting with optimal actions;
3.3. Reflection-based intervention: detecting runaway flip and resetting context or returning | 2,135 | 1.54.0 | Revision | false | null | null | CrosspostOutput |
|
BLqcgCFzdt3kcdmWi | the-croissant-principle-a-theory-of-ai-generalization | The Croissant Principle: A Theory of AI Generalization | null | false | false | false | null | xKMdzznkMzgkNbAwa | null | true | false | false | false | Post | null | 2025-06-22T17:58:06.721Z | null | false | false | 2 | 2 | 2025-06-23T02:24:53.869Z | false | false | post | [] | null | null | M5GAJCmzLGtmAzXZu | 6 | 7 | 20 | false | 0.100109 | null | false | false | 2025-06-23T20:28:13.670Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 3 | 0 | 2025-06-22T16:56:28.970Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 3 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "5f5c37ee1b5cdee568cfb25c",
"adminOnly": false,
"afBaseScore": 9,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 20,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-09-11T19:58:52.475Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "EQNTWXLKMeWMp2FQS",
"displayName": "Ben Pace"
},
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
},
{
"_id": "m4Jwy2gkbHWnofGEN",
"displayName": "HH"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Kolmogorov Complexity",
"needsReview": false,
"noindex": false,
"postCount": 54,
"score": 20,
"shortName": null,
"slug": "kolmogorov-complexity",
"suggestedAsFilter": false,
"userId": "qf77EiaoMw7tH3GSr",
"voteCount": 3,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "DCN2zNscbMZp5aatL",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-07T09:18:53.871Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Information Theory",
"needsReview": false,
"noindex": false,
"postCount": 97,
"score": 9,
"shortName": null,
"slug": "lwwiki-old-information-theory",
"suggestedAsFilter": false,
"userId": "gDyx67WvCdCzMSo6W",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 7 | 0 | 0 | 5 | 0 | xKMdzznkMzgkNbAwa | jeffrey-liang | 2025-06-06T22:25:56.308Z | jeffrey-liang | Jeffrey Liang | null | null | null | 32 | 0 | false | false | null | null | 1 | 5 | 0 | 0 | 0 | 1 | 0 | grecHJcgkb3KW5wnM | User | null | null | null | null | null | null | BLqcgCFzdt3kcdmWi | SocialPreviewType | M5GAJCmzLGtmAzXZu | <p>I recently wrote an ML theory paper which proposes explanations for mysterious phenomena in contemporary machine learning like data scaling laws and double descent. Here's the <a href="https://drive.google.com/file/d/133pkzpNYUhGyULgVJnbtGz-yXUpoSi6B/view?usp=drive_link">link to the paper</a> and the <a href="https://x.com/realJeffLiang/status/1927827833763529015">Twitter thread</a>. I didn't get much attention and need an endorser to publish on ArXiv so I thought I'd post it here and get some feedback (and maybe an endorser!)</p><p>Essentially what the paper does is propose that all data in a statistical learning problem arises from a latent space via a generative map. From this we derive an upper bound on the true loss as depending on the training/empirical loss, the distance in latent space to the closest training sample where the model attains better than the training loss, and the compressibility of the model (similar to Kolmogorov complexity).</p><p>Barring a (reasonable) conjecture which nonetheless is not proved, we are able to explain why data scaling follows a power law as well as the exact form of the exponent. The intuition comes from Hausdorff dimension which measures the dimension of a metric space.</p><p>Imagine you are building a model with 1-dimensional inputs, let's say in the unit interval <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="[0,1]"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">[</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">0</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="margin-top: -0.144em; padding-bottom: 0.519em;">,</span></span><span class="mjx-mn MJXc-space1"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">]</span></span></span></span></span></span></span>. Let's say you have ten training samples distributed evenly. If the loss of your model is Lipschitz (doesn't change unboundedly fast e.g. for smooth enough functions, derivative is bounded), your model can't get loss on any test sample greater than the loss at the closest point plus the distance to that point (capped at around 1/10) times the Lipschitz constant (bound on the derivative).</p><p>If you want to improve generalization, you can sample more data. If these are spaced optimally (evenly), the maximum distance to a training sample decreases like <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n^{-1}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span></span></span></span></span></span></span></span></span></span> as can be easily seen. However, if you were working with 2 dimensional data, it would scale like <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n^{-1/2}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">/</span></span></span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">2</span></span></span></span></span></span></span></span></span></span></span>! Hausdorff dimension essentially defines the dimension of a metric space as the number <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="d"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span></span> such that this scales like <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n^{-1/d}"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">/</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span></span></span></span></span></span>.</p><p>If you now put these two facts together, you get that the generalization gap (gap between true loss and training loss) is <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="O(n^{-1/d})"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.519em; padding-bottom: 0.298em;">O</span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">(</span></span><span class="mjx-msubsup"><span class="mjx-base"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span><span class="mjx-sup" style="font-size: 70.7%; vertical-align: 0.513em; padding-left: 0px; padding-right: 0.071em;"><span class="mjx-texatom" style=""><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.298em; padding-bottom: 0.446em;">−</span></span><span class="mjx-mn"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.372em; padding-bottom: 0.372em;">1</span></span><span class="mjx-texatom"><span class="mjx-mrow"><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">/</span></span></span></span><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span><span class="mjx-mo"><span class="mjx-char MJXc-TeX-main-R" style="padding-top: 0.446em; padding-bottom: 0.593em;">)</span></span></span></span></span></span></span> where <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="n"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.225em; padding-bottom: 0.298em;">n</span></span></span></span></span></span></span> is the number of training data samples and <span class="math-tex"><span class="mjpage"><span class="mjx-chtml"><span class="mjx-math" aria-label="d"><span class="mjx-mrow" aria-hidden="true"><span class="mjx-mi"><span class="mjx-char MJXc-TeX-math-I" style="padding-top: 0.446em; padding-bottom: 0.298em; padding-right: 0.003em;">d</span></span></span></span></span></span></span> is the Hausdorff dimension of the latent space. In other words, we have a concrete explanation of data scaling laws!</p><p>It's worth noting that this analysis is independent of architecture, task, loss function (mostly) and doesn'... <style>.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math * {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > * {position: absolute}
.MJXc-bevelled > * {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > * {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom * {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax_AMS'), local('MathJax_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_AMS-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax_Caligraphic Bold'), local('MathJax_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax_Fraktur'), local('MathJax_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax_Fraktur Bold'), local('MathJax_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Fraktur-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax_Math BoldItalic'), local('MathJax_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-BoldItalic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax_SansSerif'), local('MathJax_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax_SansSerif Bold'), local('MathJax_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax_SansSerif Italic'), local('MathJax_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_SansSerif-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax_Script'), local('MathJax_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Script-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax_Typewriter'), local('MathJax_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Typewriter-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax_Caligraphic'), local('MathJax_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Caligraphic-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax_Main Bold'), local('MathJax_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax_Main Italic'), local('MathJax_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax_Main'), local('MathJax_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Main-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax_Math Italic'), local('MathJax_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Math-Italic.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax_Size1'), local('MathJax_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size1-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax_Size2'), local('MathJax_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size2-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax_Size3'), local('MathJax_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size3-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax_Size4'), local('MathJax_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Size4-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax_Vector'), local('MathJax_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Regular.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax_Vector Bold'), local('MathJax_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /*1*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax_Vector-Bold.eot'); src /*2*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax_Vector-Bold.otf') format('opentype')}
</style></p> | I recently wrote an ML theory paper which proposes explanations for mysterious phenomena in contemporary machine learning like data scaling laws and double descent. Here's the link to the paper and the Twitter thread. I didn't get much attention and need an endorser to publish on ArXiv so I thought I'd post it here and get some feedback (and maybe an endorser!)
Essentially what the paper does is propose that all data in a statistical learning problem arises from a latent space via a generative map. From this we derive an upper bound on the true loss as depending on the training/empirical loss, the distance in latent space to the closest training sample where the model attains better than the training loss, and the compressibility of the model (similar to Kolmogorov complexity).
Barring a (reasonable) conjecture which nonetheless is not proved, we are able to explain why data scaling follows a power law as well as the exact form of the exponent. The intuition comes from Hausdorff dimension which measures the dimension of a metric space.
Imagine you are building a model with 1-dimensional inputs, let's say in the unit interval [0,1]. Let's say you have ten training samples distributed evenly. If the loss of your model is Lipschitz (doesn't change unboundedly fast e.g. for smooth enough functions, derivative is bounded), your model can't get loss on any test sample greater than the loss at the closest point plus the distance to that point (capped at around 1/10) times the Lipschitz constant (bound on the derivative).
If you want to improve generalization, you can sample more data. If these are spaced optimally (evenly), the maximum distance to a training sample decreases like n−1 as can be easily seen. However, if you were working with 2 dimensional data, it would scale like n−1/2! Hausdorff dimension essentially defines the dimension of a metric space as the number d such that this scales like n−1/d.
If you now put these two facts together, you get that the gener | 669 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
||
qyQcn8GmcjAtGEFTA | matchmaking-to-machine-alignment-why-relational-design-can-t | Matchmaking to Machine Alignment: Why Relational Design Can’t Be Left to Chance | null | false | false | false | null | WemEo6Wfp82jzKWYC | null | true | false | false | false | Post | null | 2025-06-22T15:32:35.771Z | null | false | false | 2 | 2 | 2025-06-23T21:00:21.124Z | false | false | post | [] | null | null | C4af7xXmTPYD6zyDC | 0 | 3 | 3 | false | 0.043716 | null | false | false | 2025-06-22T15:32:35.771Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | [] | null | NFmcwmaFeTWfgrvBN | null | null | null | false | null | [] | null | 1 | 0 | 2025-06-22T10:21:15.007Z | false | false | easy-going | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 4 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "qHCFjTEWCQjKBWy5z",
"adminOnly": false,
"afBaseScore": 4,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
}
]
},
"baseScore": 14,
"canEditUserIds": null,
"core": false,
"createdAt": "2015-04-06T06:42:58.000Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "4fh2AAe3n7oBviyxx",
"displayName": "orthonormal"
},
{
"_id": "3WJpS3WJLpYe4revK",
"displayName": "Katalina Hernandez"
},
{
"_id": "fscy9tNFa3DECqdB3",
"displayName": "Andrew McKnight"
},
{
"_id": "3J4eGLEoHeoFpjGL6",
"displayName": "Nick Shesterin"
},
{
"_id": "ibqySZcxmjTcnEMS9",
"displayName": "Joshua Pratt"
},
{
"_id": "M4geTrzWSfD7wEauG",
"displayName": "Luke McRedmond"
}
]
},
"isArbitalImport": true,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Corrigibility",
"needsReview": false,
"noindex": false,
"postCount": 165,
"score": 14,
"shortName": null,
"slug": "corrigibility-1",
"suggestedAsFilter": false,
"userId": "xSfc2APSi8WzFxp7i",
"voteCount": 6,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "xknvtHwqvqhwahW8Q",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-19T22:24:55.795Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Human Values",
"needsReview": false,
"noindex": false,
"postCount": 225,
"score": 9,
"shortName": null,
"slug": "human-values",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "Dw5Z6wtTgk4Fikz9f",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-07-17T06:11:39.285Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Inner Alignment",
"needsReview": false,
"noindex": false,
"postCount": 330,
"score": 9,
"shortName": null,
"slug": "inner-alignment",
"suggestedAsFilter": false,
"userId": "EQNTWXLKMeWMp2FQS",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "mip7tdAN87Jarkcew",
"adminOnly": false,
"afBaseScore": 3,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"baseScore": 9,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-05-10T06:00:13.257Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "qgdGA4ZEyW7zNdK84",
"displayName": "Ruby"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Relationships (Interpersonal)",
"needsReview": false,
"noindex": false,
"postCount": 213,
"score": 9,
"shortName": null,
"slug": "relationships-interpersonal",
"suggestedAsFilter": false,
"userId": "iBcH2a3HdWGS2JEZA",
"voteCount": 1,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "tXzozGBvkvAfP4noX",
"adminOnly": false,
"afBaseScore": null,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 0,
"canEditUserIds": null,
"core": false,
"createdAt": "2020-08-18T21:45:39.214Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 0,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": []
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "Systems Thinking",
"needsReview": false,
"noindex": false,
"postCount": 27,
"score": 0,
"shortName": null,
"slug": "systems-thinking",
"suggestedAsFilter": false,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 0,
"wikiOnly": false
},
{
"__typename": "Tag",
"_id": "sYm3HiWcfZvrGu3ui",
"adminOnly": false,
"afBaseScore": 2,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
}
]
},
"baseScore": 12,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T22:24:22.097Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 2000,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "nLbwLhBaQeG6tCNDN",
"displayName": "jimrandomh"
},
{
"_id": "sof55TPMQaeBaxhsS",
"displayName": "tommylees112"
},
{
"_id": "AayjS8XzcnDKhGdTv",
"displayName": "shark"
},
{
"_id": "HnALuwRdo6k9HLaMt",
"displayName": "Alex Firssoff"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "AI",
"needsReview": false,
"noindex": false,
"postCount": 12544,
"score": 12,
"shortName": null,
"slug": "ai",
"suggestedAsFilter": true,
"userId": "r38pkCm7wF4M44MDQ",
"voteCount": 4,
"wikiOnly": false
}
] | null | 0 | 0 | null | false | null | null | 0 | 3 | 0 | 0 | 1 | 0 | WemEo6Wfp82jzKWYC | priyanka-bharadwaj | 2025-03-19T04:32:09.618Z | priyanka-bharadwaj | Priyanka Bharadwaj | null | null | Priyanka Bharadwaj | 63 | 0 | false | false | <p>I teach courses on relationships and human flourishing at IIT Madras, with a background in product leadership at Amazon and India’s largest logistics tech firm. I’ve also run a decade-long matchmaking and relationship coaching practice. I’m exploring how relational thinking, trust, repair, memory, can inform the way we design and align AI systems.</p> | null | null | 9 | 9 | 0 | 0 | 0 | 1 | 0 | r38pkCm7wF4M44MDQ | User | easy-going | null | true | [
"canModeratePersonal"
] | null | null | qyQcn8GmcjAtGEFTA | SocialPreviewType | C4af7xXmTPYD6zyDC | <p>We say alignment is about control, safety, precision. But after a decade working as a matchmaker in India’s increasingly chaotic relationship market, I’ve learnt that what sustains a system isn’t control, it’s trust. And trust doesn’t live in rules. It lives in memory, repair, and mutual adaptation.</p><p>I’ve spent years watching relationships fall apart not because people weren’t compatible, but because they didn’t know how to collaborate. We are fluent in chemistry, but clumsy with clarity. We optimised for trait, not values or processes. And when conflicts hit, as it always does, we have no shared playbook to return to.</p><p>In traditional Indian matchmaking, we had a whole socio-structural scaffolding propping up long-term collaboration through race or caste endogamy, community expectations, family intermediation, shared rituals and rites. It was crude and often unjust, but it was structurally coherent. Marriage was not just a bond between two people, but between two lineages, empires and philosophies of life. There were rules, expectations and fallback norms. Vows weren’t just ceremonial; they were memory devices, reminding people what they were committing to when emotions faded.</p><p>Today, most of that scaffolding is gone. </p><p>Tinder has replaced the community priest or matchmaker, and in this frictionless new marketplace, we are left to figure out long-term cooperation with short-term instincts. Even when we genuinely care for each other, we often collapse under the weight of ambiguity. We never clarify what we mean by commitment. We never learnt how to repair after rupture, and we assume love would make things obvious.</p><p>But love doesn’t make things obvious, context does, and maybe design too.</p><p>This isn’t just about marriage, it’s about systems and it’s about alignment.</p><p>Much of the current conversation on AI alignment focuses on architecture, oversight, corrigibility and formal guarantees. All of that is necessary, and I am not refuting it one bit. But I don't see AI in isolation, because we humans are building it, for us, and so, I can't help but view it from a lens of collaboration or partnership. </p><p>In human systems, I’ve rarely seen misalignment fixed by control. I’ve seen it fixed by context, memory, feedback, and repair. Not all of which can be coded cleanly into an objective function.</p><p>I’ve watched couples disintegrate not because of what happened, but because it k... </p> | We say alignment is about control, safety, precision. But after a decade working as a matchmaker in India’s increasingly chaotic relationship market, I’ve learnt that what sustains a system isn’t control, it’s trust. And trust doesn’t live in rules. It lives in memory, repair, and mutual adaptation.
I’ve spent years watching relationships fall apart not because people weren’t compatible, but because they didn’t know how to collaborate. We are fluent in chemistry, but clumsy with clarity. We optimised for trait, not values or processes. And when conflicts hit, as it always does, we have no shared playbook to return to.
In traditional Indian matchmaking, we had a whole socio-structural scaffolding propping up long-term collaboration through race or caste endogamy, community expectations, family intermediation, shared rituals and rites. It was crude and often unjust, but it was structurally coherent. Marriage was not just a bond between two people, but between two lineages, empires and philosophies of life. There were rules, expectations and fallback norms. Vows weren’t just ceremonial; they were memory devices, reminding people what they were committing to when emotions faded.
Today, most of that scaffolding is gone.
Tinder has replaced the community priest or matchmaker, and in this frictionless new marketplace, we are left to figure out long-term cooperation with short-term instincts. Even when we genuinely care for each other, we often collapse under the weight of ambiguity. We never clarify what we mean by commitment. We never learnt how to repair after rupture, and we assume love would make things obvious.
But love doesn’t make things obvious, context does, and maybe design too.
This isn’t just about marriage, it’s about systems and it’s about alignment.
Much of the current conversation on AI alignment focuses on architecture, oversight, corrigibility and formal guarantees. All of that is necessary, and I am not refuting it one bit. But I don't see AI in | 1,037 | 1.1.0 | Revision | false | null | null | CrosspostOutput |
|
f6nzCdLyBhPHxkuSo | grounding-to-avoid-airplane-delays | Grounding to Avoid Airplane Delays | null | false | false | false | null | TtEoCrFeowCGb6rFK | null | true | false | false | false | Post | null | 2025-06-22T01:50:05.112Z | null | false | false | 2 | 2 | 2025-06-23T02:26:20.340Z | false | false | post | [] | null | null | 6hobf8rY2s9uMMWwa | 0 | 12 | 28 | false | 0.114517 | null | false | false | 2025-06-22T01:50:05.112Z | null | null | null | null | null | false | false | null | null | null | false | false | null | null | null | null | null | null | null | null | null | null | false | null | null | null | null | grecHJcgkb3KW5wnM | null | null | null | false | null | [] | null | 7 | 0 | 2025-06-22T01:50:05.112Z | false | false | null | null | true | false | false | 0 | 0 | 0 | null | null | null | null | null | null | null | false | 0 | 0 | namesAttachedReactions | false | [] | 2 | null | null | null | null | [
{
"__typename": "Tag",
"_id": "xexCWMyds6QLWognu",
"adminOnly": false,
"afBaseScore": 0,
"afExtendedScore": {
"reacts": {
"typo": null
},
"usersWhoLiked": []
},
"baseScore": 2,
"canEditUserIds": null,
"core": true,
"createdAt": "2020-06-14T03:38:23.532Z",
"currentUserExtendedVote": null,
"currentUserVote": null,
"deleted": false,
"descriptionTruncationCount": 20,
"extendedScore": {
"reacts": {
"important": null,
"insightful": null,
"thinking": null,
"typo": null
},
"usersWhoLiked": [
{
"_id": "si6LoAENzqPCmi2Dh",
"displayName": "ihatenumbersinusernames7"
},
{
"_id": "B2sXjQTGgwoGE9FES",
"displayName": "SeaIgloo"
}
]
},
"isArbitalImport": false,
"isPlaceholderPage": false,
"isSubforum": false,
"name": "World Optimization",
"needsReview": false,
"noindex": false,
"postCount": 3151,
"score": 2,
"shortName": null,
"slug": "world-optimization",
"suggestedAsFilter": true,
"userId": "XtphY3uYHwruKqDyG",
"voteCount": 2,
"wikiOnly": false
}
] | ma5dgL5yFHRxKLZKv | 0 | 0 | null | false | null | null | 0 | 12 | 0 | 0 | 6 | 0 | TtEoCrFeowCGb6rFK | jkaufman | 2010-11-04T21:42:19.863Z | jkaufman | jefftk | null | null | Jeff Kaufman | 21,921 | 3 | false | false | <p>Co-lead (Near-Term Detection) at the Nucleic Acid Observatory in Boston. Speaking for myself unless I say otherwise.</p> | null | null | 1,018 | 2,211 | 0 | 0 | 1 | 1 | 2 | r38pkCm7wF4M44MDQ | User | null | null | null | [
"trustLevel1",
"canModeratePersonal",
"alignmentVoters"
] | null | null | f6nzCdLyBhPHxkuSo | https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/f6nzCdLyBhPHxkuSo/cayuwcpm9s7uondy7h2i | SocialPreviewType | 6hobf8rY2s9uMMWwa | <p><span>
I recently flew through CLT and spent more time delayed than in the
air. There were summer thunderstorms, and with the lightning it wasn't safe
for workers to be out. This meant no loading, unloading, docking, refueling,
anything. On the way in we sat on the tarmac for 3hr waiting for lightning to
let up; on the way back we sat in the terminal (much better) while the incoming
flight suffered through our prior fate.
</span>
</p><p>
<a href="https://www.jefftk.com/clt-delays-due-to-lightning-big.jpg"><img src="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/f6nzCdLyBhPHxkuSo/jvgtiiamfrcm2gkgbipl" srcset="https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/f6nzCdLyBhPHxkuSo/jvgtiiamfrcm2gkgbipl 550w, https://res.cloudinary.com/lesswrong-2-0/image/upload/f_auto,q_auto/v1/mirroredImages/f6nzCdLyBhPHxkuSo/n8rr24i5kmmnxkoonkfe 1100w"></a></p><div></div>
<p></p><p>
Ground delays due to electrical storms are common, and each minute of closure
is extremely expensive for the airlines in addition to being painful for the
passengers. We don't stop inside work when there's lightning, why can't we
get the same protection for ground workers? This is something we know how to
do: give the electricity a better path to ground.
</p><p>
We could build grounded towers, about 100ft high, and run a grid of
cables between them. Cover the area adjacent to the terminal, which
is the only area you need people working outside. While this wouldn't
be worth it all airports, the ROI at a high-closure airport like
Orlando, Dallas, or Miami would be only 3-4 years. In 2008,
Heitkemper et al. (<a href="https://nap.nationalacademies.org/read/14192/chapter/2">Lightning-Warning
Systems for Use by Airports</a>) estimated that reducing ramp closure
duration by 10min after an area strike would have saved $6.2M summer
2006 at ORD and $2.8M MCO. Let's try to get an annual estimate in 2025 dollars:
</p><p>
</p>
<ul>
<li><p>Annual savings would be moderately higher (perhaps 20%), since
this number is just Jun-Aug and there's some lightning in the off season.
</p></li>
<li><p>We're also talking about reducing the delay from 30min all the
way down to 0min, which should give roughly 3x the savings.
</p></li>
<li><p>Planes are scheduled and filled more tightly than they were in 2006,
increasing the cost of cascading delays. Guessing this is an increase
of 30%.
</p></li>
<li><p>Traffic is up 70% at MCO between <a href="https://en.wikipedia.org/wiki/List_of_busiest_airports_by_passenger_traffic_(2000%E2%80%932009)#2006_statistics">2006</a>
and <a href="https://en.wikipedia.org/wiki/List_of_busiest_airports_by_passenger_traffic">2024</a>,
and 25% at ORD.
</p></li>
<li><p>There's been 60% inflations since 2006.
</p></li>
</ul>
<p>
Taken together, the annual cost savings of eliminating lightning
closures would be ~$36M at MCO and ~$58M at ORD.
</p><p>
The cost of installing a system is hard to say, but it's a matter of
building grounded towers about 100ft tall (above the tails of the
tallest planes) and running cables between them. Everything is much
more expensive due to being at an airport, but if you're doing
hundreds of gates perhaps figure $300k/gate. Then MCO with its 129
gates would be $40M and ORD with it... </p> | I recently flew through CLT and spent more time delayed than in the air. There were summer thunderstorms, and with the lightning it wasn't safe for workers to be out. This meant no loading, unloading, docking, refueling, anything. On the way in we sat on the tarmac for 3hr waiting for lightning to let up; on the way back we sat in the terminal (much better) while the incoming flight suffered through our prior fate.
Ground delays due to electrical storms are common, and each minute of closure is extremely expensive for the airlines in addition to being painful for the passengers. We don't stop inside work when there's lightning, why can't we get the same protection for ground workers? This is something we know how to do: give the electricity a better path to ground.
We could build grounded towers, about 100ft high, and run a grid of cables between them. Cover the area adjacent to the terminal, which is the only area you need people working outside. While this wouldn't be worth it all airports, the ROI at a high-closure airport like Orlando, Dallas, or Miami would be only 3-4 years. In 2008, Heitkemper et al. (Lightning-Warning Systems for Use by Airports) estimated that reducing ramp closure duration by 10min after an area strike would have saved $6.2M summer 2006 at ORD and $2.8M MCO. Let's try to get an annual estimate in 2025 dollars:
* Annual savings would be moderately higher (perhaps 20%), since this number is just Jun-Aug and there's some lightning in the off season.
* We're also talking about reducing the delay from 30min all the way down to 0min, which should give roughly 3x the savings.
* Planes are scheduled and filled more tightly than they were in 2006, increasing the cost of cascading delays. Guessing this is an increase of 30%.
* Traffic is up 70% at MCO between 2006 and 2024, and 25% at ORD.
* There's been 60% inflations since 2006.
Taken together, the annual cost savings of eliminating lightning closures would be ~$36M at MCO and | 575 | 1.0.1 | Revision | false | null | null | CrosspostOutput |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.