diff --git "a/ids-supp/i2c/test/queries.jsonl" "b/ids-supp/i2c/test/queries.jsonl" new file mode 100644--- /dev/null +++ "b/ids-supp/i2c/test/queries.jsonl" @@ -0,0 +1,1534 @@ +{"_id":"q-en-catalog-format-136e00e0ff8f670e342c550ef4e84f11c1e08dd1843e55818f87adbce9807acb","text":"We need the following Add Catalog Seq Number and explain how it is related to Parent Seq Num Make parent seq num independent of MoQT object number\nProposal is to add two fields: Catalog sequence number (CSN)- a required integer starting at 0 and monotonically increasing with each successive catalog update. Parent sequence number (PSN) - an optional reference pointing at a prior (not necessarily previous) catalog sequence number. If the PSN is absent, then the catalog is an independent catalog. If it is present, then the catalog represents a delta update. Having a sequence number independent of the transport sequencing allows catalogs to be distributed over moq-transport, or http or another transport and still be interpreted correctly."} +{"_id":"q-en-catalog-format-e8a6e264c4f1283de294e4dbfd0a31e55e35419f5be6c11de3491814638c8524","text":"There was a feedback from Victor on making track operation string enumeration to enhance readability. It feels like a fine suggestion"} +{"_id":"q-en-catalog-format-016820d74930b3ed460461b7665bf031caad569beda36d087a32f34f70af21d7","text":"Consensus for merging at IETF meeting.\nPer the discussion at URL, it would be useful that if the namespace were absent, that track names would inherit the namespace of the catalog track in which they were defined. This capability allows for relative naming, which aids in portability.\nI know there has been lots of debate on this but it seems reasonable."} +{"_id":"q-en-catalog-format-ecbf9bc5a5f5cfee80c8810b6607550118bc29d7f8e3a1dd09529217a53e1a74","text":"I suggest creating an IANA registry for catalog fields with values auto-populated from Section 3.2.\nWhat is the point of shifting the catalog fields definitions to IANA? IANA is useful for ensuring global uniqueness. If a single specification defines the catalog format, then that spec if a much more efficient and natural place to define the fields used by that format.\nIf we end up needing new fields (which we certainly will), we don't need a revised RFC. The fields table can be amended\/expanded in a number of ways."} +{"_id":"q-en-catalog-format-3afe700b78c85d2f29d89f7e2e1759a9afe7428cf570b50035083bde8ecab4c8","text":"by adding in a new field to indicate if delta updates (patches) will be used by the publisher. Also updated the examples to match."} +{"_id":"q-en-catalog-format-a59dcc880ac4937d422854e3b465091627640a8c5fe540949ab97bbc38d7a3ab","text":"Add a new root object to hold inheritable track fields. Update all examples.\nI am not a big fan of this approach. This led to all sort of problems with SDP Session Level attributes. My main concerns How do we ensure the field that goes in this are not of type that represent aggregated values for all the tracks. Things like BW is easy to fall in this trap What problem is it solving other than moving these to per track level. May be I am missing something here . open to learn more, Thanks\nThere are no aggregated fields currently defined. All track selection properties, such as 'Bitate\" apply only to that track. If multiple tracks have the same bitrate value, then that bitrate value can be placed inside a SelectionProperties object which is placed inside the Common Track Fields object. This is not moving anything to a per track level. 
Rather, it is providing a clear location to place track properties which are intended to be inherited by all tracks. IN the future, we may wish to add fields to the root which are not intended to be inherited, something for which the prior design left no mechanism for accomplishing.\nmakes sense and understand. We want to careful and not end up adding aggregates . How about we rename it \"Inherirted Fields\" instead and be explicit about it\nIn the current design, any track fields such as selectionParams, initTrack, renderGroup, packaging etc can be added at the root level. This design does not promote extensibility as it prohibits adding future fields to the root that should not be interpreted as inheritable track fields. Proposal is to add a new root object called \"commonTrackFields\" to explicitly hold fields intended to be inherited by all tracks. Example:"} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-b75d4a716eaaa432ae80ff29e27ccfc14a886d1bd8d51f1fadbf4f704e8d878b","text":"Thanks for the reviews, I have addressed all the comments.\nNAME I don't see any reason not to merge.\nokay to merge after all suggestions are accepted and remaining comments are acknowledged"} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-9956be73971776be37ff72e4dde3629986c3ea5e0df55ea4a82dbfd1c1fc0d42","text":"This uses the new proof type registry and redefines the signature type. Unlike the CT-style log algorithm the proof type is part of the trusted headers"} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-4ed099c50603ef49757b210abf9e55861e05a4141b3f66df820ccf170dc47979","text":"NAME NAME I have read through URL, and made some edits here as discussed to correct the leaf definition. This is deliberately quite light on details, but I am obviously very happy to expand wherever necessary.\nNAME I had not seen the updates made in URL, I have attempted to merge them in here.\nNAME if\/when you have a chance, I would be grateful for a review and some feedback on how to take this forward :)\nNAME I have tried to address your feedback, I hope I did not miss anything.\nNAME may I ask you to check if the changes correctly addressed your feedback, and what the next steps are?\nNAME thank you for the discussion yesterday, I have and added requested assignments for the labels. Please let me know if I need to do anything else before this can be merged.\n:+1: let's pull this in and prepare for WG adoption call"} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-3438128148301a1e08a4466a4fa81a15c4359ac58994a70c2f48c9478020e9a7","text":"As discussed with NAME and NAME because the proof is scoped under , when , and is ultimately wrapped in bstr to make parsing optional, there is no need to obtain assignments for keys it uses internally. This PR is assigning values, and removing the requested assignments accordingly. Edit: I've found that it's possible to be more precise about sizes in CDDL, so I've tightened the schema a bit.\nNAME if you have a minute to have a look ahead of today's meeting, that'd be wonderful :)\nNAME CCF tests now automatically check some of the proofs against the schema: URL Since this is approved, can it be merged?\nNAME yes :-)\nlgtm! Thanks a ton."} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-1bdd05b95d354aeeef1966c23e1f93930678b6da75dcd88cf2bd458fc9fb8c98","text":"In alignment with draft-bryce-cose-merkle-mountain-range-proofs (i.e. this I-D will request VDS reg entry \"2\" and mmrp will request vds reg entry \"3\").\n2 is a good number. 
Thanks for cleaning out the attic"} +{"_id":"q-en-draft-birkholz-cose-cometre-ccf-profile-374bd0b2772951f1c1e6ffe4f3173333f2cd5afade7e62a8a14956f5dca8ef8b","text":"URL As promised at the meeting last week I have reviewed both: URL URL This is my review of the Birkholtz draft profile. REVIEW NOTES Overall very good, just a few comments: BIRKHOLTZ [Section 2]: “This document defines inclusion proofs for CCF ledgers. Verifiers MUST reject all other proof types” It’s unclear to me how useful this normative language is. Does it pollute the implementer’s wider implementation of cose-merkle-tree-proofs? In which case I think it’s wrong: verifiers might well verify many different transparent statements from a plurality of logs with a plurality of proof types. Of course for any given transparent statement they need to implement the proof verification correctly but I don’t see the value of this rejection statement. Privacy Considerations could do with some content (or removal if we consider that leaves and log activity are not relevant leakage of privacy info) Security Considerations could do with some content: CCF ledgers have certain trade-offs and assumptions about the environment they run in: should these be noted? Possibly a small nit, but “Historical transaction ledgers produced by Trusted Execution Environments” feels wrong to me. The ledger isn’t produced ‘by’ the TEE. It’s produced by a lump of code running in a TEE protected environment of some flavour. Or at least you have to hope it is. See comment on security considerations. [Section 7] reference to cose-merkle-tree-proofs is broken. Please fix. [Appendix A. Attic]: Casual language “Not ready to throw these texts into the trash bin yet.” … but there’s nothing else in there! Please remove. Jon"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-b5a2060ace8aca7dd71343f4883841847618ce8130d2e7c4c93068a53ffa1686","text":"URL [ ] In section 3.2, there is language about minimizing dependencies by using the same hash for the timestamp and the signature. This suggestion does not seem to be unique to CTT, so I’d either repeat the language in 3.1 or move the language to a more general location that covers both use cases. [ ] In section 4, I suggest changing “the receiver MUST make sure that the message imprint in the embedded timestamp token matches either the payload or the signature fields, depending on the mode of use” to something like “the receiver MUST make sure that the message imprint in the embedded timestamp token matches a hash of the payload, signature, or signatures field, depending on the mode of use and type of COSE structure” The draft looks good to me. I have two minor suggestions. In section 3.2, there is language about minimizing dependencies by using the same hash for the timestamp and the signature. This suggestion does not seem to be unique to CTT, so I’d either repeat the language in 3.1 or move the language to a more general location that covers both use cases. 
In section 4, I suggest changing “the receiver MUST make sure that the message imprint in the embedded timestamp token matches either the payload or the signature fields, depending on the mode of use” to something like “the receiver MUST make sure that the message imprint in the embedded timestamp token matches a hash of the payload, signature, or signatures field, depending on the mode of use and type of COSE structure” From: Michael Jones Date: Tuesday, July 30, 2024 at 1:53 PM To: \"EMAIL\" Subject: [COSE] WGLC for draft-ietf-cose-tsa-tst-header-parameter Hi all, This message starts the Working Group Last Call (WGLC) for URL The WGLC will run for two weeks, ending on Tuesday, August 13, 2024. Please review and send any comments or feedback to the working group. Even if your feedback is “this is ready for publication”, please let us know. Thank you, -- Mike and Ivaylo, COSE Chairs _____ COSE mailing list -- EMAIL To unsubscribe send an email to EMAIL"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-0922da3abe60d0a83ffe38fd22fe28fe0643b97ae245d7063e906eea2d273669","text":"and have pointed out that the current registration requests are rather informal and could be improved using the templates in . All, I have performed a read through of the doc. A few minor language preferences for clarity have been suggested here: URL Otherwise, I was able to implement and test both modes of operation and get the desired results (with stand in labels). That indicates to me that the draft is well formed, concise, and references the other specifications appropriately that I needed to reference. I find this document useful, and we would implement and utilize in production, and support moving this forward. Two minor additional nits below: One question for the group \/ authors - is there a preferred set of values for the TBDs in the COSE labels section, or should that just be left as is for now? An additional style question, is it better to restate the COSE headers requested in the IANA requested format, or is it fine to reference as is done in the draft currently? e.g. current draft reads: \"IANA is requested to add the two COSE header parameters described in Section 3 to the \"COSE Header Parameters\" subregistry of the [URL] registry.\" This could obviously be detailed out. Mike Prorock On Tue, Aug 13, 2024 at 11:43 AM Michael Prorock wrote:\nAnswering Mike P.’s question about IANA registrations, it is strongly preferred to have the IANA Considerations section follow the registration template and contain the exact values to be placed in the registry (with the Label of course, being TBD). An example of such IANA Considerations text is URL Please update the draft to add this content. Thanks, -- Mike From: Michael Prorock Sent: Friday, August 23, 2024 5:40 AM To: Michael Prorock Cc: Michael Jones ; cose Subject: Re: [COSE] Re: WGLC for draft-ietf-cose-tsa-tst-header-parameter All, I have performed a read through of the doc. A few minor language preferences for clarity have been suggested here: URL Otherwise, I was able to implement and test both modes of operation and get the desired results (with stand in labels). That indicates to me that the draft is well formed, concise, and references the other specifications appropriately that I needed to reference. I find this document useful, and we would implement and utilize in production, and support moving this forward. 
Two minor additional nits below: One question for the group \/ authors - is there a preferred set of values for the TBDs in the COSE labels section, or should that just be left as is for now? An additional style question, is it better to restate the COSE headers requested in the IANA requested format, or is it fine to reference as is done in the draft currently? e.g. current draft reads: \"IANA is requested to add the two COSE header parameters described in Section 3 to the \"COSE Header Parameters\" subregistry of the [URL] registry.\" This could obviously be detailed out. Mike Prorock On Tue, Aug 13, 2024 at 11:43 AM Michael Prorock > wrote: Thanks for the additional time Mike and Ivo. I support moving this along, but I would like to perform a detailed read through. I have some time set aside for this and will have comments back next week. Mike Prorock founder - URL Ivo and I reviewed the results of the WGLC and there have been no responses whatsoever – even from the authors. We are therefore extending the WGLC for another two weeks until Tuesday, August 27th. Unless multiple people support progressing the draft, we will not request publication. Please send your reviews and feedback on the draft. Thank you, -- Mike and Ivo From: Michael Jones > Sent: Tuesday, July 30, 2024 10:53 AM To: EMAIL Subject: [COSE] WGLC for draft-ietf-cose-tsa-tst-header-parameter Hi all, This message starts the Working Group Last Call (WGLC) for URL The WGLC will run for two weeks, ending on Tuesday, August 13, 2024. Please review and send any comments or feedback to the working group. Even if your feedback is “this is ready for publication”, please let us know. Thank you, -- Mike and Ivaylo, COSE Chairs COSE mailing list -- EMAIL To unsubscribe send an email to EMAIL"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-8a6d8b813e7a5ee222696c46e547f4cc403af6dbfbafe1839d708d587f355c35","text":"In , Orie commented that: The answer is obviously \"no\", but the fact that the question was raised seems to indicate that the reference to 9338 is potentially confusing. Note that if we removed it, nothing would change, which is an incentive to drop. Authors might consider providing some guidance regarding validation of CTT where the COSE claims appear (iat, nbf) to have happened after the timestamp ( time travel ). IIRC cose counter signatures apply to the protected header, payload and signature, whereas CTT only applies to the signatures. This means that the TSA does not countersign any protected information in the header? Some use cases for the 2 modes might improve the document. Security considerations seem light. OS >"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-a9209152da6cafaaef79a860be8d009d8536d5cfc3ac13126eee1434241fba12","text":"In CTT that can't happen unless you can predict the COSE signature, i.e., the content of the datum. Sorry, I don't understand this. You are making a similar point in which I am also failing to grok. To me, a timestamp asserts the existence of a datum at least at the point in time when the timestamp for that datum is created. 
I cannot fathom an attack in which pushing the existence of the datum back in time is an attack on the intended use of the datum.\nIn , Orie commented that: \"Security considerations seem light.\"_ We could add prose that succinctly describes: the attacker model (i.e., a dodgy individual trying to manipulate the COSE signer's or the relying party's local clocks), and the trust model (i.e., local clocks are untrusted, and the TSA clock is trusted by both the COSE signer and its relying parties)\nThe only interesting attack is when the attacker moves the RP's clock in the future.\nltgm Authors might consider providing some guidance regarding validation of CTT where the COSE claims appear (iat, nbf) to have happened after the timestamp ( time travel ). IIRC cose counter signatures apply to the protected header, payload and signature, whereas CTT only applies to the signatures. This means that the TSA does not countersign any protected information in the header? Some use cases for the 2 modes might improve the document. Security considerations seem light. OS >"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-5c47c868226ce686f6ba6ee5bc9f81277a39bb1616ad9042edeba4b2ad429887","text":"In , Orie commented that: \"Some use cases for the 2 modes might improve the document.\"_ E.g., C2PA, SCITT, PDF timestamping.\nAdd a paragraph to the Introduction to explain the use cases.\ngiven enough guidance you can pass as an artist ;) Authors might consider providing some guidance regarding validation of CTT where the COSE claims appear (iat, nbf) to have happened after the timestamp ( time travel ). IIRC cose counter signatures apply to the protected header, payload and signature, whereas CTT only applies to the signatures. This means that the TSA does not countersign any protected information in the header? Some use cases for the 2 modes might improve the document. Security considerations seem light. OS >"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-897378b5d4d547f074316dc99851d9d08d0de9402ddb88cecc37fe6f4a98e02c","text":"https:\/\/ietf-URL\nNAME NAME please have a look\nTSA is good enough"} +{"_id":"q-en-draft-birkholz-cose-tsa-tst-header-parameter-9d461f85bb796b9827d4f3797fbbde8fab1e037a82cae5fe3f4848cfb44e7b96","text":"See URL Hi! Unfortunately, we couldn't submit the draft before the meeting and discuss it face-to-face. But here's a -01 fresh from the oven. We absorbed the feedback we got in SF and went ahead with one single header. However, the two modes of use ([1], [2]) define different inputs into the timestamping machinery and therefore create a different binding between COSE and TST. At present, the only way to distinguish between the two semantics is by their position in the COSE message (i.e., protected vs unprotected), which does not look like a good design :-) cheers, thanks [1] URL [2] URL ---------- Forwarded message --------- From: Date: Tue, 7 Nov 2023 at 14:36 Subject: New Version Notification for draft-ietf-cose-tsa-tst-header-parameter-URL To: Henk Birkholz , Maik Riechert , Maik Riechert , Thomas Fossati , A new version of Internet-Draft draft-ietf-cose-tsa-tst-header-parameter-URL has been successfully submitted by Thomas Fossati and posted to the IETF repository. 
Name: draft-ietf-cose-tsa-tst-header-parameter Revision: 01 Title: COSE Header parameter for RFC 3161 Time-Stamp Tokens Date: 2023-11-07 Group: cose Pages: 6 URL: URL Status: URL HTML: URL HTMLized: URL Diff: https:\/\/author-URL Abstract: RFC 3161 provides a method for timestamping a message digest to prove that the message was created before a given time. This document defines a CBOR Signing And Encrypted (COSE) header parameter that can be used to combine COSE message structures used for signing (i.e., COSESign and COSESign1) with existing RFC 3161-based timestamping infrastructure. The IETF Secretariat"} +{"_id":"q-en-draft-birkholz-scitt-architecture-8c7b2eef98484bf37adaa63c8924553ad0ef9b04309940bc9186ed6a9de08ff3","text":"NAME added security considerations and architecture picture\nThanks for the diagram, the sec-sec and all the other additional polish! I thought about tinkering with the diagram, but am in doubt that I'll improve it significantly. LGTM!"} +{"_id":"q-en-draft-birkholz-scitt-architecture-1ea45fa1c08df9f9d48dd7d537a71cf9cfe0b0ab200c4826ab1196dd7ddd4341","text":"as discussed on [SCITT] SCITT threat model. I still need to discuss with Yogesh how to integrate some of his comments.\nNAME : Thank you for this. I think, we should start Threat Model as a separate document to begin with. The reason been, I expect Supply Chain Security been a complex subject the number of threats and mitigation strategies will be a long document. I would prefer a pointer to the document from the master Architecture document, then! Once the document is in final shape post Community review, at that time we can make a final call, should we merge it to Main Arch. Doc or leave it separate.\nConsidering the scale of the addition and the fact that stand-alone terminology or threat models are a bit frowned upon, I think we should merge this PR for now. If there is a need for a separate threat model I-D we can arrive at that conclusion in the WG and with AD and chair guidance. Keeping the number of I-Ds the same during chartering is a pro and not a con, I think."} +{"_id":"q-en-draft-birkholz-scitt-architecture-5613580c99b46539bc677108633b740b01963500eac24b6457ab388f876fcea9","text":"Cleaning up the Federation section, still TBD.\nAlso:\nLinting errors need to be addressed. NAME if this blocks publishing \/ merging perhaps we can merge and then fix?\nYes, I can fix the lint after merging. If this would be a branch and not a fork, I could do it here more easily.\nlgtm, any objections? still in early stage"} +{"_id":"q-en-draft-birkholz-scitt-architecture-dc301cfcc99a2ad33fb5032222e0511f9515719505f8340540c7bc4e7ca177fd","text":"This adds temporary integer header parameter labels to have something to prototype against until the new fields are registered. I used 391 as starting offset as that's S+C+I+T+T."} +{"_id":"q-en-draft-birkholz-scitt-receipts-cb326126a94e13128d5e021287e60a323f523cf686585366afa241a41e30198b","text":"Given that all tree algorithms must include a protected header of the countersigner, it makes sense to move that to the top as a common structure. This serves two purposes: inspecting the protected header without verifying the receipt (useful if verification happened elsewhere and a tree-algorithm-agnostic service wants to index general information about the receipt), and allowing for more flexible key discovery schemes in the future that would rely on information in the protected header. 
Note that this change is independent of the discussions in the mailing list around COSE countersignatures."} +{"_id":"q-en-draft-birkholz-scitt-receipts-0ed7ae452bbf624f8e9f7c5e69800faa774343dce8fc98e9c81e25eb8518998a","text":"Changes: Remove initial version from CCF tree alg identifier. The idea is to not be linked directly to major CCF software releases but rather version independently here. For now, there is no need to include a version number in the tree alg identifier. Remove CCF hash alg parameter as that is always implied by the signature alg. CCF signs pre-hashed data (the Merkle tree root) and doesn't hash twice."} +{"_id":"q-en-draft-birkholz-scitt-receipts-c11f3ec40b84b233a603d05782199bac5d452aae9f265e6155f0a7dbd9feb354","text":"The first change is to address the fact that the current .md renders correctly as bullet list on Github but not in other contexts such as URL"} +{"_id":"q-en-draft-birkholz-scitt-receipts-5ba5e7f89dba5178b8ba17b3d423d21422fbefa38bbaaf029067c8974e5ca368","text":"Allowing both a receipt and an array of receipts as type is ambiguous here since the receipt is an array itself and not wrapped, so distinguishing the two cases wouldn't work. Having a single type also simplifies client code."} +{"_id":"q-en-draft-birkholz-scitt-scrapi-d9c99c3a1e9d9110530383a8c008ec9421bb01903a6673c750b4186aa39b8360","text":"A few thoughts after IETF 117. Goals: Address identity up front (by not addressing it directly). Frame feeds in the context of the API Relate media types to feeds This PR is mostly placeholders for future work, but hopefully highlights some areas for follow up based on our in person discussions.\nNAME I can't merge this, but we should unless there are changes requested.\nfixed collaborator list\nLGTM"} +{"_id":"q-en-draft-birkholz-scitt-scrapi-0bdbddfe205abf1b9d83d170b192babb62338da4cf58db909587f8acfd4020cb","text":"Strawman proposal for: URL PoC implementation here URL\nDefinite start in the right direction - I would say it is enough to get in and build on\ngeneral thumbs upLGTM After a chat with Richard Barnes, I think there are a few things happening with and key binding, that we should probably consider independently for scitt: Registration Policies that require fresh signatures from issuers. Registration Policies that require issuer's to commit to a recent tree head. Registration Policies that require additional context binding and a fresh signature from the issuer bound to special context. 1 and 2 are basically both about \"time\". If the Transparency Service does not trust the issuer to be honest about time, the transparency service can force the issuer to include a time commitment from the transparency service in their registration with binding to their confirmation public key. In the case the \"nonce\" is a timestamp, the TS rejects key binding tokens that are outside some window. In the case the \"nonce\" is a recent tree head, the TS rejects key binding tokens that are not to a tree head inside some window. In the case the \"nonce\" is a \"challenge token\", the nonce is some special context, the TS rejects key binding tokens that lack this special context. 
There is an opportunity to make the registration none interactive and one shot, assuming the following is acceptable: The key binding token nonce is a \"recent timestamp \/ tree head\" The transparency service rejects key binding tokens with nonces that are not \"recent\" There is an opportunity to make the registration interactive and context binding, assuming the following is acceptable: The transparency service includes specific context the issuer must commit to in their \"challenge token\". The issuer uses the challenge token as the nonce in their key binding token. The transparency service rejects key binding tokens with nonces that omit required context In this second case, imagine a \"dynamic registration policy\" Holder (Issuer) asks Verifier (Transparency Service) for a challenge token for a Customs Entry for 16056100. The transparency service knows that animal products require certificates to be imported, and issues a challenge token that looks like this: iss: \"https:\/\/transparency.example\" aud: \"https:\/\/transparency.example\" iat: ${now} exp: ${now + 7 day} currenttreehead: ... (not all TS are trees) currentpolicyhash: ... (not all policies are stored in trees) entrynumber: 42 htscodes: [ 16056100 ] The holder starts registering signed statements, each is bound to the same \"challenge token\". The first signed statement covers a phytosanitary certificate for the sea cucumber processing facility. The regulatory requirements change. The second signed statement covers the (now required) export certificate for sea cucumbers. The transparency service would normally have rejected the second signed statement, but because of the context in the challenge token, the transparency service knows that the second signed statement is now actually required. The holder did not need to request a \"new challenge token\" when the regulatory changes happened, and the holder still had to prove possession of the public key bound to both certificates ( and these public keys are different ). I chose to make this context scenario about sea cucumbers, but it could easily have been about network capable hardware devices that need to comply with regulations specific to Radio Antennas and GPUs... In those cases there would be firmware certifications required, and compliance requirements might change inside the transaction window. This also solves the problem of forcing the issuer to commit to the registration policies as they exist at the time the \"challenge token\" is issued. The TS can reject registrations that are committing to stale policies, or accept registrations that are commiting to stale policies, because the TS will know which policies the issuer is committing to by checking the challenge token. In the case that proof of possession is not required, and there is no need to commit to policies, the challenge token can be omitted, and the registration can proceed exactly as it is specified today. 
OS ORIE STEELE Chief Technology Officer www.transmute.industries"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-6e7cf6c84d670642be0b3b32817543ebace68209e823881294e086e9c092d257","text":"Introduction: Removed excess carriage returns Problem Statement: Added new text (threats + artifacts) and illustration (threats)\nPlease add \"and is still valid\" - see new proposed langauge enable the consumer to verify that software originated from a 'duly authorized signing party' on behalf of the Supplier, and is still valid\n+1 to this change\nClosing as been addressed now!\nI have no concerns with the revised language for 3.1. One minor nit on 3.2 to be consistent with the use case title REPLACE: A consumer of released software component wants: WITH A consumer of released software product wants:\nLGTM\nThis issue has been resolved in the above PR.\nLGTM!"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-f55cbeb7f804158b4c6eaf6313772f24220bb581985ea1edaf8f743cc6a7f68c","text":"Modified the 'Scalable Determination of Trustworthiness in Multi-Stakeholder Ecosystems' use case to improve lay person understanding by reducing the number of terms introduced.\nthank you! lgtm"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-9ca297d789ceeab6f4859c3e15592366278b1f819c687de5507f7572548fdf8e","text":"Added a placeholder section to aggregate problem statements across use cases. Refine and complete this section as use cases stabilize.\n:ship: it"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-871fea242b33c12162b20e3a9837539c98d3a004aa05cf3ac3a0469f4cdb42f6","text":"Signed-off-by: Henk Birkholz\nHenk, The Distributor isn’t always the signing authority. It’s true in some app stores, but beyond that the model breaks, down. Walmart doesn’t digitally sign software it sells\/distributes. Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788\nThen this is going on the agenda for Monday, too. Thanks Dick\nThanks, Henk. Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788\nFrom discussion with Henk I believe the use-case here is an app store-like promotion scenario. If that's true, I recommend we refer to this as the relationship between a supplier\/producer and the distributor. While signing is a property of the relationship between an app creator and an app store we can increase the clarity of the use-case by describing the roles in the relationship (producer\/distributor\/consumer). I've made some suggestions on the PR to replace \"Signing Authority\" with \"Distributor\"."} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-97294f951b449214c7b971d5e7f3d76897dcb14e20631ca041bd5679f709a98f","text":"Signed-off-by: Henk Birkholz\nLooks good.\nIf there are no objections until the end of Fri, I'll merge for the meeting on Mon.\nNo objections with merging these changes.\nMy feedback is non blocking, PR authors may dismiss my suggestions.\nLGTM!"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-78123f60ee35b20c6d4310311b32a9a38a4995fbdbf328b5389d6ad5671a6e04","text":"NAME the revisions seem to place greater emphasis on package managers, removing the original intent. 
The original intent was much broader and could be applied to a wide swath of software distribution, i.e. GitHub, app stores, and package manager repos.\nSure, we can broaden the scope. Not a problem!\nWe had a detail discussion about this between me and Henk. Basically, I was all for the support of going to depict a real world example for each of the situation. However, this makes the document excessively long and verbose. We should make a judgment as a team as to: Where should the Use Case document reside ? Some members feel within Arch. Document, some feel as a separate RFC. Given the number of scenarios, should we abstract it to generic use case or we depict a specific example, next to each of them ? Is too long a document (when a separate RFC) acceptable to the group ?\nNAME NAME NAME I have redone the PR post our meeting today. Please have a look..\nThis has been resolved now!\ngreat improvements"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-bb7f92e036da54fa6ea502a601956b1fd0cb1639d1699011ae1f8858598c0714","text":"Addressing issue and\nThis fixes issue and issue\nPlease add \"and is still valid\" - see new proposed langauge enable the consumer to verify that software originated from a 'duly authorized signing party' on behalf of the Supplier, and is still valid\n+1 to this change\nClosing as been addressed now!\nI have no concerns with the revised language for 3.1. One minor nit on 3.2 to be consistent with the use case title REPLACE: A consumer of released software component wants: WITH A consumer of released software product wants:\nLGTM\nThis issue has been resolved in the above PR.\nLGTM"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-dcb1ac075e98ea33efdb1303dcef1a0b184a7cb7b53d609e0df33336ec0145a0","text":"Updated firmware and Software integrator use case to the same format as the ones above.\nPlease add \"and is still valid\" - see new proposed langauge enable the consumer to verify that software originated from a 'duly authorized signing party' on behalf of the Supplier, and is still valid\n+1 to this change\nClosing as been addressed now!\nI have no concerns with the revised language for 3.1. One minor nit on 3.2 to be consistent with the use case title REPLACE: A consumer of released software component wants: WITH A consumer of released software product wants:\nLGTM\nThis issue has been resolved in the above PR."} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-b725b962acc86ec463d453297776ce89d6d8bb6b776738866754adda506ce1cc","text":"Thanks for merge. I realize these are minor things (especially in developing a draft), so know that I am reading and digesting\/thinking on the actual content. Thanks for your and the others' efforts.\nLGTM! Sincere thanks for spotting them!"} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-f72dc2939a1187c531e836d33044421fa63d1508a6aa33b3d4d2f18a7c883ca8","text":"A few edits for brevity and conciseness Differentiate the VDR as incremental information, where the SBOM is static at the time of initial distribution Signed-off-by: Steve Lasker\nA VDR is a living document that can be linked to an SBOM document directly, see SPDX V2.3 K.1.9; URL\nIs there a proposed change associated with this comment?\nFeels like an agreement to the \"Differentiate the VDR as incremental information, where the SBOM is static at the time of initial distribution\" above?\nThanks NAME I'd suggest the thing we want to capture with SCITT is the continual evolution of information. 
It's not ok to just update a document as there's no way to know what the state of the document was at a previous date. Example: Jan 1, 2022 - the software was deployed and the VDR showed (all is good). A SCITT claim was submitted, attesting to that current state Jan 10, 2022, the software was deployed, as there were no known issues. A SCITT claim On March 1, 2022 a new vulnerability was discovered that applied to the previous released and deployed software On March 2, 2022, a security auditor might ask: \"Why did you deploy the net-monitor:v1` software when there's clearly a security problem? The system has the VDR report from Jan 10, when the software was deployed, to indicate no known vulnerabilities were known at that point. SCITT and the underlying evidence store will need to provide the most recent version of an artifactType, with the ability to query the history as well. Does this give you what you're looking for?\nSteve, each new VDR will have a different hash value, so any\/all of the VDR artifacts can be checked for trustworthiness, but the \"most current VDR should always be registered in SCITT, IMO.\nNo additional changes are needed, just replace VEX with VDR and we're good to go."} +{"_id":"q-en-draft-birkholz-scitt-software-supply-chain-use-cases-14bff015f454f5cb445e980b563ab95b1c92221d9b98b6e910ebb0e133746fe9","text":"Please note the template is based on: https:\/\/rfc-URL We would like to stick to this template and progress this document!"} +{"_id":"q-en-draft-ietf-alto-new-transport-44bcc9c3ccfed9295818afcff6714d60be99604a7657f14b7d5efdbce2f51fc8","text":"NAME Med, I have migrated the draft to markdown format (see ). The changes are merged as well. I am taking a complete pass of the markdown before merging it to the main branch.\nThank you, NAME Looking forward merging the new branch into the main."} +{"_id":"q-en-draft-ietf-alto-new-transport-4e22351bd5e51bd9e855d3232581ce7872ae5d5c8735348eed3d778f3c0fa60f","text":"This will help me fix this part in the writeup: Describe the document shepherd's review of the IANA considerations section, especially with regard to its consistency with the body of the document. Confirm that all aspects of the document requiring IANA assignments are associated with the appropriate reservations in IANA registries. Confirm that any referenced IANA registries have been clearly identified. Confirm that each newly created IANA registry specifies its initial contents, allocations procedures, and a reasonable name (see [RFC 8126][11])."} +{"_id":"q-en-draft-ietf-alto-new-transport-a0598c0cfa833544a0434a457db3586356a966a6d39f83f740d8bdde5b1c4695","text":"Please add captions to all figures and call them in the text rather than \"below\" thing :-) Thanks."} +{"_id":"q-en-draft-ietf-avtcore-hevc-webrtc-909ce45ed518df55303a428e65f3afd8407fe2c9c21781f631da9bb61caf796e","text":"Fix for\nNAME PTAL\nNAME PTAL\n(moved to issue)\nPACI packet is defined in RFC7798 ((URL, section 4.4.4). According to 4.4.4.2 it defines only a single payload header extension for Temporal Scalability Control Information (TSCI) in RFC7798, and a new payload type 50 is introduced for the PACI packet. In WebRTC, TSCI is included in generic info and sent through RTP video header extension, there is no need to implement payload type 50 to transfer TSCI as RTP payload specifically. We may ignore payload type 50 in H265 depacketizer.\nVirtual interm slide is here: URL\nThrough the GFD (somewhat deprecated) and DD extensions?\nyes. 
through GFD or DD depending on if it's perfectly layered according to spec.\nURL NAME made a good point here about SFUs. If you negotiate DD on the sending leg, you can not strip it in the SFU or you loose the information about temporal layer. Do we need to capture this or is it more of a \"middleboxes gonna middlebox\" problem?\nNAME I modified the PR to say that if RTP header extensions supporting TSCI are negotiated, then TSCI SHOULD NOT be sent in the PACI and if TSCI is being received in the header extension, then PACI MUST be ignored. I think this covers the broken SFU case.\nClosing. At IETF 118, the proposed text was presented, and there were no objections."} +{"_id":"q-en-draft-ietf-avtcore-hevc-webrtc-45de657bcc113fafb15ba11dc102629c8dfc876bc796d68cb1bdee378862e5bf","text":"Add discussion of the tx-mode SDP parameter. Fix for Rebase of\nNAME PTAL\nI think we need a different approach. We want to say that support for \"SRST\" mode is mandatory If no tx-mode parameter is present, a value of \"SRST\" MUST be inferred. It is up to the offering implementation whether to include SRST or omit it. implementations that do not implement the other modes should not negotiate a codec with that mode\nNAME Updated the PR to include your suggestions.\nPACI packet is defined in RFC7798 ((URL, section 4.4.4). According to 4.4.4.2 it defines only a single payload header extension for Temporal Scalability Control Information (TSCI) in RFC7798, and a new payload type 50 is introduced for the PACI packet. In WebRTC, TSCI is included in generic info and sent through RTP video header extension, there is no need to implement payload type 50 to transfer TSCI as RTP payload specifically. We may ignore payload type 50 in H265 depacketizer.\nVirtual interm slide is here: URL\nThrough the GFD (somewhat deprecated) and DD extensions?\nyes. through GFD or DD depending on if it's perfectly layered according to spec.\nURL NAME made a good point here about SFUs. If you negotiate DD on the sending leg, you can not strip it in the SFU or you loose the information about temporal layer. Do we need to capture this or is it more of a \"middleboxes gonna middlebox\" problem?\nNAME I modified the PR to say that if RTP header extensions supporting TSCI are negotiated, then TSCI SHOULD NOT be sent in the PACI and if TSCI is being received in the header extension, then PACI MUST be ignored. I think this covers the broken SFU case.\nClosing. At IETF 118, the proposed text was presented, and there were no objections."} +{"_id":"q-en-draft-ietf-avtcore-rtp-j2k-scl-169acacf2b1a14de42f5b3207e892c648b451e171f75ba25e43276df552b2559","text":"In regards to not requesting registration in the RTP payload registry, please remove the following sentence in Section 11: The motivation and removal of the registry the above sentence request are in: URL (from Magnus Westerlund)"} +{"_id":"q-en-draft-ietf-avtcore-rtp-j2k-scl-af6ec81f03e43edf75294f9803ba97fbcf30ed735a25406d1dfe00e78c72495d","text":"specifies the The low-order bits should be specifies the low-order bits Subbands to Sub-bands signalling to signaling\nURL"} +{"_id":"q-en-draft-ietf-ccwg-bbr-e3d7ebbdc519cfb1317d9f5d2f569105b6cd27daa77c809cc861fa871bdf9dac","text":"Minor clarifications to the prose Removes explicit references to Youtube\nI am going to redo this on top of the markdown version. cc NAME NAME\nThanks, this 4-commit version with purely editorial changes looks great to merge now. 
Thanks!\nText LG, but we should decide if we want to impose the line limits I've seen in other Github IETF projects."} +{"_id":"q-en-draft-ietf-drip-arch-78eaa1905d592d4fb88518973efb379268f197de9bab427d4cc18d587af1e8f4","text":"let's publish an I-D with this PR and the one about Personal Data, then I am approving the I-D for publication"} +{"_id":"q-en-draft-ietf-nmop-digital-map-concept-fe20bb18c184f83ed6ef0f1374e85d1d3547e439a9cc6a994b24a09f76b71ce4","text":"Addressed discussion on the nmop group for THREAD1, THREAD2 and THREAD3 as well as issues 1, 2, 3 (partial, just examples), 5, 6, 7"} +{"_id":"q-en-draft-ietf-nmop-digital-map-concept-a9140eb72a866b7c293206b083319e888dd0b050c549842b60d887b612bcd4ef","text":"Addressed discussion on the nmop group for THREAD1, THREAD2 and THREAD3 as well as issues 1, 2, 3, 5, 6, 7 + some minor changes. 3 commits"} +{"_id":"q-en-draft-ietf-nmop-digital-map-concept-3a55ee08684c5cc4586e831d44e5836d88512fda1c47886f762688544214e1b3","text":"Added Service & Infrastructure Maps (SIMAP) terminology Updated inventory use case to mention passive inventory Added requirements REQ-PASSIVE-TOPO: to include passive topology based on input from Brad Peters"} +{"_id":"q-en-draft-ietf-nmop-network-incident-yang-8bc02cf89ff8dc806a95c6463b2374b2a8ce19a6d0fb5176d426ea9125765ea2","text":"We remove incident type yang model, so the number of yang modules we proposed is reduced from 2 into 1. So I think the text still needs some tweak, besides this, all good changes."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-3493fd3f186f83f6bd3c726375ab50ba5ea75c33ba94132dc5099d7f7f2e2b38","text":"The following changes are encapsulated in this PR: Update references from RFC7523 to RFC7521. Added a recommendation around order of validation for the two JWTs in the client_assertion parameter. Removed the note from the exp and iat claims for the client key Attestation JWT that imply this cannot be re-used across multiple requests. Removed the ability to use MAC based algorithms when producing the client key Attestation JWT. Updated the requirements around the presence of the JTI claim for the client key attestation PoP to be a MUST inline with DPoP PoPs for reliable basic replay attack detection mechanism. Added references to the JWT RFC. Added an implementation consideration that highlights a client instance can re-use a client key attestation JWT in multiple AS interactions\/requests. Added aud as a required claim in the client attestation pop JWT. Add IANA registration request for the new client assertion type and token endpoint authentication method."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-b4c630cf5c614002db4c7f591c345850b37b58c70d2e12a4736660c4244b50c3","text":"Please change file\/draft name and title, too.\nOpened as a separate issue.\nNAME please check my latest suggestion\nI would like to see the description under the sequence diagram improved, but if you want to address that in another PR, that is fine too."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-cd6bed0c6f46e4ccc77e68d475b00e7ab6aaf81605796aca83d3363f31156942","text":"As discussed over email, this PR drafts some text to capture the requirement around refresh token binding to the client instance not just the client. The PR also addresses some broken links. 
Personally I'm still not convinced that the best way to do this binding is through the client attestation key, instead I think we could add a claim that identifies the client instance in the client key attestation\nfew suggestions"} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-2eb997b1e2b270d45e2033d7453c3f6ee5b4addde5c5fa149188fc51e0efa237","text":"In the Terminology section, the term names and term definitions are run together with no punctuation between them. For example: I suggest emulating the formatting in URL"} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-4637b94576b451a540e647c145275d907354e8263908f9b1e2bc0c7d26a84712","text":"Addresses issues with language about the jti claim in the client attestation and pop jwt.\nThe spec states: \"The JWT MUST contain a \"jti\" (JWT ID) claim that provides a unique identifier for the token. The authorization server MAY ensure that JWTs are not replayed by maintaining the set of used \"jti\" values for the length of time for which the JWT would be considered valid based on the applicable \"exp\" instant.\" However the is not shown in the example in the same section.\nThe text says jti is required URL but the example doesn't have jti URL\nDuplicate of\nThe Client Attestation JWT seems like a long(er) lived credential that should be reusable so the client instance doesn't have to go get a new one each time (right?). But this text for the Client Attestation JWT has 'The authorization server MAY ensure that JWTs are not replayed by maintaining the set of used \"jti\" values for the length of time for which the JWT would be considered valid based on the applicable \"exp\" instant' that suggests the AS might limit them to single use. I think that sentence can just be removed. \\ right: URL\nAgreed, need to remove this sentence from step 7 of 4.1.1."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-528b206bd21c8fdeb6f4b5f1d7f973a28612251fb95fb0470289ed555386d30a","text":"The \"Refresh token binding\" section requires \"The client MUST also use the same Client Attestation.\" What is the intention of the requirement from the following options? The exactly same client attestation JWT. The refresh token becomes invalid after the client attestation JWT expires. Even if the attester issues a new client attestation JWT for the same client key, the new client attestation JWT would not work with the refresh token. If the requirement expects this behavior, AS implementations would simply use the hash of the client attestation JWT to judge whether the client attestation JWT presented in the refresh token flow is identical to the previous one. Any client attestation JWT for the same client key issued by the same attester. If this is the actual intention of the requirement, even after the original client attestation JWT expires, the refresh token can be used with a new client attestation JWT issued by the same attester. In this case, AS implementations would check the combination of the attester's identifier and the client key during the refresh token flow. Any client attestation JWT for the same client key issued by any attester the AS trusts. If this is the actual intention of the requirement, even after the original client attestation JWT expires and\/or even after the attester ceases its service, the refresh token can be used with a new client attestation issued by any attester the AS trusts. 
In this case, AS implementations would check the JWK thumbprint of the client key only during the refresh token flow and would not check whether the attester is identical to the one of the previous client attestation JWT. FYI: defines a mechanism to bind an X.509 certificate to an access token. The specification states that a JWT access token should include (which represents the SHA-256 thumprint of an X.509 certificate) in its payload if the access token is bound to a certain X.509 certificate. It results in that certificate-bound access tokens and refresh tokens expire when their X.509 certificates expire and that renewing X.509 certificates for the same subject does NOT revive the access tokens and refresh tokens. Therefore, client applications have to go through an authorization flow again to get a new access token once the X.509 certificate bound to the previous access token expires. There are pros and cons. The requirement will need to be rephrased depending on what the \"refresh token binding\" wants to achieve.\nYeah, the statement \"The client MUST also use the same Client Attestation that was used for authentication when the refresh token was issued.\" is problematic and needs to be removed or rephrased. The intent, as I understand it from previously discussing\/pushing the issue of RT binding for this draft, is that the RT needs to be bound to the client instance. And that means binding the RT to the client instance's key. I think Tobias didn't want to go so far as saying that b\/c he thought it was too inflexible. But the client instance's key is currently the only thing here that uniquely identifies the client instance.\nNAME is correct around the intent here, I agree the wording makes it unclear so it should be clarified that the binding is to the client instance key used when the RT was issued.\nWhy not just use the same client attestation JWT? That would not require any client instance identification and is also not a privacy issue. urn:ietf:params:oauth:client-assertion-type:jwt-key-attestation -> urn:ietf:params:oauth:client-assertion-type:jwt-client-attestation Originally posted by NAME in URL\nRelated to . The problem with this model is that the refresh token if bound to the client attestation JWT then limits the RT lifetime."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-a341acc5432c1841aceacb782b098e1ce475791aa5f4705f8920318250f25df3","text":"Minor language tweaks aligning more to RFC 7521 in how the client authentication mechanism is defined as to not limit its application to only being used at the token endpoint.\nThere is an idea to send the Client attestation already in Authorization Request or PAR instead of with the Token Request. I see the following advantages: the client attestation could act as some kind of mobile capability descriptor, thus allowing the Authorization Server(Issuer) to check these before the user performs user authentication, otherwise this could result in poor UX as the user authenticates and the AS realizes only afterwards in Token Request that the level of client attestation is not enough (e.g. wallet can only handle key type xyz or LoA low) I see the following disadvantage: some flows do not require Authorization Request the user usually does not give consent before sending Authorization Request\nThe client attestation can for sure be used to authenticate with PAR. 
The HAIP already requires that.\nThe draft just needs to be clarified that this is a general purpose client authentication method that can be used with any interaction with an AS that is requiring or making use of client authentication."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-381d146bd7189d00b7e24e1452dad6403797f5e5b2f01adfacd936cc868cee51","text":"As highlighted in the title this PR adds a privacy consideration around potential client instance tracking along with a recommendation for client deployments to mitigate such risk."} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-e55800e2233da6543339ebeb78cf7b87b2bfceb57edd0d99a9a177aaa83920f8","text":"URL\nIETF drafts are expected to not exceed 72 characters per line. The diagram in the Introduction currently takes 91"} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-548f98607d014774fbf6342ceae16e98fcc03c11fd5abbabbd854476b6c19c53","text":"Add security consideration for replay attack prevention and add nonce as optional value for PoP JWT URL\nCurrently the specification does not define a mechanism for an AS contributed nonce to feature in the client attestation PoP meaning replay attack detection by authorisation servers requires tracking the JTI within the time window set by the PoP's iat claim. We should elaborate on an AS contributed nonce for the specification.\nWhat about using the authorization code or pre auth code as suggested in our demo?\nI think this is an option, however the nonce for client authentication becomes dependent on the grant type which creates a bit of an awkward coupling when an application wants to say use this mode of client authentication for grant type other than pre auth code or authorisation code.\nHow many implemantations out there actually use different grant types than auth-code? Given that this serves our purposes and imo everybody is using auth-code, I think that we should RECOMMEND that scenario but I#m ok with showing up alternatives as you suggested. NAME What's your opinion on this?\nI think the flow can be used with and without nonce. If it is used without nonce, jti and short expiration must be used to limit the risk. If a nonce is used, I think using the codes is a pragmatic option. Alternatively, the AS could provide a dedicated nonce with the credential offer or authorization response. It could also provide the nonce with a token error response.\ndiscussion on 23.05 is leaning towards DpoP-style error providing nonce\ncan we please bring this issue forward, it is very important? I see some advantages in that approach as it would allow the AS to request nonce use for all endpoints (including PAR).\non that topic, I would suggest such an AS provided nonce should be good up until the AS says otherwise. That would allow a client to use the same nonce for PAR and token endpoint.\nUpdate provided in\nPR\nPR"} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-636f78c1b372e17e31b95af63838ef21c4daab42c7e0697590d0087895cda9f5","text":"This PR makes explicit an already implicit feature of the specification - that it is not possible to rotate the confirmation key. URL) -->\nNAME NAME Any reason this can't be merged?\nWaiting for NAME approval\nCurrently section 5.2 states: In the case that the original client attestation JWT has expired, what should happen? If it is acceptable for the client to re-use the same key in the \"cnf\" field across attestation JWTs, then the client can easily generate a new attestation JWT with the same \"cnf\" key. 
But if the client needs to use a fresh \"cnf\" key, then I can think of three options: Define a flow which endorses the new \"cnf\" key with the old \"cnf\" key. Define a field in the client attestation JWT that contains a unique identifier for the client instance, and use that identifier to bind the refresh token to the client instance. Explicitly don't permit this in the specification; add text to the specification that makes it clear that the \"cnf\" key must stay the same across refreshed client attestations. My preference would be for option 2.\nCurrently the client instance needs to go back to the client backend and request a new client attestation JWT bound to the same key used in the old JWT. In general, because we are effectively using the client instance key to identify the instance at the authorization server, this makes key rotation of the client instance key difficult with active refresh tokens. 1 feels too complex, 3 is an option and NAME and I have discussed 2 before. The flip side to consider with option 2 is the privacy implications of formally electing to track client instances; personally I'm not convinced it's a bad idea but we should carefully consider the impact.\nDo we? I had assumed the AS identifies the client by the value. I don't see a need to identify a client instance.\nthe fourth option is to have long-living client attestations, which in my opinion is no problem; the client should then use different attestations with different ASs\nThis just makes explicit the tracking of client instances that already existed through the requirement for the \"cnf\" key to stay the same over attestations. This is really a decision for the use-case in my opinion. In the context of software integrity being established through the client attestation, it is trivial to inject malware after the generation of the attestation, so in some cases it may be a requirement to have fresh attestations.\nIsn't the AS identifying the client also by the refresh token itself? I dislike option 2 due to the privacy implications. As any optional fields in the client attestation JWT are valid, profiles could still use this option, but it would not be a good requirement in my opinion. I see two options: A: the wallet initially gets one client attestation and uses the cnf key over a longer period to bind a sequence of refresh tokens to it. -> similar to option 4? B: the Wallet initially gets one client attestation and the AS binds the first refresh token to the cnf key. When it uses the refresh token, it also provides a new client attestation with a new cnf key and the AS binds a new refresh token to this new key. -> similar to option 1?\nWell the current language doesn't actually guarantee that the AS can track the client instance (which is a good thing IMO), as the instance can, in many common cases, rotate the keys it uses in between fetching tokens. But in the event a client instance is exchanging a refresh token for a fresh access token and potentially another refresh token, the client instance must authenticate using a client attestation JWT with the same key in the cnf claim that was in the original attestation used when the refresh token was obtained. This ensures one client instance can't use another client instance's refresh token. So in this case the AS can track that the client instance that obtained the refresh token is the client instance now requesting to exchange it for a new access token, which is desired from a security perspective. 
As I said above in response to Torsten's point, using the key instead of an explicit claim means a single client instance could actually use multiple keys over time meaning the AS can't track the client instance across all of those interactions. The concern if we make an explicit claim in the client attestation JWT for identifying the client instance, is that implementations won't rotate this identifier over time thus meaning the instance would be tracked. Ironically if your goal is to reduce the ability of the AS to track the client instance over time, this option will in fact enable that as the attestation itself becomes the tracking vector. Agreed, the additional requirement of requiring the same client authentication method with the same key just means that obtaining the refresh token alone is not enough for another instance to be able to exercise the refresh token, it doesn't materially change the ability for the AS to track the client instance because as you highlight the refresh token itself creates the same possibility for correlation.\nI think this is basically my option 3. Unless there is a mechanism to issue a refresh token that I am not aware of, it is not possible to rotate keys according to the current language. The spec states: This is what I mean by \"This just makes explicit the tracking of client instances that already existed through the requirement for the \"cnf\" key to stay the same over attestations.\"\nI don't think rotation of the key within the lifetime of a refresh token is something that needs to be accounted for. So my preference would be for option 3 (which is how it functionally is now but could be made more explicit in the text). Adding an explicit claim in the client attestation JWT which identifies the client instance, i.e. option 2, would be okay too. But I think it'd get complicated\/confusing to describe it in the spec.\nBy saying you can't rotate the key within the lifetime of a refresh token, I think this means that you cannot rotate the key without starting from scratch at the authorization endpoint. I just wanted to make sure that is what you meant?\nBasically yeah, I think so. Rotating the the key would mean you'd have to go back to the authorization endpoint. That seems acceptable\/reasonable. Maybe it's too narrow of a view but it seems like if there's a reason to rotate that key then reauthorizing is probably needed or at least acceptable anyway.\nI'm not sure that's true. IMO authorization is more of a user-identification concern, the rotation of the key is a client-security concern. Saying that, I think I agree that it's probably acceptable to say key-rotation isn't possible without returning to the authorization endpoint. Given this I would suggest we add a sub-section to the Implementation Considerations section, something along the lines of this:"} +{"_id":"q-en-draft-ietf-oauth-attestation-based-client-auth-1169d7decf0cf2fa48726bcf637dd36cd66a5bccfbe5fb58b1da6952f18751c2","text":"Based on the discussions had at IETF 119 in the OAuth WG, this PR attempts to make the following changes to the draft Move away from usage of the assertion framework defined in RFC 7521 and RFC 7523 for conveying the client attestation and client attestation pop. Instead use two newly defined HTTP headers OAuth-Client-Attestation and OAuth-Client-Attestation-PoP for conveyance in an HTTP request, akin to how DPoP works. Explicitly mention this mechanism could be used in a variety of places including with a resource server. 
Note, there are still many outstanding items I would like to add, however I felt this PR was already big enough as it is; these include: Error responses for the token endpoint; Optimisation for when this mechanism is used with DPoP to avoid having to send two PoP's; Formalising whether this mechanism still constitutes a form of client authentication\nFor some reason it seems to not create an html version? This is the rendered text: URL\nI think there were potential objections mentioned about moving to headers as to whether we'd run into size limits (I think 8 KB was mentioned as a common restriction) and people felt jwts with embedded x5c headers etc may get close to these limits?\nNAME indeed. We may have a client attestation with a huge x509 certificate chain or an openid federation trust chain with several metadata, trust marks and jwks. I don't see the benefit of moving away from what is actually more efficient (wa~wa-pop); moreover, technically we can have multiple client_assertions that embed their pop (using ~), while having multiple http headers that divide attestation from pop forces the implementations to loop all the assertions and all the pop to find the matching ones\nAdded client instance definition and updated to use US based spelling of authorization instead of authorisation.\nNAME NAME I can understand the desire to want to use X.509 certificates to validate the client assertion (e.g as an alternative key resolution mechanism to resolving a JWKS document from a domain), however I don't see how the certificate chain would be more than 1 certificate (maybe 2) in length (bearing in mind the root certificate(s) would be communicated to the issuer out of band and pre-trusted for the client). W.r.t other aspects of the client attestation or client attestation pop being too big for an HTTP header could you elaborate on what you see as the contents of your attestation NAME Because IIUC you perhaps want to use the client attestation to convey all client metadata? If that is the case IMO the client attestation isn't for this purpose, client metadata should continue to be communicated outside of these attestations with the pre-existing mechanisms we already have defined. Finally even if we did begin to encroach on header payload limits with a client attestation, we aren't talking about hard limits here, rather just the default maximums that common web server platforms have configured, which can always be adjusted.\nthere are some points for clarification. assertion weight: a client assertion may have an x5c or an openid federation trust chain within its JWT header. This is an issuer's choice, allowed by RFC 7515, that might not be supported by the httpd maximum header body size; that's a deployment\/configuration choice that might be different within different domains. chain length: it could be assumed that a chain may not exceed 2 or 3 certificate breaks. This depends on the number of intermediates, an aspect that falls outside the technical specification and is determined by the federation's topology and composition. client metadata: we have some required claims in the wallet attestations and the possibility to include any other custom claim or any other claims pertaining to the wallet capabilities or anything else related to the trust framework. My comment is not related to assertion payload, but to the previous point 1 and 2. However, the payload also impacts the assertion weight, as pointed out at point 1. 
The missing point concerns the processing of multiple DPoP tokens within the same request. Handling multiple DPoP tokens necessitates a loop to identify the one that matches a specific assertion. This introduces additional computational and development efforts not present in the current approach (WA~WA-POP), which directly associates an assertion with its proof of possession (PoP), facilitating a straightforward and efficient identification of matching pairs.\nOk great so do we agree that this isn't an issue because servers are freely able to re-configure the max header size based on their application? Because it isn't a hard limit.\nUnderstood but based on the above point is this actually an issue if the limit on header size isn't a hard limit?\nRight but to be clear, the current draft does not handle multiple wallet attestations either though so I think this is a new requirement that we should discuss separately, would you mind opening a separate issue?\nThe issue extends beyond the realm of multiple wallet attestations, as there could also be multiple DPoP tokens involved. An additional consideration, which we have not yet fully addressed but may emerge as a future requirement, is the scenario where a wallet instance operates under more than one trust framework, necessitating specialized assertions for each. While we have consciously chosen to limit the scope to a single wallet attestation to simplify implementation, it's important to acknowledge that this decision does not preclude the possibility of requiring multiple assertions in the future. Implementers may need to adapt or extend the specification to accommodate such needs. However, to avoid digressing further, let's focus on the matter of handling multiple DPoP headers for now. The approach currently documented, which aims to streamline the process by limiting it to a single attestation, is designed to offer the same benefits while minimizing computational overhead.\nWhile it is possible to reconfigure these options, it is important to keep default values of the wide-spread implementations in mind to allow an easy path for adoption. I took the time to look at some of the frameworks I would say are good indicators: nginx: --> 8KB for 1 header max envoy (used in istio): --> 60 KB for headers URL --> 16KB for headers traefik: --> re-uses go\/http: 1MB for headers --> After looking at this, I would say we can go with headers\nAn OpenID Federation trust chain with two intermediates, the trust anchor entity configuration with a long list of trust mark issuers and the leaf entity configuration with a very verbose configuration, such as an AS + Openid4VCI + OpenID4VP (considering an EAA provider), with policies and a consistent number of trust marks might be 20KB. A serialized X.509 certificate chain, with 4 intermediates, custom extensions and QWACs ... Considering each certificate up to 2.2KB, might it be 8.8KB? (please, consider that openid federation brings policy languages, trust marks and multiple protocol specific metadata) I'm worried about these aspects: loop required for parse\/matching the correct DPoP when multiple DPoP are present, while WIA~WIA-POP doesn't bring this problem. What's the benefit of having this breaking change that reduces the flexibility while increasing the complexity? 
size limitations: Do we have to standardize something upon the assumption that a federation topology should not have more than 2 or 4 or 8 intermediates, since this decision is out of scope for this specification?\nA bit off-topic for the current discussion: We need to fix the examples - they are way too long, which breaks the rendering\/conversion and will cause problems with the datatracker. I do believe that this is also the reason why the html file was not created.\nNote that rfc9449 does not allow for multiple DPoP headers. It could maybe have been stated more clearly but is kinda implied and was absolutely the intent of the RFC overall and specific text like URL\nNAME as I implied with my earlier comment, the current draft text doesn't support multiple attestations so I don't view this proposal to shift to headers as removing functionality, it appears this is a new use case and I think we should discuss it in a separate issue. No, I don't think we do, what was identified above is that there is no size limit on headers in HTTP, rather defaults that servers apply, which can be adjusted; in reality web servers also apply limits to the body of an HTTP request (although often much larger than what is allocated for headers).\nDiscussed with NAME I will add an implementation consideration around the default size limits for HTTP headers.\nNAME what about using compressed values? unfortunately this sounds like giving a solution that may bring other problems, at the same time we can consider zlib (RFC 1950) for doing this.\nSomething broke with linking the html rendered versions properly Rendered html: URL\nIt just (re)occurred to me that specs defining new HTTP headers (which the cool kids call fields nowadays) are supposed to request their registration in the . I don't want to hold up this PR for this but will create an issue from this comment so as not to lose track of it.\nI do believe there should be a discussion about terminology (attestation vs assertion) as it seems to create quite a bit of confusion, but I do think a different issue\/PR is better suited for that. Minor nit: Client Attestation seems to be used in lower and upper case somewhat inconsistently. Should every usage be upper case (e.g., also client attestation mechanism)?"}
{"_id":"q-en-draft-ietf-oauth-resource-metadata-70305e7e52fa3fb10c9bfdd3ec598ac635cc2abc4069e244922944b7b545e3e3","text":"\"The parameter is not meant to indicate that a client should request all scopes in the list. The client should still follow best practices and request tokens with as limited scope as possible for the given operation.\""}
{"_id":"q-en-draft-ietf-oauth-resource-metadata-dc35777006c70f26bf537fd85791c15b0830183b06c5096cf8e672fd32afe08f","text":"The security concern is when there are two separate RSs with different sets of scopes, it shouldn't be possible for RS1 to tell the client to ask for scopes for RS2, and then have RS1 be able to use the access token at RS2. Audience-restricted access tokens and resource indicators are a solution.\nApply my wording suggestion if you like it or don't if you don't. Then merge."}
{"_id":"q-en-draft-ietf-oauth-resource-metadata-3b5f75cf6484d50ea43093df20492848a43a24c0d617edbf1a39049cd5feb8f7","text":"From BC's comments: URL FAPI Message Signing talks about the RS signing resource responses. This specification could be how the RS publishes the public key, as mentioned in 5.6.6.2 URL\nI can see why this one was tricky. 
One more suggestion to approve, then merge."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-6d4e4c0ae91bf94dd2685cee40b1809d24f572720f1c575a76bee3350e2386b3","text":"As discussed during IETF 118.\nThe draft (and the examples in it) follow this language This means (taken from an example in the draft): This is inconsistent with RFC8414 (and its normative dependencies) which says This means (taken from an example in RFC8414): The RFC8414 way kind of hampers the ability to host these documents in multi-tenant system on certain deployments but i'm unsure if we can justify not doing it this way.\nThanks NAME this was the subject of my comments in . I do think that the way it's done in the draft currently makes sense for the context and is preferable. But it might raise some objections in IESG etc. review. I don't honestly know what the best course of action is. But wanted to bring it up earlier rather than later. Perhaps Roman's suggestion of asking the HTTP directorate for an early review in hope of avoiding surprise down the line?\nThe draft does say the following , which maybe provides sufficient justification. Maybe."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-9b53d1a4d1dd1f5fa060c0aaabe4e3bdbfe642f8128bba2850a915ed9b1efa0f","text":"Should the header return the full path to the Resource Metadata document, or return the resource identifier? RS Metadata document: the client fetches the exact URL returned Resource Identifier: the client builds the metadata URL by appending .well-known... and then fetches the URL What are the implications of this for resources that differ by path? (see mailing list thread about path-based resources that require different scopes)\nI'm increasingly of the view that the resource identifier should be returned, which is what the draft already does. This aligns with the value in URL , which we normatively use. I propose that we say that during IETF 118 in Prague. If we get consensus on this, it will close our last remaining issue.\nI'm wondering why the ended up the other route (advertising the URI of the resource metadata instead). Advertising the resource identifier in the would be more consistent with existing OAuth specifications as you pointed out. Advertising the resource identifier in the would make it possible for the client application to fetch an application-specific metadata documents () if it can handle it and the generic one () otherwise."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-4a7360601d7b40e65cffeaaa0b9334683ebf33ba6d2ec9e9434301e4fad4e202","text":"This fixes the definition, it incorrectly used \"fragment\" when it meant \"body\" as later shown in section-3.2"} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-d31a19579dcfa64ad71abc3ae4584060a0a1b7f913179f6b88a1e5ddaeedfd34","text":"cc NAME\nFrom Pieter's review: Since \"presentation methods\" has come to have a new meaning. 
The language above is consistent with the language used in 6750.\nThe text is clear and brings better alignment with the name of the parameter."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-fff15ab507c61256105939e08200a8d13d860962f5a2f92e94f8354d03394d24","text":"cc NAME\nFrom Pieter's review: \"between the host and path components of the protected resource's resource identifier, as shown in Section 3.1 below\"\nLooks good to me."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-34675a5f2f00d5837076b1c546c79f6eff9ed55dacf22539fc9453fda107ea35","text":"Cc NAME\nSection 3.3 should reference the attack described in section 7.3 URL And evaluate whether it makes sense to add a back reference from 7.3 to 3.3.\nLooks good to me."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-da08b3a870777082128d524e671ae1e3ca52e6e13e422b534a8ed5d3f61badff","text":"cc NAME\n[ ] Step 5, add reference to how to validate: \"...according to the validation steps described in Section 2.1 (validation of signed resource metadata if it applies) and Section 3.3\" [ ] Step 5.2: rephrase to \"If the client receives such a WWW-Authenticate response, it SHOULD retrieve the updated protected resource metadata and use the new metadata values obtained.\"\nLooks good to me."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-678bc63a636e2b7978184bc41454dd1f3ea061d8f240c57008e0e802cfa9eade","text":"cc NAME\nURL Move paragraph 3 to the top so that we first talk about what use cases are supported.\nLooks good to me."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-207f13ed2ce05e0be6a7edd02c74cf6c620ace67fc6cc30a9dc778eaef2ca1e4","text":"Cc NAME\nwell that's one way to solve it!\nURL Change \"applications\" to \"deployment\" for clarity\nLooking at making the change, there are ~12 places we'd be changing \"application\" to \"deployment\" and it's not clear that it would improve the readability or clarity of the text. Furthermore, I'll also point out that this is the language used in RFC 8414, and it seems to have worked for people there, so I'm reluctant to make this change, given I don't think it would be a significant enough improvement to warrant being different. cc NAME\nMy comment about this section was that it was generally quite hard to follow, not that \"application\" should be replaced by \"deployment\". This sentence took a while to parse: \"Different applications utilizing OAuth protected resources in application-specific ways may define and register different well-known URI path suffixes used to publish protected resource metadata as used by those applications. \" I think the intent here is to say that an application may be composed of different OAuth protected resources, and each of those may have their own protected resource metadata, and if this is the case, your describing a scheme on how the application may expose the resource metadata to clients for each of the underlying OAuth protected resources. If that is the intent, changing from \"application\" to \"deployment\" may not help (these are different concepts). Perhaps just an introductory sentence for that paragraph - something like: \"An application may access multiple OAuth protected resources in order to complete an operation. Each OAuth protected resource may have their own resource metadata. In this case, the application may define and register different well-known URI path suffixes used to publish protected resource metadata as used by those applications to enable a client to obtain the resource metadata for each of the underlying resources. 
As I write this, I think my confusion here was more about the lack of clarity on the relationship of the application (which may itself be an OAuth protected resource?) and the OAuth protected resources (which it relies on to perform an action).\nNAME talked about this in Rome. The term \"application\", as used, encompasses all the components used to accomplish the task for the use case. That can include OAuth clients, authorization servers, and protected resources, inclusive of the code running in each of them. Applications are built to solve particular problems and may utilize many components and services. I could add an introductory remark derived from the above, if that would work for you. (Although I would try to make it more concise.)\nLooks good to me."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-279920aace560ad5566c37756008ff2c01127d7869275fd897cae35424acf467","text":"Removed extraneous paragraph about downgrade attacks discussing an issue that's already addressed elsewhere in the specification."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-48e2e7b974b4f00d0dc31914a431954976876157be4fae3013afd840ba5df380","text":"for\nPer my reply to Bo Wu's OpsDir review: NAME can you add your motivating example to the introduction? Thanks!\nI didn't mean to merge that PR without your review, so let me know if it looks good before I close this issue!\nLooks good! The only change I made was to make the WebFinger reference informative. Please close.\nGood catch, not sure why that wasn't informative before.\nIt was previously incorrectly normative. I just happened to notice when reviewing your addition. Thanks again!"} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-cf8aca29ba87d1cc311a8f2775815e16e56f7a08e9c2a28e9d66f427e9be135e","text":"from review comments from John Scudder\nPlease also add John Scudder to the acknowledgements."} +{"_id":"q-en-draft-ietf-oauth-resource-metadata-5530d8d39247c061e2f69032117ea3f31246c78d8eaf39e3df336ad3cbfe9a84","text":"... per additional feedback by NAME Cc: NAME\nI'm checking with IANA.... will reply once they respond. Thanks for the work!"} +{"_id":"q-en-draft-ietf-oauth-status-list-ebc4f3be822174f2a0f2d22dc6da6ea6c9071bc368824799367f1def85e99691","text":"maybe adjust the language around DEFLATE and ZLIB per Originally posted by NAME in URL\nCopied over, to have everything in the issue here: This is the current text: This is the proposal: >The complete byte array is compressed using DEFLATE {{RFC1951}} inside the ZLIB {{RFC1950}} data format. Implementations are RECOMMENDED to use the highest compression level available. 
I agree that the current text is a bit weird and I wasn't really sure how to formulate it better when writing it.\nMore context copied over via Screenshot: !"} +{"_id":"q-en-draft-ietf-oauth-status-list-b41af278c19d93663dc4b72f3ee1bf41c2eeb1dd9f69e053966985b1f28bb3b2","text":"Fixes most of Relaxes requirements on referenced token in terminology issuer in status list token is only required when also present in referenced token (and matching checked) reorder references"} +{"_id":"q-en-draft-ietf-oauth-status-list-0e5a0cd9054d6886b6fb9b18bcfa50c3939fcd8bc2bb9eee05ae17a2b423888b","text":"Closes # URL\nbatch issued credentials might be revealed in an status list update, which might correlate the related referenced tokens\/credentials\nimprove privacy\/prevent tracking by regular re-issuance of referenced tokens"} +{"_id":"q-en-draft-ietf-oauth-status-list-ab4da410f769849c3fb7d0795cfef36ce9ef379b1b208547ba534f204dbc3782","text":"This PR adds: CBOR\/CWT encoding for status list and status list info Example implementation for CBOR\/CWT Not included yet: examples for referenced tokens (COSE, CWT) but that should be trivial to add. URL\nThe decoded cbor is: The list does not seem to be valid zlib.\nI've tried with cyberchef and it seems to work for me: -> zlib inflate =\nYou are right, I probably tried on the hexstring :D\nI assume there will be a lot of comments. I had some troubles with the claim language for non-CWTs btw. Please review and give feedback. NAME NAME NAME\nI was wondering whether it would be better to define a CDDL for Status, StatusList and StatusListInfo. Basically, having a high-level CDDl description of all objects and then having some specific sections for JWT, COSE, CWT that define how they are used in the corresponding objects.\nAlso keep in mind that claim for CWT will require IANA to assign us an integer number, so we can use it in the CWT.\nThere is a problem with the width of the CBOR diagnostic notation examples. The datatracker only allows a width of 72, but cborg defaults to 100 and apparently can't be configured in CLI mode.\nI can write a small JS script that uses the cborg library and configures the width to 72. Would this be even possible? I didn't check by myself.\nYes that would be possible and is the only option I have found so far. I was searching for a pure python solution (which would be cleaner imho) but that seems to be not possible right now.\nThe abstract, intro, diagrams wrt referenced tokens description should refer to COSE and JOSE instead of CWT\/JWT. Otherwise, non-CWT COSE objects such as ISO 18013-5 MSO cannot use this spec. Same applies to SD-JWT VC encoded secured using JWS.\nWe adjusted the terminology already, but the text in the introduction is not matching yet. In fact COSE is already mentioned at all places, so the question is whether we should align things and say Referenced Token = JOSE\/COSE-based\nUse reference to RFC9052 instead of RFC8152\nAs described in the title, detail the CWT representation of the status list which is complementary to the JWT representation.\nMaybe a naive question but what's the motivation for this draft having the two representations? 
When positioned as a companion of the SD-JWT VC draft and aiming for the OAuth WG, it seems like keeping the scope tighter and only having the JWT representation would be sufficient\/okay\/preferable.\nNAME credential formats like mDocs do not have a status\/revocation mechanism, hence a CWT based representation is beneficial to these applications as they don't typically have JWT support.\nI dunno, I'm not convinced that necessitates having multiple status list token representations. Especially in one draft. This draft, which again is being positioned as a companion to the SD-JWT VC. I suspect the work would be more palatable for adoption with a scope that's reflective of that. But maybe that's just me. Thanks for the context."}
{"_id":"q-en-draft-ietf-oauth-status-list-82bae8cda8552b6d0005d267b0ea94a4e85d68b83c11a45659015ec436544014","text":"adds ttl to the generated examples: Rendered version: URL\nShould be fine now, with some minor fixes to rendering and adding the proposed values. Can you please check again NAME\nIt appears the example uses a text string instead of a byte string for the idx value.\nGood catch, the referenced token example is wrong!"}
{"_id":"q-en-draft-ietf-oauth-status-list-57e7a2ed8a41d65b46e3a7caff46c0360b8b1da25fdd9a007a19dccf02f247dc","text":"Pretty straightforward fix for an oversight when relaxing the requirements for status list\nThe spec says: However, JWT defines as \"REQUIRED if present in the reference token which means, the REQUIRED word is a bit misleading without the if-statement that follows. Either use OPTIONAL for , or also say in the CWT section."}
{"_id":"q-en-draft-ietf-oauth-status-list-fbce05400af4aa6e3bc586a480ad11c00b33aeb6b79c460ce748cc9bfae35f58","text":"The first and second sentence of this description seem contradictory to me. The first sentence seems to state that (all?) Status List Tokens should have a unique identifier included as their claim. The second sentence directly contradicts this uniqueness requirement. Perhaps I'm misunderstanding what is meant by 'uniqueness' in the first sentence, but in that case the language is quite ambiguous. As the claim seems to have been added solely for including the from the Referenced Token in the Status List Token (as described in ), perhaps the sentences should simply be reworded to not mention uniqueness. So something like: : REQUIRED. The (subject) claim MUST be equal to the claim contained in the claim of the Referenced Token.\nThe is basically the URI by which the Status List Token is retrieved. As Status List Tokens may be downloaded for offline use, further re-transmission or archiving, it's important to also store the URI in the Token, such that it can be compared by others to the of the claim of the Referenced Token. I see that the text may be confusing, it should be clearer as \"URI\" may be better than \"unique string identifier\" and URI is unique anyway. How about: : REQUIRED. The (subject) claim MUST specify the URI of the Status List Token. The value MUST be equal to that of the claim contained in the claim of the Referenced Token.\nYes thank you, this wording clears it up. If you wanted to be very precise instead of you could write . But perhaps my suggestion is too verbose, what you've written gets the idea across."}
{"_id":"q-en-draft-ietf-oauth-status-list-41cbe64bb45a4481c86eca88b5ac44abdc6afdcffbf4b304e702bc98d57577b4","text":"\"endpoint\" is still mentioned in there, unsure if we completely want to get rid of it\nSection '8.1. 
Status List Request' currently states: This seems redundant when the endpoint is used to retrieve a Status List Token, as the Token's integrity is already guaranteed by requiring it to be signed using an asymmetric cryptographic algorithm. If requiring TLS was intended for providing confidentiality of the Token, this seems unnecessary. The endpoint is publicly accessible - there does not seem to be any client authentication mechanisms described for it - thus the Tokens could be retrieved by anybody.\nHi NAME , thanks for the feedback! I think you are correct that we could relax using TLS when a Status List Token is queried. However, if an unsigned Status List is returned, we need TLS as an integrity mechanism.\nYes I vehemently agree :) Btw, perhaps the issue is a bit broader than just permitting plain HTTP when obtaining Status List Tokens. I don't think it's currently explicitly stated in the specification whether a Status List Provider should\/must simultaneously provide both plain Status Lists and Status List Tokens. I.e. the standard states that an RP should use the Accept-Header to indicate the requested response type, but it does not specify what the Status List Provider should do if it is unable to provide the requested response type. I think this is somewhat relevant under this issue as: if you were to simply add a sentence to the spec permitting plain HTTP for Status List Tokens then an Issuer would have to decide whether to put an HTTP or HTTPS URI in the claim of the Referenced Token but this somewhat contradicts the Accept-Header mechanism which would seem to indicate (to the reader of the spec) that it's up to the RP to choose the response type. Perhaps the Issuer should explicitly specify supported response type(s) in the claim of the Referenced Token? I.e. there could be an additional attribute in the object under the claim specifying the supported formats. Such an attribute could prove beneficial in other regards as well. For example, I don't immediately see a use case where an Issuer\/Status List Provider would be providing cbor\/cwt Status Lists for JOSE Referenced Tokens.\nSection mentions Status List Endpoint. The term is not mentioned elsewhere in the specification nor is it defined.\nThanks for the feedback. Discussion Editors Call: connected to the discussion whether Status List Provider is an entity that shall be described in the spec; intend to clarify the term for the next iteration"}
{"_id":"q-en-draft-ietf-oauth-status-list-8527dcf0ac89def5ae8587e76d858a193e732285671cbc0046ce2ade7fa12b4c","text":"Changes the CWT example to a Hex and annotated Hex version instead of diagnostic. Rendered version: URL Did you have time to check this example with your team NAME\nWe have a CWT in 6.3 in diagnostic notation and decided to add a raw (hex) example to the annex"}
{"_id":"q-en-draft-ietf-oauth-status-list-da404cea5a6b7837796f3f531958d179532a064a0ee5fc77fdc0c90a7647ddd4","text":"Rendered Version: URL\nCOSE typ is now registered in IANA. 
Because of that, the TBD could be removed."} +{"_id":"q-en-draft-ietf-oauth-status-list-973f9bcbc5a299185ca4e254151407f37d19b63f58c4792fa5829f4e51c79c29","text":"Adds a claim key for used in the status list token in CWT format and fixes the reference."} +{"_id":"q-en-draft-ietf-oauth-status-list-36c3373c96cc74a563c8165499764863fc28838c6ac3dcf25518c7486918d1b4","text":"Status List claim was referencing Chapter 4.1 instead of Chapter 4, which made it ambiguous whether to base64url-encode or not"} +{"_id":"q-en-draft-ietf-oauth-status-list-e08515b23fa35367e8ae61f15761f4f5e748ae05c4ed50c24eef85a80b07ea5e","text":"Redered version: URL\nthe text is existing in JWT encoding, does it make sense to emphasize this change from standard base64url encoding again in our spec?\nI like the solution in sd-jwt - they introduced it as terminology: Should we move that to terminology and reference via base64url in the text?"} +{"_id":"q-en-draft-ietf-oauth-status-list-d4e612afb895c3589b85a21823fb40889bb5471ed4553803f14bee73258ec4ee","text":"Rendered version: URL\nWe should probably add a small section mentioning that should be set to when explaining the endpoint."} +{"_id":"q-en-draft-ietf-oauth-status-list-04d6eb1153c4b7348331c3d0fab85a881694edc538290ed83e4824072c2f6bd4","text":"Micha proposed to change REVOKED to INVALID, as it gives less intention whether this is a permanent state and the issuers could reflect a suspension also with a 1bit size status list as well\nI'm in agreement with Kristina, the more I look at this the more I think we should have one possible enumeration of statuses rather than allowing every issuer to define their own, the question then becomes how big that set needs to be as it controls how many bits are allocated to each credential in the status list.\nI'm fine with defining the first bit as valid\/invalid. I think providing the option for additional bits offers benefits with acceptable tradeoffs\nI've made some changes, please recheck NAME NAME\numm I thought we agreed to remove the concept of the \"typ\" attribute.. so no and no .\nYeah that was captured here NAME URL\nBut I assume the Status List Token will have a JOSE header to properly distinguish it from other JWTs, right?\nI approve, but moved to separate issue\nNAME typo on my end. yes, JWT should be typed ( claim present), the current spec also introduces a body claim as far as I can see, which is what I am not in agreement with.\nNAME I am confused. so the idea is to merge this PR and then remove the \"type\" JWT claim in PR ?\ncomment edited, I got confused here as well. There is an issue open already as a task and I would like to not blow up the PRs too much\nNAME ready for merge\nI am a little confused with calling \"VALID\", \"INVALID\" Status List , but agreed this should be mergedOne minor typo"} +{"_id":"q-en-draft-ietf-oauth-status-list-9e09ac4c436db402a4f3d1f81224f7e224b1851c41c66cb638ee628ad198ae69","text":"fixes the list rendering (missing newline) clarifies IANA text to just reference other tokens Rendered version: URL\nI did further fixes on media types"} +{"_id":"q-en-draft-ietf-oauth-status-list-0b99d269ae00c3228986080fc50fca4736631b8399a70d440a762e609cecd696","text":"Rendered version: URL We need to fix width - otherwise the datatracker will refuse the document. I will make suggestions\nNAME please approve\nEditors Call: rename 6.2 -> Referenced Token in JOSE add SD-JWT example is it possible to merge 6.3 and 6.4 and make some specifics for CWT? 
include both cwt and mdoc example\nLooks good imho"} +{"_id":"q-en-draft-ietf-oauth-status-list-7c3dc69f3831f62f4f0451615d6d5c141e7b6e47833330b6b49c967fea99b358","text":"Questions for the encoding: base64 vs url-safe base64 bitwise encoding right now is least significant bit to most significant bit The part describing the bit-encoding still needs be rewritten In general: I searched a bit for better ways to encode\/compress but in terms of simplicity and support for different programming languages, it seems some form of byte array and gZIP seem to be a pretty decent pick.\nI've made a first implementation in Kotlin today. To me, it feels very natural to count Byte indexes up from 0 to size(whether this is left to right or right to left feels a little philosophical) and Bit indexes from right to left.\nImho we need another iteration on the description of the bit encoding, most likely with some form of ASCII art. I would prefer to do that in another PR- what do you think?"} +{"_id":"q-en-draft-ietf-oauth-status-list-147536c010f4d18e7f3a95fc74d4e30d87eadf32270fdb97ed53ef4c086a188c","text":"As the similar requirement was relaxed on SD-JWT, I think we should consider the same."} +{"_id":"q-en-draft-ietf-oauth-status-list-0a25eb6aff20c5e74f74b89a078302bbaf8879bc50951ca2d82b7b3193a628ee","text":"The first few entries are valid, revoked, suspended. There may be future well defined values that issuers and verifiers want global interoperability around. There may also be a need for private use of this space. This document should describe the \"reserved for private use range\", and leave room for enough globally interoperable and registered status values\nWe had similar ideas and totally agree\nThe section 7 explains how to setup the registry (URL) but we are missing a corresponding section in the IANA appendix"} +{"_id":"q-en-draft-ietf-oauth-status-list-e9cb0d29f0c36aa35ca32a640a2c7295e1a30d04207bc210dfe7a4525a3075a9","text":"The spec creates a registry and claim for status mechanisms and 1 specific mechanism, the status list. We should adjust the introduction to reflect those changes."} +{"_id":"q-en-draft-ietf-oauth-status-list-56ae68816395e7f902b1aeda6f3f964d5756bb48b9ad04df48c1df99a6a339f2","text":"NAME Whats' failing on the build here?\nbreaks the build All RFC references are resolved interally and linked\nThere are possible applications of this draft that go beyond just verifiable credentials, hence we do not want to use that term in this draft IMO. \"Token\" is already a term broadly used in this space e.g CWT and JWT both use it, \"subject\" was used in this context because it is also a term used heavily w.r.t tokens, feel free to suggest alternative terminology but at this point I think subject token is the best I've heard suggested.\ncan we simply use JWT\/CWT instead of creating any kind of abstraction term like ? One person i asked for feedback associated with a token used to communicate a subject of an email.\nor, bottomline, please stick to . is very confusing.\nI'm with Tobias on this and prefer introducing a new term here. Otherwise things get really weird because the status list is also a JWT\/CWT and a token. We need something that clearly separated the status list token and the other token. I think subject token is the best we have. Also subject is leaving towards what we have as credentialSubject in the VC world, so the term subject is kind of established in this context\nI am really strongly against \"Subject Token\". 
Not only because it is not intuitive (I have been asking various people in this space and no one would associate it with what is trying to be conveyed), but we should not be introducing a new term in this already confusing ecosystem. Moreover, this status list mechanism can be used with a JWT\/CWT that does not necessarily contain a subject (like SD-JWT-VC), so using a term subject because it is widely used is not a sufficient reason. Simply referring to it as a token, because this draft should not be opinionated about what is in the JWT\/CWT whose status is being expressed, should be fine, if a \"status list JWT\/CWT\" is always referred to as such. there are multiple occurrences where \"status list JWT\" is referred to simply as a JWT which should be fixed IMO.\nWe need clear terminology for: the status list token (I think this one fits pretty well) and the credential \/ token that is bound to the status list. I do believe only calling it token will cause confusion with the status list token. We can also call it something else like credential if we do a proper introduction of the term in the beginning?\nWhy call \"status list token\" and not \"status list JWT\/CWT\"?\nagreed on call: rename Status List to Status List Token; introduce Status List for the actual bitarray; rename Subject Token to Referenced Token; remove Status List Provider for now and introduce it later if necessary\nbtw sd-jwt vc spec mentions status provider :)\nNAME NAME can you approve this PR?\nHi, thinking more about this, status list Token is being referenced in the token-whose-name-we-are-trying-to-find, so it's the status list token that is actually a \"referenced token\". Can we call token-whose-name-we-are-trying-to-find \"a Token with reference\"?\nYes, I agree and had the same thoughts when I changed the PR. I assume Referencing Token could be better. But let's accept this PR and move discussion to an issue instead...\nMultiple approvals and discussions on editors calls, merging\nopened issue to continue the terminology discussion\nok. Then please migrate the TODOs afterwards."}
{"_id":"q-en-draft-ietf-oauth-status-list-1d2407eb425ffd8882d575647e93aac1de4c52f0364973e402e4e1efaab72f27","text":"yes, looks great.\nIf an OP wanted to publish the Token Status List JWT on an endpoint, perhaps something like with ... it would be nice to specify that in the spec.\nI'm not sure what you are suggesting. Are you suggesting that AS or RS has a link to a status list token in its metadata or are you suggesting media types? The latter is already defined in the draft.\nIs this covered by ?\nEditors Call: add that may link then to the Aggregation\nNAME does the PR reflect your idea?\nLGTM apart from the minor text change"}
{"_id":"q-en-draft-ietf-oauth-status-list-e8583a540c94d3c8fed25b155ccb54762aeba6be49d2858feb45bd1229b777e3","text":"Rendered version: URL\nWe do not define how to respond to a status list request but have status codes on examples. We might also need to add language for redirects?\nOk to merge with the minor suggestion."}
{"_id":"q-en-draft-ietf-oauth-status-list-29bb2a697ab4ee4ab96764a5a5da9c52f9d8e35e1ecd7899f7288e1c6549f952","text":"We would register: CWT Status Mechanisms registry in CWT; JWT Status Mechanisms registry in JWT; Status Types registry in OAuth parameters. Any ideas how we could simplify this?\nlooks good to me in general. I would extend this to more than two weeks, if I look back how the current year went by. 
I think this is a good start for now, so we should merge this first and then evaluate afterwards if we can merge both status registries.\nJWT\/CWT registries use three week periods, the OAuth Parameter one uses two weeks. Should we just align with these registries for each one?\nSection 4 of RFC 8126 -> supply registration procedures for the registries Define where these registries should be placed\nEditors Call: we propose to put into OAuth Parameters: URL"} +{"_id":"q-en-draft-ietf-oauth-status-list-21025816a587118ddae850c906e4c71b4141a7ccf1f6fb281a9b2a323c1f47c0","text":"after writing this I figured out changing the byte ordering, i.e. starting with byte 0 on the left might be a little more intuitive?\nYeah with fresh eyes I definately think that would be better.\nreversed byte order and added the index to the ascii art"} +{"_id":"q-en-draft-ietf-oauth-status-list-185b829953080430ea64fbfce086c30604bd855c5f90dc6efd7babf8eb81fb04","text":"fixes the examples to use constant timestamps in gzip encoding minor changes\/fixes to the ascii art \/ code blocks"} +{"_id":"q-en-draft-ietf-oauth-status-list-4135029274f9e1e0913f4bc8731dda8af943a924c49f931d076130e76443d6e4","text":"reorder section on Referenced Token and Status List Token URL\nI did a first review of this draft. As a newbie to the topic, I found the following issues: I miss a description of the fundamental concept, i.e. the status list itself is a JWT (issued by the same issuer issuing the JWTs), the status list JWT is obtained from a URL hosted by that issuer or another party, the link between the JWT and its status list is provided through data in the JWT. I suggest to add this in the Introduction (or an Overview) section. When reading Section 3.1. for the first time I was confused whether it describes the status list JWT or the JWT referring to the status list. I think it is the latter and it should be clear to the reader on first sight. I think the order of definition should be changed. Right now, I'm supposed to understand how a status list JWT is used in a JWT without even knowing what that status list JWT is. I suggest to change order: 1) Overview of the concepts, 2) specify status list JWT (structure and where\/how to obtain), 3) embedding in a JWT (syntax, processing rules, ...).\nAgree on 1+2. As mentioned in we need a better word describing the \"Token\" that is being described by the status List. I will make a PR using \"Subject Token\" as this comprises both JSON and CBOR formats. 
Agree on 3 as well.\nFor reordering it would make sense to get forward with the existing PRs first and do it afterwards..\nAddressing some of these issues in\nWhat's left to do: reorder Status List Token first and Referenced Token afterwards"} +{"_id":"q-en-draft-ietf-oauth-status-list-1e8a87fd5697d1db9b0a86d5d512543427a75d18e78ffc52bc0e2680637594ff","text":"Defines the HTTP GET API URL\nWe need to document end to end the validation procedure for a JWT and CWT referencing a status list including resolution of the status list, validation of the status list and how to get the status information for the JWT referencing the status list."} +{"_id":"q-en-draft-ietf-oauth-status-list-52d37139cb9c2a41e05b69da2f47e029d72ec026d363b0c20f41e05ff72bfeb1","text":"URL\nAs described in the title the size of the list (number of token statuses represented by the list) will have a significant impact on privacy related applications.\nI see this is related to\nShould we define a minimum size or give this as a recommendation in privacy recommendation?\nuse SHOULD for minimum size 16K"} +{"_id":"q-en-draft-ietf-oauth-status-list-b79c290c94105e988fc099a3f513087518208215b4cc535614866f6eb7795366","text":"Use term Relying Party and Issuer URL\nSometimes we use Verifier, sometimes we use Relying Party. After deciding this, we should add them to the Terminology\nAs this is moving to OAuth Security working Group, we should adopt Relying Party as terminology"} +{"_id":"q-en-draft-ietf-oauth-status-list-798b80de524c5c5e47cc21793fa98f64bd83caa4121e07c6ac8cd34a46931ce7","text":"IANA Consideration to register JWT Claim names and media types URL\nPropose and as media types\nsame for jwt types?\nNote that JWT values are actually media types URL which is a little odd sometimes but that's how it is - so you don't need anything different in terms of registration for a jwt type b\/c it is a media type. And the JWT BCP does recommend explicit typing using URL for better or worse, which would suggest that be registered. But it and would also probably have utility in places like HTTP and headers so would make sense to register both. There are some media-type-registration requests in SD-JWT that could be borrowed from URL or you can look at entries in the registry URL"} +{"_id":"q-en-draft-ietf-oauth-status-list-59eb56d448e960a1c1440c69eb75268901b98d9e6ab0eb0c1d189fcd180f64b6","text":"Change typo in Status List Token sub claim description URL\nThe in the \"Status List JWT Format and Processing Requirements\" section has 'The JWT MUST contain a \"sub\" (subject) claim that contains an unique string identifier for that Referenced Token.' but I believe it should be \"... unique string identifier for that Status List Token.\"\nthat's correct, thanks for pointing this out Brian"} +{"_id":"q-en-draft-ietf-oauth-status-list-822f3dd263095bfe12e9754f984836b65bdd14f7366dace1b784597d46fd8863","text":"gathering some ideas on privacy and security considerations\nLooking good NAME you should be able to run 'make fix-lint' locally to fix the whitespace issues\nNeed to remove the trailing whitespaces for the markdown tooling to be happy"} +{"_id":"q-en-draft-ietf-oauth-status-list-dc05576123f64d8ce2263e6c753ef9205fa3e020353be13daa03c8c28182bed0","text":"and . Switches to zlib instead of gzip encoding and adds the recommendation to use the best available compression.\nI'm working on a TypeScript implementation of this spec, and have the encoding \/ decoding of the status list implemented. 
It seems to work based on the example in the spec and the example python scripts, with one slight difference. When I encode either the 1 or 2 bit example it will output one character difference , while mine outputs an . The decoding seems to work fine, and maybe this is a gzip thing, but I'm wondering if the examples use certain gzip features to get this output? I've adapted my implementation to use compression level 9 (default for python lib) and set the mtime to the same value The input byte array is the same as for the python implementation and the examples in the spec, so it's really something that is happening during the gzip encoding (the output byte array here is different) any idea what is going on here? The implementation and tests can be found here: URL\nSeems to be a difference in the header - from the position I would assume the timestamp (even though you seem be setting it to the fixed value used in the examples in the draft)? There is also a header entry for the operating system - that would also be a very likely candidate for the difference. I will take a look at it when I am back from IIW. We were discussing switching from gzip to zlib (or deflate directly) for that reason - we don't really need these headers and most languages seem to support both gzip and zlib (which was our concern when we initially thought about this). Related Issue:\nThanks for testing! There is a header as well in zlib, but it's considerably smaller. The zlib header does not include a timestamp which makes it better for reproducible results and test vectors. I think we should also give a recommendation for the used compression level, I suspect either default or highest. It would be interesting if the underscore vanishes in zlib?\nI changed the python implementation to use zlib with level 9 and did the same for your typescript implementation --> both resulted in the same: We should really change that imho (and also add the text to recommend compression level).\nI'm a proponent of using zlib instead of gzip. As suggested in the other issue, we don't really need the other metadata. And the result is smaller as well :)"} +{"_id":"q-en-draft-ietf-oauth-status-list-3c8296890846efb195d1391c698d6a83ffe692140361dbf5b8c688bc4f76a8de","text":"This feedback comes from applying the Status List to an implementation of a general OAuth 2.x server issuing JWT Access Tokens. Such implementation will typically not keep record of any issued JWT Access Tokens (their claims, expiry, etc) until they need to be revoked due any reason, e.g. getting leaked or just being explicitly revoked by the client. In such cases the issuer considers the token as VALID when issuing the Status List but this state will not necessarily reflect the token's validity as per the JWT validation rules for claims like nbf or exp. This is important to point out so that the status list is not mistaken for an alternative to actually validating the referenced token's claim set.\nNAME could you merge with main and add a line to the document history please\nNot a member, cannot merge.\nI meant sync, sry. But I can add the history on main also.\nNAME Please review and merge if you approve\nLooks good to me. 
It might be better positioned in a dedicated Verification Processing section once we have that but it's good for now"}
{"_id":"q-en-draft-ietf-oauth-status-list-5c66d2a8d81bbcf113d5b0fb358179ca93796f04c424a20e88ef59c5ad21afd2","text":"Adds access token as an example use-case for status list.\nAaron says that there are zero mentions of the term access token and proposes to mention JWT access tokens as a possible use case"}
{"_id":"q-en-draft-ietf-oauth-status-list-469a01da1ab75d8fc97a656dbb66b6eca30e0d23b02078e1ca6cb625af2ff689","text":"URL\nI noticed the draft was renamed from the name it previously had at the last IETF meeting where it was adopted. The previous name was \"JWT and CWT Status List\" and the new name is \"OAuth Status List\". I understand not wanting to have \"JWT\" and \"CWT\" in the name, but the new name is somewhat confusing, since it makes it sound like there is a status of OAuth itself somehow. I suggest renaming this to \"OAuth Token Status List\" to avoid confusion.\nSee We got feedback from multiple people and hence decided to go for as this seemed a reasonable choice by many. I would not change the title now again before the cutoff, but I'm happy to discuss so in Prague.\nFWIW my comments\/feedback on and related conversations were only about the document identifier being For the draft title, I'd go with \"Token Status List\" as suggested URL Note that the document identifier (guess it's called docname in the markdown) needs to start with draft-ietf-oauth- because of IETF convention and the draft being an OAuth WG item. But the document title doesn't have to have \"OAuth\" in it.\nAfter reviewing the discussion I also would support \"Token Status List\". And yes, it looks like the discussion in was about the draft URL, not the title.\nThe name of the issue is \"Feedback from call of adoption: Draft title\". How should it not be about the title then?\nI was commenting on the fact that literally all of the comments only mentioned the draft URL.\nI already stated in the ML, but I support `Token Status List\"\n\"Token Status List\" seems the most supported option, if there is no objection, we would rename next week. I would suggest removing “OAuth” from the title of the draft and make it “Token Status List”. “OAuth Token Status List” sounds like status list mechanism defined in this draft is only for the tokens used in OAuth\/RFC6749, which it is not. Huge +1. Best, Kristina From: OAuth On Behalf Of Michael Jones Sent: Monday, October 23, 2023 10:29 PM To: URL ; Aaron Parecki Cc: oauth Subject: Re: [OAUTH-WG] Call for adoption - JWT and CWT Status List To Aaron’s naming points, I would be fine changing the title in the draft from “OAuth Status List” to “OAuth Token Status List”, if there’s working group consensus to do so. We could have that discussion in Prague. The name change was motivated by feedback from multiple sources that the old name “JWT and CWT Status List” was too specific in token types, seeming to unnecessarily tie our hands. That said, I don’t think we need to change the draft identifier “draft-ietf-oauth-status-list”. I doubt that there’s another kind of status list happening in the working group that might cause confusion. ;-) Besides, the draft identifier is actually ephemeral. Should the working group draft progress, it will be replaced by an RFC number. 
Cheers, -- Mike From: OAuth > On Behalf Of Rifaat Shekh-Yusef Sent: Monday, October 23, 2023 7:48 AM To: Aaron Parecki > Cc: oauth > Subject: Re: [OAUTH-WG] Call for adoption - JWT and CWT Status List I also noticed you didn't mark it as replacing the individual draft in datatracker. You can email EMAIL and request that they mark it as replacing URL so that the history tracks better. I fixed that. Regards, Rifaat On Mon, Oct 23, 2023 at 10:35 AM Aaron Parecki > wrote: Tobias, Paul, Christian, I just noticed the new working group adopted version of this draft: URL I posted this comment on Github, but I'll repeat it here for others. I find the new name \"OAuth Status List\" confusing. While I understand wanting to remove \"JWT\" and \"CWT\" from the name, I was not aware of that discussion during the call for adoption. I would suggest renaming this to \"OAuth Token Status List\" instead. I also noticed you didn't mark it as replacing the individual draft in datatracker. You can email EMAIL and request that they mark it as replacing URL so that the history tracks better. I also noticed that there are significant changes to the draft between the individual and working group versions. Typically it is better to post a verbatim copy of the individual draft as the adopted version, and then make changes in a -01 version. Thanks! Aaron On Sat, Oct 14, 2023 at 5:56 AM Rifaat Shekh-Yusef > wrote: All, Based on the feedback to this call for adoption, we declare this document adopted as a WG document. Authors, Please, submit this as a working group document at your earliest convenience. Regards, Rifaat & Hannes On Tue, Oct 3, 2023 at 8:51 PM John Bradley > wrote: +1 for adoption All, This is an official call for adoption for the JWT and CWT Status List draft: URL Please, reply on the mailing list and let us know if you are in favor or against adopting this draft as WG document, by Oct 13th. Regards, Rifaat & Hannes OAuth mailing list EMAIL URL OAuth mailing list EMAIL URL"} +{"_id":"q-en-draft-ietf-scitt-architecture-5070179aeefeb845e70186087ff29adbbfeb523ff152977bb9ef7f4acb5d5bd7","text":"and\nPayload and Statement seem to be used intermixed in the text. I tried to disambiguate the via \"Statement payload\". If that does not work, please provide a counter proposal. The term Claim sometimes seem to have meant Signed Statement and sometimes Transparent Statement. Using our interim terminology, that became a selection process for me. Please check, if I made the correct choice in all occurrences (sometimes I had to guess the intent). Consistent Capitalization was a mess. I hope I fixed most of that. Please check. There is still no consistent use of \"Registry\" vs. \"log\" vs. \"ledger\" vs. \"Transparency Service\", I think. Please check. Please check every use of the term \"Evidence\" and if an alternative expression could be used. Please check every occurrence of the term \"guarantee\" or \"proof\" and if an alternative expression could be used. Privacy Considerations and Security Considerations texts require significant improvement.\nHere is my opinion: Registry refers to the facilities used to store and retrieve \"Authenticated Statement Data\" that has been submitted to a \"Transparency Service\" \"operator\". A Registry contains specific \"Statements\" which the Transparency service supports, as documented in their registration policies document. 
A Registry may employ different technologies needed to carry out its function to store and retrieve \"Statements\" that are permitted in the Registry, per the registration policies defined by the Transparency Service operator. The operator of a Transparency Service serves the role of Gatekeeper to ensure the integrity of the Registry data and services provided to access the Registry data. A Statement is contained in a message \"Payload\" exchanged between a Transparency Service API function and other parties, i.e. consumer inquiries and Notary submissions.\nIt would help to be able to read the document given the changes proposed here. But due to the make errors documented in I haven't yet succeeded in doing so. Does the build infrastructure automatically build documents for other branches? Or can someone else build and publish the document based on this branch?\nEditor's version build via github actions for is located here: https:\/\/ietf-wg-URL\nLGTM, thanks for finishing this off"} +{"_id":"q-en-draft-ietf-scitt-architecture-5c694a04e28058be47035c1a663db56e79974fe74b9e035396d92329fcb509ee","text":"A number of markdown formatting errors caused to fail processing and rendering. As an example, see\nLGTM - nobody loves broken markdown :-)"} +{"_id":"q-en-draft-ietf-scitt-architecture-b4e85827d76a20d4d293d4bafc01ba0a28fabb2a6e4a13265dee8465736d0220","text":"This PR is intended to update URL as the current version has broken codeblock formatting and no longer reflects the current state of the draft URL URL URL URL URL\nI looked through the changes and they are fine. Thanks for fixing the over-long lines in the examples."} +{"_id":"q-en-draft-ietf-scitt-architecture-02bd34890a42651befad80e1b616ed8fd9572e2fad4b72f08308c0a82c30cc5a","text":"Good clarification, with a few nits to reduce and clarify the text\nAll edits have been addressed. I believe this is ready for merging.\nThanks, NAME Sounds like we can address this suggestion separately and we can merge this as-is\nFor the next submission deadline, RegInfo needs more detail to convey it's purpose. Maybe... it may even need a new name, as RegInfo might not be limited to key\/value pairs about registration policy?\nLGTMI would prefer \"The key\/value pair semantics are application-specific, or even possibly specified by each Issuer for each of their Feed.\" but I am fine with both formulations. Thanks."} +{"_id":"q-en-draft-ietf-scitt-architecture-ef19ce8d276aa7a485bd5d6eed17611ffb72acb1341fc8d312155e174c0ad41a","text":"PR addressing the conversation around Feed is redundant with Subject, as defined in CWT This PR intentionally contrasts Choice for folks to vote upon: [ ] Rename Feed to Subject [ ] Should we also include CWT_Claims, which contains the subject\nCopied over from: URL A feed is a great base for how we can create a series of statements for different artifacts, getting freshness for a receipt\/or VEX report. The current definition likely needs to expand a bit to account for: What are the versions of a specific artifact What are all the statements for a version of an artifact What is the latest statement for a specific of a specific versioned : (eg: what's the latest or for the software? 
If the is a referenced , which stores SBOMs, Scan Reports, how do we drill into each if they all use the same payload of satementByReference?\nI'm not sure if \"feed\" represents the right concept, when it seems to me that the API to the registry will need to offer the ability to query records for specific orgs-products-models-releases, and collect all the relevant records, which likely include a number of ledger entries to various related artifacts. With that said, can imagine also the notion of a different API which would provide a feed of updates to the registry, esp. to allow a mirrored version or to perhaps aggregate from several source registries.\nThis is a good point and one possible improvement that was already brought up at the previous IETF meeting is to enable some structure in the feed header. For instance, using an array of strings would capture many of the common patterns (such as product > model > release), and allow some advanced functions in the API (such as prefix-based querying). Another approach would be to allow each issuer to structure their feed using arbitrary CBOR representations; while this captures many more complex usages (e.g. packages for different Linux distributions, binaries for different architectures) it becomes harder for the transparency service to understand what constitutes related claims (it requires interpretation of the feed format), and thus, harder to define auditing APIs.\nNAME I've got the notes on feeds and I've been working on a proposal. Would you like to collaborate on it? Deferring to 117 for prioritization, but a great discussion as many things will be built upon feeds.\nA note from the We'll be iterating on this a bit more, to create some use cases and examples for how feeds help producers and consumers benefit from SCITT.\nSee: URL There are several problems with this definition. It conflates using aggregations over an header parameter, with the header parameter. It confuses how the TS uses the attribute with how the uses the attribute (same problem as above). Suggested changes: Registration policies only apply to the feed identifier, they do not apply to the feed resource. See URL regarding the Transparency Service exposing the Feed Resource.\nLGTM!LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-b0e4708811ffc55cb23fd095902fd9b0c455b3319b0337a1408fbec6537cc123","text":"LGTM, Thanks, NAME the stakeholders to consumers looks like a great change. I don't know if we'd keep both, or replace. I'll merge this to keep moving forward. Also, thanks for the catch on CBOR"} +{"_id":"q-en-draft-ietf-scitt-architecture-4a4bddf2c043de5266f33034d17c1325b02dba0a09ce04c2b2ac70ba7fa19ee1","text":"This PR makes it clearer that merkle trees are not required, and what the structure of a receipt is.\nJust to be clear, we very much appreciate the work on the cose merkle proof format and intend to use that by default where we can.\nAnd also of course, LGTM\nIf you approved this PR, you should probably review and approve URL\nNAME I added the changes from the IETF Hackathon, including the new statement-specific registration information in the receipt and the description of transparent statement verification algorithm. The examples will have to be aligned to the new CDDL\nThese improvements seem excellent... I would approve them, if I could, but I opened the PR.\nNAME , can you please re-review your \"requested changes\" review\nMerging as Henks comment was resolved\nI also suggest we merge then work on the details. 
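As a toy illustration of the structured-feed idea floated above (an ordered array of strings such as product > model > release enabling prefix-based queries), with entirely hypothetical feed values:

```python
# Hypothetical feed identifiers structured as ordered path segments.
feeds = [
    ["acme", "widget", "v1.0"],
    ["acme", "widget", "v1.1"],
    ["acme", "gadget", "v2.0"],
]

def query_prefix(prefix):
    """Return all feeds whose leading segments match the given prefix."""
    return [f for f in feeds if f[:len(prefix)] == prefix]

print(query_prefix(["acme", "widget"]))  # both widget releases
print(query_prefix(["acme"]))            # everything published under acme
```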
LGTM:+1:"} +{"_id":"q-en-draft-ietf-scitt-architecture-8647e7758d186a349187bf87b07253f4a99ef07b64c7533163eab0146d08ba3d","text":"Extending the architecture with Transparent Statements recording the Transparency Service configuration, notably its registration policy. The syntax of the configuration (notably its supported registration policies) are still TBC."} +{"_id":"q-en-draft-ietf-scitt-architecture-8598e754905bc5efc4c5728b90c9c841c05e9b78a96b2c01012f85e85b1e10cd","text":"Define equivocation and non-equivocation and cite the article that coined the term in analyzing Lamport. Closes ietf-wg-scitt\/draft-ietf-scitt-architecture.\nI think this reference from the CT rfc is useful. URL \"and (2) by violating its append-only property by presenting two different, conflicting views of the Merkle Tree at different times and\/or to different parties.\" this is equivocation. If we replace \"Merkle Tree\" with \"collection of statements\". And that reference is also careful to cast things interms of detectability rather than prevention\nThis is very useful, thanks. I will adjust with this and your other feedback, most appreciated!\nWho reads papers anyway? Jokes aside, thanks for the contributions and making me think about how to communicate clearly on this one.\nDefine the term non-equivocation formally as it relates to this architecture document, per URL\nI like those changes. Reads clean to me thanksLGTM Thanks, NAME for the iterations. This was a tricky one to summarize an entire paper in a few sentences"} +{"_id":"q-en-draft-ietf-scitt-architecture-13094dbc9a2da3f23d8b443458fd2e827fa081367ce02ee5812ac77468c8fbdc","text":"simplified a few references & added a WG, some reordering\nThanks, NAME\nA small fix and then it will be ready to it.LGTM!LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-6b412c171e66ddea2484638056efe1ead9951ab5560e5f7d9d06d8e796a2b5eb","text":"This PR elaborates on the privacy analysis a bit. Describe why it is important (because we cannot delete items from the log) Remove the BCP 14 keyword (MUST NOT). Both the Privacy and Security Considerations sections should have the character of an analysis. They should be recommendations, or highlight gotchas to implementors. Anything binding should be in the main body of the RFC. Yes, this would need to be fixed in the Security Considerations, too. Expand the \"Unless advertised by a Transparency Service\" phrase into a paragraph about access control. This way it's more actionable. (Why would you advertise it? When would it be okay to advertise?)\nAuthors: (NAME NAME NAME NAME NAME I'm inclined to merge this as is. Any disagreements, or additional changes? We can iterate afterward, but seems like a good baseline to add\nYes. this looks pretty stable and uncontroversial. Thanks a ton, NAME"} +{"_id":"q-en-draft-ietf-scitt-architecture-dccf98d47e997130426625c33679590ac78d1ac5cdc8890fd1b1b7c5e71153c8","text":"This PR addresses: URL Related issues: URL\nSome additional details on the construction and use can be found here: URL I'm not including these in the PR, because I don't understand the security properties required by \"SCITT Identifiers\", but if the code I wrote covers the security properties, I am happy to do a follow up PR, after this one is merged.\nLGTM for the consistency of CBOR structures"} +{"_id":"q-en-draft-ietf-scitt-architecture-563cdd457bbe7dbfe6c46b3f75ffb5e567459dc22ed20f44d878bdfd9f1e453b","text":"This is a simple dependency management PR to decouple SCRAPI from the architecture spec. 
See SCRAPI can be a layer above, and reference the architecture (dependency) as in HTTP implementation. Decoupling allows us to move forward with the architecture doc, continuing to iterate on SCRAPI independently."} +{"_id":"q-en-draft-ietf-scitt-architecture-88ecaddb63eb9fd51fa343e8ae44b10f25df8d48e807f1bfb57dd3a561ec3f07","text":"Looking at URL, I think a that some more diverse examples later that show how multiple statements about one Artifact using the same sub Claim can have different semantics than multiple statements about one Artifact using different sub Claims - including illustrating text when one choice can be used an when the other & pros and cons.\nCovered above\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-62615cf754c3c671222c09976e20f8d2f1898427192d8350f11670e9f8f2ec3c","text":"Renamed to: Structure and Content of Signed Statements Updated content based on this thread: URL Added and made both CWT_Claims and mandatory as per the output of discussions added refs and figure label (which are missing all over the place)\nLGTM\nLGTMLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-82d8433dbfabadd2344d1969f0460934fec9843509e4af521a8f0222ab48fc78","text":"Explains Feed as the logical collection, with (Subject) as the property to represent the collection.\nThat seems pretty reasonable to me. Are there any reasons to keep both?\nWhile I'd agree less is more, this keeps coming back when we try to explain it. A feed is a known concept for a collection of related data We're using CWTClaims , which also makes sense to align with existing specs We avoided Artifact ID Attempting to describe a collection of statements as a subject doesn't resonate. We can keep Feed as the concept, and simply say CWTClaims is how you set an identifier for the feed. How does that work?\nThen we should deprecate the term feed in discussions, too, and just explain how a subject is used as a subject in this context?\nUpdated to merge in PR feedback\nNAME NAME NAME PTAL as I've made some updates."} +{"_id":"q-en-draft-ietf-scitt-architecture-f13a642a3b8edf13669b8f280e2d9a29c5b7636669f4e6249a8fcc708ff78163","text":"!\nThe diagram reflects my intention.\nHere's a slightly modified view to remove the lines behind, add arrows consistenty !"} +{"_id":"q-en-draft-ietf-scitt-architecture-4dbd75fddcd27973ee37f007009b30652cff61bbb1cfb8ff1149a4e6dc53433e","text":"Reverts ietf-wg-scitt\/draft-ietf-scitt-architecture"} +{"_id":"q-en-draft-ietf-scitt-architecture-996351e072040d2827381720d6ef6850c7cb37cae462b0a431e9fb90d630b991","text":"A set of consistency fixes for the next draft.\nSee URL for discussion on the terminology.\nThe doc uses the three interchangeably Is there a preferred reference to be consistent?\nURL is about Media Type. And then references payload and content type. I've been through this confusion before, where Media Type is the definition of a type of identifier. The challenge becomes when using that type of identifier for different parts of a document. When does the media type refer to the envelope itself? When does the media type refer to a property (aka payload)?\nI don't understand the question. is of content type and . It is generally a good idea to use a more specific media type. 
The payload of a cose-sign1 can have a media type, such as or .\nIt cuts out cruft, so I support this edit, just one minor nit (take or leave it, it's minor to me), before moving onto ."} +{"_id":"q-en-draft-ietf-scitt-architecture-7d89b623047e91ca6047f6c1f2f64a6ea9d7aa3ce62036413906a3c8cfc285e4","text":"Adjustments regarding registration policies.\nNAME I have made significant changes since your last review, including cutting out a lot of redundant text, and shuffling sections and section headers while eliminating the redundancy. I am dismissing your approval based on that, please give this another review.\nApologies for the scope creep of this PR (\"massive\"). This is all intended to be presentable on Monday.\nNAME and NAME : i shall review it later today!\nSubject to addressing the issues opened by this PR: This PR can be taken forward!The biggest issue now that x5t is required is that the issuer for x509-signed claims is not specified. This needs to be solved before we can merge this"} +{"_id":"q-en-draft-ietf-scitt-architecture-9742506b2c8bcc39bf076e907db0b6aef674204cec208d880acec3b68efc4386","text":"In case there is no rough consensus to keep federation section, I raised URL to remove it.\nI'd suggest we've had a few different conversations about federation of everything in a transparency service, vs. selective promotion of a subset of information across Transparency Services. The later being the ability for a consumer of one or more Transparency Services to promote the content for a subset of their software. Microsoft ships lots of software, However a consumer may only want to promote the Office suite of products into their Transparency Service for internal validation. I'd suggest the wording, as current, isn't well defined and I'd vote to cut this section until more well defined.\nIf this PR is dropped then we should create an issue to address the underspecification of unpotected headers at registration, especially the idea to preserve unprotected headers. Federation is a simple use case that highlights this problem\nIts not simple, and we have seen no implementation support for it to date.\nIf we are at the point where we need to start from scratch on configuration\/registration policy and scrap the initialization section, I personally do not see how we can keep federation and not just remove it (not to drive you and others mad, we can always add back later).I think the new text is too specific. To me (and as in the prior text) Federation refers to having multiple TS, and one TS accepting receipts from another TS at registration time is just an advanced case, for which we don't have clarity yet. As discussed earlier, letting the TS decide what unprotected headers to authenticate in its receipts is a major complication. My preference would be (1) keep a general placeholder on Federation --- better than nothing; or else (2) cut the section. But this leaves many questions unanswered, starting with the two TS in the arch diagram."} +{"_id":"q-en-draft-ietf-scitt-architecture-a689739334e8740fa3023240f7ac846aa2e17d45af6d2aa0d2f7881ec86406bf","text":"and\nIn section 5.2.3.3 appears out of the blue. It is not clear what a is.\nHi, NAME I'm looking for the reference to 5.2.3.3 in URL and other versions. I don't see where that section exists. Are you referring to a reference where a client is calls the Transparency Service APIs? Or, as there a mention of \"client, synonymous with consumer, verifier, etc?\nE.g., Is the client the issuer itself? 
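A rough sketch of the envelope-versus-payload media-type distinction discussed above, using the Python cbor2 package; the algorithm choice, payload, and media types here are hypothetical stand-ins and the signature bytes are fake:

```python
import cbor2
from cbor2 import CBORTag

# Protected header: alg (label 1, -7 = ES256) and cty (label 3) describing the
# media type of the payload carried *inside* the COSE_Sign1.
protected = cbor2.dumps({1: -7, 3: "application/spdx+json"})
payload = b'{"spdxVersion": "SPDX-2.3"}'  # hypothetical statement payload
unprotected = {}
signature = b"\x00" * 64                  # placeholder, not a real signature

# COSE_Sign1 = [protected, unprotected, payload, signature], CBOR tag 18
# (RFC 9052). A more specific media type would label this envelope when it is
# transported; cty only names the payload inside it.
envelope = cbor2.dumps(CBORTag(18, [protected, unprotected, payload, signature]))
print(envelope.hex())
```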
A piece of software that the issuer uses (e.g., a la OAuth2 client)? Does it refer to any role (issuer, verifier, auditor)? IMHO the text should be more specific, e.g., Certificate Transparency RFC explicitly mentions\nThanks, NAME The above text is from Draft 04: I've labeled it for terminology clarification.\nIn that vein, I would support clarifying it with a \"an Issue or Verifier's Transparency Service API client\" in the applicable areas of the draft. Would that be more helpful or does it not reduce ambiguity?\nJust an update, I plan to send a PR with proposed changes this week.\nFrom the architecture\/editors meeting Differentiating: Issuer who signs the statement (The identity of the software component - Wabbit Networks) Client who initiates the registration operation, which encompasses RBAC which is not part of the spec, beyond being authenticated (the build system's identity for access control. The build system has a role to perform operations) - possibly refer to OAuth's definition of client Verifier who evaluates a statement. The verifier cares about the issuer, to know it was signed by a particular issuer. The audit of a supply chain will check the identities involved in the operations.\nAJ and I will review this\nNAME and I met today to discuss a recap of the authors\/editorial meeting (sorry I missed) and how we need to think about 1) a better alternative name and 2) provide a rationale to why we need to have 1, terminology change or not (part of that is the subject of this issue and pending work that should come up in for the latter). For 1, Monty and I are leaning towards agent for now, but we need to brainstorm on adjusted architecture to reflect 2 and \"the way\" and how it relates to requested edits for the relying party issue in . We will post in progress work and continue to collaborate and sync, updating the WG via mailing list when it is ready for review and feedback across the WG. More to follow.\nThanks, NAME NAME for the focused discussion.\nOK, my work on also led me to be confused and think maybe we have meant agent\/client when we said verifier. I made some strawman diagrams to discuss this WG members in URL \/cc NAME\nOK, it seems we are converging in a direction on around verifier and relying party. The diagram based on author feedback demonstrates that with likely updates to the spec forthcoming if that is approved. It turns out, not surprisingly, we need both verifier and relying party. The latter, if tweak terminology, could more clearly refer to client and resolve this issue as well. Given that, and how it relates to here, I would argue that relying party is the correct substitute for client and this could be closed as a duplicate or modified accordingly to address that. This change would be in line with with the RATS specification iff we distinguish between Verifier Owner\/Verifier (person entity\/logical system for that entity) and Relying Party Owner\/Relying Party (person entity\/logical system for that entity) and ironically , various OAuth WG drafts (even if it seems not formally defined), and . It was unclear in previous discussion with authors if this is preferable or required, but it nicely aligns with small tactical changes to the diagram in and relevant spec alignment in terminology and explanations.\nStill active. NAME do you have time to complete this item?\nI had presented on this topic and we had agreed some key revisions had to take place before we could decide to make adjustments. Are we at that point? 
Additionally, the alignment with RATS and other specs: is this necessary or not? There are some open questions that are not covered, re URL I guess I will take it to the list.\ngood enough"} +{"_id":"q-en-draft-ietf-scitt-architecture-5f791971f95cb62a15d2e78210cdb23e45bd2a0e0ff1694b4498c05b10d88ff8","text":"Retains the concepts and many of the same words as before, but addresses concerns form the mailing list and interim call\nI've merged in suggestions. Please re-review, with LGTM or actionable suggestions.\nyesLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-3030700a88c215a8b123c90990db0606f08d1947a014a82a160a703d52aa2c3f","text":"Upon re-reading the draft to submit an unrelated PR, I noticed there is a missing verb in paragraph about Signed Statements.\nLGTM!LGTM Thanks for the polish NAME"} +{"_id":"q-en-draft-ietf-scitt-architecture-c1b8a43830fe89f0184467c8111db9c13b5e7c13f18c3d52fb02e8a0431aa3b3","text":"While working on a unrelated PR, I noticed that lists that do not have a newline between the list and previous paragraph. The end result is in local testing, and presuming upcoming publication to URL, the lists will not render correctly. I added newlines and also change the equivalent Markdown syntax to match preexisting lists from other sections in the draft that have been around longer. Before: ! After: !\nLGTM Thanks, NAME"} +{"_id":"q-en-draft-ietf-scitt-architecture-c816b90e40441a9486ffd3f244a748b8d9664fcc43388e38c4621c89814eaa97","text":"I found another typo, to make review simple and not mix it in with a more substantive editorial PR that is pending, I am fixing this misspelling in a separate PR for review and likely approval. Correct singed->signed in Section 6.2.2."} +{"_id":"q-en-draft-ietf-scitt-architecture-d19721db6e09028791e39c5f3986baae078398395f7a9cb7ca28e695b969c7af","text":"Related mailing list thread: URL\nthese changes make sense to me\nAs part of , I would like to consider the removal of Consumer and consumer from this specification if we are not going to define it. It is a generalized term and I would rather expend work group time on trying to tailor an appropriate custom definition of it. I will likely post this issue to the mailing list and cross-reference that thread here and leave this issue for review for a week to solicit feedback. Related mailing list thread: URL\nI do think Producer and Consumer is a good generic concept, with more clearly defined roles as Relying Party, Verifier, and Auditor, however, I'd support replacing Consumer with Relying Party or Verifier.\nBut we do not define them nor cite references that do. From a formality and procedures perspective in IETF, is that OK? Thanks for the feedback.\nLGTM with some comments to accept\/reject Hello, I have been reviewing the specification and there are usages of Consumer(s) as a specific term to be defined in the draft and consume\/consumer(s) generically. The use of the term consumer is ill-defined and I would support not consuming time ahead of 119 defining it. Therefore, I am proposing that we remove the capitalized term and some generic usage of it for clarity. I welcome feedback on the pull request to alter or critique specific, individual changes (207), the issue to track the work for more general discussion (206), or in the form of replies in this thread on the mailing list. URL URL If I do not hear back in one week and there is no objection in the pull request, the issue, or mailing list, I will follow up and ask the authors to merge (if it has not been merged already). 
Thanks in advance for your review and any feedback you provide. Thanks, A.J."} +{"_id":"q-en-draft-ietf-scitt-architecture-6fc53007b487ed2c8a7bc7780e78e08680516fca2365fc0caf0fe0f08e643029","text":"Revisit ### Validation {#validation} section fully: After refactor Originally posted by NAME in URL\nPlease propose a new PR."} +{"_id":"q-en-draft-ietf-scitt-architecture-3b79397aa08e0cb450438d20ae4058ec1aead9c240285d47d00369657f3f6c80","text":"This is an alternate proposal to to This PR: Keeps Client as : an application making protected Transparency Service resource requests on behalf of the resource owner and with its authorization. Replaces Verifier with Relying Party: a relying party depends on Signed or Transparent Statements to verify an Artifact.,\nThanks, NAME Definitely looking for more input, and I’m also ok to close the PRs and issue and use the existing terminology. My main concern is a Relying Party is a consumer of information. They rely on the receipts and signed statements from the Transparency Service. Which makes it a good candidate to replace Verifier If we replace Client with Relying Party, this confuses what we call the entity that registers signed statements. They are not the issuer, as the issuer signs statements, but is not the entity with access rights to the Transparency Service. Both the producer and consumer need access rights. That was what we had as client.\nThere is still additional work that needs to be done. Client applications submit signed statements, and obtain receipts, so they can include transparent statements in downstream products. A consumer of a downstream product is a relying party, they either verify the transparent statements themselves, or they delegate the verification role to the \"verifier\" role.\nAs discussed on today's call. Folks expressed concern that \"Verifier\" was not friendly enough to \"Consumers or Relying Parties\", that are supposed to process receipts. Editors Update: We can choose to: [ ] Accept [ ] Accept [ ] Close both PRs and this issue, and keep the doc with Client and Verifier\nNAME wanted to take a look for a proposal\nJust an update, I plan to send a PR with proposed changes this week. NAME shall we sync up in the comments or out of band elsewhere?\nA.J. Yes, let’s get together and work out a proposal. I’m booked today but tomorrow is open except for 07:00-08:00 and 14:00-15:00 PST (10:00-11:00 and 17:00-18:00 EST) Monty\nSorry, NAME I had to dig a bit to find your email in my inbox from the mailing lists but I found it. Invite sent. I think we can make quick progress and draft a PR tomorrow.\nMet with NAME and we discussed perspectives and work on this in the near-term, summarized with URL More to follow.\nAdding some more notes to plot out the work, Monty and I discussed aligning the diagram with the separate\/distinct RATS architecture, but the separate call out of a distinct Relying Party role and how it pertains to agent was discussed when we met, per discussion I missed in today's author\/editor meeting. For my future reference, I need to re-review the doc, as well as compare\/contrast ,\nSo I did not make as much tangible progress today and have spent a lot of time thinking and reading as I re-read this SCITT Architecture draft and related WG drafts (RATS, OAuth) where Relying Party is defined. So, before I spend a significant effort in drawing this with aasvg, I would like to quickly highlight the gaps in my understanding via Mermaid so I can quickly iterate. 
I have tried reading the mailing list but it not 100% clear where the dichotomy between Auditor and Verifier occurred, but they seem overlapping and redundant to me unless Verifier is, in fact, some non-person entity, like the software doing the verification itself (so this goes to what should we rename client to in , separate of this), but does that not in fact mean Auditor is the Relying Party of sorts? It seems Auditor is just giving a specific name to a general kind of consumer and that is not what a Verifier is. So on that note, this is the current version of the SVG diagram in Mermaid ( I hope I got this right, I cannot just have labels like \"Issuer\" on the left-hand side so I use subgraphs to box those and not do something clever with dotted lines and cause more problems). (Source: URL) It would seem we really need to change the diagram and the terminology to get rid of Auditor altogether, so first, we would move to this. (Source: URL) I was going to do a subsequent edit, but at this point I will have to ask the group: what is a verifier doing that is different from a relying party owner with a relying party? Could you separate one from the other? (Don't you need to replay the log to verify the sequence of statements?) For reference, there is so much overlap with I think we need to either add compare statements to terminology and touch up the current state diagram to disambiguate the use of these terms. Screenshot below. ! So would that mean, if they are in fact separate, Edit 2 incrementing after Edit 1 to reflect my understanding would be ... this? (Source: URL) If we start to think this way, do we consider the software (client\/agent\/what have you) used by the Issuer to be distinct? Similar but with one of two functionalities (an agent\/client\/what have you can be for Issuers to get receipts of signed statements or a verifier that reads all of them)? \/cc NAME I am not sure this update at the end of the day reflects our discussion, but I feel like I lost the thread of what we discussed and seemed concrete in discussion. Translating it into diagram and wording updates has left me scratching my head.\nAnd just for the sake of dissing RATs... ! IMO this is reversed... to rely on something you must trust a verifier or be a verifier. I think what they mean is for the relying party to depend on a verifier, but what it looks like is that a verifier sits in front of an RP.\nA verifier can be a person or automation that was configured by a person. This is likely a great conversation for the NAME would you be able to lead that discussion with NAME\nhm... 
if we are talking RATS an entity can take on both the Roles of Verifier and Relying Party at the same time (rendering the creation and conveyance of Attestation Results moot)\nIt seems now obsoletes the diagrams in URL and review in a previous meeting reflecting improvements and viable changes based upon answers to the questions posed in strawman diagrams in that comment.\nAlso of not my updates to the potential (and beneficial) overlap and possible alignment between relying party and client per URL\nPing NAME NAME\nI wrote some of my thoughts on URL and aj-stein-nist\/ietf-wg-scitt, but for me Client was going to replace Relying Party (Client==RP), so I am not sure if making Verifier==RP is what my intent was when I revived the issue, so I will leave to others to decide."} +{"_id":"q-en-draft-ietf-scitt-architecture-f6bc56de50a4c35e54b423d37cbd44e7b999cb5e2b21d69e97183857e2a100a6","text":"My take on trying to make registration auditable. This is still very open-ended in terms of implementation choice\nI have made optional, but still supported\nSteve, Is it too late to add another service API to SCRAPI: Retrieve Registration Policy Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788\nHi, NAME Once the Architecture is settled, we plan to put more focus on SCRAPI, and yes, we can certainly add\/edit\/remove.\nNAME this PR contains guidance for SCRAPI enabling registration policies to be retrieved: URL It also adds optional endpoints for retrieving signature and payloads, not just receipts.\nThanks NAME NAME I'm still not sure how the architecture handles a revoked certificate used to sign the registry log. Would a new log be started with a different signing key and the old log would remain available for reference - with a warning that the signing key has been revoked or is expired? Still trying to figure out implementation details so that REA can support the initiative. I assume the results of a SCRAPI query would contain a warning that the returned results are from a log that has a revoked signing key.\nSCITT does not have any messages currently related to revoked certificates. What messages do you propose?\nIt can be as simple as a warning in response to a query i.e. “Warning: The signing certificate has been revoked or has expired” Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788\nDealing with expired or revoked TS certificates should be straightforward, as you can always obtain a fresh receipt for a previously registered transparent statement. For issuer certificate this is a non-issue as SCITT only guarantees the issuer certificate was valid at registration.\nThanks, Antoine. Is this also the case when the certificate used to maintain log integrity has been revoked or has expired. I’m presuming the registry log integrity is also maintained using a digital certificate – correct? Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788\nYes, that's what I called the \"TS certificates\", i.e. the certificate used to sign the COSE receipt\/countersignature. 
That's the only certificate relying parties would care about - if there are other internal certificates it's up to the TS implementation to expose the details to auditors through the audit API\nAntoine, What is the set of criteria that the TS should use during the registration policy evaluation? Thanks, Dick Brooks Active Member of the CISA Critical Manufacturing Sector, Sector Coordinating Council – A Public-Private Partnership Never trust software, always verify and report! URL Email: NAME NAME Tel: +1 978-696-1788 From: Antoine Delignat-Lavaud NAME Sent: Monday, March 4, 2024 8:50 AM To: ietf-wg-scitt\/draft-ietf-scitt-architecture NAME Cc: Dick Brooks (REA) NAME Mention NAME Subject: Re: [ietf-wg-scitt\/draft-ietf-scitt-architecture] Registration auditability requirements () (PR ) NAME commented on this pull request. In draft-ietf-scitt-URL : -Transparency Services MUST, at a minimum, perform the following checks before registering a Signed Statement: +During registration, a Transparency Services MUST, at a minimum, authenticate the Issuer of the Signed Statement by validating the COSE signature and checking the identity of the issuer against one of its configured trust anchors, using the (34), (33) and (4) protected headers of the Signed Statement as hints. +For instance, in order to authenticate X.509 Signed Statements, the Transparency Service MUST build and validate a complete certificate chain from the Issuer's certificate identified by , to one of the root certificates most recently registered as a trust anchor of the Transparency Service. +The Transparency Service MUST evaluate the registration policy that was most recently added to the Append-only Log. See the updated comment thread on line 403 (now 405) about possible weakening of this requirement — Reply to this email directly, view it on GitHub , or unsubscribe . You are receiving this because you were mentioned.Message ID: NAME\nSince the refactor and the shift to X.509-based claim signature, the authentication of issuers is no longer auditable - i.e. it is no longer possible for verifiers and auditors to re-validate the claim signature and issuer authentication performed by a TS at registration, which massively increases the trust in the TS (which may now try to register forged claims with plausible deniability - by claiming the TS lost the issuer's certificate used at registration even if none existed). Part of the issue is that claims use x5t instead of x5c. This means there is no mechanism for auditors to obtain the certificate chain used to sign a claim from the TS at registration. Even if x5c was used instead of x5t, only the leaf certificate is required to appear in x5c. Reconstructing the full issuer certificate chain requires at least the TS configuration to link to the configured root CAs used by the TS. Since roots and intermediate certificates expire (or can be updated, as is often the case with PKIX root programs from Mozilla, Google and Microsoft), it is not possible to assume that verifiers can just store the root configuration with the TS's certificate. There are further issues with issuer authentication: >When x5t is present, iss MUST be a string with a value between 1 and 8192 characters in length that fits the regular expression of a distinguished name. This is not well-specified without a reference to such a regular expression. Is this supposed to mean RFC4514 string representation of a distinguished name? 
The language of RFC4514 doesn't appear to be regular (it is context free), so this must be clarified. There is also no requirement for the issuer's serialized distinguished name to be produced from the claim's X.509 leaf subject. A malicious issuer can put a different issuer's serialized distinguished name in the iss field and is likely to confuse a verifier who cannot detect the mismatch since issuer authentication is not auditable. This makes iss essentially useless for verifiers, and it may be preferable to set iss to nil if x5t\/x5c is used.\nI propose we require in the architecture that the full certificate chain used to identify the issuer and verify its signature at the time of registration be authenticated by the TS via the resulting receipt. Given a transparent statement (and possibly auxiliary transparent statement from the TS log recording root and intermediate certificates) verifiers must be able to retrieve and review the full certificate chain. This clearly separates PKI issues (delegated to certificate issuance) and the evidence required to keep them transparent and auditable. The text included a a way to implement this requirement based on transparent configuration statements, which I found useful but may be too detailed\/prescriptive for the architecture.\nI don't think its in the spirit of COSE to include the full cert chain in a protected header. A thumbprint to a certificate that is in a chain is more concise.\nThis issue will be closed if no PR is raised by Friday.\nClosed with\nThese changes seem appropriateI suggested a couple of improvements, but overall I support this PR. LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-2f4d00200f506cb62ad8c8c92234bcbf2c17f0ae7f7000811fae07238aa97f79","text":"Adjusts text around CDDL and EDN Adjusts text regarding Validation\nLGTM, nice and clear changesLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-4691b3f6f3c188c0c853a250869e90dabfebf6b554477bf1db6043c3d1d9106d","text":"Hi all, This PR comes from a place that agrees: If using an identity scheme, then yes, it must be used in maner consistent with that scheme, But aslo that There is no identity scheme we could all agree on as universally applicable or agreable, further, the space continues to evolve. And, The trustworthiness or otherwise of issuers identity, in the context of a transparency ledger, is temporal and subjective in nature. It's just a question of time frame. This means that regardless of registration time checks, relying parties, auditors and general consumers will have to re evaluate regardless. And will do so according to their own subjective context. Remembering that \"It can never be assumed that some Issuers and some Transparency Services will not be corrupt.\". And also \"A Verifier SHOULD validate a Transparent Statement originating from a given Issuer\" ... \"and would not depend on any other Issuer or Transparency Services.\" It does seem obvious that, Syntactic validation of the issuer identity claims makes good sense and supports interoperability. But, Specifying a single identity scheme in the core architecture as \"minimally required\" does not meaningfully help interoperability when there is no single agreeable scheme, and especially when a TS may be used to underpin identity schemes themselves. 
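To make the x5t and RFC 4514 points above concrete, a hedged sketch using the Python cryptography and hashlib modules (the certificate path is hypothetical; -16 is the COSE Algorithms registry value assumed here for SHA-256, and the COSE_CertHash shape follows RFC 9360):

```python
import hashlib
from cryptography import x509

# Hypothetical path to the Issuer's DER-encoded leaf certificate.
with open("issuer-leaf.der", "rb") as f:
    der = f.read()
cert = x509.load_der_x509_certificate(der)

# x5t is a COSE_CertHash: [hashAlg, hash of the DER certificate].
# Label 34 carries x5t in the protected header; label 33 (x5c) would carry
# the full chain instead, which is the trade-off debated above.
x5t = [-16, hashlib.sha256(der).digest()]

# RFC 4514 string form of the subject DN -- one way an iss string could be
# derived from the certificate, if a DN-based issuer value is kept.
iss = cert.subject.rfc4514_string()
print(iss, x5t[1].hex())
```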
Regarding the burden on Relying Parties and Auditors, The definition for \"Relying Party\" states \"a Relying Parties consumes Transparent Statements, verifying their proofs and inspecting the Statement payload, either before using corresponding Artifacts, or later to audit an Artifact's provenance on the supply chain\". IOW: Relying parties are already expected to deeply consider statements. Assessing the identity asserted by the issuer, at the time the artifact is being considered, seems to be a natural and useful part of that process. Registration Auditability: Requiring that Transparency Services \"enough information is made available to Auditors\" ... \"to authenticate and retrieve the Transparent Statements\" is essentially impossible to specify at this level, imposes un-bounded maintenance and storage issues on implementations, and represents a significant layering violation in the architecture.\nEditors meeting: will wait 24hours for reviews, with merger if no objections.\nPer the editors meeting, I've merged the discussed changes and resolved the discussed topics. I've opened to track the MUST\/Should discussion, which wasn't part of this PR. Please review and we'll merge Wednesday morning US Pacific time if there are no objections. I'll incorporate agreed suggestions as well.\nMerging with no other feedback.\ngood enough By and large the document is at a point where it can be called on for final reading. There are some nits or issues that we would like to see done to tighten it up. The introduction discusses distributed, but there is no real discussion of how or what that means and it leads readers to set an expectation that is incorrect. This word should be deleted. Removal of the Feeds definition. This was in the original version, and we have bounced all over the place during the last year to formulate a solution to a problem with Federation. We have decided collectively to park that and thus the use of feeds is gone. In a future overarching product or charter it could come back. There is a bit of a misunderstanding generated by stating that receipts don't expire, which later on is a line that requests for a refreshed receipt can be supported. The logical receipt is intended to never expire and based on the protected content in the ledger's log a new receipt can be generated at any time. This gives us an avenue to bridge a world from today to post quantum cryptography based counter signatures that is important. We need to clear up these two points. What goes into the append only log can vary from implementation to implementation. It may be that one implementation records the hash of the signed statement, plus evidence of how we proved that it was valid at the time of acceptance (things like records of OCSP response, or CRL lookups). Another implementation may simply store the signed statement. This from a generic point of view means that given access to the signed statements you can prove that it is on the append only log always. In section 4.1.1.2 it alludes that an auditor can mine the log to authenticate and retrieve the transparent statement. Based on the last statement this has two problems. The first it assumes that the signed statement is kept and that it can generate a new receipt (see 3) and produce a transparent statement. If it could this would be equivalent to the original in intent but not binary equal. The original receipt (or last created receipt from a refresh) is not required (or intended) to be kept in the ledger. 
Generically then this is impossible and may be where we would leverage consistency proofs in a later version. By and large the generic aspect of the document is there but we missed one sentence: \"Signed Statements and Artifacts associated with a software package.\" We would suggest removing the \"software package\"."} +{"_id":"q-en-draft-ietf-scitt-architecture-d4a8ba7497803c8825a72da1c970735764a659e9ff7934ec15cba845854186f3","text":"from :\nMUST and \"enough\" does not seem sufficient. May be we have to tighten up the you can retrieve the signed statement to something in the future that may only get the hash. More correct than before, still a MUST that is a tad bit hard to follow. The change in this scope is okay"} +{"_id":"q-en-draft-ietf-scitt-architecture-f14b99327e442703f1421af6f7580bbc79a928abf58d2335f792bd7b997e28c1","text":"This is how a SCITT Signed Statement associates an Issuer making a statement about an Artifact, which has been core to the draft from the start. Previously, these were properties in the protected header, aligned with DIDs: See The use of was to align with other existing standards, for how we associate these concepts. An alternative is to lift these two properties, independently, to the protected header. We already have an in the protected header for additional properties, reducing the duplication of in the cwtclaims and protected header, potentially simplifying the model. The approach of using cwt claims allows users to benefit from existing cwt libraries, which seemed the better of the choices.\nOK so the answer is yes then. I am happy with it, but this is a small, tight PR deleting the editors note, I wanted to understand the implication before I (and maybe others) understand before I approve.\nResolve Section :\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-2ddfcb15b26e026663e8486a7781471fe3b402f548d6b7c0afb7d479b5662148","text":"Clarify the terminology around expiry and refreshing receipts. By and large the document is at a point where it can be called on for final reading. There are some nits or issues that we would like to see done to tighten it up. The introduction discusses distributed, but there is no real discussion of how or what that means and it leads readers to set an expectation that is incorrect. This word should be deleted. Removal of the Feeds definition. This was in the original version, and we have bounced all over the place during the last year to formulate a solution to a problem with Federation. We have decided collectively to park that and thus the use of feeds is gone. In a future overarching product or charter it could come back. There is a bit of a misunderstanding generated by stating that receipts don't expire, which later on is a line that requests for a refreshed receipt can be supported. The logical receipt is intended to never expire and based on the protected content in the ledger's log a new receipt can be generated at any time. This gives us an avenue to bridge a world from today to post quantum cryptography based counter signatures that is important. We need to clear up these two points. What goes into the append only log can vary from implementation to implementation. It may be that one implementation records the hash of the signed statement, plus evidence of how we proved that it was valid at the time of acceptance (things like records of OCSP response, or CRL lookups). Another implementation may simply store the signed statement. 

This from a generic point of view means that given access to the signed statements you can prove that it is on the append only log always. In section 4.1.1.2 it alludes that an auditor can mine the log to authenticate and retrieve the transparent statement. Based on the last statement this has two problems. The first it assumes that the signed statement is kept and that it can generate a new receipt (see 3) and produce a transparent statement. If it could this would be equivalent to the original in intent but not binary equal. The original receipt (or last created receipt from a refresh) is not required (or intended) to be kept in the ledger. Generically then this is impossible and may be where we would leverage consistency proofs in a later version.By and large the generic aspect of the document is there but we missed one sentence: \"Signed Statements and Artifacts associated with a software package.\" We would suggest removing the \"software package\"."} +{"_id":"q-en-draft-ietf-scitt-architecture-54c8d08395fb0844e6ad90858b1f29280adc1359c0c0caf6a8f29974f191601c","text":"To facilitate this forward for submission on Monday, I'm going to merge a few changes for additional discussion\/review.\nFair, could you provide a suggested change? [edit] let me elaborate: not making iat mandatory! But about the fact that extension points in Receipts can render them with an expiration date and that only in the architecture's base definition they do not expire.\nAfter reviewing comments and the paragraph this change sits within, the merged suggestions make a minor improvement, and I don't believe rewinds any problems. While we can\/should make additional improvements, can we adopt this change? NAME NAME you have blocking requests. If the merged suggestions don't resolve your concerns, can you either make an additional suggestion or comment on why this shouldn't be merged?\nNAME here you go: URL\nThe changes that have been made to the pull request has lost somethings. The logical receipt itself does not expire. The evidence captured in the ledger is more than just the hash of the signed statement as it has to capture the tests done to the signature and the results. That evidence and where it sits in the ledger establishes a time basis for the decision. A new signature on top of that linkage to the evidence in the ledger is being equated as a whole as the receipt. The \"thing\" signed is the same (location into the ledger) and the only thing that has changed is possibly the identity key material and algorithm. It is this concept of \"never\" expiring that is important to key.\nfrom :\nTagging NAME\nNAME we'll close this in a week and can reconsider with a PR.\nReferring to , the latest changes provide the flexibility for Transparency Services to maintain previous receipts and re-issue them or generate new receipts which may not be persisted. As such, I'm approving this PR as incremental progress."} +{"_id":"q-en-draft-ietf-scitt-architecture-ca39f8d6c821ccc6548e42b0201d50d142cc647ed6745b6a737a57d5dceac21f","text":"Assuming URL is merged, that document will register everything that is needed for SCITT Receipts (a profile of COSE Receipts). There is clean up work to do in the architecture. Remove the editorial notes regarding 394. Review IANA considerations (AFAIK, 394 was never requested for registration in the document, despite the editor notes).\nI've merged the PR, but we need to publish a new version to the data tracker. 
Ideally other authors (of COSE Receipts \/ COMETRE) will do an editorial pass, and then push a new version to the data tracker.\nI'd personally prefer to separate the mediatype exclusion (which I agree with removing) to a different PR, but would approve both anyway. LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-d657762d2f670b5ca8484c5685e73365348b2a4c21cb618f08619018803386c4","text":"Reason: Term was used only twice, not defined, and does not seem to play an important role. !\nfiddled with a phrasing nit. This approval is not about the diagram picture, but the content of md that substitutes Identity Documents with URL"} +{"_id":"q-en-draft-ietf-scitt-architecture-a7d9e5f65eb9e5e01bb5168f57d62aca95c64ff7b2916650bbc236f9a4d6e7a2","text":"changes \"document\" to \"Signed Statement\" (was a weird nit close to the suggested change)LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-36536413e3942e9c35f71c715520de87b7e25d72a8badb462bae9a44ecd80d10","text":"removed \"(Artifact)\" which seems to have been an artifact....LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-1fd5b8f6fa7a28ba9cebbecae226e2ef25749782cdaac8a3ea6ca9ed75380ca3","text":"I don't understand the motivation behind replacing \"any other COSE compatible PKI trust anchor.\" with \"a JSON Web Key Set\". Is there an issue with COSE compatible PKI trust anchors that is driving this change?\nNAME : What is an example of a \"any other COSE compatible PKI trust anchor.\" besides the JSON Web Key Set?\nGood question NAME if JSON Web Key Set is indeed the only possible COSE compatible PKI trust anchor then I see no issues with this change.\nI support Steve's suggested change.\nI think the point here is to keep it open and not limit to what exists today.\nNAME I have not worked on COSE for one minute, but I get the impression that a SCITT Transparency statement could include identities other than X.509, which seems to imply that another form of \"trust anchor\" is plausible, i.e. a PGP chain of Trust or even a SCITT trusted party - am I mistaken.\nThese are just examples anyway and as such the list is unbounded. But if you give examples, as you do today, then it better might be good to give examples that are understandable by readers. That is what I have been trying to do. I listed the only COSE example I am aware of.\nLGTM to changes"} +{"_id":"q-en-draft-ietf-scitt-architecture-dc3e2be8cbe9143e34614b70b5ad8bec994a9dbec142ba8f8b7a6950cfcede9b","text":"fiddled with the language that intended to be normative and made it normative languageLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-0896492782a3f4e2e91850f804e53de0d20475df93049ea146c376e873a98bf3","text":"This is not correct if we also consider SCRAPI\nSCRAPI says Auth is, may or isn’t required on specific endpoints, but doesn’t specify how it’s implemented. Is that inconsistent?\nplease check my suggested changeLGTMok with changes to 4.3 step 2LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-4ff475509cd8c57ef10ffe19d79a627c7fd3d4626614863245df4b921c68bf24","text":"Here is the status: I do not believe there is a security problem. Roy says that Antoine thinks there is a problem. Antoine is, however, not at the meeting.\nAntoine can object in the next iteration or during WGLC\nThe replacement loses the security issue and adds a new one. 
The modification of removing the certificate chain, which will save space in the ledger, opens up the problem that building the chain dynamically may result in the wrong trust anchor. LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-2990324d503b83db428eeab101070ec40611c718f294b3b06b89e74b489d50b0","text":"I generalized the description. You need to rotate all keys.\nI don't have strong opinions about the MUST vs. SHOULD requirement here. The group will have to sort this out\nWas wondering whether the cases where keys are never rotated are actually valid. The implication is the following: if an end-entity certificate has an indefinite lifetime then the certificates of the subordinate and CA certificates also need to have an indefinite lifetime. Is this a realistic deployment? If so, can you describe this deployment?\nSitting in the SUITS meeting, the same question of MUST came up. Someone made the comment that the MUST could be required in the doc, but that doesn't mean a service that doesn't follow the MUST guidance can't physically operate or even interop with other services. It would mean it's not conformant to informative guidance, and a security review could deem it not as secure. Which might be fine for a locked-down IoT scenario. After the SUITS discussion, I'm fine with a MUST for operational info that enables a service to function as they deem fit\nKeeping with MUST as it doesn't change the compatibility, and a service that doesn't rotate would still be interoperable, and be operationally not compliant.\nLGTM! Removed the hyphen from cryptoperiods. Approving the MUST text."} +{"_id":"q-en-draft-ietf-scitt-architecture-7908678170623427f83446cebf247b83e8f01444ce473a52fe8da3a96eeee1f4","text":"NAME NAME PTAL at the latest if they resolve your requested changes.\nSuggest we incorporate Option 1 suggestion from Steve! I see this change as constructive, I asked Steve for clarification on his comments too. Changes as of URL improve the previous text. LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-bb9f02f16543edbf2e36610b67739afd4147820c73176603a71004191311c292","text":"You have to explain what this means and include a reference. Since you are likely not going to do this, I suggest deleting this sentence.\nI think we should take NAME 's change, and that the current text is problematic.\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-90c20a7c52ecaddc500081680ffc2a8c27caf2d372c2908e9b2078be49d8af4b","text":"I would not assume certain deployment situations. It is likely that there are many Transparency Services that are not necessarily publicly accessible to everyone and the entire architecture would still be totally valid\nI agree NAME I know of at least one registry (Transparency Service) that will not be publicly accessible.\nNAME always public or not, should we not consider that as material for privacy considerations that follow the line of your recommended deletion? Thanks for the updates LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-3f052e6123a43ff7c08bdf5fb4d9f29a045dc5928c99485c4060efb4d7de9c56","text":"We are inconsistent with our use of \"registered\" and \"recorded\" when referring to the addition of signed statements to the append-only log. I recommend using the term \"registered\" consistently to refer to this act. This implies that some \"process\" was applied before registration can occur on the append-only log.\nDoing a quick search it seems easy and good to make this change, except one place that looks a bit janky. 
The current document (draft-08 as released) includes a line: It doesn't quite sound technically right to simply replace the word here because the VDS doesn't register things: there are other bits of the higher level TS involved there. Would it satisfy you in this case to change to: I don't want to mess too much with meaningful text changes and it would be a shame to hold up this issue for one detail. And I think this is strictly better because it also suggests that the VDS doesn't record (or validate, or witness, or any other word we might have come up with) Signed Statements that haven't been registered.\nNo objections here Jon. Thanks for resolving this issue so quickly. Please close this issue after applying appropriate changes.\nURL\nClosing as resolved by\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-8e00959dfec00177d97406f2a11c10d020a978e424f650bb2ffb50b54a22b82a","text":"Remove ambiguous details reported in (.\nx5t value MAY be included in the unprotected header in support of certain supply chain scenarios.\nLGTMLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-8f3ed87a0428158e675350dfe66ea99e8d61a2148c1f77f4ff46613d5e44a614","text":"Address feedback from Hannes in .\nRegistry, RFC9162_SHA256 is value 1, which supports -1 (inclusion proofs) and -2 (consistency proofs). I believe this registry is defined in another document. The sentence needs to be changed to make the meaning clear.\nBreaks circular relationship until one is completed.LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-7917c7af8eed26c0abe1bd487b57e523dbe7a7f56c0bb56965b3b33fdbd858fd","text":"with unclear definition and regex requirements with DN currently in draft.\nCalling NAME for some\nPerhaps I missed it, but what is the motivation of changing to a URI?\nWe are specifically customizing the usage of in CWT as defined in . In that document, it is a type and it seems like we shifting away from the base usage of URI to something else and with a regex spec I couldn't find in a relevant I-D, RFC, or standards doc that we can normatively cite as a reference. Is there something I am missing, and is there a valid reference we could use?\nI agree the current text is not clear, as it doesn't define or reference any regex. However, stating that the must be a URI with no further specification is equally unsatisfying. How about we drop any mandate to use a particular format of if the protected header is present? The identity of the issuer will be sourced from the X.509 certificate, so the value is largely redundant.\nThere is a little more detail in the CWT RFC and points to the JWT RFC which in turn points to the string representation of URIs RFC. Are those definitions equally unclear? I don't mind that if others have that feedback I'll close this out and start over. That will obsolete the feedback from Hannes that triggered this PR at the same time. I can bring that on list and reference this discussion.\nAh, yes, I forgot to mention: if we want to retain the use of the string representation of the subject DN of the X.509 certificate for the value, we can reference RFC 4514.\nI can also bring that up as an option on list and adjust accordingly if that is ok.\nI think it’s unclear, as there is no guidance on the content of the iss, only that it is URI-shaped.\nIt is similarly unclear to me in the architecture requirements what one should or must do with the DN. It is implied, but left to the implementer to decide for desired X.509 use cases. 
It is not clear what one should do with it to correctly validate the cert use per the claim, regardless of DN or URI is chosen. Can we both agree that is part of the problem?\nIn any event, I apologize for the delay, I have created the following thread on list to discuss before moving forward with the PR in any significant way. I may stand corrected and others will prefer your approach (which I dubbed Approach ). URL\nDeferring 'till the week of 8\/19 as some folks are on vacation.\nI support this change on behalf of the URL project.\n8192 characters in length that fits the regular expression of a distinguished name. What is the \"regular expression of a distinguished name\"?\nI do understand the intent, but it would seem that distinguished name is not clearly going to have a specific reference we can use. That reference sets aside the regex requirement. I understand the need but there is some assorted LDAP RFCs and other attempts, but none seem to be the right kind of reference, even if about distinguished names in X.509. I am going to propose we go back to what we know and then put forward my understanding and interpretation of the requirement, that we want a StringOrURI like the underlying CWT requires, perhaps with added length requirements?\nLGTMLGTM Hello, Last week I attempted to pick up some issues thanks to the many contributions by Hannes Tschofenig. One such issue requesting clarification or change is issue . For that, I made an associated PR, pull request . URL URL So far there has only really been feedback from Corey Bonnell (and that has been most appreciated!). I wanted to poll the list for feedback and understand if there is consensus on using the X.509 Distinguished Name for the issuer of the claim, whether or not there is a clear regex and length requirement. The current text reads like so. characters in length that fits the regular expression of a distinguished name. https:\/\/ietf-wg-URL I see a few ways forward. Do nothing, leave the current statement in the specification as-is, and close the issue reported by Hannes as a WONTFIX because it is good enough. Keep the Distinguished Name, and add a useful regex from a relevant LDAP specification proposed by Corey in his comment, RFC4514. Adapt the requirement to match a subset of the underlying data type of the CWT claim iss field, URI (from stringOrUri), and not add an additional requirement for DN regexes and format parsing. Be most permissive and interoperable, remove this requirement entirely. Corey appears to support 2 (please correct me if I am misrepresenting your position, Corey), whereas I prefer 3. I understand the intent of further constraining iss in the presence of X.509 certains and checking against their thumbprint. It makes it more convenient and consistent for the case of Issuers creating Signed Statements with X.509 certificates as a first class solution, but at the expense of the alternatives (short-term and long-term) and resulting interoperability in the long-term. Additionally, the requirement as-is appears to adapt the CWT field to match a differing purpose from its base specification that key use in the context of SCITT. To me, that seems undesirable. Do others have support for one of these positions or related feedback that motivated the current state of the change? Given the feedback over the next week, I will modify or close PR accordingly to meet consensus of authors and WG members who respond on list to the best of my ability. Sincerely, A.J. 
Stein"} +{"_id":"q-en-draft-ietf-scitt-architecture-9eb563eb6bd1e8261cece99295793af91a62bb3ca1b55c510296066156a6aa0d","text":"Editors call notes: Moving above the diagram is good clarity Changing from defining the roles to the overall diagram caused some confusion. If the PR can maintain the roles, and add diagram clarity, that would be preferred.\nFigure 1 needs to be explained and referenced in the text. The arrows in the figure need to be explained The figure shows two different boxes and the semantic needs to be explained\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-8c63f612677a5e2b18e4db72c22ae2ae6590f5146fcb39d6143a4e633ee4017d","text":"I would argue that in CT Auditors are checking more than Signed Certificate Timestamps\nNAME there is a lot of detail there and I am not sure how far you\/others want to go, but I decided to keep it simple and I propose adding slightly more detail in URL I am sure you and others will let me know if that is too little (I doubt\/hope it is not too much.)\nNAME thanks for the issue. Did you have an intent for what to scope? Reading the , it implies a unique focus on x.509, rather than x.509 is one example of identity types that are supported. Considering CT in terms of SCITT: I'd suggest we limit the focus on x.509, and focus on the generalization that an issuer signs the protected header, which includes producing a timestamped signed statement and a timestamped receipt.\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-e75ac74f77164a8b2e3496a8a27b3875913fd44b9b82f55a8645eff6110782f5","text":"There is no explanation in draft, GitHub, or mailing lists about resolved key manifest. This change proposes replacement with protected header like other portions of the document.\nresolved key manifest) for different Artifacts, or sign all Signed Statements under the same key. What does \"resolved key manifest\" mean?\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-e8e71a61fb212009b221cd5c26b3d1e93c3d105444e3f5df3915923617116c5b","text":"These proposed changes, if accepted,\npotentially be discovered out of band and Registration Policies are not assured on the presence of these optional fields. I do not understand what you want to say with the second part of the sentence. Even the first part of the sentence do not convey any real meaning.\nLGTM of URL"} +{"_id":"q-en-draft-ietf-scitt-architecture-61903ea2a399b80fe71e1ca9b57ebbc03265381a9d5839495fa7a1bd505517f0","text":"Editors note: updated with a link to the referenced text: I don't understand the SHOULD. You have to be in possession of trust anchors to verify a digital signature. It is not an optional feature.\nIt would seem this is correct contingent on your interpretation of , particularly points 2 and 3, so good point.\nIt's not the use of the anchor in verifying that is optional. It is the use of issuer verification as a gate to registration that is the problem. For ts implementations that naturaly have RBAC at the edge, the \"spam\" problem is not really a problem. Statements about statements seem to me to be a better way to deal with use cases where strong issuer verification is necessary. Registration time checks can only ever be \"point in time\"\nMy read of the Issue and the current state of the document is: This section is under , which implies the MUST refers to enabling the registration process. Currently, a registration can not be completed unless the issuer is verified, which can't occur without trust anchors. The SHOULD refers to verifiers which are not part of the Registration Policy. 
I think their inclusion in this section is confusing. See for a proposal."} +{"_id":"q-en-draft-ietf-scitt-architecture-b42bb799099eb401551bd29540a809749b5a8112d3c64f853f4fb909a7342ced","text":"Note: This PR assumes is merged, and may need to be updated to fit within the context of 's final text.\nFigure 1 needs to be explained and referenced in the text. The arrows in the figure need to be explained The figure shows two different boxes and the semantic needs to be explained"} +{"_id":"q-en-draft-ietf-scitt-architecture-feeaa58bda672b5465117f9af213867b0307dd83f6e1427c31ca3ebfe54f4c66","text":"Closes URL\nexternal attacks and internal misbehavior by some or all of the operators of the Transparency Service. What does the term \"each of these functions\" refer to?\nLGTM to Danny"} +{"_id":"q-en-draft-ietf-scitt-architecture-6e3a6a46929e78193a88b24a19e56d0e79df3af5cf29c3ca0d6a38a4d71a7fdc","text":"by allowing the use of a single, protected header parameter instead of a protected and unprotected pair, for reasons described in the issue. This allows implementations to avoid the risk highlighted at the end of , at the expense of a slightly larger protected header. This is not a purely theoretical concern, the most widely used signing service inside Microsoft has implemented this choice, and the separate from Microsoft has too.\nThe current text mandates the presence of in the protected header when using x509: URL, and allows an unprotected x5chain: URL I propose that the specification allows in the protected header, ideally as a substitute for , but in addition otherwise. The motivation for this change is to add a degree of integrity protection over , and blame for malicious payload otherwise. The current set up does not authenticate or integrity protect the contents of x5chain, only the leaf, which opens up some opportunities for attacks targeting x509 parsers. I can see why this minimal binding can be desirable in some cases (caching large cert chains), but servers that wish to be stricter to reduce their attack surface should be able to.\nThanks for the fix. Seems quite valuable to the overall operation! LGTM!LGTM thanks. For completeness of the record, I was previously against mandatory x5chain and mandatory deep, possibly online checks of the links in the chain. This PR looks eminently sensible for people who want to rely on x5chainDiscussion at the editors meeting to approve. LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-362a5b79d22c1292c556a56b85b701238454f86ee1e026184dd13be5eaea0d4c","text":"This editorial change attempts to per feedback from NAME\nThe three steps from Section 4.3 on Registration are essentially the same. To verify the signature you have to retrieve the issuer credentials and you have to do item 4. Issuer Verification: The Transparency Service MUST syntactically validate the Issuer's identity Claims, which may be different than the Client identity. Signature verification: The Transparency Service MUST verify the signature of the Signed Statement, as described in [RFC9360], using the signature algorithm and verification key of the Issuer. Signed Statement validation: The Transparency Service MUST check that the Signed Statement includes the required protected headers listed above. 
The Transparency Service MAY verify the Statement payload format, content and other optional properties.\nLGTM for reduction and lack of duplicity"} +{"_id":"q-en-draft-ietf-scitt-architecture-078b94681bbebd83f77275f2f29e114a9218c47a55d67b1f78693c811649b022","text":"This is a long sentence, maybe break it up into multiple 80 character lines? A policy could be 'unrecognized', 'unparsable', or 'not implemented'. Does 'unknown' imply all of these? Originally posted by NAME in URL"} +{"_id":"q-en-draft-ietf-scitt-architecture-4b77a80c7010b29fdee324d713932bba77d4bad86243325e7822219ef12a4d2c","text":"With no content changes (beyond spelling), build success, and to assure other PRs aren't blocked, I'm merging as editor.\n+1"} +{"_id":"q-en-draft-ietf-scitt-architecture-6bf116eea925b46bb64e935c7d449246b7d44aa6a2d2aa1958fdea3779af48fb","text":"NAME PTAL\nSince the use of COSE is mandatory, shouldn't this be MUST? Originally posted by NAME in URL\nIt would seem more appropriate to just say , rather than introduce the notion of a \"standard COSE implementation\". The latter would seem to involve a need to have a certification program for COSE implementations or something unnecessarily involved like that. I don't follow the implications of this. Can it be spelled out some more? I'll also note LGTM!"} +{"_id":"q-en-draft-ietf-scitt-architecture-83dd13628af10b499d263e66527914621cdd9fcf304bd24dadd992be348df0e0","text":"NAME Incorporated Review Comments!\nThanks NAME lets see if we can agree on append-only log|registry today so we can adapt and merge.\nFor the sake of clarity, suggesting \"append only log\"\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-3670e102ccaea8ecb63d4b8ee2d9f4b8bbfb5584690e0f0bfb4a11a6c6470afb","text":"As proposed in at the WG meeting in Tokyo, we would like to move receipts as a generic primitive in COSE. (see issue ) This PR removes all explicit references to the SCITT receipt ID and instead defines Receipts based on . Note that I also incorporated 2 proposed changes to make SCITT statement and receipt signature more symmetrical: Receipts now include the DID of the transparency service. This can be optionally used to discover the receipt signing key. This change was originally proposed at the IETF WG meeting in London. It simplifies statement authorization, as verifiers can only trust the stable DID of the TS instead of worry about keys or certificate that may be rotated Transparency services can now indicate the registration policies that were actually applied. This is very helpful for verifier, for instance to check the freshness of claims. This was proposed in Tokyo\nLGTM, see this implementation: URL and demo: URL\nNAME can you help resolve the errors?\nHi, Does this change mean that implementations based on native merkle tries (eg will have to translate proofs to this format ? And if that is so, what is the basis for trusting the service that does that ? Or is there scope for registering trie algorithms based on native proofs like ERC 1186 ?\nAs we're working to complete for 116, I'm assigning to 116\nSee for a proposed implementation\nClosing as has been merged\nI'd like to more generally discuss additional information provided by the TS as it issues receipts: Some information \"at the root\" refers to its signing (timestamp, keying, ID, etc) but implicitly applies to every statement recorded so far. As discussed at the last hackathon, some information \"at the leaf\" is the outcome of applying the TS registration policy to each signed statement, including e.g. 
the configuration of the TS at that point (timestamp, authorized keys, DID resolution results, etc) LGTM"} +{"_id":"q-en-draft-ietf-scitt-architecture-42c5898ef50849743e124ed471ad63c4107777ccc911778304ce2da40f6dd869","text":"NAME This is a fundamental bug in the spec, we need to correct it ASAP!\nNAME Thank you for the review. I agree with most of your edits.\nNAME Thanks, I have done all the required edits, please have a look!\nLGTM Line item at 962 could be done later"} +{"_id":"q-en-draft-ietf-scitt-architecture-00ec114b31789eed3f120dbbe90c3c7299bc007958a74f5d9c41129881694044","text":"This PR attempts to address the problem of documenting the various ways that known key content types, such as and are discovered from opaque strings or URIs such as and . This PR also explains how DIDs are similar to OIDC.\nI have a working demo of this on URL, URL URL It's also possible to do the same thing with did methods other than , for example ... There is a demo of that working with index db here: URL This shows that there is interoperability across issuers that use different did methods, that conform to the same resolution and dereferencing interfaces.\nmade comments, but I think a bit more discussion\/work is needed. Asked Kristina to also look at this. A proposal for discussion @ IETF 117 Hackathon and meeting seems very useful to me. :+1: LGTM to merge as a baseline for future discussion"} +{"_id":"q-en-draft-ietf-scitt-scrapi-41df5327cac5d6cf35944bd8564c2ac9e6b60cd922011a26773af0461bf292e1","text":"This PR updates SCRAPI to match the latest version of the architecture.\nI tried pushing a revision to the datatracker from my fork... but it did not work... so unless this is published by some other author with write access before Monday, we will be -00 for a solid IETF with no updates : )\nMerging based on requests to meet the draft cut off... assuming CI passes\n:ship: it\nWe should get this up ASAP before the cutoff and prior to hackathon"} +{"_id":"q-en-draft-ietf-scitt-scrapi-068974a27bc6c68408d2f63dc546dd9ba95eaa3b475d766968999376c0314ce8","text":"Issue Also harmonize title with other endpoints\nIn the case of a government end point, why must client authentication be implemented? The signature on the statement is the authentication for the content and is ultimately who is making a reputation statement.\nAgreed. In addition to this I think there are several places where we need to clean up seemingly helpful implementation comments which are actually harmful. For instance the mention of using auth for rate limiting is a fine thing to say but it's nothing to do with SCITT and will be done in other layers by any sensible implementation.\nResolved by\nOn internal or truly public instances this could be MAY"} +{"_id":"q-en-draft-ietf-scitt-scrapi-6cc8582bfa34b4bda892f80912b0d418b0c04d9a0024891c5c9a0d366be16d34","text":"Issue Exposes another issue with signed statement identifiers which is being fixed in a different PR.\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-scrapi-a7b7a2eda5371e434f5f88e6dfe8782c31a37d4a5d1a4c8083343d6617ffc606","text":"Issue\nIn the case of a government end point, why must client authentication be implemented? The signature on the statement is the authentication for the content and is ultimately who is making a reputation statement.\nAgreed. In addition to this I think there are several places where we need to clean up seemingly helpful implementation comments which are actually harmful. 
For instance the mention of using auth for rate limiting is a fine thing to say but it's nothing to do with SCITT and will be done in other layers by any sensible implementation.\nResolved by\nThe optional API endpoints have intent statements that outline the goal of the Endpoints for example 2.2.1 (Issue Signed Statement Endpoint) has this intent statement: Such a statement is currently missing from both: (Transparency Configuration) and 2.1.2 (Signed Statement Registration).\nI'll take this one and put a PR up with consistency for all the endpoints\nThis was fixed in\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-scrapi-478c63ddf46b5154488ae661a9f95b98b3a68b3fb28e8646a7e6471ec3c206ea","text":"Updated error returns to be CBOR, and a couple of other things. Still more to do\nSince we are talking about , does this mean we must refactor this API endpoint's response with minimally viable CBOR and a CDDL spec in place of JSON?\nYes. Discussed at the hackathon and will make a proposal as part of URL\nAdded to PR - LMK if you like it\nI will have to experiment with it more, but so far I like it. You can link this issue to close it once you merge . Thanks.\nMerged.\nIt's sufficient\/better to just ResolveSignedStatement (meaning you get back what was put in) and then the client can pull the Statement out if they want it. Reduces complexity for no loss of functionality\nSo no separate endpoint, you mean?\nReviewed and merged in\nShowing keys without saying what they're for is unhelpful Cedric suggests it might be even better to remove the specifics and just include a standard key discovery locator (which also helps avoid format\/types arguments in this doc)\nI have attempted to resolve this by removing it in the translation to CBOR in PR - will this suffice?\nReviewed and addressed in\nLGTM"} +{"_id":"q-en-draft-ietf-scitt-scrapi-543ddf617050536547a2b107278b93be20e92f8eae7fbf49396ab5c36c122073","text":"It's not clear why this would be needed, and it doesn't reflect any patterns in the Architecture. Remove?\nNAME there's a reference to \"proof of possession\", so my guess it that this is meant to enable a flow where the client requests a nonce, includes it in the signed statement (protected header or payload) before signing and submission, to \"prove\" that they held the key at the time of submission, if it is substantially different from the time of signing. I agree that this should be removed because: It's poorly specified for its stated purpose, without mandatory transport-level security, it's possible to effectively DoS a client by using the nonce they requested quicker than they can. Without authentication, a malicious client can also spam the service for fresh nonce values, causing legitimate ones to fall out of the expiry window before they can be used. There is no mention of a scenario where this is helpful or necessary. If such a scenario emerges, where an issuer signs or has signed indiscriminately in advance, and it is desirable to be stricter about what can be made transparent, there are two easier mechanisms the registration policy can use that don't require an awkward stateful API extension such as this: in the signed statement a counter-signature with a submission control key (which may or may not be the same as one of the issuer keys) Neither requires additional API surface, nor creates additional DoS opportunities. Opened\nagree to remove. I believe the intent for including that was the cnf claim in a cwt. But per URL it is for the protocol to define if and how a nonce should be used. 
Removing this does not preclude the use of cnf. And, at the same time, avoids under (or over) specifying in the wrong place.\nAgree I don't believe this endpoint is necessaryLGTM"} +{"_id":"q-en-draft-ietf-scitt-scrapi-288b649772689ac21e56f56fd5248683124b0f86cc4e08787279a76f2a7d6f76","text":"Hackathon entry for 119, fill in required elements from RFC 3552\nI merged the markdown nits, but left the suggestions that should be reviewed by others.\nA bunch of Markdown Nits, and some questions on the correlation between Issuer and RBAC, but happy to merge and iterate through additional PRs"} +{"_id":"q-en-draft-ietf-scitt-software-use-cases-47d49ba11f61b1058e824c92b5e3e395dcd7bc33afb2b6020c79985ddc1a6edb","text":"Account for nits of punctionation, consistent Title Case usage and consistency with bulleted content. This PR does not change the definition or examples of the use cases.\nThe GitHub UX makes it look more than it is. This is just re-formatting of existing content into bullets.\nMore then spelling nits & style in , but I am okay with the addition."} +{"_id":"q-en-draft-ietf-scitt-software-use-cases-3da816fc9505f78e08f009c4e663f8c2c8399f60174d3cfb4b50139dd2ca68ed","text":"Provides baseline discussion for clarifying . See\nMinor nits and there is no paragraph\/subsection included in the proposed use case"} +{"_id":"q-en-draft-ietf-scitt-software-use-cases-317f163d79467979b08e16dff48b878810f1ee2ab5078a69ce7d8a6b1d2f192a","text":"I believe this is a typo, but perhaps I misunderstood the intended meaning.\nGood catch, NAME Thanks for the PR\nLGTM"} +{"_id":"q-en-draft-ietf-snac-simple-77f3037b30636f413c2bead842aa435a58494c7f2d67dc4a51140ab324a862b4","text":"AIL is used as an acronym in multiple places but not defined. also put colons on the glossary terms\nI don't think there was consensus (even close) on the use of CPE router, unfortunately. I don't really agree with s\/need\/needs to\/, but maybe s\/need take no action\/does not need to take action\/ for better clarity?"} +{"_id":"q-en-draft-ietf-snac-simple-7ebe1290d4a49e7378406729af1cef38a65b1318cdc3ff302bc8b3f42dc9bc2f","text":"The working group didn't actually agree on the use of the term \"CPE Router\", and there were substantive objections to it. I've used the term Home Gateway here, and included a definition that I think includes both ISP-provided routers with integrated uplink support (cable modem or DSL modem) and also includes routers where the WAN link is an ethernet connector. Not clear that this is correct, but hopefully it's evolving in the right direction."} +{"_id":"q-en-draft-ietf-snac-simple-03f462fb5636a22a188d4b7e6cdf473acef37f664795501d1d2235b95608fb6e","text":"…flag in the Prefix Information Option flags, but in fact Jonathan's document describing the flags field sets it in the Router Advertisement Flags Extension option. We discussed this in 6man and I think the general consensus was to just use one of the remaining reserved bits rather than requiring the additional extension option prematurely, so I'm updating the text to say that, and have asked Jonathan to update the stub router flags document accordingly."} +{"_id":"q-en-draft-ietf-snac-simple-1b55c41ecf1ac13525c28f6ecd7e4aaba391642eb976c7898038a08db9959ebe","text":"see URL in the working group meeting in London, and in both cases, it seemed that the argument that without NAT64 we can't deliver on \"device must always be able to connect to the internet if it is available\" overrode the objections that were raised. 
So despite some folks really not loving it, I think it's required. Michael (IIRC) also pointed out that we need DHCPv6 PD to get infrastructure-provided NAT64 to work, which I guess means that if we have IPv4 and infrastructure-provided NAT64, but no DHCPv6 PD, we have to do NAT64 in the stub router. If we have v6-only, infrastructure-provided NAT64, and no DHCPv6 PD, then in order to reach the internet, we'd need NAT66. I'm inclined to declare this situation out of scope, because (A) I really don't want to do NAT66, and (B) I think it's not a likely scenario at the moment, and the more devices out in the field that would break in this situation, the better."} +{"_id":"q-en-draft-ietf-snac-simple-74cfc3ea3298ad5f03902830ab1b1c271c14cf7a2752a2ea777bc5417248a05b","text":"Added a bunch of clarifying text about why to use the DHCPv6 PD prefix, and also clarified the bits about NAT64 with respect to when to use Infrastructure-provided versus stub-router-provided NAT64.\nSee URL in the working group meeting in London, and in both cases, it seemed that the argument that without NAT64 we can't deliver on \"device must always be able to connect to the internet if it is available\" overrode the objections that were raised. So despite some folks really not loving it, I think it's required. Michael (IIRC) also pointed out that we need DHCPv6 PD to get infrastructure-provided NAT64 to work, which I guess means that if we have IPv4 and infrastructure-provided NAT64, but no DHCPv6 PD, we have to do NAT64 in the stub router. If we have v6-only, infrastructure-provided NAT64, and no DHCPv6 PD, then in order to reach the internet, we'd need NAT66. I'm inclined to declare this situation out of scope, because (A) I really don't want to do NAT66, and (B) I think it's not a likely scenario at the moment, and the more devices out in the field that would break in this situation, the better."} +{"_id":"q-en-draft-ietf-snac-simple-99eac16635f879ab7471a7f57ca5e74ae1d19145713fe77304f79d85e1de46bd","text":"I added a section about services provided by the Stub Network. This will need some more work, but it seems useful to have, and it now says that DHCPv6 is generally NOT RECOMMENDED but is permitted, and MUST be off by default.\n(Based on recent discussion in v6ops) We don't specify yet the type of hosts on the IPv6 stub network in detail. For example, an IoT device host only needs an address which it can configure with SLAAC based on the \/64 on-mesh\/on-link prefix. However, there could also be more complex hosts in principle that need their own prefixes (e.g. \/64) delegated to it for internal usage and they may like to use their DHCPv6-PD client for that. For example for running multiple virtual machines \/ dockers inside the host where these VMs are on their own virtual network segment. I expect these cases to be out of scope and there's no requirement on the stub router to do either DHCPv6-PD server or relay role. 
Do we need to make this explicit in a goal \/ non-goal \/ out-of-scope section ?\nWith my AD hat on, I sincerely hope to keep the stub network with simple devices, else the SNAC WG will try to make a simple problem too complex ;-) Without any hat, I do not mind having an applicability statement with non-goals in it."} +{"_id":"q-en-draft-ietf-snac-simple-aa4af768c58bb3d6e3ed07ce61aa021349493dca1866176b2605e5c86b1d48e5","text":"Didn't have time to add a state machine, but documented the ways that NAT64 advertising can happen.\nThere is a set of possible circumstances that we need to think about for infrastructure-provided NAT64: There is PD support, and infra is providing NAT64 There is no PD support, infra is providing NAT64, and there is no IPv4 on infra There is no PD support, infra is providing NAT64, and there is IPv4 on infra There is PD support, infra is not providing NAT64, infra is providing IPv4 In case (1), we can (and should) use the infrastructure-provided NAT64 service. In case (2), we have no way to do NAT64, because we don’t have an infrastructure-routable OMR prefix on the Thread mesh. So in this case, reachability to the cloud, both for IPv4 and IPv6, is not available. In case (3), although infrastructure is providing NAT64, we can’t use it. There is no IPv6 reachability to the internet, but a BR can provide its own NAT64 service to enable reachability to the internet over IPv4. In case (4), BR can provide NAT64 for IPv4 reachability to the internet, but additionally IPv6 reachability to the internet is present. Add text to the document to describe how we handle these situations, including state machine if appropriate."} +{"_id":"q-en-draft-ietf-snac-simple-b5d0eedf3e29bf6866eac8369bccbfe7708ed6a35c50f418e8fc708cab664c25","text":"states: | If IPv6 prefix delegation is available, which implies that IPv6 | service is also available on the infrastructure link, then the stub | router MAY use IPv6 prefix delegation to acquire a prefix to | advertise on the stub network, rather than allocating one out of its | ULA prefix. and 5.2.3: | Therefore, when DHCPv6-PD is available, the stub router MUST use DHCPv6 | PD rather than its own prefix. That's contradictory, I suppose 5.2.2 should say: | If IPv6 prefix delegation is available, which implies that IPv6 | service is also available on the infrastructure link, then the stub | router MUST use IPv6 prefix delegation to acquire a prefix to | advertise on the stub network, rather than allocating one out of its | ULA prefix. Or maybe just drop the last paragraph from 5.2.2 entirely. URL\nEsko adds: What about the case where DHCPv6-PD is available, but the stub router doesn’t get the desired prefix length or encounters some error in the process? In that exceptional case it still MAY allocate a prefix out of its ULA prefix. Depending on the reader, “is available” and “works for me” can be different things.\nMy suggestion: Right, MAY would be way too unspecific in that case. MAY basically says \"we aren't saying you shouldn't do this, nor that you mustn't do this.\" :) So I think, as you say, that what we really need here is to be more specific about exactly what \"is available\" means. So, perhaps: If IPv6 prefix delegation and IPv6 service are both available on the infrastructure link, then the stub router MUST attempt to acquire a prefix using IPv6 prefix delegation. 
and Therefore, when the stub router is able to successfully acquire a prefix using DHCPv6-PD, it MUST use DHCPv6 PD rather than its own self-generated ULA prefix.\nExplain what to do if the prefix is too narrow, and also if it is too wide. states: | If IPv6 prefix delegation is available, which implies that IPv6 | service is also available on the infrastructure link, then the stub | router MAY use IPv6 prefix delegation to acquire a prefix to | advertise on the stub network, rather than allocating one out of its | ULA prefix. and 5.2.3: | Therefore, when DHCPv6-PD is available, the stub router MUST use DHCPv6 | PD rather than its own prefix. That's contradictory, I suppose 5.2.2 should say: | If IPv6 prefix delegation is available, which implies that IPv6 | service is also available on the infrastructure link, then the stub | router MUST use IPv6 prefix delegation to acquire a prefix to | advertise on the stub network, rather than allocating one out of its | ULA prefix. Or maybe just drop the last paragraph from 5.2.2 entirely. In my defence, I have been left unsupervised."} +{"_id":"q-en-draft-ietf-snac-simple-6736b084435a9705405bed7a1c01a82eb76c4655a38db7d55ed70fedb07234ee","text":"Actually, the two stub routers might not be connected to the same stub network, and that should be okay. The point is to stabilize the on-link prefix on the AIL if it's provided by a stub router. If stub routers A and B are connected to the same AIL, only one will advertise an on-link prefix. If that one reboots, the other will remember that that prefix is on-link, so it will do the right thing with responses sent to hosts on that prefix. The goal is to make sure that the router that rebooted also considers that prefix on-link, even though it's never seen an RA. I think this section needs a bit of work; the clarifications you've requested would be part of that, but it was useful to re-think this based on your comments as well, because I think I see how to make this work in a relatively clean way. If we are not advertising the prefix on the link, because there's a usable prefix on the link, and we get a message from the stub network for an address on the remembered prefix, there are the following possibilities that I can think of: The prefix is a DHCP prefix, meaning it's routable and doesn't belong to the stub router. In this case, the stub router could try to renew it; if that succeeds, it could assume that the prefix is valid on the AIL and forward packets to that prefix to the AIL. If that fails, it assumes the prefix is only reachable through a router, so it forwards the packet using its routing table, and hopes for the best. The prefix is a prefix generated by that stub router. In this case, if the stub router is on a different AIL, it's immaterial, since the prefix isn't routable. So the stub router can assume that that prefix is on-link and forward packets for that prefix to the link; if they reach their intended destination, great. If not, nothing is lost. The prefix is a stub-network-specific prefix. As an example, Thread uses the thread XPANID, which is a 48-bit number unique to a Thread network and its set of credentials, to generate a ULA prefix for the AIL. This means that if there are several stub routers connected to the same AIL and the same Thread network, they will all always advertise the same ULA on the AIL if there's no usable prefix there. This has a really nice benefit of increasing the stability of numbering on the wire in the presence of stub router reboots. 
This prefix isn't routable, so if the stub router gets connected to a different AIL, there's not much it can do about it. It's not actually clear what to do in this situation, since if there's another stub router still connected to the old AIL, we have ambiguous routing to that prefix on the stub network. In the case of Thread, we don't consider this a serious concern, since thread has similar locality to home networks: if you can't reach the home router, you probably can't reach the thread network you were connected to before either.\nNot all aspects described here are fully clear yet, and there's a tension between this text and other PR text that says ULA Site Prefix is used to generate the AIL ULA link prefix. But it's best to merge the new text now and use that as a basis for further review!"} +{"_id":"q-en-draft-ietf-snac-simple-f649e2adacd64036d0807c60d803c9b17fdc9bbd7a8c41d027d8af49d12daa01","text":"The Makefile allows easy building of .txt, .html, or both output files.\nNAME Maybe you can merge this PR, or give me write access so I can do it ;-) ?"} +{"_id":"q-en-draft-ietf-snac-simple-040a38887a83fc0b23c407bb1d593a3110cf08d01446b463cdbad26ddab2981a","text":"NAME or NAME , can you merge this (now-updated) PR, or give me write access so I can do this ;-) ?\nI don't know how to give you write access, so I merged it. :)\nBased on the original comment: URL and the response: URL it is proposed to add requirements for the stub router behavior on the AIL regarding: advertising itself as a default router on the AIL; advertising a default route on the AIL. For correct operation the stub router MUST NOT advertise both of these. While Section 5.4 has requirements for default router\/route advertising on the stub network, there is no section yet with specific requirements for the AIL side. This needs to be added. (FYI NAME\nSee also some more detailed suggestions at the bottom of mail: URL\nThis issues was self-assigned to NAME Hi, That was the case in the past, but RFC 6724 introduced a \"label\" for ULA that is distinct from GUA. Since matching labels are preferred, the match between two IPv4 addresses (one label for IPv4) is preferred over the mismatched labels of GUA and ULA. There is another possible failure case, when a remote ULA is received via DNS although the traffic would have to go via Internet. This case is solved in RFC 6724 by specifying a lower \"precedence\" for ULA than for IPv4. (Not all hosts implement the latter part, e.g., GNU\/Linux has a higher preference for ULA than for IPv4.) That should work, but RFC 4191 states that in section 4: \"When ceasing to be an advertising interface and sending Router Advertisements with a Router Lifetime of zero, the Router Advertisement SHOULD also set the Route Lifetime to zero in all Route Information Options.\" RFC 4861 describes in section 6.2.3: \"A router might want to send Router Advertisements without advertising itself as a default router. For instance, a router might advertise prefixes for stateless address autoconfiguration while not wishing to forward packets. Such a router sets the Router Lifetime field in outgoing advertisements to zero.\" So using a router lifetime of 0 does not imply that the interface is not an advertising interface. This combination does not seem obvious to me, but it seems valid. Going back to RFC 4191, section 5.2 mentions the combination of router lifetime 0 with RIOs with lifetime > 0 as a solution for type C hosts. 
The solution for type B hosts for a single isolated network is preference based, and there is no solution for type A hosts. I think that Route Information Options should be accepted from RAs with a Router Lifetime of 0 according to those RFCs. Yes, several stub routers each using each others as Internet gateways would not work. Thus I think that stub routers must not advertise themselves as default routers on the AIL. Perhaps this could be made explicit? I would like the draft to be explicit regarding RA contents, preferably with a rationale. I would really like to read short explanations for the specific settings, i.e., how this is supposed to work. It would be great if draft-ietf-snac-simple could mention section 5.2 of RFC 4191 and that a lifetime of 0 should suffice for type A and B hosts to ignore the stub router, i.e., do no harm, but a type C host is required for stub network reachability from the host. Since supporting a type B host would risk creating problems for a type A host, I would prefer SNAC to only work for type C hosts. As soon as more than one stub network is connected to the AIL, only type C hosts can reliably choose the correct stub router anyway. The case of multiple stub networks is more complex than the scenario from RFC 4191 section 5.2. Best regards, Erik A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable. -- Leslie Lamport\nHello, I can try to respond from my viewpoint. This could help to identify the open issues. I don't see how RFC 8028 is relevant here. It has the following applicability: IPv6 host in a network that has more than one prefix, each allocated by an upstream network that is assumed to implement BCP 38 [RFC2827] ingress filtering The network\/link (AIL) considered for SNAC-simple is a link with a single IPv6 on-link prefix. Either a global\/ULA prefix allocated by e.g. a home router, or if that is missing, a ULA prefix allocated by one of the stub routers. So there's not \"more than one prefix\" and I don't see immediately if any problem would occur if we would have > 1 prefix. For example the IPv6 host on the AIL would remain reachable from an on-stub-network host, regardless of the IPv6 source address that the host on the AIL used. See also below for more info on this case for the packets in the other direction. This part I agree should be clarified in SNAC-simple. I assumed a Type C host for sure. And I would consider Type A and Type B to be even out of scope if required to keep the solution simple! One question is: what if a Type A\/B host sent an IPv6 packet to the ULA dest address of a stub-Host ; and the packet would be routed to the default router of the home network? (E.g. home router \/ Wi-Fi AP \/ ISP box) Wouldn't that default router immediately discover that the correct route is to send it to the stub router, based on the RA RIO advertisements? So the routing is not most efficient then but it works. If that is the case, we can still support Type A\/B hosts although with some degraded performance\/latency. I think we could clarify that the stub router must never ever advertise itself as a default router (or advertise a default route ::\/0) on the AIL. This isn't in the draft yet. Not sure if that is what you meant here? The existing mentions of \"default route(r)\" in SNAC-simple are pertain to advertising itself as default route on the stub network side, not the AIL side, I believe. 
Given the above, even if the host would not follow rule 5.5, it would pick another source address, but still that address would be based on an IPv6 prefix that is on-link. So it means the stub router can correctly route back to that prefix and use IPv6 ND to find the host back on the link. And the IPv6 host if it is type C would still use the RA RIO advertisements to pick the next-hop router, regardless of its IPv6 source address. The only question I have is for the Type A\/B hosts - they might pick a default router instead and the question is there if it would route back the packet to the right stub router on the AIL instead of sending the packet out onto the wider Internet. (My expectation is that it would work, but ... ) But this Type A\/B host problem is there independent of the source address selection. Or am I missing something? Also keep in mind that there are in total 3 possible cases for a given stub router: Stub router uses its generated ULA prefix to assign a prefix for the stub network; and also to assign a prefix for the AIL. Stub router uses its generated ULA prefix to assign a prefix for the stub network; but another stub-router (there may be N stub routers in total, each with a different stub network) has assigned the AIL prefix. That means the prefixes don't use a shared ULA prefix. Stub router uses its generated ULA prefix to assign a prefix for the stub network; but another non-stub-router (e.g. home AP) has already assigned a prefix for the AIL. That means the prefixes don't use a shared ULA prefix. So we cannot rely on features that only work well when a single ULA prefix is used to derive the AIL on-link prefix and the stub network on-link prefix. Esko -----Original Message----- From: Snac On Behalf Of Erik Auerswald Sent: Sunday, August 20, 2023 20:24 To: SNAC List Subject: [Snac] Host Requirements for SNAC Use? Hello SNAC WG, when looking at the basic idea of connecting a stub network via stub router to an existing link (the AIL), in general a host on the AIL would need to use the stub router as gateway to reach the stub network. If this host already has IPv6 connectivity to the Internet, the stub router may be added to that list. Now the host needs to determine which gateway to use for a given packet. Both the IETF Proposed Standard RFCs and existing host implementations allow to choose the wrong gateway, i.e., a gateway that does not have a next-hop to deliver the packet. It seems to me as if some assumptions beyond the most minimal IPv6 host implementations are required to connect a stub network via stub router without additional configuration. RFC 8504 (BCP 220) requires adherence to RFC 4191 which describes three host types called A, B, and C. Only type C hosts can reliably choose the correct gateway. BCP 220 also recommends (but does not require) adherence to RFC 8028 for first-hop router selection. Among other contents, RFC 8028 recommends use of RFC 6724 section 5 rule 5.5. If the host behaves like a type C host from RFC 4191, and the stub router advertises the stub prefix(es) as off-link, the host chooses the stub router to reach the stub network. If the stub router also uses a low default router preference there is some chance that the host will not use it to reach destinations not on the stub network. Perhaps this could be added to draft-ietf-snac-simple? 
If the host conforms to RFC 6724, and follows RFC 6724 section 5 rule , and it implements the procedure described in RFC 6724 section 10.6, and the stub router advertises a ULA prefix as on-link on the AIL, and the stub network uses ULAs from the same 48-bit global ULA prefix as advertised by the stub router on the AIL, then the host will use the stub router to reach the stub network, and will not use it to reach other destination. Perhaps this could be added to draft-ietf-snac-simple? Since a host may decide to send packets for destinations off the stub network to the stub router, it might be a good idea to require the stub router to forward such packets to a default router detected on the AIL (e.g., via RA). If there is no IPv6 default router on the AIL, the stub router should make it likely that connecting it does not break IPv4 connectivity to the Internet. If the host conforms to RFC 6724, use of ULAs by the stub router suffices. This, of course, means that the RFC 6724 preference of IPv4 over ULA is required for safe stub router use. Perhaps this could be added to draft-ietf-snac-simple? Best regards, Erik Inside every large problem is a small problem struggling to get out. -- Hoare's Law of Large Problems Snac mailing list EMAIL URL\nHello SNAC WG, when looking at the basic idea of connecting a stub network via stub router to an existing link (the AIL), in general a host on the AIL would need to use the stub router as gateway to reach the stub network. If this host already has IPv6 connectivity to the Internet, the stub router may be added to that list. Now the host needs to determine which gateway to use for a given packet. Both the IETF Proposed Standard RFCs and existing host implementations allow to choose the wrong gateway, i.e., a gateway that does not have a next-hop to deliver the packet. It seems to me as if some assumptions beyond the most minimal IPv6 host implementations are required to connect a stub network via stub router without additional configuration. RFC 8504 (BCP 220) requires adherence to RFC 4191 which describes three host types called A, B, and C. Only type C hosts can reliably choose the correct gateway. BCP 220 also recommends (but does not require) adherence to RFC 8028 for first-hop router selection. Among other contents, RFC 8028 recommends use of RFC 6724 section 5 rule 5.5. If the host behaves like a type C host from RFC 4191, and the stub router advertises the stub prefix(es) as off-link, the host chooses the stub router to reach the stub network. If the stub router also uses a low default router preference there is some chance that the host will not use it to reach destinations not on the stub network. Perhaps this could be added to draft-ietf-snac-simple? If the host conforms to RFC 6724, and follows RFC 6724 section 5 rule , and it implements the procedure described in RFC 6724 section 10.6, and the stub router advertises a ULA prefix as on-link on the AIL, and the stub network uses ULAs from the same 48-bit global ULA prefix as advertised by the stub router on the AIL, then the host will use the stub router to reach the stub network, and will not use it to reach other destination. Perhaps this could be added to draft-ietf-snac-simple? Since a host may decide to send packets for destinations off the stub network to the stub router, it might be a good idea to require the stub router to forward such packets to a default router detected on the AIL (e.g., via RA). 
If there is no IPv6 default router on the AIL, the stub router should make it likely that connecting it does not break IPv4 connectivity to the Internet. If the host conforms to RFC 6724, use of ULAs by the stub router suffices. This, of course, means that the RFC 6724 preference of IPv4 over ULA is required for safe stub router use. Perhaps this could be added to draft-ietf-snac-simple? Best regards, Erik Inside every large problem is a small problem struggling to get out. -- Hoare's Law of Large Problems"} +{"_id":"q-en-draft-ietf-snac-simple-61f7a87c5230ec8e01b19ae4f740a4179550d26930d5000f153eea2b2263c506","text":"Based on comment: URL and responses: URL URL it is clear that the SNAC stub router solution may in some cases not work if the IPv6 Host on the AIL that needs to send an IPv6 packet to a host on a stub network is a Type A or Type B Host. In fact, our assumption so far has been that it is a Type C host. Proposal: add clarification that solution scope is intended for Type C hosts on the AIL. (Though it might work for Type A and B hosts in particular cases, this is not in our scope.) Refer to RFC 4191 Section for the 3 host types. (FYI NAME\nLooks like a good place to add this scoping information - in the goals section. Thanks Hello, I can try to respond from my viewpoint. This could help to identify the open issues. I don't see how RFC 8028 is relevant here. It has the following applicability: IPv6 host in a network that has more than one prefix, each allocated by an upstream network that is assumed to implement BCP 38 [RFC2827] ingress filtering The network\/link (AIL) considered for SNAC-simple is a link with a single IPv6 on-link prefix. Either a global\/ULA prefix allocated by e.g. a home router, or if that is missing, a ULA prefix allocated by one of the stub routers. So there's not \"more than one prefix\" and I don't see immediately if any problem would occur if we would have > 1 prefix. For example the IPv6 host on the AIL would remain reachable from an on-stub-network host, regardless of the IPv6 source address that the host on the AIL used. See also below for more info on this case for the packets in the other direction. This part I agree should be clarified in SNAC-simple. I assumed a Type C host for sure. And I would consider Type A and Type B to be even out of scope if required to keep the solution simple! One question is: what if a Type A\/B host sent an IPv6 packet to the ULA dest address of a stub-Host ; and the packet would be routed to the default router of the home network? (E.g. home router \/ Wi-Fi AP \/ ISP box) Wouldn't that default router immediately discover that the correct route is to send it to the stub router, based on the RA RIO advertisements? So the routing is not most efficient then but it works. If that is the case, we can still support Type A\/B hosts although with some degraded performance\/latency. I think we could clarify that the stub router must never ever advertise itself as a default router (or advertise a default route ::\/0) on the AIL. This isn't in the draft yet. Not sure if that is what you meant here? The existing mentions of \"default route(r)\" in SNAC-simple are pertain to advertising itself as default route on the stub network side, not the AIL side, I believe. Given the above, even if the host would not follow rule 5.5, it would pick another source address, but still that address would be based on an IPv6 prefix that is on-link. 
So it means the stub router can correctly route back to that prefix and use IPv6 ND to find the host back on the link. And the IPv6 host if it is type C would still use the RA RIO advertisements to pick the next-hop router, regardless of its IPv6 source address. The only question I have is for the Type A\/B hosts - they might pick a default router instead and the question is there if it would route back the packet to the right stub router on the AIL instead of sending the packet out onto the wider Internet. (My expectation is that it would work, but ... ) But this Type A\/B host problem is there independent of the source address selection. Or am I missing something? Also keep in mind that there are in total 3 possible cases for a given stub router: Stub router uses its generated ULA prefix to assign a prefix for the stub network; and also to assign a prefix for the AIL. Stub router uses its generated ULA prefix to assign a prefix for the stub network; but another stub-router (there may be N stub routers in total, each with a different stub network) has assigned the AIL prefix. That means the prefixes don't use a shared ULA prefix. Stub router uses its generated ULA prefix to assign a prefix for the stub network; but another non-stub-router (e.g. home AP) has already assigned a prefix for the AIL. That means the prefixes don't use a shared ULA prefix. So we cannot rely on features that only work well when a single ULA prefix is used to derive the AIL on-link prefix and the stub network on-link prefix. Esko -----Original Message----- From: Snac On Behalf Of Erik Auerswald Sent: Sunday, August 20, 2023 20:24 To: SNAC List Subject: [Snac] Host Requirements for SNAC Use? Hello SNAC WG, when looking at the basic idea of connecting a stub network via stub router to an existing link (the AIL), in general a host on the AIL would need to use the stub router as gateway to reach the stub network. If this host already has IPv6 connectivity to the Internet, the stub router may be added to that list. Now the host needs to determine which gateway to use for a given packet. Both the IETF Proposed Standard RFCs and existing host implementations allow to choose the wrong gateway, i.e., a gateway that does not have a next-hop to deliver the packet. It seems to me as if some assumptions beyond the most minimal IPv6 host implementations are required to connect a stub network via stub router without additional configuration. RFC 8504 (BCP 220) requires adherence to RFC 4191 which describes three host types called A, B, and C. Only type C hosts can reliably choose the correct gateway. BCP 220 also recommends (but does not require) adherence to RFC 8028 for first-hop router selection. Among other contents, RFC 8028 recommends use of RFC 6724 section 5 rule 5.5. If the host behaves like a type C host from RFC 4191, and the stub router advertises the stub prefix(es) as off-link, the host chooses the stub router to reach the stub network. If the stub router also uses a low default router preference there is some chance that the host will not use it to reach destinations not on the stub network. Perhaps this could be added to draft-ietf-snac-simple? 
If the host conforms to RFC 6724, and follows RFC 6724 section 5 rule , and it implements the procedure described in RFC 6724 section 10.6, and the stub router advertises a ULA prefix as on-link on the AIL, and the stub network uses ULAs from the same 48-bit global ULA prefix as advertised by the stub router on the AIL, then the host will use the stub router to reach the stub network, and will not use it to reach other destination. Perhaps this could be added to draft-ietf-snac-simple? Since a host may decide to send packets for destinations off the stub network to the stub router, it might be a good idea to require the stub router to forward such packets to a default router detected on the AIL (e.g., via RA). If there is no IPv6 default router on the AIL, the stub router should make it likely that connecting it does not break IPv4 connectivity to the Internet. If the host conforms to RFC 6724, use of ULAs by the stub router suffices. This, of course, means that the RFC 6724 preference of IPv4 over ULA is required for safe stub router use. Perhaps this could be added to draft-ietf-snac-simple? Best regards, Erik Inside every large problem is a small problem struggling to get out. -- Hoare's Law of Large Problems Snac mailing list EMAIL URL\nHi, I was looking for existing methods for a host to select the correct gateway when there are more than one. RFC 8028 is one example. I agree. I would not expect an existing home router to look at any RAs from inside the home network, much less to populate its routing table with the information from those. As such, the home router would either forward the packet via its default route, or drop it. I doubt that to be the case. I think that a stub router following the new SNAC ideas could learn routes from RAs received on the AIL, and use these to forward packets towards the home router or even other stub networks. But as fas as I know this is not common home router behavior. This might be nice to avoid breaking Internet connectivity when adding a stub router to a home network But for reliable connectivity to the stub network, the host would still need to select the stub router as gateway. If the stub router sends an RA with a non-zero lifetime, it advertises itself as a default router. An RFC 4191 type B or C host would evaluate a default router preference and should thus prefer the home router if the stub router uses a Low Preference (unless the home router were also to use a Low Preference), but an RFC 4191 type A host does not evaluate the Preference. I would suggest to require a stub router to use a Low Preference in the RA, but a High Preference for its stub network prefix(es). I have read RFC 6724 again, and I think I made a mistake. RFC 6724 expects that the first hop gateway is selected via destination address, and then describes how to optionally choose a better source address for use with this gateway. This does pertain to RFC 8028, but not to SNAC. I think you are correct. I would expect independent stub networks to use different 48-bit ULA prefixes. In general, a single stub network should suffice with using a single 48-bit ULA prefix, but current implementations may not restrict themselves to this since it is not seen as necessary. I think we cannot rely on many features to be consistently present in existing home networks. Currently, SNAC implicitly requires RFC 4191 type C host behavior. This could be made explicit. I think that would improve the draft. 
I do not see a rationale for the requirement of a single ULA prefix in draft-ietf-snac-simple-02. It seems logical that a stub router would not need more than one ULA prefix, and one such prefix might be nice. It might help in troubleshooting. It might help keep RAs sufficiently small. It might help keeping routing tables small. It might work together with source address selection to help selecting the stub router as first hop gateway (but that is not described in RFC 6724, but a rather different idea). With no explicit reasons for a single ULA prefix, the requirement could be reduced to a SHOULD or even dropped from the draft. The current motivation to drop the single ULA prefix requirement seems to be that an existing stub router implementation does not limit itself to a single prefix. I find this reasoning quite weak. I wrongly interpreted RFC 6724 section 5 rule 5.5 to help select a gateway after selecting the source address, but it works the other way around. Thus I do not have a strong reason for a single 48-bit ULA prefix any more. I do think that it would be neat to have just one 48-bit ULA prefix per stub network, but that is insufficient for a MUST, I'd say. Best regards, Erik The computing scientist’s main challenge is not to get confused by the complexities of his own making. -- Edsger W. Dijkstra\nHello SNAC WG, when looking at the basic idea of connecting a stub network via stub router to an existing link (the AIL), in general a host on the AIL would need to use the stub router as gateway to reach the stub network. If this host already has IPv6 connectivity to the Internet, the stub router may be added to that list. Now the host needs to determine which gateway to use for a given packet. Both the IETF Proposed Standard RFCs and existing host implementations allow to choose the wrong gateway, i.e., a gateway that does not have a next-hop to deliver the packet. It seems to me as if some assumptions beyond the most minimal IPv6 host implementations are required to connect a stub network via stub router without additional configuration. RFC 8504 (BCP 220) requires adherence to RFC 4191 which describes three host types called A, B, and C. Only type C hosts can reliably choose the correct gateway. BCP 220 also recommends (but does not require) adherence to RFC 8028 for first-hop router selection. Among other contents, RFC 8028 recommends use of RFC 6724 section 5 rule 5.5. If the host behaves like a type C host from RFC 4191, and the stub router advertises the stub prefix(es) as off-link, the host chooses the stub router to reach the stub network. If the stub router also uses a low default router preference there is some chance that the host will not use it to reach destinations not on the stub network. Perhaps this could be added to draft-ietf-snac-simple? If the host conforms to RFC 6724, and follows RFC 6724 section 5 rule , and it implements the procedure described in RFC 6724 section 10.6, and the stub router advertises a ULA prefix as on-link on the AIL, and the stub network uses ULAs from the same 48-bit global ULA prefix as advertised by the stub router on the AIL, then the host will use the stub router to reach the stub network, and will not use it to reach other destination. Perhaps this could be added to draft-ietf-snac-simple? Since a host may decide to send packets for destinations off the stub network to the stub router, it might be a good idea to require the stub router to forward such packets to a default router detected on the AIL (e.g., via RA). 
If there is no IPv6 default router on the AIL, the stub router should make it likely that connecting it does not break IPv4 connectivity to the Internet. If the host conforms to RFC 6724, use of ULAs by the stub router suffices. This, of course, means that the RFC 6724 preference of IPv4 over ULA is required for safe stub router use. Perhaps this could be added to draft-ietf-snac-simple? Best regards, Erik Inside every large problem is a small problem struggling to get out. -- Hoare's Law of Large Problems"} +{"_id":"q-en-draft-ietf-snac-simple-8b18e9cd4c35f872a8742b435e09b84be7e917eb8a73d4fccaa915acd1089787","text":"NAME The new sections need to go in just after all the \"\", otherwise the XML doesn't build (order is fixed).\nQuestion on the cases included: isn't there a case possible where network is managed and is IPv6 only DHCPv6 addressing is used (as intended by operator) - assume DHCPv6 hands out ULA addresses no RA-Guard is applied user brings stub router, plugs it in, and it advertises a ULA prefix in PIO other devices on the link start configuring a SLAAC IPv6 address and try to use this to communicate (and routing fails because the DHCP managed ULA was not used as the source, but rather the SLAAC based ULA) But maybe this is not relevant because the source address selection algorithm would always pick the correct, managed ULA due to a better prefix match with the IPv6 destination address ?\nRegarding the DHCPv6-only use case, we are assuming that the network operator wants to control which devices have addresses on the local link. In this case, they MUST use RA Guard. If they do not, then there is no protection against random devices sending RAs. So this would be a misconfiguration, and hence we don't need to do anything about it.\nNAME Right, I also agree we don't have to do anything about solving this \"problem\" other than pointing to RA guard. But still I think it's worth mentioning: it's the one identified corner case where the SNAC router has unwanted side effects. I wasn't even sure if I was correct in predicting the effects. Considering that there is I think no other IETF-specified \/ IETF-compliant device that, when plugged in, would instantly disrupt this network in the way that the stub router does. If this is the worst case we can find, it should actually show that our stub router is good to go.\nDuring email discussion in 2023, there were worries that placing a SNAC stub router in a network (with its default configurations) would disturb \/ harm the existing network \/ AIL. For example due to the advertising of extra prefixes, or advertising particular (deviating?) values of bits, routes or time-durations in the RA. One proposed solution was to provide a taxonomy (e.g. an appendix) that explains which particular topologies\/configurations a SNAC stub router can work, and in which ones it doesn't work, and in which ones it can cause malfunctions (if any). Some examples: stub router doesn't work in a network that has RA-guard in a network without RA guard, and where no suitable prefix is advertised on AIL, stub router can \"disrupt\" by advertising its ULA prefix. In this case we assume the ULA prefix is not intended by the admin. (Maybe the admin and\/or network users should know better here, but anyhow it's a possibility.) Having a better overview of success\/fail cases also helps reviewers of the document to judge the risk. 
FYI here's a pointer to one of the messages in the thread of 2023 that summarizes some cases and how it could be documented: URL\nPR resolves this issue.\nIn -04, \"'L' bit is set\" is required for a suitable prefix. However section 5.1.2.3 allows sending PIO with L flag cleared, in particular cases. This leads to the issue that the generated PIO AIL prefix won't be recognized as a \"suitable prefix\" by other stub routers. So the requirement \"'L' bit is set\" must be relaxed for particular link layer technologies where L=0. Taking one step back – I just tried to summarize SNAC stub router scenario archetypes; both good and bad. Of the below it seems that only scenario 4 is now discussed as having a risk of breaking the network, correct? 1) Non-expert home user connects stub router to own unmanaged home network; it magically works and provides IPv6 stub\/AIL routing – no matter whether user’s ISP offers IPv6 or not. 2) Ignorant employee brings stub router to work and plugs it into well-managed network; companies’ IT network magically protects the network from ill effects (RA guard, and\/or port-based access control, … ) 3) Ignorant employee brings stub router to work and plugs it into unmanaged (IPv6) network; nothing bad happens - there’s already a PIO prefix with ‘A’ on the AIL 4) Ignorant employee brings stub router to work and plugs it into not-so-well managed DHCPv6-only network; stub router sends RA PIO; many hosts on the link now autoconfigure a ULA which somehow breaks connectivity for these existing hosts. 5) Ignorant network admin connects stub router to company network without changing the default settings; then it becomes similar to 4. This case is btw probably out of our scope to solve and could be solved by the mantra “don’t do that then”. (The stub router is only intended for case 1, but of course the stub router cannot tell apart cases 1,4,5.) +1 on clearly defining the possible breakage scenarios. E.g. some devices may not configure multiple addresses and get stuck with a ULA only? Or known issues with source address selection; IPv4 interactions, etc. in presence of ULAs. When doing this the Type A, B and C IPv6 hosts may also be considered. If we find no issues then we may also state this in the draft in an Appendix – what was considered, and why it isn’t a problem. Esko From: Lorenzo Colitti Sent: Friday, September 8, 2023 11:38 To: Ole Trøan Cc: Lorenzo Colitti ; Ted Lemon ; Juliusz Chroboczek ; Esko Dijk ; Michael Richardson ; Handa Wang ; EMAIL Subject: Re: [Snac] Should a stub router learn RA header parameters from other routers? I am not OK with a SNAC router connecting to an existing IPv6 network and breaking it. If you can elaborate on the type of breakage you are concerned about, then we can discuss it and possibly design for it. FWIW, just the fact of the hosts getting new addresses does not qualify as breakage IMO. All IPv6 hosts have multiple addresses and source address selection generally gets hosts to do the right thing."} +{"_id":"q-en-draft-ietf-snac-simple-2643983be58c9a1bea5709a4591db3b1c2d07c77d284d0f5ff592d2f10faac35","text":"Refer to state-begin-advertising to describe the states when ULA site prefix is advertised.\nhas currently: The \"as needed\" part here is not so clear and needs to be detailed. E.g. only if an on-link prefix on the AIL is not present, the stub router provides one from its own ULA prefix only if the stub network does not have an on-link prefix or on-mesh prefix (whatever applies), then it creates one from its own ULA prefix. 
sizes of the involved prefixes? Better to either specify sizes or say it's up to implementation. state some randomness requirements - although for the ULA Global ID it may be clear that it must be randomized; we also have Subnet ID to have requirements for (or not). In any case state these assumptions.\nThis can be resolved easily by a pointer to other parts of the draft (\"as defined in Section N\"). Because there are still some pending PRs that impact the text here and add the details, this issue can be postponed until after these PRs are merged."} +{"_id":"q-en-draft-ietf-snac-simple-2b19c5a6a19059e0edb7c1b6d4abd31192211c9f7495898781d8b63f7ec489b3","text":"Current Github text (May 9) still has the text In the last SNAC interim and a recent PR we changed this to L flag MUST be set (case of L=0 is out of scope). Above text still needs to be updated accordingly."} +{"_id":"q-en-draft-ietf-snac-simple-99b5fb54f0f3e4a111018320faf098ae20194bdc73395bd5962cb11f93c58ddc","text":"In internet drafts, the IANA Considerations and Security Considerations sections are mandatory. Exact text for security can be adapted also later on if needed.\nIt's not really true that stub routers create a security problem by sending RAs: rather, it's the case that this security problem potentially exists on networks where RA guard is present, but this issue is in no way our responsibility, and probably shouldn't be mentioned in security considerations.\nNAME I've removed the RA Guard text. It was there not because SNAC-routers introduce a security issue, but because we think that there's a group of people\/network-administrators\/IETFers\/others who consider any device that can autostart acting as IPv6 router and can autostart sending ND message on a link as a \"security hazard\" even if it's not. Mentioning RA Guard there could be a proactive way to hopefully avoid too many discussions around this during future document reviews.\nFor security we may add: risk of advertising self-generated ULA prefix? Any specifics? if site employs technology like RA-guard, the RAs of our unmanaged stub router will be blocked\nThese sound more like operational considerations than security considerations. I suppose if we want to advise sites that do not want stub routers attaching to block RA, that would make sense, but I don't think that's really a problem unique to stub routers. Regarding the ULA prefix, the only risk is one we've already addressed: that the prefix could be used for tracking. I think the text on varying prefixes per-link probably needs work, but that addresses the privacy concern."} +{"_id":"q-en-draft-ietf-snac-simple-6ea2e3d696543aafd2869a5575df91511eb54d2b6e60ed4bb678393db4179cda","text":"Closing issue This PR is not building, so need to clean it up before commit. Comments appreciated during that process.\nNAME PR is still needed before anything can build ;-)\nNAME NAME sorry I didn't intend to close this PR, Github did that somehow after me pushing to the branch. But it's good to have this one closed and replaced by , because the merge direction was specified here the wrong way around. The build is now fixed in the branch and also updated to make it merge-able into main. I prefer to merge rather sooner than later - nits can be corrected with future PRs easily."} +{"_id":"q-en-draft-ietf-snac-simple-5f30b5209db82980400606b2c5f64f2d57ed7182ff24486eef89adc63340b0e1","text":"… AIL change situations\nSection 5.2.3 defines the mandatory DHCPv6-PD client role for the stub router. 
However, there are no details here about what should trigger the stub router to renew\/rebind the prefix lease. A reference to RFC 8415 should be added (as it contains some requirements on this topic). See section 18.2.12. Refreshing Configuration Information of RFC 8415. Do we define the SHOULD requirements of 18.2.12 about detecting a new network (new AIL) as mandatory for the stub router, or not? (If not: this has consequences for wider IPv6 connectivity in such new-AIL cases.) And what about the SHOULD requirement that involves \"if the client detects a significant change regarding the prefixes available on the link (when new prefixes are added or existing prefixes are deprecated), as this may indicate a configuration change. \" -> can we specify this better? What are the significant changes in our case? And how to do this detection? (e.g. monitor RAs?)\nFor this issue, we could reference the improved text in URL instead of RFC 8415. This text should give sufficient detail that stub router implementers know what to do. (Or at least, the DHC WG thinks it is enough for DHCPv6 client implementers to know what to do :-) ) PR with this pointer text is still needed."} +{"_id":"q-en-draft-ietf-snac-simple-56b0ab2fd8caf2cd33ad712087587569315e7576a0b4a88028627821d01d27d9","text":"Flag bits can be referred to as \"bit\", \"flag\", or \"flag bit\". This issue is an editorial reminder to update to a consistent term later on. E.g. \"flag\" or \"flag bit\" is most clear I think.\nAssigning to myself, to make a PR later on. Everyone and NAME feel free to suggest a preferred term. I find \"flag bit\" the most clear."} +{"_id":"q-en-draft-ietf-snac-simple-c0ba9960b24aa73b14a276e430231c855798b74fc0f2685359908522c7450978","text":"Update draft version \"stub router flag\" -> SNAC router flag fix a few cases where Esko (I think!) missed changing stub router to SNAC router but should have (there are some cases where \"stub router\" is correct and I hope I did not change any of those).\nIn the update to the 'SNAC router' name () the flag name was not yet taken along. This issue is to decide on that change also, and if needed make the change. (The 6man document would also need to change the flag name then, first.)"} +{"_id":"q-en-draft-ietf-snac-simple-95c150979467cffe6a82dd48da08df27464edfbf7fb70822fd3a4847091a9558","text":"As Esko pointed out, the term \"SNAC network\" crept in during editing a few weeks ago, and doesn't add value. This pull request changes all instances of \"SNAC network\" to \"stub network\".\nIn -05, the term \"SNAC network\" is now used but not defined. Could we use the more generic term \"stub network\" instead? I.e. a SNAC router is able to automatically connect a stub network to an infrastructure network. So the stub network served by this SNAC router is still a \"stub network\". Or we could use the term \"SNAC network\" for any stub network that gets automatically connected via a SNAC router. But doing this adds an extra term throughout the document: it's not possible to mass-replace all occurrences of \"stub network\".\nRight, this happened when we added a bunch of text in the last interim, and I think was a mistake. I think we just want to use the term \"stub network\" and not invent a new term. 
The thing that's new here is the \"SNAC router\", not the stub network.\nI agree - \"stub network\" makes more sense here.\nremoved"} +{"_id":"q-en-draft-ietf-snac-simple-4cb9a6cf19df3c1e6a4b35135cefcadd0a0bb49022f0b1e27d0af5a39bbfe011","text":"Add text to the suitable prefix section that allows a prefix that is local and has either the A flag or the P flag set to be considered usable, whereas before only prefixes with the A flag set were considered usable."} +{"_id":"q-en-draft-ietf-snac-simple-91e2dab9b664a759f795cc36cf5af3bc6862ea6a3e58d4b2f9ace3465b02422d","text":"This change does two things: change the SHOULD to a MUST when specifying what size of prefix to request, so that only requests for \/64 prefixes are allowed, and providing a reference to RFC8415 that says how to provide such a hint. Additional text describes what to do if there is more than one prefix.\nOLD: A SNAC router SHOULD request stub network prefixes with length 64. New: A SNAC router should request stub network prefixes using IAPD with IAPrefix with a hint of 64 as defined in RFC 8415 Section 18.3.9.\nAgree with adding the details. Is there a reason to change SHOULD -> should ? Seems wise to keep the SHOULD in caps given the MUST requirement that follows soon after it:"} +{"_id":"q-en-draft-ietf-snac-simple-a08f2f6718aee1d9020b660e5293c6bb1d217799801768789683f7ac15ecec3b","text":"Because CE routers are infrastructure\nThis addresses\nNAME could we change \"customer edge router\" to \"home gateway\" (per our official terminology )? I see now also the RFC reference in the PR is incorrect (rfc 704).\nthe text in this section can be confusing when dealing with 2 usable prefixes (see section 5.2.1.4 - ULA\/non-ULA). In MCR's words: if there are two suitable prefixes, one of them will still need to be deprecated. According to Ted, this issue is orthogonal to reachability. Perhaps, we should explain that this state-machine does not talk about reachability for both the suitable prefixes. At least give forward reference to section 5.4 for complete behavior. This will explain that we can only transition to STATE-BEGIN-ADVERTIZING and in no case we transition to STATE_USABLE\nI think the issue here is not about usable prefixes, but about routes. Suppose we have a DHCPv6 prefix, and the DHCPv6 server doesn't renew it. In this case, we probably want to stop advertising it at some point and start advertising a prefix based on our site ULA. We should talk about the timing of this switch-over. Separately, whenever we have advertised a prefix on either link, we need to continue advertising a route to that prefix until the prefix is no longer valid on the link on which we advertised it. So if we switch from DHCPv6 prefix to ULA prefix on infrastructure, we need to be advertising routes to both for as long as both are valid. Similarly, if we advertise a prefix on the stub network and later withdraw it because a higher-priority prefix is advertised there, we need to continue advertising a route to that prefix on the infrastructure link until the valid lifetime of the prefix we advertised has expired. I think all routers that have seen the prefix need to do this. Then there's the case where a stub router advertising a prefix reboots. We can't assume that it's written anything to stable storage other than what prefix it's allocated for the stub network and for the infrastructure network. 
So when the router comes up after booting, it needs to advertise routes to these prefixes on the appropriate link until the maximum possible valid lifetime for the prefix has expired. And finally there's the DHCPv6 case, where we got a prefix to advertise on the stub network via DHCPv6. In this case, I think we need to write the DHCPv6 prefix to stable storage when we get it, and remove it from stable storage when it expires. We should limit the lifetime we advertise on the stub network to a constant value (e.g. 30 minutes), and then if we find the DHCPv6 prefix recorded in stable storage, we advertise a route to it on infrastructure for 30 minutes after reboot. After 30 minutes, we can erase it from stable storage so that this doesn't repeat. I think if the BR doesn't have stable storage, then if it reboots with a DHCPv6 prefix still valid on the stub network, we just have to hope that it gets the same prefix when it comes back. If it does not, then that prefix may become unreachable while hosts on the stub network are still using it. We could have the stub router snoop source addresses to discover prefixes still in use on the stub network to ameliorate this situation, but that's somewhat problematic because it's a lot of work.\nI think the conclusion of the WG is that we need to have an explicit state machine for the stub router on-link prefix because it's actually quite different from the state machine for infrastructure, where for example we do not need to do DHCP. It is important to mention that we need to explicitly deprecate the DHCP prefix if we switch to ULA and vice versa.\nOne of the things that this state machine needs to talk about is what happens if the SNAC router sees an RIO on \/its\/ stub network that does not have the SNAC router flag bit set. In this case, the SNAC router should treat this network as an infrastructure network and should not perform SNAC router routing functions or service discovery functions between the two networks. Essentially in this case the SNAC router functionality should be completely disabled while such an RA is seen on the stub network."} +{"_id":"q-en-draft-ietf-wimse-arch-e5c3c63e6a05ead011873d363184e108068dd1761b9621dcdee80c0957543fba","text":"Initial try at the definition of a workload.\nCloses issue\nI think we could be a bit more specific here. Every piece of software is executed for a purpose... I believe the key practical difference between workload identity and user-run software such as browsers and interactive applications is that there is no user to perform some interactive authentication flow. Maybe we could try and reflect that? For example: For the purpose of this document, a workload is defined as a non-interactive instance of software interacting over a network with other software and services. Certain workloads can be live for very short durations of time (nanoseconds) and run for a specific purpose such as to provide a response to an API request. Other kinds of workloads might be executed for very long durations such as months or years - examples include database services and machine learning training jobs.\nI think I understand what you are saying, but I'm not sure I can express it yet. Also it seems there may be some special workloads that do interact with user authentication mechanisms. Perhaps those are architecturally distinct pieces, \"Authentication Services\"."} +{"_id":"q-en-draft-ietf-wimse-arch-f7556123dbda8c150c270aa912dc460d732e537ce8cc482ef6ab0fc170f81a33","text":"added Attestation as a function performed within the Server box. 
defined Agent. defined Attestation.\nThanks I think this helps my understanding, I'm sure we'll have more refining to do, but thanks for providing the text to get started."} +{"_id":"q-en-draft-ietf-wimse-arch-2e6d8b62cb6a9221ad197cc87a16d6d5dae22c6bc90181ba1a4d656b716366f8","text":"Updated figure in\nNAME I made a slight update to remove the word authentication in the text since authentication is one specific type of interaction, there may be many others. This might also be a good place to introduce \"context\" that is propagated (although that seems to be more a topic of the next section"} +{"_id":"q-en-draft-ietf-wimse-arch-e194e86cb3fbc9a1d4321439913dd2764d28044fd3c2d7bcf651757503e5c725","text":"Define workload identity. Resolves issue\nThe section talks about identity then defines identifier -- I would recommend it separate things a little more and clarify the differences and relationship. Is the identifier part of the identity? Does it point to the overall identity? Augment it?\nNAME - I modified the section to differentiate workload identity and identifier a bit more. Let me know if it looks good enough to merge in as a straw man for the identity section. IN any case we can discuss in the 120 meeting\nNAME definitely clearer that they're separate concepts now, and all this can be refined as the draft moves on. Thanks for updating!\nSome (rather nit) comments that are not related to what's being discussed on the mailing list."} +{"_id":"q-en-draft-ietf-wimse-arch-c8af4ee99d41570d9d75187304ab3f9970768a58f6687b154092188f760090a8","text":"Fixed some of the wording and parallelism in the intro to keep the same ordering of X.509 certificates followed by Workload identity tokens. Generalized some of the phrasing. Removed concepts that aren't well defined in the document, such as \"bottom turtle\". I'm not opposed to adding these, I just don't think they make sense without an explanation of the analogy either inline or by-reference; so I tended towards keeping the document self-contained. Provided initial content for the Delegation and Impersonation, and Asynchronous and Batch Requests sections."} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-2d7fc323e46f48c3faceddff35a4983b25c97a0b684059cd9c954ca4ed92ca3b","text":"[8:31 AM May 30 in slack] To summarize the near term plan: Joe: security considerations and interaction with TLS Brian: ID Token and DPoP-inspired Yaron: Message Signatures this is the \"DPoP-inspired\" part A preview editors' copy of this PR can be seen at URL"} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-cf5d9e82cd2cf4ebc911023f6898794c769177d29e2aed9bf33a02736d39a6a8","text":"There's nothing necessarily RESTful about this nor should it be limited to REST. Let's just not use the term. URL is a fun semi-related read, if you have or can find the time.\nQuoting the article: I am not as young but I agree with this sentiment. Having said that, I guess a standard needs to be reasonably precise in its usage of terminology, and so I accept the PR."} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-bb46220d49f587c943107c11b850fa4459db4847fcb1a6f851c90d13d56be7b5","text":"Change trust root to trust anchor, to align with Trust Anchor terminology as used in rfc5280.\nsure"} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-5a4c04ad6473334b06bc94da6e9fb471e24243455b5fb688cf5539e24a8a6935","text":"Editorial.\nCommenting as identity enthusiast as opposed to WIMSE co-chair. 
Consider replacing WIMSE Identities with WIMSE Credentials since it is the credential that is being verified, which may include a Workload identifier and possibly other claims.\nWIT stays though."} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-5141bd299e6449c838aa413ba7b630992fb4e2041cb8a698c889f3659e64e3b0","text":"Revised the workload identity section to align with RFC 5280 and include trust domain. Aligns closely with SPIFFE, but allows other schemes to be defined."} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-6c7d3eb2b790577fff7b57d2f8df49b8f9d4dcb0d2d691170edcd1614e620ff0","text":"Reworked the TLS section a little. mostly addresses issue\nMostly editorial changes, but also MUST->SHOULD"} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-aea117b566b2aec31b4034c8c0d8a10785bd054fd0bc051d7cb09029b0dbd8e5","text":"sorry about the trailing spaces ...\nSome editorial things, content-wise I'm happy."} +{"_id":"q-en-draft-ietf-wimse-s2s-protocol-1fa7bf7a241afd4bee2017d88653e65fecf37e5ff4b952ec868be5f6d4d55146","text":"Adds 'wth' claim to WPT to bind WPT to WITs.\nNAME please also update the relevant IANA section so we don't forget.\nI realize I missed a meeting but why did we add this?\nWhat's the threat? Different WITs with the same cnf key and a compromised middle thing?\nI just don't see it...\nIss in the WPT is even less useful with this too.\nYes, currently we don't have strong binding between WPT and WIT. Only the fact that . I don't understand the \"even less\" part but I see that this is somewhat duplicated now.\nShould be: WIT sub == WPT iss. And just to state it explicitly: people are likely to extend the WIT with various \"junk\" (i.e., policy) and that wouldn't be covered otherwise."} +{"_id":"q-en-draft-ietf-wimse-workload-identity-practices-89606610b536b7ab69c3e6403b07160ed6dd05b3fde254f24eb1ec6fd52bf191","text":"Taking our last conversation into account I wanted to contribute my take on the problem and wrote the introduction the way I see it. This is not a version that can be merged, but the best way to show it in a diff and allow comments. Let me know what you think."} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-c0efd52c4021c2bd8b6f906156c3d7b6da07aee0445179f98c1881bfc51ffc43","text":"These changes focus this draft on the minimal definitions required to secure the data structures defined in RFC9162 using CBOR and COSE. Extraneous information has been removed. There is a wip implementation available here: URL There is a demo here: URL\nLGTM!sry 4 the l8 approvalI recommend merging now"} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-6db8f73ab54ba5149583f688e8793d15b5ebb695905884e3c40f0a40a75a6f00","text":"Feedback gathered from URL\nConsider attempting better CDDL by applying URL We would like some time to discuss this draft at IETF 117: URL https:\/\/ietf-URL We've made several changes that were requested at 116. We provide a sample implementation demonstrating how to use RFC9162 tree algorithm with COSE Sign1 to create signed \"inclusion\" and \"consistency\" proofs. We updated the draft to support registration of other tree algorithms which can be specified in separate documents. If there are other changes folks would like to see, we are happy to try to get them in before the cut off date. 
Regards, OS ORIE STEELE Chief Technology Officer www.transmute.industries"} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-14878e73fe1d650bb7bf6d5fb86f1472a57ff3d1bc07eb1039f43aa396b4233c","text":"This PR attempts to generalize \"merkle tree alg\" and associated tags for \"merkle tree proof types\", to: \"verifiable data structure\" and associated \"verifiable data structure proof types\". This is intended to support similar structures that provide similar proof types, but that are not based on merkle trees. We are specifically thinking of cryptographic accumulators, and the merkle^2 paper.\nI'm not attached to \"verifiable data structure\" or \"verifiable data structure proof types\", just needed a word that didn't imply \"trees\".\nThe overall \"verifiable data structure\" generalization is fine, and doesn't impact the structure of the document too much. However, I am not fully satisfied with the generalization of proof types. It seems very strange that TBD1 (\"binary merkle tree using sha256\", a tag that appears in the protected header of the COSE signed root) and TBD2 (\"binary merkle tree using sha256, inclusion proof\", a tag that appears in the CBOR type for inclusion proof associated with TBD1) are assigned from the same registry. It seems clear that TBD2 and TBD3 are refinements of TBD1 that should ideally be in a separate registry. Furthermore, as discussed last week, the notion of proof type (inclusion proof, consistency proof, freshness proof...) is a lot more meaningful than the notion of verifiable data structure type for protocols that build on COMETRE. Therefore, it may be natural to define either a universal type for each proof type, or at least agree on the convention that proofs for all verifiable data structures must be CBOR tagged with the same proof type, which is then defined as a separate registry. Each RFC that extends COMETRE must define a new verifiable data structure but may or may not extend the proof type registry. For instance, it would allow the CCF_ledger to use the same CBOR tags for inclusion proof and consistency proof instead of having to define 3 tags with different semantics.\nThere were basically 2 paths discussed: (implemented in this PR) treealg (TBD1): RFC9162 (1) inclusion proof (TBD2): bytes based on treealg consistency proof (TBD3): bytes based on treealg treealg (TBD1): BitcoinMerkleTree (2) inclusion proof (TBD2): bytes based on treealg consistency proof (TBD3): bytes based on treealg treealg (TBD1): Merkle^2 Tree (3) inclusion proof (TBD2): bytes based on treealg consistency proof (TBD3): bytes based on treealg search proof (TBD4): bytes based on treealg non inclusion proof (TBD5): bytes based on treealg (previously discussed at IETF 115) treealg (TBD1) : \"RFC9162SHA256INCLUSIONPROOF\" (TBDA) proof (TBDB): bytes based on alg treealg (TBD1) : \"RFC9162SHA256CONSISTENCYPROOF\" (TBDC) proof (TBDB): bytes based on alg Other things to consider... Can an unprotected header contain multiple proofs for a single type? Can an unprotected header contain multiple proofs of multiple types?\nNAME NAME thank you for the reviews. All the CDDL examples will need some help, and certainly there is a lot more editorial work that needs to be done to fully align to this approach. I tried to do the minimum in this PR to host the discussion, but if we merge this, there will need to be further cleanup.\nI should point out that the deletion of the consistency proof CDDL means that it shows up before you define it below. 
If you think that is ok, I am fine with it.Approved, acknowledging the need for further editorial work & CDDL examplesI agree with Orie's choices; \"verifiable data structure\" and \"proof type\" are good generalizations, and I prefer the 2-layer algorithm as e.g. some verifiers may care about a commitment to a VDS even if they don't use a particular proof type."} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-b4eb4f95ccdb783c34a847d620c7e4fa5f577c609d35ef94b2c8a6e3b67eb883","text":"This PR aligns the abstract and intro with generalizing away from specific merkle trees\nLGTM"} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-df94579bbbab1b20bb23dbf199f3d2fd298d46737a0917eaf77770cdddf3eaa4","text":"For exploration purposes, the document had a number of tree algorithms that are not specified in RFCs or similar. This PR removes them again, transferring them to PRs for further evaluation.\nI'm good to merge this from a process standpoint I still hope we can document these in some acceptable way for IETF."} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-0aaf885673dfb3c814afffcf979b55c9bc7f9ba15e0e936c2a854dceafa2dd99","text":"This PR build on and attempts to clarify what is generic vc what is specific to RFC9162 encodings\nMostly improves on the earlier version, I like the generality it allows for. A couple of larger questions in here amongst the suggestions"} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-fe44e51bf3cd5f027c4a99ac7363dfa87c3d6d94b5d2b43a02b1349ad1854e5c","text":"Informal references: URL URL TODO: Add CCF reference to Merkle tree definition (including details on the leaves). Change target branch to after is merged."} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-4c4804aa5be0fd0b14be1810b7f9b950963c0c5aa321d658ff0eac6836f2b129","text":"Informal reference: URL TODO: Add QLDB reference to Merkle tree definition. Change target branch to after is merged."} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-5aea30c056aa6d259c297d6051502f940384089465c9044e4abc097cccaddeb3","text":"and have been incorrectly merged. This PR brings the content to the state of 00 as in URL\nNAME can you give me merge permissions?"} +{"_id":"q-en-draft-steele-cose-merkle-tree-proofs-556243e21d07648a702e76d0ffaea819e8439d5a6d1704ba691395a599f4e441","text":"Changes from IETF 116 hackathon\nI'm stopping on this PR, before it gets even larger.\nNAME I have made several more changes, please review when you have a chance.\nI am merging, we reviewed this content together at IETF."} +{"_id":"q-en-http-core-2f1f633dd0991a611593414a5c4ce9b2ca93714f8ae0521e37ff43c65cb6f7de","text":"The Connection header field is oddly inconsistent with the other field definitions in how it defines the grammar late in the description. I think it is worth an editorial fix."} +{"_id":"q-en-http-core-90eb157dfcfe69583f00c9e504b97581baac33a8a04cf5789ac5701ae201c0db","text":"See URL\nMailing list discussion:\nWe might want to clarify the nature of the response elsewhere, but this seems fine."} +{"_id":"q-en-http-core-f13a09b8706969e03dcf6ce8801bdfbd1401f8ae3a931120d658190f76a204d3","text":"In Caching 3.3, we say \"a cache may store a response body that is not complete\" -- I think we should drop \"body\" In Caching 7.1, we use \"payload\" -- probably should be \"content\""} +{"_id":"q-en-http-core-fabb6dc9cd957e8aefc5ccf65db7c6edeee677337951349c36632770de9c0e4e","text":"That needs some tweaks ... the action words like SHOULD are supposed to be targeted to implementation roles, not protocol extensions. 
Can we just copy the same Note from the Accept section?\nI can avoid the SHOULD NOT. I used different text because for transfer codings \"we\" actually own the registry, while for content types we do not...\nHeh, I think it's a stretch of the imagination to think that we control the registry even when we define it ... people still deploy first and register later. In any case, a simple should be sufficient.\nI actually reverse my opinion about the SHOULD NOT - we already use BCP14 keywords in the paragraphs before and after this one. So please consider again the pull request.\nMy .02 - acknowledging that 2119 keywords are only intended for implementations roles, I don't think that throwing up our hands and saying \"people will do what they will\" is productive. While some people ignore (or just don't read) the RFCs, many do, and when they do, we owe it to them to communicate in the clearest, concise way possible. 2119 keywords are a fantastic tool for doing that.\nNAME - so are you ok with this change?\nI am - although if we come up with a unified way to deal with requirements consistently across the documents, it might change.\nContinuing from URL In the current spec, nothing is said about how to handle transfer-parameters. Notably, nothing is said about the case sensitivity of the parameter key. This results in a conflict with the TE header: if you see a \"q\" token, you cannot know if it is a transfer-parameter vs a t-ranking. It is noted that the \"q\" token is case insensitive in section 4.3.\nSee URL\nOK. An edge case of an edge case. Note that among the currently registered transfer codings, none defines parameters. Drastic options might be: Disallow parameters on transfer codings (and thus also on content codings) Remove quality values on transfer coding negotiation. More realistic: in the registration procedure for content\/transfer codings, note that a parameter \"q\" is discouraged (forbidden) due to the ambiguity in \"TE\". Should I make a proposal for this?\nActually, we have a similar situation for Accept:; see :\nNot only \"q\" but also \"Q\": case sensitivity isn't mentioned for transfer parameters."} +{"_id":"q-en-http-core-2a571819873a46d172f63a828f975450dbd0f3f66172e4c5a590ef7b2cfb4fcd","text":"HTTP defines its own protocol versions as having two numbers \"URL\" regardless of anyone's future opinions on the subject. This is necessary for Via and Upgrade. In the past, it has always been assumed that no document claiming to define a version of HTTP would ignore that fact. Regardless, HTTP\/1.1 needs to refer to future versions of HTTP within the scope of its own syntax. Hence, the following text (currently in crefs) has been suggested for the last paragraph of the section \"Protocol Versioning\" in Semantics.\nSGTM\nWhy \"ought to be used\"? I understand that you want to avoid MUST\/SHOULD,..., but why not just \"is used\"? That would make it:\n+1, let's limit the RFC6919 language. You're defining a mapping between formats, not really adding new normative or moral requirements. The \"0\" is simply how a major-only version is rendered in a URL field.\nThe background here is that we used 'ought' liberally in 723x (e.g., 49 times in 7231) to give implementers guidance towards good behaviour without affecting conformance for existing deployments. So this is entirely in the spirit of the existing specifications.\nThat use is often appropriate. In this case, the conditions are such that \"is\" works. 
It is also more assertive, which works.\nall: try and do your best to focus on the substance of the question rather than the editorial wrappings. Thanks.\nThe substance of the question is obvious, in my opinion. I'd question whether there's a need for discussion on that. However, the editorial wrappings do actually matter in having the final document communicate what we want it to communicate. It's feedback I'd typically give on a PR rather than the issue, but in this case the issue contains proposed text, so the editorial feedback is here too.\nAgree with Mike. I came to this issue almost by chance, I thought it was important because Julian wrote a mail to the mailing list. When I had a look at the issue, the only thing that caught my eye was the \"ought\", because the substance is indeed obvious (at least to me). If \"ought\" is used liberally in 723x, there may be good reasons for it, and in many places it may make sense. But I'd ask the editors to go through these 49++ times and see if it wouldn't be possible to replace some of them with \"is\". \"is\" just reads so much more easily where it's appropriate than \"ought\"."} +{"_id":"q-en-http-core-acc8ad095877f610ab5968ed9c4221c7ad46a77cfe7b40f6590f1dfa0e2902ed","text":"The text is almost the same as in RFC 4918, except that I removed the somehow bogus claim that status code 400 wouldn't be right.\nAre we updating the IANA registry to point to this document? The only slight concern I have here is that this may unintentionally reinforce the view that \"real\" status codes are required to be in the core document. If we can find a way to discourage that view, that would be great (but that's a separable issue).\nYes. For all codes. That occurred to me as well.\nOk, should I merge this one?\nIt's commonly used, but defined in RFC 4918 (WebDAV),, which continues to confuse people. Should we include it in the core specs?\nFor reference: URL\n+1\nDiscussed in Montreal; support for doing this."} +{"_id":"q-en-http-core-644b789a2c30d9acf79f05e8e69dcba58bfc4e727e476f2fe7ef1965b4284f68","text":"Looks good to me.\nincorporate: URL\nWhat do you want to incorporate exactly?\n418 is reserved.\nWell, not in the IANA registry right now. Wasn't the plan to do so without publishing something?\nMy recollection was that a publication is necessary; that's why it's here.\nok"} +{"_id":"q-en-http-core-88b26f5721e899a4a87e62254d98c941a54c4fe300285a63d11eed87f5a1218f","text":"The specification of can be read to allow the response to be reused for another concurrent request, since it's specified in terms of non-volatile memory. Likewise, other cache directives are silent on such reuse. So-called \"request collapsing\" has become quite common in proxies, reverse proxies and CDNs. Clarification as to whether such reuse for temporally concurrent requests would be helpful (in the case of and otherwise).\nThis also seems related to and , which allow the response to be re-used after cache control would say not to re-use it: URL\nDiscussed in Montreal; introduce the concept of a single-use response, and carve out no-store."} +{"_id":"q-en-http-core-a53b6245345d6163dc394a50d211054278cd3a284149e51a4404c5736480f3d6","text":"seems like it should be in messaging.\nPart of it needs to be in messaging (how to do it per version), while a shorter description needs to be here just to point out that it is version dependent. The semantics don't make any sense otherwise.\nAck. 
Will take a stab at it.\nThe caching document should clarify that only final responses are cached.\nWorks for me."} +{"_id":"q-en-http-core-6df9cceab14139d217d28579987432a6cadc7180993721f94ab61e9c88cca18f","text":"I'll have to update extract-header-URL so that it can generate entries with status \"obsoleted\".\nNAME Go ahead and merge after you update extract-header-URL\nIsn't implemented, generated, used or surfaced to users; can we deprecate?\nProposal: Replace 5.5 Warning with a section noting that the Warning header is deprecated, and should be ignored. Remove all requirements to generate Remove Warning-specific requirements in 3.3 Combining Partial Content Remove Warning-specific requirements in 4.3.4 Freshening Stored Responses upon Validation Remove Warning-specific requirements in 4.3.5 Freshening Responses via HEAD\n+1, with big notes in the Changes section.\nJust curious if anyone has any background, what would be a good alternative to the Warning header for cache limit warnings?\nThis isn't a good place for general discussion. You might want to ask the mailing list, though I am not aware of any cache limit warnings within HTTP because they are only relevant to the cache admin (who typically has an email or phone number stored for such notifications)."} +{"_id":"q-en-http-core-5f60fc738c3acc20be2a04104e732533520f5b3b5c2e7fc3b0ca10e02619b7f9","text":"Sounds good to me.\nMight want to mention the scope of applicability for HTTP caching a bit more carefully, esp. with respect to other caches (esp. in browsers).\nI think the best thing to do might be to change to something like \"Relationship to Other Caches and History Lists\" and expand accordingly.\nApplications using HTTP often specify additional forms of caching. For example, Web browsers often have history mechanisms such as \"Back\" buttons that can be used to redisplay a representation retrieved earlier in a session. Likewise, some Web browsers implement caching of images and other assets within a page view; they may or may not honor HTTP caching semantics. The requirements in this specification do not necessarily apply to how applications use data after it is retrieved from a HTTP cache. That is, a history mechanism can display a previous representation even if it has expired, and an application can use cached data in other ways beyond its freshness lifetime. This does not prohibit the application from taking HTTP caching into account; for example, a history mechanism might tell the user that a view is stale, or it might honor cache directives (e.g., Cache-Control: no-store)."} +{"_id":"q-en-http-core-19e9638a54592fa4e53c3fa591d485b4ac51aa7dbd7a284d5a336a2aa5f488ce","text":"Caching : Can the cache send any old entity-tag it has stored for the URL in these headers, or must it only do it for those that can be selected by the incoming request, as per the secondary cache key algorithm? Also, this section uses \"selected\" in a different meaning; that should probably be changed to avoid confusion.\nI think it needs to constrain it to the selected stored responses.\nWe need to be careful to include the case where an upstream cache (or UA) has sent a conditional request (perhaps with some of its own stored etags) and this cache is forwarding them along.\nWhen sending a conditional request for validation, a cache updates the request it is attempting to satisfy with one or more precondition header fields. 
These contain validator metadata sourced from the stored response(s) that have the same cache key (both primary and secondary, as applicable). The precondition header fields are then compared by recipients to determine whether any stored response is equivalent to a current representation of the resource. [rest as before]\nIsn't this presuming the validation is part of an upstream request? What if the cache is auto-refreshing stored responses during idle time? Maybe we should define these as separate cases?\nWhen generating a conditional request for validation, a cache starts with either a request it is attempting to satisfy, or -- if it is initiating the request independently -- it synthesises a request using a stored response by copying the method, request-target, and request header fields used for identifying the secondary cache key [ref]. It then updates that request with one or more precondition header fields. These contain validator metadata sourced from stored response(s) that have the same cache key (both primary and secondary, as applicable). The precondition header fields are then compared by recipients to determine whether any stored response is equivalent to a current representation of the resource."} +{"_id":"q-en-http-core-51ac048ae2320273957e06bb1e7627b01c5227311b14719092e7cf4cd0829e0b","text":"NAME PTAL.\nClient\/Server Messaging still is HTTP\/1.1-specific. Suggested deletions below: Also, trailers?\nTextual reason-phrase is also 1.1 specific.\nNAME what's the implication of that? It's already suggested to be deleted.\nIt wasn't in a previous iteration of your suggestion -- see your edit history. Perhaps we noticed this in parallel?\nAh ;)\nSomewhat related is .\nThis text seems out of place in 3.2 Storing Responses to Authenticated Requests. It also omits .\nIt seems like it's covered by 4.2.4 pretty completely."} +{"_id":"q-en-http-core-051f412ff3daafd7d15c87ca57bcb6db9a7ec7159a60cd3d5122845c2823f084","text":"Good catch.\n...currently defined in RFC 7538.\nThis would obsolete 7538?\nYes. Now the good question is how much of RFC 7538 include. My proposal would be just to include parts of URL, but to point back to RFC 7538 for additional information (non-normatively).\nSeems reasonable to me.\nMaybe we should also add the table from URL somewhere.\nPossibly at the top of the 3xx section."} +{"_id":"q-en-http-core-da6a21c6f1b152ef9318d675a74fc13ff44a8ef41324a09e4842895207e7abb1","text":"I had in mind something a little more general, that would include fields which are defined like Vary:"} +{"_id":"q-en-http-core-1b6cbdbb4a0a1b04eb831101c5e69020dacb597605912beaaba02ff0ca4d1252","text":"Note that there's one more reference to pragma remaining, but that will be taken care of by .\n... is still hanging around, and even less relevant. We still recommend sending it in certain circumstances; can we drop that?\nSpecifically: [ ] In Constructing Responses from Caches -- although see [ ] In Pragma, \"When sending a no-cache request, a client ought to include both the pragma and cache-control directives\" [ ] In Pragma, \"When the Cache-Control header field is not present in a request, caches must consider the no-cache request pragma-directive as having the same effect as if \"Cache-Control: no-cache\" were present\" Proposal: Change s registration status to \"obsoleted\" Remove requirements relating to it. 
Retain text explaining its semantics, and note that it's superseded by .\nBTW, I'm comfortable doing this because we're getting rid of requirements on request CC anyway -- see .\nI have a few editorial changes to suggest here to make it more readable and remove the trailing note, but it is probably easier to commit this as is and we can tweak it later."} +{"_id":"q-en-http-core-af319b16ae25aa6d871dc3f10e41ddf1f08eb4ed2ce6c0725e186ac8fc9fbd53","text":"For .\nCaching still refers to a \"disconnected\" state, as well as \"cannot reach\" and similar states. This needs to be rationalised. See thread: URL\nShould wait for resolution of .\nI'm inclined to move the definition of \"disconnected\" up into \"overview of cache operation\" and refer to it consistently -- including when \"cannot be reached\" is used.\nWow, that is a bad choice of word for that definition in the old text, but +1 to the change. I used to know what the right term was for that ... drawing a blank at the moment. I think it was something like \"origin unreachable\" or \"upstream unavailable\". Bah."} +{"_id":"q-en-http-core-8f462c643a6d6f1754f1b58cd6870a6326df9ccf4eb82603cdb8710858b64c45","text":"For\nI believe the registry URI should actually say \"http-header-fields\". Can I change that?\nThe text might be confusing to IANA as the registry does not exist yet."} +{"_id":"q-en-http-core-43d0888de28f311e9cdcb2523369c112553f355e3c87e1de5cd043dde5158eb4","text":"I also suggest closing here : the main reason for getting data without a request is an error reported by the server before closing (e.g. 408). Another reason may be a server implementation issue (extra CRLF, inconsistent multiple content-length, inconsistent content-length+transfer-encoding, user-fed incorrect error message). In all these cases the message delimitation is wrong and unrecoverable. Thus the client trying to \"fix\" it will in the best case be useless (close following), and in the worst case have nasty side effects. In haproxy the only case we try to address is the extra CRLF that a few old servers send slightly after data. If extra bytes are sent immediately after data, we close. If the extra CRLF happens slightly later, we find it while parsing the next response and the CRLF is silently skipped.\nEveryone OK with this now?\nWell, like Roy, I'd even prefer a MUST close, but I can live with a SHOULD.\nIt doesn't seem to account for the extra CRLF case. Maybe just add \"... unless the data is limited to CRLF\"?\nCR and CRLF?\nI was thinking the same but hesitating. BTW NAME it would be LF and CRLF. But quite frankly we could stick to CRLF only since it's the only known bogus pattern widely deployed in field.\nupdated."} +{"_id":"q-en-http-core-8283190dbb6bbba3be929db6bc158b587fbe4838b57e99be51a91eca3e0e2b4f","text":"Also: change scripts that BAP is invoked with correct options. Note: I did not use \"%i\" for case-insensitive strings as this is the default anyway. If there is confusion about how a certain protocol element is matched, we should re-state that IMHO in the prose. The collected ABNF uses non-extended ABNF syntax. If this is a problem I'll need to do a bit more work on the tooling.\nShould we just adopt wherever we use a string?\nYes, I think that's what the IESG wants us to do. (We were directed to use it in alt-svc)\nI've started a branch (mnot-133). Question: do we want to convert things like this: into: ? We do that a fair amount for case-sensitive strings.\nYes. Let me take care of this.\n+1\nOK. 
I'll leave my branch there if you want to start with it; otherwise feel free to delete it.\nFWIW, I'd prefer to do this only in the places where the string actually is case-sensitive.\nI guess I forgot to close the ticket.\nLooks good to me. I checked and we also say that they're case-sensitive in the accompanying prose for each instance (although for weak ETags, it's a bit far away)."} +{"_id":"q-en-http-core-8783a04d546b61363006e59e64e298c06fb73cac4085755631e79cc47e4e3764","text":"(alternative approach)\nAcceptAccept-CharsetAccept-Encoding`](URL) request headers is of very limited use, because: With a qvalue higher than another value, it says \"send anything in preference to something specific\", which isn't useful With a qvalue lower than another media type, it says \"send a more specific value in preference to something else\", which is already implied by HTTP With a qvalue of 0, it says \"don't ever send other value\" -- but a server can always ignore that, because this is an optional-to-implement extension In media types (e.g., ), it has been cargo culted a lot, but doesn't actually give the server information it can use. Can we deprecate in these headers, or at least restrict its use \/ give better guidance?\nDiscussed in Montreal; support for documenting limitations \/ caveats.\nWe can't deprecate it. We can explain when and where it isn't useful.\n+1 to generalize this. However, we may want to still reference these rules from each individual header field definition.I think the first sentence needs some editorial work, but +1 to committing this first."} +{"_id":"q-en-http-core-c1973f2c32cec42583650a895783b15822658576c1c5b83bdb47a97cf750f387","text":"This one is almost trivial. Except: previously, the ABNF sort-of ruled out parameters in the ABNF (at least that was the intent) this is now done in prose - we need to say though what to do when parameters are present - is \"SHOULD treat as error\" correct?\nShould not enumerate the actual coding names. Make the ABNF generic and point out that the predefined encodings do not have parameters.\nSee URL\nInstead of saying in each encoding that the presence of an undefined parameter should be treated as an error, it would be better to say that once in the main section for any encoding that does not define its own parameters. But +1 anyway."} +{"_id":"q-en-http-core-a3b6d02146107a359ba4393fa48cdbf6f8d05e21c845a7f662862f3b6d056080","text":"I think this is ready.\nI think the registry is supposed to reference the obsolete spec where it was last defined, not the spec that mentions it briefly in a change note. In this case, it should be referring to RFC2616 Section 14.15.\nNAME - I think the IANA registry should point to both - pull request updated."} +{"_id":"q-en-http-core-79b82ec00974c360f6319d8a2ba35aef61101bef098b99280c9cdced238449ca","text":"() work in progress, feedback appreciated. is the actual rule mapping correct? prose in the convered ABNF, should I add the cardinality as comment?\nre 3) turns out to be hard as long as we have productions that mix list with something else (see separate issue), because we can't inline comments inside and ABNF production\nNAME - your turn to review...\nSee . 
Mailing list discussion starts here:\nSee also\nA simple fix for this would be to simplify the list expansion to just map , and to put all other constraints in prose (the current approach doesn't work anyway due to the rules for empty list elements which are not counted).\nProposal for recipient list rule mapping: More eyes requested :-) NAME NAME\nThe current proposal is incorrect as it does not allow a singleton element.\nMaybe: better... for readability:\nPR updated\nAnd maybe 1#element => element *( OWS \",\" OWS [ element ] ) right?\nNAME that'd outlaw , which seems wrongish?\nClosed by commit e475f36 for the recipient case. Note that this may become obsolete if we switch to only providing ABNF for what is good to send."} +{"_id":"q-en-http-core-c6009d8d91d9d8cb9d51a5b8f987c606451cce422b7e4cdd833ec19c8df776a7","text":"NAME yes, as per issue discussion I was reconsidering this. See updated PR.\nyeah, I like that. Will update.\nIt'd be good to say something about how weakly-framed content (e.g., no Content-Length in 1.0; no CL or chunked in 1.1) should be handled differently by caches, especially during reload. From URL\nIn , add:\ncurrently has: Observations: the reference really should be more specific and go to \"6.3. Message Body Length\" is this clear enough for protocols other than HTTP\/1.1?\nReferring to h1-messaging 6.3 would make this specific to H1. Referring to 4.3.1 in caching keeps it protocol-generic. If another version doesn't crisply define what's an incomplete message, we can deal with that separately.\nI think the text in 3.1 should change to:\nThis seems reasonable. It's also related to URL, asking whether servers can wait for end-of-stream to consider a message complete in HTTP\/3. (Proposal in URL is \"no, you can't.\")\nNo, HTTP\/1.0 is consistently optimistic. If the server desires more rigor, it can do the work to supply the framing that would indicate an error occurred. When it fails to do so, it is deliberately choosing to maximize performance over correctness. Hence, the assumption has always been that a close-delimited message is always complete.\nI think Mark's suggestion is right. I would characterize HTTP\/1.0 as ambiguous and that causes revalidation deadlocks. Its somewhat worse that this pattern needlessly shows up in HTTP\/1.1 as well.\nThinking on this for a while, I think MAY might be more appropriate than SHOULD here; I have a strong suspicion that most caches will not consider a close-delimited response as incomplete. I could test it, I suppose.\nAlso, in the sentence: ...I think s\/partial\/incomplete\/, based on context. Any concerns there?"} +{"_id":"q-en-http-core-896ff35e23468c10342c5738a969dfd8d1b3e55fbbbd08cc7135d2f3364b91a8","text":"See ; the issue is not just about moving the text.\nI understand that, I just thought it would be good if we could split this into two changes; that makes it easier later on...\nSee - I'm on it.\nI think I'm ok with the actual changes, but we need a summary for the \"Changes from RFC 723\" section. Proposal:\nNAME I think this is ready to go.\nWe still need to document this as a normative change from RFC 723.\nDone.\nLGTM.\nAs discussed quite a bit, retries in HTTP need to be better defined. In particular, HTTP does not offer a guarantee that any particular request will not be automatically retried.\nMessaging - 9.4.1. Retrying Requests contains some things that probably belong in Semantics, probably somewhere up in the architecture section. 
Beyond that, I think the action here is to clarify a bit more that POST etc. requests are actively retried by many UAs (e.g., most Web browsers) because of H1's connection management issues. NAME can you confirm that that's H1-specific?\ngenerally goaway would determine this in h2\nSemantics - 7.2.2 Idempotent Methods talks about retries some. I'm inclined to add a very short reference to it in architecture -- e.g., \"Some requests can be retried in the event of failure; see ref\" -- around the sentence starting \"A client sends an HTTP request...\" in 2.1 Client Server Messaging. In h1-messaging, I'd like to add the behaviour that UAs actually implement to the list of examples of how they'd know it's OK to retry. AIUI, that would be something like: NAME is that accurate?\nWell, the server could have had an opportunity to process a request, because they routinely start processing based on only a small amount of the header. The fact is that some clients retry based on very weak signals, risking duplicate processing in favour of greater robustness. The retry logic as I understand it is \"have I seen any hint of a response? no? retry.\"\nIt seems like either clients or the spec should change, then. I suspect clients will not...\nThere's two things you might consider here: noting that clients might do some things to retry based on their own determination of when that is appropriate (for example, in the extreme, retrying if the connection breaks before any byte of a response is seen).recommendations about good practice, which probably doesn't want to point at what the most aggressive clients (i.e., browsers) might choose to do Trying to walk the line without making that distinction is probably too hard.\nThat seems like a good direction, except that the spec currently contains a MUST NOT; \"good practice\" isn't that strong... MUST NOT (BUT WE KNOW YOU WILL)?\nYep. Part of this work is in aligning the spec with reality. A SHOULD NOT is probably appropriate. No need to resort to RFC 6919 language.\nProposal:\nLGTM. s\/; f\/. F\/\nWhile I'm fine with this, I'm stil having a big problem with retries. Some of our (haproxy) users are pressuring us to implement the so-called \"L7 retries\" by which if the intermediary faces a problem with the server after the request was delivered, it happily retries it, even for a POST! Not only it's a technical problem (requires all requests to be buffered for the time it takes a server to respond) but it's in total violation of all rules. The problem here is that there's an ongoing trend for doing this and users find it normal since it allows them to hide their constantly crashing servers... And usually they consider that the application deals with replay protection by itself so obviously the infrastructure must retry. Thus my big concern here is to figure till what extent an intermediary could reasonably retry, or how we could have signals between UA and server to indicate if the request is retry-safe.\nA protocol extension seems reasonable for that case; I think it could be interesting to clients -- both intermediary and otherwise.\nJust leaving a note to remember to go through to make sure we got everything.\nI had a bunch of editorial suggestions to make and decided to just make them directly to the pull request to save you the time and annoyance. 
I hope that is okay, since my changes just reduce the prior diffs."} +{"_id":"q-en-http-core-1e6e4cba5f349744e5d998d93f3dadffc106c44e2296c37cf7c7cadddcdc7a5e","text":"…se) (see )\nI chose \"Payload\" in the reason phrase for consistency with 413... We do use \"enclosed representation\" in many other places, such as in the definitions of PUT, 401, and Content-Location. Maybe use \"payload\" throughout? (I guess we have way more terminology to clean up)\nNAME it's not incorrect, but sure we can tune things even more...\nThe term entity to be dismissed, in favor of It's used here, as 422 is C&P from webdav URL\nThe only thing that causes me concern here is that you're using both \"representation\" and \"payload\" in a manner that's technically correct (I think), but is likely to cause readers confusion about the difference between the two. We don't use \"enclosed representation\" elsewhere, but we do use \"representation data enclosed in the payload body\" and \"enclosed payload body\" (which seems odd to me). It would be very good if we could clarify and consistently use this terminology (here or in another bug). PS I was very tempted to ask you to move the prose tuning to a separate PR, but I won't."} +{"_id":"q-en-http-core-0926ba096a0c1383cfede9f3b1a65befe655246f2147f70e12ef54516686459e","text":"HTTP Methods have 3 boolean properties, all of which says a registration needs to define, but only \"Safe\" and \"Idempotent\" were included in the registry while \"Cacheable\" was omitted. I filed this as URL, but it probably makes more sense as a change for ter.\nI think the only concern I have here is that people often misinterpret \"cacheable\" - e.g., see .\nFWIW, that's because it isn't a boolean property of the method, but rather a capability inherent in the system in which one aspect is the method semantics. I'd prefer to remove that from the registration table. We could still require that the registration describe how and under what conditions the method semantics might be cacheable.\nIt's not in the registration table, and I tend to agree it should be kept out for the reasons above.\nThis was discussed regarding Variants, and I think the answer will be the same; we should record this sort of thing in documentation elsewhere, e.g., the .\nURL says but then the method definitions themselves seem to consistently say After this discussion, I think the method definitions (\"The response is cacheable\") are more correct, and that therefore my initial suggestion in this erratum is incorrect. URL repeats the less-correct form by listing \"cacheable\" along with \"safe\" and \"idempotent\" as properties of the method. I think removing \"cacheable\" from that list, and changing \"If the new method is cacheable, ...\" to something like would have prevented me from getting confused enough to file this issue. Do the experts here like that direction?\nNote that RFC7540 Section 8.2 relies on that distinction by saying that pushed requests MUST be cacheable in order to be pushed, but conditionally defines what happens if the delivered response does turn out to be cacheable. Eliminating the distinction entirely would leave 7540 slightly unmoored.\nI think the proposed edit to considerations for new methods is helpful; we could address the 7540 concern by parenthetically adding \"also known as cacheable methods\" or similar. Note also the first bullet of . 
If we can come up with a better phrase than \"defined as cacheable\" that's great, but it needs to be distinct from response cacheability.\nietf104: mark needs to research the 7540 question to determine if this can be made editorial based on jyasskin's above comment\n7540 points directly to 723x for this, so we have some leeway. I think we can address this by parenthetically stating that a method that does not define the conditions under which it can be used by a cache is by definition not cacheable. Will do a PR.\nLooks good to me."} +{"_id":"q-en-http-core-fddc98995f80f4d8d57eb7b2a6be3d2ecae547e4e40fae64ed14a8e94656acbb","text":"In its current wording, I interpret to allow to be used in the following two situations: The requested operation is forbidden in the current state of the resource. The requested operation is forbidden (for the user identified) by the provided authentication credentials. This ambiguity is established in the following paragraph: The ambiguity is somewhat lifted (but not entirely) by the last sentence: However, I believe the ambiguity exists and causes a lot of developers to believe that is only applicable to authorization errors. I think the wording in the previous paragraph: …the example given in : …and the wording in : …as well as existing practice establishes that is the correct status code to respond with when the requested operation is not allowed in the current state of the application (use-case 1 above), regardless of the header provided. Introducing the ambiguity of only being applicable to the provided authorization when authorization is available makes the status code harder to reason about and therefore harder to use in practice. Would it be possible to remove this ambiguity somehow? Here's some suggested wording:\nISTM that the phrase \"authorize it\" in the first sentence of 6.5.3 is the root of the issue. I note that 2616 used \"fulfill it\" instead; we should dig around and see if that was an intentional change.\nI agree \"authorize it\" is the root issue, but think it's exacerbated by the following paragraph. It would be good to clarify the entire section to make it obvious that the presence of an header doesn't require the semantics to change.\nDiscussed in Montreal; ok to change.\nThe change from “authorize” to “fulfill” helps to resolve the ambiguity, although I would like to see a in the following paragraph: >If authentication credentials were provided in the request, the server consider them insufficient to grant access. The client SHOULD NOT automatically repeat the request with the same credentials. The client MAY repeat the request with new or different credentials.\nI don't think a \"MAY\" helps here. The current text is: That's a statement of fact. If the credentials would have been sufficient, there wouldn't be an error. (It does not necessarily imply that things would have worked with different credentials).\nI think that’s exactly what the current text implies: That sufficient credentials would have lead to a successful request, regardless of the state of the resource. If we take the example from RFC 7807, there is nothing that a change of credentials can do to make the request succeed — the credit is out regardless of the authorisation.\nQuestion: this change makes 429 a specialization of 403, right?\nBut that proposed \"MAY\" isn't permissive. This is a statement, not necessarily of fact, but of one likely possibility. 
Perhaps \"the server considers\" => \"the server might consider\""} +{"_id":"q-en-http-core-50af1cf67c5a82d032aa264dc0886e466cc4fae9b4a6d5446cdb6ce723b8bb01","text":": That is incorrect, as the status code could be 204, or the response might come with chunked encoding (even when empty). This is .\nWhat purpose is this requirement serving? It seems to have the effect of prohibiting with a zero length payload. Shouldn't RFC 7230, \"3.3.3. Message Body Length\" be sufficient for determining payload sizes?\nAgreed. The text makes it sound as if OPTIONS is somehow special. Giving advice might be good, but the use of normative keywords makes this confusing. The simplest fix might indeed be to remove the sentence.\nLGTM"} +{"_id":"q-en-http-core-f4ce60354b071923329936caa7756ea13c6aec3ea3a9638bbd506150f4b2bf03","text":"For .\nNAME it didn't so much introduce it as much as it just changed the nature of the example.\nWell, before the term \"timing attack\" did not appear in the spec...\n... needs a rewrite. I'm thinking of splitting up into distinct sections, e.g., Cache Poisoning Timing Attacks Caching of Sensitive Information Information Leakage ... with appropriate mitigations in each (e.g., double keying).\nThis also adds \"Timing Attacks\", right. Updated change log as well."} +{"_id":"q-en-http-core-baa96d3ab85e7daefd5128b5555656c33cfc3d9c75471f527d3fe1860596d90e","text":"As suggested in the issue. Looking at it, more text could be moved up, or the entire section could be moved up (e.g., as a subsection of where this text went), but this seemed reasonable.\n-1; this needs to be where CRLF is used, not a different section.\nLike that? Wasn't sure what you meant by \"moving HTTP version up...\"\nI meant literally move the section titled \"HTTP Version\" up above the section titled \"Message\" and then move \"Parsing\" to a subsection of \"Message\". I'll see if I can tweak it.\nWould this\n... is awfully far from the definition of the message format; IME people read the start of section 3 in isolation and don't see the text below. Maybe move 3.5 up?\nAgreed. I have fixed this in 33c709c76bfeafbc2b44b682f1e88f99877e3aca\nHow about moving the three paras starting with \"Although the line terminator...\" to 2.1, and moving the out of the definition of and into , to be consistent with ? Then all of the text about is in one place.\nSee also .\nMy idea of moving version up to top didn't work out,but moving it down to the third subsection solves both problems, I think."} +{"_id":"q-en-http-core-a167991bb0d76088f847bb0ece3ff7cef2fc220dc3056f1eef0401e2a554daf7","text":"and with the hope of being useful for quicwg\/base-drafts I suspect that we also need to move \"Initiating HTTP over TLS\" down to the section on Routing, but that can be done separately.\nThis is now ready to merge, IMO. There is more work to do after merging, specifically taking the old RFC2818 description of initiating HTTP over TLS out and moving just the 1.1-specific bits to Messaging, but I don't want to do that in the same PR.\nI agree with Roy. This is big enough that merging and iterating is the right path.\nFrom :\nLooks good enough to merge and iterate upon. Will need change notes, of course."} +{"_id":"q-en-http-core-4f86980a834ee48d0a20446be5db2429f90c0d2a0e686162ad13b5c796445e7c","text":"pasted twice, second should have been s-maxage\nThe commit d4695a3e422f677d880dc64504c492bf71b6039d for added a paragraph to s-maxage that contains the directive must-revalidate. 
I think this was supposed to say s-maxage and worded for shared caches?\nSorry, been travelling. This was intentional (although I forgot to update the directive name). implies , which implies . That seems like a lot of indirection in the definition. I agree this wasn't a great way to do it, though. I think the options are: Leave it as is (i.e., two levels of indirection). Leave it as is, but use linking more explicitly so that the indirections can be followed more easily (at least in HTML renderings). Inline the appropriate text from into both and . I think those are ordered according to my preferences, least to most. (I strongly suspect many people don't realise has these semantics, because of it having a name similar to . Making them inline rather than referenced might help)\nI agree with the conclusion, but am pretty sure that you meant to say s-maxage in that section defining s-maxage. That's all I changed, so it should be fine unless you wanted to add more text to explain why. I don't think it is necessary to explain more because adding s-maxage implies the server wants shared caches to reuse the response.\nI agree with NAME that the additional semantics of are surprising. In particular due to section 4.2.1, comes disguised as the \"different value for shared caches\". So , and the difference is the handling of error cases like with ? Could you please try to explain why the additional semantics of make sense – that is, why did you not just make it the alternative TTL for shred caches? Thank you!\nAh, okay, so the added paragraph had the right directive before but simply doesn’t make any sense given the context."} +{"_id":"q-en-http-core-c5e192afefc2cea0f9f060ed2dae19e7613b926722ecbeba2d8d22feb7416a85","text":"(also adds two paragraph breaks)\nThanks! To be completely formally correct, the term from RFC3986 should not be quoted as \"user authentication information in the URI\" but as \"user authority information in the URI\". Is that correction possible?\nWFM. NAME ?\nNo, it isn't correct to say that it is user authority information. The purpose of userinfo in a server-based URI format (this:\/\/server\/) is to supply canned authentication data for use if the server requests such information. It is strictly for use on the client-side and is not directly passed in HTTP unless that (Basic) happens to be the form of challenge requested by the server.\nThat would be an override by HTTP of the definition in RFC3986 then? (Not saying that is illegal; HTTP predates this URI spec.)\nThe generalises the constrained use for authentication by allowing the entire URI to be a server-side spec and the client to derive a hint about it as a possible authentication user. This can end any such difference in interpretation between HTTP\/bis and the URI format. I do know this is a broadening of the user concept in HTTP; I also believe that this is not without a need.\nRFC3986 says: Specifically, the userinfo supplies the information necessary to gain access to the identified resource via a process of authentication. That process is selected and defined by the origin, not by the URI, and the username is not an identifier of authority. Note that userinfo for http and https URLs has never been allowed, going all the way back to RFC1738. Some applications do use that syntax, internally and in a way that is not subject to the Internet standards process, so we warn about it in general.\nThe states RFC3986 does not forbid or even discourage the \"user\" in the userinfo subcomponent. 
It only says and continues to describe the intricacies of \":password\" handling. The user is part of the authority section of the URI and its purpose is to zoom in on a scope for authoritative resource addressing. This syntax has in the past been (ab)used for Basic\/Digest authentication details, which only works if visitor and visited resource happen to be the same user; it is this (ab)use that is now deprecated. I have written an that places it in a header with the intention of adding user names in much the same manner as host names, as a selector for a naming scope of resources. I believe that obscuring the authority for the purposes of phishing is not really mitigated by parsing the userinfo; subdomains in DNS offer similar notational flexibility. Parsing does help against misleading password popups, but these are probably easier to remedy when credential inquiries are only made in response to and headers.\nDiscussed in Basel. Correcting the reasoning of why it was deprecated can be addressed editorially. Changing what was deprecated needs to be a much larger discussion and pull in the security community."} +{"_id":"q-en-http-core-621b0d402f1798685b52e9532c47d34d7791c573db13ca9537455b4e2b4c4447","text":"I think this is the right thing to do; we added too much text here in 7234.\nFrom : Both state that a (depending on context private, shared or both types of) \"cache MAY store the response and reuse it for later requests, even if the response would normally be non-cacheable\". To my understanding, this does not intend to override the requirements from the \"Storing Responses in Caches\" section (URL) altogether. Instead, I would assume that it only refers to the condition \"has a status code that is defined as heuristically cacheable\" in that section (or, as it was stated in RFC7234, \"has a status code that is defined as cacheable by default\")? Would it make sense to amend the list of conditions in that section, appending to the \"the response either...\" second-level list: \"contains a private response directive if the cache is not shared\"?\nOP here\nI'll trust you on the actual change :-)"} +{"_id":"q-en-http-core-7d8e93156df66556892da8f219a5e520bce7724cd3e6cb8c7e484a0ba9523375","text":"Header field-names are defined as tokens. This is an extremely permissive syntax, including characters that will cause confusion and likely break some senders\/recipients. Most of the special characters allowed are not in the registry or seen \"in the wild.\" Some research would be good to substantiate their use, but a starting point might be: There are a number of strategies we could take to the transition: Like \/ , mark some characters as \"do not generate\" but \"should consume\" Disallow registration of header fields with those characters, and discourage their use in unregistered headers If we have more confidence that they're not in use, just ignore headers containing those characters.\nMy opinion is that Apache httpd team would prefer to reduce the syntax to reduce the security issues.\nDiscussed in Montreal; interest in doing something, take to list.\nI'm not sure why this is being considered. All browsers support headers such as . Changing that seems likely to break applications.\nThe restrictions exist to protect legacy server gateways (like CGI) that are far more restrictive than browsers need to be due to the terrible idea of passing names through env vars and command-lines.\nin bkk: no real consensus, concern about compatibility. 
reconsider on impact of a case by cases\nFor the record, NGINX drops request header fields with names containing characters other than letters, digits, hyphens and optionally underscores (if configured using directive). Unfortunately, virtually anything seems to be accepted and forwarded in the response headers.\nSince I'm seeing the conversation moved from the WG to GH, I'm just pasting what I sent there for completeness. Right now haproxy only accepts : \"-\" \/ \"\" \/ \".\" \/ \"+\" \/ DIGIT \/ ALPHA \/ \"!\" \/ \"#\" \/ \"$\" \/ \"%\" \/ \"&\" \/ \"'\" \/ \"*\" \/ \"^\" \/ \"\" \/ \"|\" \/ \"~\" i.e. everything matching a token. Quite honestly, seeing any character from this extra list in a field name would look extremely suspicious to me, and I'd rather get rid of them. As Roy mentioned, the restriction is to avoid trouble with CGIs doing : eval \"hdrname=$value\" Characters like backquote (), dollar ($), or pipe(|) have long been abused to attack servers...\nAlso I proposed that we could do something less extreme than blocking messages containing such header field names, we could recommend to simply drop these fields by default. This will have no impact if they're here by accident. Then we can let the agents decide if they want to let them pass or not. Thus we could make the difference between \"forbidden characters\" (the historical ones) and \"unusual characters\" (those excluded by the new, more restrictive list).\nDiscussed in Bangkok, but no consensus on an approach.\nietf104: still seeking data for characters used in the wild. http archive mentioned as a possible source.\nI'm sceptical about the \"protect CGI servers\" (and servers similar to those). We have 2020; shouldn't these all have their own protections by now?\nDiscussed in Basel; suggestion is to document \"safe\" characters in prose (for now), both below the ABNF and in the new header field recommendations.\nThose CGI servers cannot protect themselves. The issue is that both hyphens and underscores are converted to underscores in CGI servers, so both and are converted to at the protocol level, and CGI servers cannot tell which HTTP header did it originate from, since that information is lost.\nHey Piotr, We chatted about that here and tend to agree -- will remove underscore and see how that goes down.\nWell, to be clear, it is the HTTP server that invokes CGI that is creating the environment variables from the received header fields. That server is responsible for avoiding bad transformations, such as the underscore problem above. However, since we are just talking about the characters that are good practice, we can exclude underscores since they are not commonly used in field names."} +{"_id":"q-en-http-core-1c6199e62c5c81c3a5719ff04bcbd9819f7080f99ac0a84abf1d08b016be3498","text":"Does not address Vary (see ), and some more discussion of some of the conditional request headers might be warranted.\nSimilar to , we need to use consistent language when talking about list values. Includes: [ ] Vary:"} +{"_id":"q-en-http-core-eb0d638985890ae0f0e8342626ba5ac914ed19c417dd5bb1ec6c6c93ed96b553","text":":-(\n(split from URL)\nSo the main question seems to be: is the situation for DELETE the same as for GET, or maybe more similar to OPTIONS. As far as I remember, the latter is what we discussed in Prague 2019 (URL)\nI might be wrong, but I believe the intent of the DELETE method is to just delete the target resource. 
Most of the cases out there that use a request body in the DELETE method, is to use 'delete' as a sort of rpc call to delete other, unrelated resources. While I think that this use-case is valid, I think that it would be better solved with: A PATCH request on a resource that represents multiple underlying entities, some of which ought to be removed. A DELETE request (without a body) on a resource that exactly represents the full collection of resources that should be deleted. A POST request with some custom media-type with delete instructions. Given that DELETE (unlike OPTIONS) is a non-safe method, and conforming services are likely to ignore request bodies and simply delete the context resource, I believe that DELETE should get the same treatment as GET: Emphasize that the request body has no semantic meaning. Improve the prose so there's less chance of misinterpretation (like GET)\nI agree that using the payload to instruct the server to delete something other than the identified resource is incorrect, and we could mention that. There are uses cases though where the payload would add additional information about the deletion (keep version history? move to waste basket? archive?). In all these cases, this would be consistent with the overall semantics.\nThat's very theoretical. I suspect DELETE is very much in the same bucket as GET -- i.e., many intermediaries, servers, etc. will block a DELETE request with a body.\nI disagree that it's theoretical. This would be a normative change from RFC 723x, so if you think that handling of DELETE is more like GET (than for instance OPTIONS), please prove it.\nThe onus isn't on me to prove that we can use DELETE in a way that isn't currently countenanced by the specs; if you want to open it up, do the work.\nNow I'm confused. Currently we say: That doesn't imply that you can't define semantics for it though.\nExample: URL\nWe can sit here forever and point out individual examples where developers are trying to use a body with DELETE. That's hopeless, particularly for examples like that redhat one where they are providing control data that needs to be in the header fields. In any case, sending a body with DELETE is not interoperable. Compliant servers will read and ignore them. Non-compliant servers will treat the body as the next request, resulting in request smuggling. URL\nYes, that is exactly what it implies. If you send a body, the recipient cannot use that body to have any meaningful impact on the request semantics.\nDoesn't compute for me. An intermediary that discards a DELETE body clearly is broken, right? For an origin server to support certain payloads in DELETE requests has always been ok. If the user agents knows who he's talking to, why not allow him to send the payload? Where is the harm? All I would add would be a warning that even if you provide additional information to the DELETE request, it still needs to be aligned with the base semantics.\nNope. It's only required to read and discard the body.\nSays who?\nFor example, URL\nI know we get a lot of grief from those sentences about \"has no defined semantics\", but I am quite certain that they are trying to forbid using a body while at the same time avoiding message parsers that are dependent on the request semantics. HEAD was a disaster and I was trying to prevent any further damage.\nIMHO: at the end of the day, this ticket is about making a normative change that will break some existing use cases, so it would need a better reason than \"it might not work\". 
But let's do a proper analysis first. You said that an intermediate MUST read the payload, but MAY drop it (right?). What part of the spec supports this? And is this specific to DELETE, or does it also apply to \"PATCH\" or \"FOO\"?\nDiscussed in Singapore; room sense was to behave like GET (body not allowed)."} +{"_id":"q-en-http-core-e96e600e7f01885450268877191f6971c8647bcee329250e2c2726f67a3005c4","text":"That's\nsays: I think this should be:\nDiscussed in Basel. The current ABNF defines as a special value which only invokes the semantics when it's the entire field value; otherwise, it becomes a field name (since it's valid according to the field-name ABNF) and effectively gets ignored (since it's not going to be in normal requests). An alternative way of looking at it is to say that it's a special field name that, when it occurs anywhere in the Vary field value, it effectively turns off caching, no matter what the rest of the field value contains. Waiting for data to see what implementations do. Either way, this needs some examples in the spec, since it's so special.\nSee some results . It looks like all browser engines will turn off caching if `.\nDuplicate of ? See also .\nNAME shhh, we're trying to inflate our issue stats."} +{"_id":"q-en-http-core-d642298320b2e90751dfc9b65ae56b1e02ca24b869d655c4a97548e70f787a74","text":"Fetch and XMLHttpRequest have for the longest time used comma-followed-by-space (0x2C 0x20) as separator when multiple header fields are combined into a single header field. This is also what at least Firefox and Chrome do from code inspection: URL URL I think this is because of URL encouraging values to start with a single space and folks conceptually still seeing it as multiple values. Now URL has a much more explicit definition, which no longer allows for a space as far as I can tell, \"breaking\" with what implementations are doing. Most of the time this difference is insignificant, but anything with quoted-string or equivalent will be able to tell what was used if those configuring the headers like to play games. I'd like HTTP to either allow 0x2C 0x20 as an alternative separator (and have Fetch mandate it for browsers) or simply adopt 0x2C 0x20 as separator for everyone. I have some tests around this as well, but they only affect APIs, which theoretically could have a different non-HTTP-conformant view, but that'd seem really bad: URL See URL and URL for further context. cc NAME NAME NAME\nWhitespace is allowed (and insignificant). Can you point at the actual difference between 2616 and 7230 which makes you think that something changed?\nYou mean when a quoted-string is broken into several header field instances?\nIn 2616 the same section states that values can be preceded by whitespace. Whereas in 7230 it only describes how to combine, without any mention of whitespace. Both sections are linked in OP. Yeah, also relevant for structured headers, which is how I started stumbling into this. If one client combines with a space, another without, and another with three spaces, you get subtly different strings and possibly room for security bugs.\nThis one? That's not specific to recombination.\nI know...\nAnne, I think what you're suggesting is to change: to: This is meaningful in some admittedly pathological cases, such as a header like this: Because the current text has a combined value of -- whereas most implementations will do .\nSo I think things are even more broken than I thought. 
A recipient that knows about URL and knows the header, might drop empty or whitespace values. So becomes (or maybe they'd reject it, depending on how the header was defined), but a basic intermediary would turn it into . The final recipient in that case cannot figure out what the sender meant. It seems to me that URL should only ever apply after combining header values, including empty ones, not before.\nFWIW, the message was invalid wrt that header field already, so \"figure out what the sender meant\" doesn't make a lot of sense to me.\nThe problem is that \"invalid\" depends on whether the recipient knows the syntax, which is just broken if the recipient is allowed to combine before forwarding (which makes it \"valid\").\nThe only sensible thing to do is to always combine before determining validity.\nSee PR. NAME I haven't done anything to address your later comments; how do folks feel about adding a statement spelling out that consequence, either here, or in Considerations for New Fields?\nI think explaining the issue would indeed be good.\nNAME NAME - Are we sure what we really want to say \"optional whitespace\" here? Are CR LF or VTAB allowed?"} +{"_id":"q-en-http-core-6da749a0a0d26137c5a9f1498bf7129e6099bc589cb4a35d212035a57457cfda","text":"We should explicitly tell them to reserve it.\nI think the last comma is bad, but not enough to tweak it myself."} +{"_id":"q-en-http-core-f79d9d828b762e37c32c1102273c48ecbd601fe33816717fef23386bf78c7880","text":"This is related to issue On the flight back from Switzerland, I spent time looking at the cache directives and how they differ from the descriptions in RFC2616. I worked out these problems and drafted a pull request, but I haven't checked whether we changed these deliberately for RFC7234, or if the distinctions just got lost in the rewrite. In any case, I think this is an improvement, but definitely needs reviews.\nThis ended up being an editorial change between drafts because the only normative changes were already noted for and , and this is just rewording what we had for .\nFrom : Both state that a (depending on context private, shared or both types of) \"cache MAY store the response and reuse it for later requests, even if the response would normally be non-cacheable\". To my understanding, this does not intend to override the requirements from the \"Storing Responses in Caches\" section (URL) altogether. Instead, I would assume that it only refers to the condition \"has a status code that is defined as heuristically cacheable\" in that section (or, as it was stated in RFC7234, \"has a status code that is defined as cacheable by default\")? Would it make sense to amend the list of conditions in that section, appending to the \"the response either...\" second-level list: \"contains a private response directive if the cache is not shared\"?\nOP here\nlists some directives that allow responses with Authentication to be stored, but some of those directives' definitions do not mention this.\nI think this should also be added for ; see .\nI think this issue needs to be reopened and the specification text reverted to something more like what was in RFC2616. That spec specifically requires public be present to allow responses to a request with Authorization to be stored by a shared cache, regardless of must-revalidate or proxy-revalidate. 
This is not implied by those other directives."} +{"_id":"q-en-http-core-378da245d7297b17c1087fcbbc6a7b21c8498d896aff96d9223c5d8eb31b6c96","text":"… section;\nHi Roy, Please consider this reference for the privacy considerations section: Bujlow, T., Carela-Español, V., Solé-Pareta, J., & Barlet-Ros, P. (2017). A survey on web tracking: Mechanisms, implications, and defenses. Proceedings of the IEEE, 105(8), 1476–1510. Regards, Rob\nURL\nNAME has done a lot of work in this area, so he might have another suggestion. The referenced paper does cover a good amount of techniques that target HTTP itself, so I'd say that it's worth citing unless a better option presents itself."} +{"_id":"q-en-http-core-d7b68cd5d7a13886f7fde52b0eec256c264c10876f666a306d1002ec651d1e18","text":"NAME see revision.\nThe language for and indicates that they override the rest of the caching model unconditionally; as discussed in , that is too broad. Also, is often used even though the message is still cacheable, because developers misunderstand its semantics.\nThe text in question for private: to any other cache directives present, even if the response would not otherwise be heuristically cacheable by a private cache. lists one or more field names, then only the listed fields are limited to a single user: a shared cache [...] &MAY; store the remainder of the response message without those fields. I think there are two problems here: 1) \"subject to any other cache directives present\" is too narrow; the requirements in 3. Storing Responses in Caches go far beyond cache directives (e.g., method and status code cache ability). 2) The second paragraph omits any such condition. I think this can be addressed by replacing the clause above with something like \"subject to the other requirements in [ref to section 3]\", and repeating it in the second paragraph. We'd also need to add to the list in section 3 (probably below .\nFor public, we have: response even if it would otherwise be prohibited, subject to the constraints of any other response directives present. In other words, public explicitly marks the response as cacheable. Similar to above, I think it should be \"subject to the constraints defined in [ref to section 3]. A note like this would also help:\nLGTM."} +{"_id":"q-en-http-core-667a15b0670d51ca6768461f4c6d2f23e8c40269a3c0547bb1d657a7d2870e22","text":"NAME please review.\nRoy noticed that we use a special meaning of \"invalidate\" and then define it pretty casually below. This can be improved."} +{"_id":"q-en-http-core-45a32155ccce4ea1a69599f3d9bba429f8d5ada46c6f19b0bdf57b84bb3bc829","text":"header fields ought to define its own expectations regarding the order for evaluating such fields in relation to those defined in this document and other conditionals that might be found in practice. This should just be \"HTTP\", no?"} +{"_id":"q-en-http-core-abf0e46f6ec242dbf77820a01df12ed57fd4289de182d0338c7ee6905a34143a","text":"The current section on CONNECT (7.3.6) in the semantics document refers to the proxy forwarding packets: Since the proxy terminates TCP, there is no forwarding of IP packets, but rather the contents of the TCP streams.\nWell spotted! I'd even suggest \"to blindly forward bytes in both directions ...\". Too often I see people who imagine that packet delimitation is respected on both sides of a proxy and I have to tell them that it's just a uninterrupted stream of bytes that each layer decides to delimit wherever it wants.\nI'd say that 7231 is self-consistent with the use of packet in 7230. 
If a change is needed, we should change wording consistently. Or, was this issue raised on the WIP core doc?\nNAME this issue is being raised on the WIP core doc. The same text does also exist in 7231, but I consider them to both be in error. While we're updating the text for the core doc, let's fix the text to be clearer.\nThanks for the clarification NAME that sounds good to me.\nFwiw, the HiNT draft uses the terminology: blind forwarding of IP payloads, UDP datagram payloads and TCP\/IP packet payload.\nHTTP\/3 says:\nNAME that text looks good. Something along those lines, but that also works for HTTP\/1.1, would be right for this core doc text.\nMaybe just replace \"packets\" with \"data\"?\nSo as mentioned above in , while having a look at the latest draft I was struck again by this \"packets\" there, and even forgot about this old issue. Pinging again. I suggest that we change \"forwarding of packets\" with \"forwarding of the byte stream\" so that we can close these issues."} +{"_id":"q-en-http-core-269dd006dec645063249d57ad3e18a33f68cca015bc6c877aef8d133e470b73a","text":"URL This sentence describes cases where the selected representation exists and has an entity-tag. Shall we also specify that \"if the selected representation does not exist or does not have an entity-tag, then the condition is false\"?"} +{"_id":"q-en-http-core-437892e3efcdcfaed8227ad6dfc35ec4cd4d81b2d0f0f4b28007422f7b5af313","text":"P-A is as: This seems really odd -- P-A is not generally used in a way that's specific to a single target URI.\nI think it's scoped to the request; i.e., if the client retries the same request with appropriate credentials, it's likely to succeed; we can't say much more than that.\nFTR, the text we have is inherited from ."} +{"_id":"q-en-http-core-32be1a250175e2810f8c21721bd311ee5593f52ddb8345580a0ac6371d756b2e","text":"Text taken from existing statement in Location.\nAh, makes sense, of course. Thanks NAME\nNAME - can you check again?\n2616's definition of said: 7231 dropped that, thanks to . We should probably restore it, given it can cause misconceptions like .\nNAME do you remember why you removed that text?\nYes, the requirement moved into the general section on all URI-references. That is not a special case. And, no, its presence or absence would have made no difference to the referenced bug report. If either one of them had correctly followed the spec, they would have been fine. Instead, npm sent a special value without a special scheme, and Cloudflare marked that for rate limiting based on incorrect assumptions of bad behavior rather than the actual specification. Excuses happen.\nNAME - I wasn't talking about the actual bug; I was talking about NAME -- someone who has implemented a very good HTTP engine -- misinterpreting the spec in that comment because it was reader-unfriendly. That should concern us all, because if he and implementers like him misread the spec so easily, it will be mis-implemented. And, the fault here is definitely not with him; it's with the spec. It's not reasonable to expect people to hold the entire specification series in their heads as they interpret each phrase and requirement, especially considering that in most renderings it's not cross-referenced (and even where the renderings support it, we don't cross-reference consistently). I understand that we need to be terse sometimes, and that it's reasonable to have expectations of our readers. 
But, when we use arcane terms and overload widely-used terms (even if incorrectly) to the point where the only people who understand the spec on a good day are you, Julian and I (and on a bad day, just you), it doesn't do anyone any good.\nI think there are two ways to resolve this: 1) Link back to the URI section in each case where it's used, and adjust language to make it clear that there are requirements there 2) Add language in each case where a URI is a protocol element, and state how it's resolved I think I prefer (2), because is vague about how it specifies this: Since Section 2 is about HTTP URIs in general -- not just as protocol elements -- it's not clear what the scope of this statement is. It's also not a requirement, which doesn't seem good for interoperability.\nWaiting for to land.\nPlaces this will affect: Location Content-Location Referer\nI still don't see any reason for this change. The spec already states how to do relative resolution in general, which it must do because it applies to all header fields (not just the ones we define). Saying it again on every occurrence of a relative or partial URI is just adding length to the spec.\nWe already said it for Location, and the amount of added text is minimal.\n... and it's also said explicitly in . It may be obvious to you, but it's pretty clearly not to many implementers.\nThe only confusion has been our change in 7231 of the Location field to allow relative references. I am not aware of anyone ever being confused about the target URI being the base URI for all header fields, but I can live with more text if it says that consistently. To reiterate, this entire issue is based on a misreading of a completely mistaken assumption about the field content of \"Referer: install\" being flagged as a potential security issue by Cloudflare because it is unusual, not because Cloudflare somehow might not have been aware that the syntax is allowed by the spec as a relative reference. It's defense by spec-lawyering. These changes would not have changed Cloudflare's algorithm and would not have convinced npm to choose a more common default. Actually talking to their own CDN before deploying an HTTP change would have helped. Testing would have prevented it entirely.\nNAME is right that the prose needs to be in sync with the actual ABNF. Will update the PR later today with a proposal."} +{"_id":"q-en-http-core-a75b6a0df537e92693c0b902f04e1a636fa621a101af3d2ff4ebf813921d8d0d","text":"NAME the ABNF is not the source of truth; as discussed, it's the recommended generated form. We have prose in that we use to do things like explicitly allow LFs to be separators and folding to be performed. Given that there is confusion, interoperability issues and security issues around bare CRs, a clear requirement seems proportional - especially when the ABNF does not specify error handling. I could see an argument to downgrade the requirement relating to generation to a note that it's not allowed, and I could see broadening the language in semantics to say that CRLF and LF are also not allowed.\nFor field values (Semantics), we clarify that these are illegal to generate. For recipients, we can say that for error handling, rejecting or converting to SP is common. Not sure what else needs to be said in Messaging.\nI think that the security aspect justifies a new requirement; how about \"MUST either reject or remove; suggest replacing with SP\"? I was thinking about outside the field values -- mostly in the request-line and status-line. 
That said, I agree it's much less of a concern there, and I'm OK with dropping it if you think it's not worth it. I'll update the PR if you think the above is workable.\ns\/MUST either reject or remove\/MUST either reject or replace\/ ...I guess. Yes, that sounds right to me; although there might be pusback on the MUST (me neutral). The part for Messaging I believe deserves a separate issue and discussion.\nNAME I'll make a change note in Messaging. WRT Semantics, the adjacent text about folding effectively deals with LF and CRLF. If you want to go to the trouble of separating into two issues and two pull requests, feel free. I'd rather just get it closed.\n\/me looks - NAME it already is called out. Please re-review.\nOn my agenda for today.\nopened for field values\nSplitting the issue was king of intrusive, sorry for that. The changes for [Messaging] have been merged. For [Semantics] please see URL\nPopular clients handle LF, CR, and CRLF as newlines. HTTP only defines CRLF. Previously: URL\nTo clarify, this is about the CRLF in the HTTP-message production.\nCovered here: URL ... but I agree it could be more visible (e.g., in 3).\nAnd clients also accept lone CR. It seems worth testing either that they reject (network error) or adding that.\nNAME see - in situ, see URL I'm still not sure it's obvious enough to someone reading in isolation; thoughts?\nNAME that text does not treat CR without LF as a newline, unless I'm missing something. That text even seems to suggest that CRCRLF would be a single newline, whereas I'm pretty sure it ends up being treated as two newlines.\nNAME I'm pretty sure CR alone is not valid, because I remember such a discussion a few years ago by the time haproxy used to accept this, after which I removed support for this. CRCRLF could at best be considered as a CR at the end of a line, though normally it's not a valid character there. It may possibly be replaced by LWS though.\nJust found it, its in RFC7230.5 : URL Although the line terminator for the start-line and header fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.\nRight, I realize it's not valid, but I also know that clients nevertheless accept it, which is why I raised this.\nAll clients?\nIf they don't, they're not interoperable on the Web as we know it.\nThe Web as we know it does not contain any CR-only line-delimited protocol services other than those deliberately attempting response smuggling. The protocol specifically forbids bare CR as a line delimiter, since that is known to not be interoperable with the vast majority of protocol parsers that exclusively check for LF and ignore CR. This was a big deal in 1994, but even then it was obvious that CR-delimited messages wouldn't make it through proxies. Nobody has objected to it since OS X was released and the last of the native Mac servers disappeared. If some browsers interpret CR as a line delimiter in HTTP protocol fields (other than payload content), they have security holes that should be fixed. No change to the RFC is possible here; we can't make changes that introduce new security holes to previously compliant parsers.\nI wrote some tests for this: URL Suggestions for more tests appreciated. Firefox seems to generally take the CR and include it in the value of whatever preceded it (reason phrase or header value, despite those not allowing this character). 
Chrome and Safari treat it as LF in those places, but they do not allow two CRs to end the header block (network error in Safari, empty body in Chrome). It does seem that a stricter stance is possible here, given the intersection of the results. cc NAME NAME\nIf we can be stricter here and just fail on CRs in headers, that sounds good to me. That having been said, I think changing behavior to anything but just rejecting the response would be worse than the status quo (Treating CRs as part of header values, for instance).\nYeah, it seems like CR CR could be treated as a network error at least. CR \"after\" reason phrase or individual header is less clear. But I agree that making CR part of the value isn't good.\nRegarding at the end of the line, we currently say?"} +{"_id":"q-en-http-core-55117c0028955a8e4692c7c42f738c56051c4b608789f18dd3f3dec407555ee5","text":"These \"(or laters)\" don't seem to be accurate.\nSuggest \"(or later minor revisions of HTTP\/1)\""} +{"_id":"q-en-http-core-1e91ebf33a53b74ae2c83609ab2b2c0a37d225838ae86a58eb91494dbf025227","text":"changes to -messaging pulled from URL\nI'll go ahead with this one as we already reached consensus in the original PR."} +{"_id":"q-en-http-core-54c0c442555411a28eb512daef47602716a721831857bac46c8987d595624cc2","text":"this is the part of URL related to -semantics\nContinueing the discussion from URL Looking at the next text in context: To me it's unclear why we are calling out bare CR here, isn't the situation the same for LF and CRLF?? This also is not a new requirement, the ABNF does not allow CR and friends in field-value, and in 3.2 we say \"A sender MUST NOT generate protocol elements that do not match the grammar defined by the corresponding ABNF rules.\" The subsequent paragraph talks about whitespace, and it may not be clear to everyone which characters we are referring to here (not the ones listed in Section 1.2.1 \"Whitespace\", correct?)\nProposal:\nNAME ?\n(forked from )\nA bit iffy given the sentence above it, but okay."} +{"_id":"q-en-http-core-b36718e954a0acdcc63a91abe07ad3d0386e847e90731490de3c3b6b99286b38","text":"Status code 308 is missing from the list in paragraph 2 of Section 9.1 of Semantics: \"Responses with status codes that are defined as heuristically cacheable (e.g., 200, 203, 204, 206, 300, 301, 404, 405, 410, 414, and 501 in this specification) ...\""} +{"_id":"q-en-http-core-67a5a13d190bf842a0157adbcf1a69954ad9d70c0a923a63668c69874ef74578","text":"It might be worth checking that clients indeed treat it the same as an unknown value.\nThat's an interesting question as well; but it seems to be orthogonal to this issue. Handling content codings is tricky (such as wrt ordering, repetition, and unknown encodings), It would indeed be good to have tests.\n-> move req into field definitions\nfixed by"} +{"_id":"q-en-http-core-7366f8a7b237776f82b33acf036d284f80edb937c039dbd1ce9b6c2eb02d7bee","text":"These are just the cherry picked moves from that are moving sections to Semantics.\ncreated issue for this\nSee"} +{"_id":"q-en-http-core-031902e398b2007e2d9b952dd1f0c98b5bb85263c0c12f6ca35e4ea5cd0be938","text":"In 416 status code section, remove duplicate definition of what makes a range satisfiable and refer instead to each range unit's definition. 
In sections on bytes ranges and Range, clarify that a selected representation of zero length can only be satisfiable as a suffix range and that a server can still ignore Range for that case.\nRegarding byte ranges, an empty (zero-length) representation is unsatisfiable according to section 2.1, but not unsatisfiable according to section 4.4 if the first-byte-pos is zero. I would like to see an update to the RFC which explicitly resolve this self-contradiction in whatever way seems appropriate. The following is one suggestion, but there may be others: I would like to suggest explicitly specifying that an empty 200 response should be returned in this case. It is the simplest solution to the current self-contradiction in the RFC, since it is a valid response anyway (if the server chooses to ignore the Range header), clients already handle it properly, it provides all necessary information about the representation to the client, and stating it explicitly can prevent subtle edge-case pitfalls in both the RFC and its implementations (as opposed to more intricate solutions). Perhaps the following can be added at the end of section 3.1: \"If all of the preconditions are true and the target representation length is zero, the server SHOULD send a 200 (OK) response.\" (someone in the discussion preferred this to be a MUST, but the SHOULD might be more in line with the previous sections and backward-compatibility if some implementations resolved this in some other way.) I raised this in the mailing list a while back, and it got some discussion and support but did not get officially resolved. More recently I reported it as errata but it was rejected as not being an erratum and redirected here (I apologize - didn't realize issues moved to github, and still not sure what the distinction is :-) ), so here it is, raised again. All suggestions and feedback are welcome.\nSo how do we get this fixed? There was a discussion about it in the IETF mailing list, in which others confirmed the issue. I opened an errata, but it was closed saying it's not an errata. I was pointed here to open the issue, but got no response in 9 months so far. What can I do to get a response or to help move this forward? If the powers that be don't agree that the issue requires fixing, it would be nice to get such feedback explicitly and discuss it. If this issue tracker is inactive and the issue should be posted somewhere else, I'd be happy to do it, just point me in the right direction.\nWe're not currently working on a revision to this document, so an official answer won't be forthcoming for a while. In the meantime, the best thing to do is to discuss it on list, taking the advice you get there. See Roy's response in particular: URL\nJust looked this up to see if it's been forgotten... can I assume from the new labels that it's on someone's todo list and will be looked at eventually, even if it'll take a while longer? (btw the link your referenced actually contains another bug, so I'm referring to both, i.e. the whole discussion).\nNAME Thanks for resolving the issues! Better late than never :-) Indeed now there is no longer a self-contradition in the spec, but I'm curious, why did you change your mind since the discussion in the link above, where you seemed to prefer a 200 result for an empty resource (even suggesting a MUST), to the current fix which leans towards the 416 by defining it explicitly as unsatisfiable (with the option 200 downgraded to MAY), or at best makes it a tossup which option servers will implement? 
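To make the zero-length case concrete, here is a rough sketch of the satisfiability logic under the resolution described above (only a suffix range can match an empty selected representation, and the server may still just ignore Range and send 200). This is illustrative only, not spec text, and the helper names are invented.

```python
def satisfiable(spec, rep_len):
    """spec: (first, last) for bytes=first-last, (first, None) for an
    open-ended range, (None, n) for a suffix range bytes=-n."""
    first, last = spec
    if first is None:          # suffix range: the only form that can match
        return last > 0        # an empty representation
    return first < rep_len     # never true when rep_len == 0


def choose_status(specs, rep_len):
    if rep_len == 0:
        return 200             # server MAY simply ignore Range for this case
    if any(satisfiable(s, rep_len) for s in specs):
        return 206
    return 416                 # with Content-Range: bytes */rep_len
```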
Did you have any specific pros\/cons or use cases in mind? I guess I still prefer a 200 as the simpler and more straightforward solution :-)\nAn implementation can choose to send 200. Making that a SHOULD or MUST would be a new normative requirement that isn't justifiable for interop.\nok. Thanks again!"} +{"_id":"q-en-http-core-b972dc2e65cbf8d0946d3a12a2b3b34e88c5e59bbb1216aba0ea3b615dc557c4","text":"is a bit odd; unlike its peer sections, it's not a field name, it's a general concept. Then 10.1.1.1 goes on to specify how date formats in HTTP work generally, while 10.1.1.2 is the Date header. Given that we now have a section for generic syntax, I think it makes sense to promote 10.1.1.2 up a level, moving 10.1.1.1 out to an item in 4.4.1 (Common Field Value Components).\nDiscussed; makes sense."} +{"_id":"q-en-http-core-f20d303f4c5fadc8316a64123fd73648a72b4eb62bd4fd44fa3fb8b62d53e9fa","text":"... for If-Match and If-None-Match.\nURL to special-case \"*\" as top-level construct.\nAlso need to fix things like to something like\nDiscussed; intent is to add a note at the end of each section pointing out that the behaviour of multiple\/mixed stars is undefined, and that this is a lame way to specify a field."} +{"_id":"q-en-http-core-3f94e5cd0c02aa552e18a48cc2aba1e5946a76c7c1c2e62ba39be010762ed0e0","text":"Leaves registration of \"\" for .\nYes, seems to be a bug in bap. Will check.\nit uses \"\" \/ #vary but \"vary\" is a token, and token allows \"\" anyway. Also, see .\nIn particular, \"\" is a valid field name. We probably should reserve it in the registry.\n(CORS has an issue like that as well, including for a method; see the note at URL)\nProposal: vary-by = \"\" \/ field-name Vary = 1#vary-by Then state that if \"\" appears, no \"real\" field-name is allowed to appear as well (for backwards compat). Do we need to discuss cases like Vary: , ?\nI don't see why that is an improvement.\nI thought we were going to reserve and define its meaning in prose...\nI.e., it should just be\nwhich implies registering \"*\" as reserved field name, right?\nYes. That's\nDiscussed; will create a PR to move ABNF and prose to list-based.\nDo we need to list that as change from 7230?\nProbably a good idea; will commit to master and link here.\nhit the wrong option"} +{"_id":"q-en-http-core-544aa6ccf13adc3bdd8f1e1cf8de5e13480d856921d97ac78442ba72fe30c8f7","text":"I got distracted by old text and revised the introduction of semantics.\nThe introduction should reference the currently defined wire formats (1.1 and 2, also 1.0 maybe). We also need to figure out whether and where to mention differences: expect\/continue transfer codings status phrase\nAlso, the current introduction to all three documents says: That needs to be adjusted to something like:\nHTTP\/1.2? 
Is there something you need to share with the working group, NAME\nI think (but am not sure) that this has been covered by the changes made in Maybe NAME and NAME could have a look and see if the short HTTP\/2 and HTTP\/3 descriptions need more flowers.\nI think that the number of flowers (->0) is entirely appropriate and I would be happy with those descriptions, with maybe a small tweak to the HTTP\/3 text.\nI left a review; once we resolve comments on the PR, I think this can be closed.\nSome nits, but I like the direction generally.Awesome!"} +{"_id":"q-en-http-core-ebcb0f95d8e7d05b5b1457d47cc680e3947e489b584f28dead0801fc77662f07","text":"Vary is described as: 'Host header field' is redundant; the target URI has that information already (and correctly) included."} +{"_id":"q-en-http-core-506745abe65a0f3d8c9ac2e1a06dee58e3cf1977314c04390f38a3ee62eb8181","text":"As part of the discussion about the Encryption specification we discussed how Content-Encodings can or should take parameters from HTTP headers. The consensus was that they shouldn't. Whatever information a C-E may need should be inside the body of the object, just like the GZIP header etc. We should document this as a rule, if we ever do a HTTP[1].ter\n+1, except for the constraint: \"when possible\". In the case of the encryption coding, it's clearly necessary to have the actual key information needed for description outside the payload.\nwe can add something to URL\nI'm fine with this, but I'm also fine with closing with no action. I'm not sure what value it's really adding, since it's so vague, but I'm not sure what else can be said."} +{"_id":"q-en-http-core-dbe56c97cb96b8c134010f908815833534bfd910bebd9f1d53885a0ffd50718a","text":"Historical note: HTTP 1.0 arguably specified the \"abort request if the client half-closes\" behaviour, see the last paragraph of URL, two pages down, just above URL This behaviour was implemented by several widely used HTTP servers; I worked on a firewall product at the time, and we had to take great care to make sure we handled TCP half-close correctly... The \"arguably' is that RFC 1945 says \"the closing of the connection by either or both parties\", but sending a doesn't really \"close\" the connection - that requires either a or both ends -ing the other end's .\nThe HTTP RFCs say nothing about the expectations around half-closed TCP connections. In practice, I haven't seen any HTTP client in the wild send a request, and then send a FIN () while still waiting for the server's response. Because we haven't see any clients do that, as of Go 1.8, Go's HTTP server is starting to make assumptions that reading an EOF from the client means that the client is no longer interested in the response. (reading EOF being the closest portable approximation to \"the client has gone away\"). But in URL, a user reports that they have an internal HTTP client which does indeed make a half-closed TCP request. It would be nice if the HTTP RFCs provided guidance as to whether this is allowed or frowned upon. I would recommend that the RFC suggest that clients SHOULD NOT half-close their TCP connections while awaiting responses. Because nobody else does, empirically, and relying on reading EOF is a useful signal for servers. \/cc NAME NAME\nI've maintained a server in the past that received the same bug report. As a matter of compliance I decided the client was right - EOF is a way to delimit a message (they were using it to stream request bodies without chunking). 
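To make the client behaviour in question concrete, here is a hypothetical raw-socket client that sends a request, half-closes the write side (FIN), and then still reads the response — the pattern that servers treating read-EOF as "client gone" will abort. Host and path are placeholders; this is a sketch, not how any particular client is implemented.

```python
import socket


def half_closing_get(host: str, port: int = 80) -> bytes:
    """Send a GET, shut down the write side (FIN), then read the response."""
    s = socket.create_connection((host, port))
    s.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() +
              b"\r\nConnection: close\r\n\r\n")
    s.shutdown(socket.SHUT_WR)      # half-close: we will send nothing more
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:                # server closed its side
            break
        chunks.append(data)
    s.close()
    return b"".join(chunks)
```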
But as a matter of practicality I didn't fix the bug because, as you indicate, ignoring EOF in practice means a lot of useless buffering ending in RST or timeouts.\nNAME oh, interesting. And disgusting. I hadn't considered EOF being a means to end an HTTP\/1.0 POST. Perhaps I can change Go to make an exception for HTTP\/1.0 requests with request bodies.\nNAME but NAME points out that RFC 1945 (HTTP 1\/.0) says: So using a half-closing a TCP connection was never a valid way to signal the end of an entity body.\nThis is a pretty important question. Today, it's hard to spec out how cancellation works with REST. I suspect the proposed spec change will break some users. I wonder if it's possible to measure the impact with real traffic, from the server-side, e.g. the success rate of response completion after a half-close is received. I am happy to run some experiments ...\nHere is a example of half-close connection in the wild, Livestatus. Livestatus is a broker for nagios. To use it via sockets you have to make the query, ex. and the close the write channel. After this, the Livestatus returns by the half-closed socket the results. Example in Python The important snippet is this, when closing the Write channel of the socket: Are there any workaround for golang? Source: https:\/\/mathias-URL\nNAME Interesting. But is that HTTP? I see examples like the following on the web page. It does not even look like HTTP\/0.9, though I am not sure what the proper syntax of 0.9 is.\nIt could be with unix socket or tcp socket. My fail I only read TCP, i did not see the repo is http :s. My bad, sorry, I came from another issue that was speaking about TCP without the http specification..\nHello, everyone! I would like to make a case for HTTP\/1 disallowing half-closed TCP connections from the client. What I mean by that is that when an HTTP\/1 client shuts down their writing end of the connection (when they send a TCP frame with the FIN flag), that the server can safely assume that the client lost interest in the unsent remainder of the response. On the other hand, if the server is required to support half-closed TCP connections from the client, in other words if the client is allowed to send a FIN after sending its request and it still expects to receive the response, then the server can only assume that the client lost interest when it receives an RST from the client. To be able to treat an incoming FIN as an indication of the client's loss of interest has advantages for the server: The server can free up resources earlier, upon the reception of the FIN, rather than having to wait for an RST a network round-trip time after sending a data packet. The savings are most pronounced when dealing with responses that trickle information down slowly as a form of long polling. Some stateful firewalls (NAT) between the client and the server drop packets for connections for which they no longer have state, rather than sending back RSTs. The RSTs thus never make it to the server, the server continues trickling down information until the lack of ACKs causes its window to fill and eventually a write to time out. Server resources held on behalf of a client who lost interest are wasted resources. I believe that there would be little to no disadvantage to the client. Furthermore: HTTP\/1.1 encourages the reuse of a TCP connection for a subsequent request. 
Request message bodies are framed using or : \"The presence of a message body in a request is signaled by a Content-Length or Transfer-Encoding header field.\" ( URL ) TLS 1.0 explicitly precludes half-closing at its layer: When one end receives a \"closenotify\" alert, it is required that it \"respond with a closenotify alert of its own and close down the connection immediately, discarding any pending writes.\" ( URL ) Thank you.\nA half-close has never been an indication that the client isn't interested in the response. It only indicates that the client isn't going to send any future requests on that connection. Since HTTP\/1.x is defined to be independent of the transport, I see no reason to specify half-close – what matters is that the request message be complete. A server is not obligated to send a response, regardless.\nYeah, but TCP is a pretty popular choice. I think it's worth clarifying what this part of TCP means for HTTP. This bug exists because implementations are disagreeing on what it means. We're looking to a spec for guidance.\nIt's pretty hopeless at this point. I propose we clarify that half-close means nothing to HTTP\/1.x in this spec, i.e. it's not a cancellation. If any http\/1->http\/2+ proxies send a rst stream when receiving a half-close from the client, it's a bug.\nThe problem with \"clarifying it for TCP\" is that HTTP runs on any transport with connection qualities and finding a word that means half-close for TCP doesn't always translate into some other transport or session-layer's meaning of half-close, but … I will try to find a way to do that when we get to the whole \"what do we mean by a connection\" rewrite in the semantics spec. In any case, regarding the original question: Go should not interpret an EOF on read as implying an EOF on close. Even if we don't see that as common in standard practice, there are probably thousands of deployed C applications that distinguish the two states; they won't work when some poor soul tries to port them to Go, and you'll be reliving this discussion on a regular basis. For example, IIRC, the first conversation I ever had with NAME (in 2000) was about using half-close to end requests in iCAP. It happens a lot more frequently in private hacks that are merely using the HTTP libraries for some other purpose.\nin bkk: no strong consensus. mike bishop to see if he can identify clients that are using half close actively. agreed scope here is h1\/tcp\nI don't think there are any clients currently using this, but I think it would make sense to add guidance stating that half-close does not have any semantics, for all versions of HTTP. Having servers not kill half-closed connections may allow innovation down the line, and I don't think giving this guidance is risky\nI've seen that quite a bit in field with monitoring scripts involving netcat, as well as data transfer tools. It's a bit less common nowadays since wget and curl are everywhere, but the usual stuff used to be this : host:$port | grep -q OK host:\/d' > acl-URL In haproxy we don't do anything specific with half-close, we only rely on the end of message, unless the admin specifies \"option abortonclose\" in which case a half-closed client connection will be aborted if the server has not started to respond. Also there is no real value in killing half-closed connections. Often people who want to do this mix up the end of message and the end of connection and tend to believe that if the connection is not aborted, there will be no more opportunity to do it. 
But in fact if the client really aborts, the server will receive RST in response to some send() and will be able to detect the close. I would say that : half-closed (shutdown(SHUT_WR)) is a graceful shutdown (i.e. \"nothing more to say\"), signaled on the wire with a TCP FIN and detected on the server as read()==0 abort (close()) is a real connection abort (i.e. \"stop sending this to me\"), signaled on the wire with a TCP RST and detected on the server as send()==-1 The only case where you don't know is until you've started to send(), which explains why haproxy uses its option to decide what to do before receiving the server's response. I'm just realizing that we could even suggest to send a 100-continue in response to a half-closed connection to probe the connection!\nWhat I've heard back from Microsoft folks with access to old emails is that we've seen half-close behavior from: .NET 2.0's HTTP implementation An IBM Java client, about which we have little additional detail Obviously, these are old clients with negligible share. However, that also illustrates the risk: old clients will not be updated to comply with a new spec.\nGo's HTTP package uses EOF from clients to mean \"the HTTP client is probably no longer interested in the server's response\". We use it to notify callers of the HTTP server who've registered their interest in knowing when the HTTP client is gone (especially one reading a long-polled response, like Server-Sent Events). Notably, we want to know this immediately upon a FIN, without waiting for a write to fail. (We might not have anything to write for some time.) While we might ideally use OS-specific TCP stats kernel interfaces to distinguish FIN from RST, we support a dozen+ OSes and the basic interface we can rely on is EOF on FIN. There often isn't a better userspace API available to know the TCP state. I'd rather the HTTP spec say that clients should not half-close TCP connections, as some servers may interpret a half-closed connection as a client that's no longer interested in the response. Go has a package that lets you do low-level networking-y things. Anybody porting low-level C could port to that. The concern for this bug is about Go's behavior, not its package.\nHi Brad! You must not resort to using the OS to distinguish between the two because you don't need to know that. As you say it's system-specific. And until you send anything you're not guaranteed to get an RST anyway. The problem you're facing is that you're acting as a proxy between the client and the application. You need a way to let the application know that the client indicated that it stops sending, and only the application can decide if it's an indication of end of transfer or of client abort. Sometimes the application will decide that both are equivalent and that's fine. Of course when you get a notification of error via an RST, it's pretty clear and unambigous. If I may suggest, \"just\" pass a shut_read info in one case that applications are free to interpret as aborts if they want, and an abort or close info when you're certain it's closed. But really any network-based application must be able to tell the difference between half-closed and full-closed otherwise it's deemed to either abusively close on regular half-close or ignore real closes sometimes. In practice people tend to see TCP as a bidirectional stream while it's in fact two independent unidirectional streams. The only link between the two are the ACK numbers passed along packets to confirm receipt. 
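A rough illustration of that distinction (and of the probe-write hack suggested later in this thread): read() returning 0 only means the peer has nothing more to send; a failed write is what tells you it has actually gone away. Hypothetical handler code, assuming it is called right after recv() returned empty.

```python
import socket


def client_really_gone(conn: socket.socket) -> bool:
    """Called after recv() returned b'' (the client sent FIN).

    A FIN only means "nothing more to send".  To learn whether the peer
    has fully closed, try writing something the response would have to
    start with anyway.  Note: the kernel may accept the first write and
    only surface the RST on a later one, so this is a heuristic probe.
    """
    try:
        conn.sendall(b"HTTP/1.1 ")        # harmless prefix of any status line
        return False                      # accepted: client may still be reading
    except (BrokenPipeError, ConnectionResetError):
        return True                       # RST: safe to abort the response
```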
But FTP servers for example are well used to seing half-closed connections since the data channel can be half-closed during the whole transfer.\nNAME Go's HTTP package has supported its API since 2013-05-13 and the means it's not going away. We can improve the implementation, but we can't pretend the problem doesn't exist and remove the feature. It came about because it was frequently requested. Authors of HTTP server handlers want to know when people close tabs in their browser, etc.\nNAME OK that's perfect. Then it's more a matter of clarifying what each event means and what shortcuts may be taken with what impacts. In practice it's fine most of the time, it's just that it's important to be clear about what this really means. For example it's fine to say \"you can reasonably assume that a close notification indicates a closed tab and that you can abort an ongoing operation if your application is designed to work solely with a browser, but a more robust application should consider that it only indicates the client has nothing more to send and that the server must close just after sending the final response ; specifically, some scripts or API clients may induce a close event immediately after sending a request and while waiting for a response\".\nDiscussed in Bangkok; we need to collect data about behaviour.\nAs an extra data point, both SwiftNIO and Netty treat receiving EOF on read as an indication the client is no longer interested in the response. We're definitely not happy with this: from our perspective, clients should be able to send FIN without us giving up on them. I'd be in favour of wording to disincentivise servers from doing what we (currently) do.\nietf104: significant interest in documenting somethinghere. at least \"client should not do this\" - hummed the question of discouraging the server from closing connection on rcpt of fin. a decent support for that pr.\nThere seems to be a really simple hack for differentiating FIN\/RST. As NAME noted, a write will fail if a RST was received. This issue only applies to HTTP\/1.1. Therefore: upon read-EOF, immediately write the \"HTTP\/1.1 \" of the response line. If that write fails, you can safely abort generation of the rest of the response, because it was a RST. Or does this not work on certain platforms?\nThere are also various OS level options. Both epoll and kqueue can be used to distinguish FIN and RST, so it’s usually a matter of distinguishing the two cases in higher level APIs.\nGood that this issue is still open :) Since the server always knows what works or not, writing a first-line header in response to FIN does allow most of the servers to detect aborted clients immediately. Maybe the spec can just clarify that FIN (half-close) should not be used by the server to cancel the request unless the server has a way to detect that the underlying TCP connection has been shutdown (i.e. the peer has gone away).\nReading through the discussion, a couple of notes; We already talk about half closed connections on the server side in , so talking about them on the client side isn't prima facie out of scope. It might be helpful to explicitly note that requests are not delimited by half-close in . That's not a normative change, just a clarification, and it aligns with broad implementation. It would also satisfy the desire to document \"client should not do this\" at least in that case. 
It seems like the best we can do on the server side (in terms of interpreting FIN from the client) is to document that some servers assume it to be a loss of interest, but this may not be reliable in all cases, and should not be relied upon by clients.\n^ PTAL\nIn 9.6 \"tear down\", after \"If the server receives additional data from the client on a fully closed connection, such as another request that was sent by the client before receiving the server's response\", I'd add \"or the message body\". The RST problem is far more visible with POST requests that servers reject due to authentication, redirects, lack of cookies etc, where servers are tempted to respond and immediately close without draining the entire request message. Regarding the questions above, I'm still seeing a few possible hints\/workarounds for server implementers who really want to have good interoperability and good assurance that a tab was closed. One hint is to emit a \"100-continue\" interim response in response to a half-closed request if the request was made over HTTP\/1.1. If the client is still there, it will have no effect. If the client has completely closed, this one will trigger a reset from the client which will be detected by the server. The other solution is to consider that if a client sent a request in HTTP\/1.1 without \"connection: close\" and finally half-closed the request side, it is extremely unlikely to be a simple implementation and that it can be assumed with good confidence that the client really wants to abort request processing.\nHey Willy, Would something like this do the trick? I'm a bit wary of putting this in the spec, since it's so specific and advisory. What do others think?\nI agree both with the text and your concern. Maybe we can enforce the fact that it's absolutely not a spec and just a hint by saying \"... a server that really needs to distinguish these cases could for example send a 100 (Continue) non-final response if it hasn't yet sent a final response\". This way it's clearly worded as a hint to work around existing API limitations and to remind developers that determining whether a client closed a browser window is not rocket science over TCP.\nI would implement something like a zero-byte chunk, or a small mount of dummy data (safe to the MIME type such as JSON), which could also help \"keep-alive\" a streaming response without introducing a non-standard C-T. If a server has already generated the C-L header, I would just let the request run to completion. Or are we concerned about proxies buffering the response body, because 100 will always reach the client immediately?\nBTW, I actually plan to implement some sort of cancellation support soon for web frontends behind URL; and I could help report some data wrt the detection mechanism, with or without the app responding to the cancellation event.\nYou can't send a zero-byte chunk before headers (and even less data of course). The only other reasonably interoperable thing you could send before headers to probe the connection would be a CRLF since most agents will ignore them before a message. Another approach would be to start sending \"HTTP\/1.1\" without the status yet (or just \"HTTP\" without even the version), but it can be trickier as it will require to remember that this part was already emitted. Now after the headers it's different, you're already sending response data so you don't care, you'll know very soon if the client is still there.\nI was wrong. What I proposed (i.e. 
writing extra body bytes) only works for frameworks that generate headers immediately without waiting for applications to write any data (which might include errors but as application payload). Re: 100-continue response, I noticed that the following restriction stated in rfc2616 was removed from rfc7231. \"An origin server SHOULD NOT send a 100 (Continue) response if the request message does not include an Expect request-header field with the \"100-continue\" expectation. \" Curious what's the background for this change.\nI suspect the reason was the prevision for new 1xx status codes, that just clients can remap to pure 100 when they don't know them. Note, I proposed 100 but any 1xx (except 101) would fit. For example we might instead send \"102 Processing\" (rfc2518), which could be even more suitable and possibly less confusing.\nI could live with this. Will report back how frequently we are seeing FINs with a pending http\/1.1 request on the client side. Yes."} +{"_id":"q-en-http-core-476e64f976d76d224c4cfaeb37a5ef97637face8fb8d082f8ab0a1c9193b8c99","text":"evaluating false on a duplicate change request sending 2xx without validator for reasons unknown.\nHmm, of course, now I found another related message in my archives that prompted me to remember why the requirement was added. I should probably rewrite the paragraph (without that requirement) and describe the edge case as a note.\nRFC7232 in URL has the following clause: It doesn't really explain what security or performance considerations are leading to such a requirement, but it calls for a lot of infrastructure and design work to implement it because that logic can be implemented only if the server is stateful. The stateless server cannot comply with such a requirement. See related mail list thread: URL Either RFC should explain why such logic MUST be implemented or relax \"MUST\" into \"SHOULD\" in that clause or remove that clause completely.\nShould I submit an errata proposing to remove the following clause from RFC 7232, section 3.1: \" In the latter case, the origin server MUST NOT send a validator header field in the response unless it can verify that the request is a duplicate of an immediately prior change made by the same user agent\" It seems like nobody can recall\/explain why this clause is there.\nI finally found the original commit 303b82b12a53c170f6f755aaaf8840ac1240aa77\nAnd the discussion motivating the change is under . Certainly the intent was to remove the constraint that an error be sent when a duplicate request is made, so the extra limitations were to reduce edge cases.\nIt is not an errata. This is the right place to discuss it and either find a better wording or strike the text.\nThanks, NAME for finding the original commit and your response! Sending response for repeating requests makes total sense. My issue is about a sub-condition of response that is stated in the RFC7232. In the original commit that you referenced that sub-condition is stated as: I think I understand the original recommendation not to send back. It would be better to replace with there though. But in RFC7232 that sub-condition clause is changed to: The reason of not sending with back is not explained in RFC7232 and nobody can remember why it is there or explain this requirement. Complying with this requires dev effort and extra infra since we would need to track immediate previous requests from each agent. Our API servers are light-weight and stateless so this is a huge requirement for us. 
Instead, we would prefer a way simpler approach of always sending back along with responses, but we need to comply with RFC7232 thus this discussion. So, the proposal is to do one of the following (preferred) Remove the following clause from RFC7232: (If aforementioned clause is kept) Explain the reason of not sending in that sub-condition (at least) Change to in that clause.\nThese requirements were motivated by a tiny edge case related to protecting partial updates after two clients simultaneously try a PUT. However, I can't remember if it was when one succeeds and one fails, or one or both responses are lost and they try again, or something like that, nor can I remember what difference was made by not sending ETag. Hence, I intend to remove the requirement as being too obscure to implement in a testable way."} +{"_id":"q-en-http-core-7233ae72b0f4cd215542a64660e245acda10b4c47d3987ed41967b8d23b45917","text":"We decided to move the version-independent parts of the definition of Content-Length to Semantics, but that calls for some new text to explain its semantics in general. For now, we have added (in crefs) the following text to the section on \"Content-Length\" in Semantics:\ns\/representation\/representation data\/?"} +{"_id":"q-en-http-core-6bfa806d1906539b1e449b4784981ae17e3973a38db66f1d0fe7081cab84d06f","text":"These are probably just implementation bugs, but it's somewhat curious that no browser handles from the standard correctly. They all do handle correctly on the other hand. I'll create some manual tests for web-platform-tests, unless someone has a good idea on how to automate this...\nAt it's defined by and by with But in the it is: :confused:\nWhat exactly do you mean by \"but\"?\nYou're right. My fault. I didn't read the correctly. Where is the alternative defined? Thank you for your assistance.\nSee URL and the links from there.\nThank you and sorry for the noise.\nBasic manual test at URL including links to browser bugs (all browsers fail).\nWhen I wrote tests around seven years ago, Konqueror was fixed to pass most of them. That seems to indicate that it's possible to implement the spec.\nOooh, can we convert those to web-platform-tests somehow, even if only manual for now? (I got some pushback from at least one Firefox developer on fixing this and given it's HTTP authentication I suspect most people won't be too enthused, but it also seems worth fixing as having separate parser requirements for any header between a single comma-separated and two headers is just bad and not really compatible with how HTTP is defined.)\nFeel free to steal from URL ...\nWhile it's possible to implement the spec, given that none of the browsers implement it and doing so won't really make anything possible that isn't possible now, could we limit to one challenge per header?\nIn HTTP, intermediaries are allowed to coalesce multiple field instances into a single one, no matter what the field name is. So if you have two header instances, it's legal to combine them into a single one. RFC 7230 made one exception for cookies, because they break otherwise. We looked at WWW-Authenticate and verified that no exception is needed here. So my position remains: implement the spec, or otherwise prove that it's impossible\/would break existing use cases.\nI'm the one Anne talked to. 
He's right, I'm not particularly excited about the idea of making this change - none of the browsers handle it right now, and with the ambiguity around commas in the grammar, making this change is on the higher side of the \"likely to introduce terrible bugs\" scale. Not to mention there may be intermediaries somewhere that depend on the current behaviour. Changing this could also result in interop issues there, which will be much more difficult to fix than upgrading clients. Given all that, this seems like a case where changing the spec would be more appropriate.\nHow exactly would those be affected?\nEven if you change the standard, some browsers might still have to change their code as URL indicates that, e.g., parses differently across browsers. That doesn't really seem like an acceptable outcome whatever we do.\nFWIW, that's a parse error according to the ABNF. EDIT: in the meantime I realized that the syntax is ok.\nNAME : Currently one can embed a comma somewhere in the auth challenge. These will break if browsers change behavior. Given how long this bug has been around, it's difficult to estimate the effect of such a change. Both FF and Chrome have telemetry, but at least from the perspective of Chrome, we don't have much insight into many corporate or closed networks where middle boxes tend to be much more prevalent. The change itself is fairly trivial from a code perspective. And I understand that either option isn't great. However IMHO the reward doesn't seem to justify the risk of changing browser behavior here.\nNo, it doesn't, if the parsing is done correctly. Please give an example if you think otherwise.\nI don't know, and that's what scares me. Intermediaries have all kinds of busted behaviour already, and changing the way the WWW-Authenticate header is parsed in practice (spec notwithstanding) could have unintended consequences (dropped as a malformed header is the first possibility that comes to mind).\nIt would be helpful if you gave at least one concrete example.\nHTTP is not defined by browser behavior after receipt of a valid message. Even if five major browsers caught fire and imploded upon receipt of multiple challenges, that would only increase the need for those parsers to be fixed (somehow). We can't control the bits received. The easiest fix is to simply parse the field as specified, since that's what the non-browser UAs have been doing for ages.\nNAME unfortunately I can't, as I have neither the time nor the resources to test even the most common intermediaries. I've just been around long enough to not trust them to not be doing something horrible. NAME your hyperbole is not at all useful here, thanks. I'm glad you're confident in all the major browser vendors being so perfect as to not accidentally introduce some corner-case parsing bug that is potentially catastrophic for the security of WWW-Authenticate. I would prefer to be conservative, and recognize the fact that this is the way the internet has operated for years (decades?) and we're probably better off taking the safe course of changing the spec over changing every single browser on the planet's handling of a security-sensitive header.\nNAME What hyperbole? Changing the spec is not a \"safe course\", no matter how you look at it. There exist cases where multiple schemes are in use today. There exist many intermediaries that will, without question, merge those multiple field names into a single field value regardless of sender's choice to send multiple fields. 
There exist many clients (e.g., API clients) that currently support multiple auth schemes and choose one, which is the primary audience for this bit of the protocol. The only question here is what should a browser do given the same bits received. Regardless of that answer, the spec will not be changed.\nOK, if the answer I give doesn't matter, and the spec won't be changed, then no point engaging.\nWe cannot change HTTP (for which there exist thousands of independent implementations) just because a few browser implementations choose to ignore a valid header field value. We can only make a change when all deployed implementations do the same thing, or when failing to implement it correctly is either impossible or known to be insecure. Servers compensate by not delivering the same options to bug-present user agents that they do for API clients, or rely on a TLS connection being sufficient to prevent most intermediaries from joining field values (which can lead to gateway\/CDN issues, etc.). This has never been good for the security of browsers. They get stuck with Basic while the rest of the HTTP universe is using better auth schemes. Nevertheless, we can't change the spec to prevent header field joining, which deployed implementations have always done, nor can we disallow multiple auth schemes that current APIs depend upon. Hence, an implementation can choose to magically ignore some parts of the field and occasionally fail to support certain (usually more secure) auth schemes when an intermediary is present, or we can get past the FUD and just implement the field as specified. That is an implementation choice, not a spec choice.\nFWIW, I believe there is some confusion about what a conforming client is supposed to do. Hint: it's not \"splitting where a comma is\". That's why I'm asking for examples.\nNAME if I'm not totally wrong a is not allowed within a only in a (double) quoted string So parsing is not straight forward \"splitting where a comma is\".\nNAME - that is true, but it is allowed in a parameter value (when using quoted-string syntax). FWIW, this syntax is common across many many HTTP header fields. A common parser library should do the trick.\nNAME definitely, do parsing not by yourself. But I've no idea how many libraries\/frameworks does it correctly beside of browsers.\nLet me just gently remind the audience here that authentication if anything is possibly even more widely implemented and used in and by non-browser HTTP user-agents. Then, I could add that curl also doesn't parse these headers \"correctly\" and assumes multi-line. And I don't think we've ever had a bug report about it...\nI believe this issue should be closed with no action.\nNAME does \"assumes multi-line\" mean \"only supports multiple challenges if they're on different lines\"?\nah yes, exactly.\nI think the options available to us are: Do nothing (i.e., close this issue without action) Include text advising that putting more than one on a header field line may not be interoperable Create a formal exception for (like Cookies)\nI don't see why a formal exception is needed; the situation is different from Cookies (cookies do not work without the exception, but WWW-A does). My preference is (1), but I could be persuaded to do (2).\nDoes anyone object to (2)? 
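Since "splitting where a comma is" keeps coming up: a toy sketch of why parsing multiple challenges out of one field value is more involved — commas inside quoted-strings are not separators, and a comma between auth-params does not start a new challenge. This is only a heuristic illustration (token68 credentials are ignored), not a complete RFC 7235 parser.

```python
def split_top_level(value: str):
    """Split on commas that are not inside a quoted-string."""
    items, buf, in_quotes, escaped = [], [], False, False
    for ch in value:
        if escaped:
            buf.append(ch)
            escaped = False
        elif ch == "\\" and in_quotes:
            buf.append(ch)
            escaped = True
        elif ch == '"':
            buf.append(ch)
            in_quotes = not in_quotes
        elif ch == "," and not in_quotes:
            items.append("".join(buf).strip())
            buf = []
        else:
            buf.append(ch)
    if buf:
        items.append("".join(buf).strip())
    return [i for i in items if i]


def group_challenges(items):
    """Rough grouping: an item that is just 'Scheme' or starts with
    'Scheme something' begins a new challenge; a bare 'name=value' item
    is an auth-param of the current challenge."""
    challenges = []
    for item in items:
        head = item.split("=", 1)[0].strip()
        if " " in head or "=" not in item:
            challenges.append(item)
        elif challenges:
            challenges[-1] += ", " + item
        else:
            challenges.append(item)
    return challenges


hdr = 'Basic realm="simple", Newauth realm="apps", type=1, title="Login to \\"apps\\""'
print(group_challenges(split_top_level(hdr)))
# ['Basic realm="simple"', 'Newauth realm="apps", type=1, title="Login to \\"apps\\""']
```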
If so, why?"} +{"_id":"q-en-http-core-024a6e27ed44713a5ab667f015d87f5d8a3aa00ee86ffab8c65d5dcaae902ba7","text":"Does this mean it's now valid to interleave HTTP\/2 DATA frames and HEADER frames?\nDoes this mean we cannot implement this on HTTP\/2?\nThe feature was removed during WGLC, but no -- it would have required a new frame type.\nOkay, so chunk extensions remains a unique feature of HTTP\/1 chunked encoding and there is no plan to expand that further?\nYes.\nAs discussed in and URL, mid-stream trailers (i.e., those that arrive before the end of the payload body) are interesting. A few things to discuss: [ ] What are some potential use cases?[ ] We recently stopped automatically combining trailers into headers, as that created a lot of issues. What about these -- are they yet again distinct from trailers as well as headers? Or can they be \"downgraded\" into trailers automatically? In other words, when transitioning from a hop that supports these from one that doesn't, are they just dropped onto the floor?[ ] Related - is the expectation that implementations will create separate APIs for these, or would it be that trailers are just available \/ writable early?[ ] What relationship does one of these have to the payload body? E.g., could I write one that \"knows\" exactly where in the body it occurs, so that it could sign a chunk of the body relative to it? Or could a handling implementation change its ordering in relation to body bytes?\nSome preliminary considerations. Incremental checksums eg. Service management (eg. convey the ETA of the transfer, ...) Sending signatures of partial body (in mid-stream) has some complexities respect to sending a signature of the whole message in the header\/trailer section, and requires a security URL trailers, when used for integrity can't be dropped without informing the user agent They probably need to be defined as 1#field-item because mid-stream trailers will be sent multiple times (eg. one per chunk) Whenever they are tied to the partial transmitted representation, they will eventually be unuseful when merged, unless they have some associative propertyThey will be especially useful when tied to a portion of the body They will be significantly less useful when not tied to a portion of the body\nPotential use cases mostly limited to long-lived streaming traffic, i.e. long-lived \"uploading\/downloading\" in-band signaling support such as load reporting, app-level flow-control, authn update etc Semantics trailers are a special case of mid-stream metadata, e.g. only trailers are able to convey the status of the entire transaction or the entire resource (body) our experience is that in-order delivery is useful and often assumed, since the underlying transport (for HTTP body) is in-order and reliable. Out-of-order metadata is likely a use case for connection-level metadata, IMO with the above assumption on in-order delivery, you really only need one API for delivering in-stream or trailer metadata Multi-part without in-stream metadata, you would have to use multipart (with all the overhead) for structured data, the original content-type can be preserved, e.g. JSON stream; and for media data, the content-type may have to be application\/octet-stream\nCopying over the questions from NAME over in URL, as they are good ones:\nThe semantic model of trailers is that they provide additional metadata that may (or may not) be useful to recipients but was not available prior to the initial body data sent. 
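To illustrate that model with today's machinery (an ordinary HTTP/1.1 trailer, not the withdrawn mid-stream frames): a sketch that streams a body with chunked coding and only emits a checksum once the data has been produced. The trailer field name here is made up for illustration; a real message would also declare `Transfer-Encoding: chunked` (and ideally `Trailer:`) in the header section.

```python
import hashlib


def chunked_with_trailer(body_parts):
    """Yield the bytes of a chunked message body whose trailer carries a
    checksum that could not have been known when the header section was sent."""
    h = hashlib.sha256()
    for part in body_parts:
        h.update(part)
        size = format(len(part), "X").encode()
        yield size + b"\r\n" + part + b"\r\n"     # chunk-size CRLF chunk-data CRLF
    yield b"0\r\n"                                 # last-chunk
    # Trailer section (hypothetical field name), then the terminating CRLF.
    yield b"Content-Sha256: " + h.hexdigest().encode() + b"\r\n\r\n"


wire = b"".join(chunked_with_trailer([b"hello, ", b"world"]))
```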
How the are useful and what they might contain would be defined by the field name. A recipient is free to drop the trailer metadata. A mid-stream trailer frame would have the same role in flow control as a HEADERS frame, which is none at the current time. The proposed frames would not have any specific position, but the values inside trailer fields might identify specific positions (on their own, regardless of framing). A cache would either drop the values or store them separately as trailer fields in the order received.\nHTTP\/2's flow control only cares about DATA frames. But all frames are subject to TCP's flow control. In HTTP\/3, all frames are sent on streams which is subject to QUIC's flow control. Responding to the original question, I don't know what a \"new form of flow control\" would look like and I don't see how one would improve things.\nWhat's the minimal change that would need to happen to semantics to enable this in a future version of HTTP (or an extension to an existing one)?\nI think you would need to make \"trailer part\" less special, and simply say that a version of HTTP can define a way to carry additional field sections during or after the body. H1-H3 currently define a way to carry only one such block, but an extension could provide more."} +{"_id":"q-en-http-core-e4c00d82dcf9f94df3c2f15bd90c0cfd3cfa2d4721bb3e92fa22e00d3299e0cf","text":"reads: This seems to have the effect that 412 cannot happen as part of a 100-continue response, but instead, the entire upload body must be parsed for errors (such as ), and only if it is free of errors can the server return 412 (Precondition Failed). This seems unnecessary and wasteful of bandwidth, and perhaps a result of the fact that server-side validation of documents, including the 422 Unprocessable Entity status code, is a comparatively newer feature of HTTP servers. [1] If this is the case, consider relaxing this requirement so that 412 (Precondition Failed) is the last error tested before the request body is parsed. [1] See mailing list thread:"} +{"_id":"q-en-http-core-9a73a218f6cdfa3b8bbca05e3b2119e02804c6d473e6b6cde0e5115765293a7b","text":"Need to talk about the on-the-wire scenarios; representation w\/ vary vs without.\nWhen making the edit above, I started to wonder what the best strategy was to recommend: Use the most recent valid Vary field value (current text) Just give the one without a Vary field value the lowest priority Thoughts NAME NAME ?\nThis language is a bit ambiguous regarding whether the received response is included in those considered, etc.\nOn second reading this wasn't as bad as I thought; made a few small editorial clarifications."} +{"_id":"q-en-http-core-ddb05e3b45e3f5c2b0c48dd73499c934adacc2b9fec5012cc644815611b2f82c","text":": I think that MAY should be prose, and should also list retrying the request as an option."} +{"_id":"q-en-http-core-b7cb35a1bfced886c1c8d7d6c76f83d4565a637ac0926f99916c727876674fe3","text":"The practice of combining list-form fields into a single value is widespread, and the document addresses this pretty clearly. However it never really says that this combination needs to be limited to a single field section. It's pretty clear that trailers need to be kept discrete, but there is nothing so direct. 
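A small sketch of the point being made: combining repeated field lines into one comma-separated value happens per field section, so header fields and trailer fields are never merged with each other. Helper name is invented; insertion order of a plain dict is relied on (Python 3.7+).

```python
def combine_section(field_lines):
    """Combine repeated field lines *within one field section* into a
    single comma-separated value per (case-insensitive) field name."""
    combined = {}
    for name, value in field_lines:
        key = name.lower()
        combined[key] = value if key not in combined else combined[key] + ", " + value
    return combined


headers = combine_section([("Via", "1.1 a"), ("Via", "1.1 b")])
trailers = combine_section([("Server-Timing", "db;dur=53")])   # a separate section
print(headers)    # {'via': '1.1 a, 1.1 b'}
print(trailers)   # {'server-timing': 'db;dur=53'}
```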
It's probably worth adding \"within a field section\" somewhere.\nAgree, but I can't find the definition of \"field section\" neither in MESSAGING nor in SEMANTICS :)\nI was pointed to URL which is good, but it doesn't really say that this combining only applies to a single field section (and yes, that is not a term that is defined in the document as far as I can see).\nThanks, this works nicely. Shame that you had to add a new term, but we already got that in QUIC."} +{"_id":"q-en-http-core-d3e9a5d7ffa40cccfa1fbc938587182fde5ffc946b666061234883d839365850","text":"We should really mention this, as it's common and can cause misunderstanding. I also am likely to need to reference it in .\nMaybe add a minimal explanation of \"collapse\"?"} +{"_id":"q-en-http-core-91c03b3c0905880e9c0bfd87f78b69ca0c94018058c7ed4e614df684adb0f529","text":"We say that we don't restrict how other caches (e.g., higher-layer in the browser, back\/forward) interpret cache directives, but I think it would be useful to give a bit more guidance here, as this causes a lot of confusion and interoperability problems (since the browsers still haven't sorted out the various other caches' operation). For example, we could give string guidance (non-normative) that should be respected by other caches.\nAll of these caches are in front of the networking stack and in theory might not even have all response headers if converted to a more optimal representation. If you consider JavaScript modules, they are de-duped based on their request URL in a module map. If the first response for such a URL had , we still wouldn't be making any further requests. The same goes for images, style sheets, and some other resources as I understand it and for some of these changing that would result in compatibility issues (or presumably functional errors in case of module maps). For back-forward in particular it might be worth reading URL There's a worry that an explicit opt-out could end up cargo-culted and make the feature unusable.\nNAME that makes sense, I agree the current wording is too strong. How about changing apparent to or easily controllable by the user, it is strongly encouraged to honour basic control mechanisms like Cache-Control: no-store, as they indicate the resource's intent regarding caching. to: apparent to or easily controllable by the user, it is strongly encouraged to define its operation with respect to HTTP cache directives, so as not to surprise authors who expect caching semantics to be honoured. For example, while it might be reasonable to define an application cache \"above\" HTTP that allows a response containing Cache-Control: no-store to be reused for requests that are directly related to the request that fetched it (such as those created during the same page load), it would likely be surprising and confusing to users and authors if it were allowed to be reused for requests unrelated in any way to the one from which it was obtained.\nI think that's good. Some of the caches I mentioned do cross page loads (e.g., images and style sheets). However, I think the statement about those is correct and we should be better about explaining how they fit in and make sense (and perhaps adjust if they don't).\nIt seems to me that cross-page loads are exactly the type of thing that a sender of no-store is trying to avoid. It is the type of flag sent on highly sensitive or volatile data. 
Images, stylesheets, etc which have cross-page relevance should not be using it."} +{"_id":"q-en-http-core-019244dc244a6469252abc158f8180997221fbcd7252f2a01e2c32865673561f","text":"says: What about multiple ?\nI think the right approach here is to require adherence to the most restrictive directive received, but I want to write some more tests first.\nOK, the . In a nutshell: No implementation follows the more conservative when multiple are received, either on the same line or separate lines If the directives occur on the same line, most implementations (except Firefox and Apache) prefer the first occurrence. If the directives occur on separate lines, the behaviour is about the same, except that nginx flips and prefers the second line. OTOH most implementations do honour a stricter directive (e.g., or in the presence of more liberal ones like ; the only exceptions there are nuster (a new implementation) and Fastly (this is a known issue). So it seems like the most interoperable thing to do would be to document that when a directive with an argument (like ) appears, the first occurrence should take precedence; however, conflicting directives (like and ) should defer to the strictest."} +{"_id":"q-en-http-core-233efb90a136b88a969cb5cd7f2450a034640bcb729dd2fe27394e6ec199ff61","text":"NAME It already is, this just changes how it's done.\nMy seem to indicate we might want to reconsider how . Of implementations that honour (two don't): none will consider invalid none will consider on two separate field lines invalid none will consider a duplicated value on two lines to make the header invalid if there are two header lines, all will consider only the first (and one, Safari, appears to use the min of them) none considers a non-numeric value to be invalid only Fastly and Firefox consider a non-negative integer to be invalid ... where invalid currently is required to be considered stale."} +{"_id":"q-en-http-core-d86caefa923810346ffa670d592363e959c1fe50e0fa7296a039efb1430a8d26","text":"I believe that we said this could be merged when Julian's issue was addressed, which Roy's suggestion did, so merging.\nSome guidance about how and if new preconditions can be defined would be helpful. See ."} +{"_id":"q-en-http-core-ed6947a7ce7b910f998c8a7acbe9f5b488ca4e7a99e1c0d546caff1e4722fd3c","text":"NAME - is Oct 1999 correct? (taken from Last-Modified)\nLast edit is 1999 but was just a markup fix, original document is dated 1996-10-03. HTH,\nThis seems out of place, although I understand that versioning is very message-based. I'd suggest putting it in 15. Extending HTTP?\nIt's not about extending HTTP. The version number literally exists as a field of the message, or at least inherited by the context of the connection, and must be retained by recipients because of version-specific requirements. That's why we need to define both cases as part of the message abstraction.\nFair enough. Can we open the section with a sentence or two that says that it's a field of the message, to give that context? The current text is pretty abstract."} +{"_id":"q-en-http-core-84893b154d34e61d9e71879d0d8e7be84046763593857953aefa1fcfeb10800b","text":"When I was restructuring Semantics, the paragraphs for defining when a message is complete stuck out like a sore thumb. I moved them to their own section and added a comment for later definition. 
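Returning to the duplicate-directive test results above, here is a rough sketch of the behaviour they suggest is most interoperable: the first occurrence of a valued directive like max-age wins, while a stricter directive such as no-store is still collected and should be honoured over more liberal ones. Hypothetical helper, not any particular cache, and quoted-string arguments are ignored for brevity.

```python
def parse_cache_control(values):
    """`values` is the list of Cache-Control field values received,
    one entry per field line."""
    directives = {}
    for value in values:
        for part in value.split(","):
            part = part.strip()
            if not part:
                continue
            name, _, arg = part.partition("=")
            name = name.lower()
            if name not in directives:          # first occurrence wins
                directives[name] = arg.strip('"') if arg else True
    return directives


cc = parse_cache_control(["max-age=60, no-store", "max-age=300"])
assert cc["max-age"] == "60"          # first max-age wins
assert cc.get("no-store") is True     # stricter directive still applies
```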
This issue tracks that definition (editorial)."} +{"_id":"q-en-http-core-19f1ad70c63de09f029d633548138f160bd279fa9b4f63ce826722e2bf9ac52b","text":"URL This seems to place a requirement on the server to serve enough of the representation that all satisfiable ranges requested are fulfilled. But that's actually a burden and makes range requests not as useful as they could be: a server might well not want to implement multipart\/byteranges, which is an awkward format that can't be streamed without prior knowledge of what strings are safe to use as separators; responding with a single range is much simpler to implement. If all satisfiable ranges must be served, then, without multipart\/byteranges, the only compliant options for responding to a range request with large distances between the requested ranges involve sending all of the bytes in between. Even reading the text as meaning that servers MUST NOT only serve a portion of the satisfiable ranges, client behaviour when faced with a misbehaving server is under-specified; the sensible options are re-requesting unfulfilled ranges, retrying without ranges, or signalling an error. Retrying without ranges may be wasteful, and raising an error seems overly fussy - and, if either of those is the better or even necessary option, that definitely ought to be called out. I'd be surprised if retrying without ranges or erroring was needed, because section 10.3.7.3 (Combining Parts) already deals with combining multiple 206 responses together. So, I think it'd make sense to make two changes here: An explicit SHOULD on clients receiving a 206 to check the ranges that they actually received and make further range requests if they still need any ranges which weren't included despite being satisfiable, which they can determine because Content-Range includes enough information to tell which were unsatisfiable and which were simply not fulfilled. Explicitly say that servers MAY only serve a portion of the satisfiable ranges.\nIt says \"one or more\" for exactly that reason. A requirement to send all would have to say ALL, so this is clearly not the case.\nWouldn't the text then be or something very similar? As presently written, I can't come up with a parse that unambiguously means that some of the satisfiable requested ranges may not be fulfilled - and, in any case, I do think the client behaviour on ranges not being fulfilled needs to be specced a bit more.\nI read the current text as implying that in normal operation, the server will send all of the requested data back in some form; it might be in one range or multiple ranges, depending on its preferences and capabilities. NAME AIUI you want to rewrite this so that when a client sends (for example) requests for ranges 10-100, 500-1000 and 5000-6000, the server can just choose to send one (or two, in theory) of those ranges back, rather than all of them. That might be easier for the server in a number of ways, but it relies on the client recognising that the missing ranges need to be re-requested, one-by-one; effectively, it's a new protocol. As such, if you want to enable this new pattern, I think you'd need to signal support for it in some fashion from the client. That's out of scope for this spec effort; it would be an extension.\nThat is indeed what I'd like to have. I think there's definitely still room for at least clarifying the situation, given that two editors here read the text in two contradictory ways! Hmm, I don't quite agree. 
A well-behaved server would want to see support signalled before leaving some ranges unfulfilled and I agree that that would be a new protocol, but a malfunctioning or badly-behaved server might be ignoring the send-all-ranges requirement - I think that's especially likely to be happening in the wild given that the spec here is hard to interpret. Clients will still have to deal with that, and the spec can at least list reasonable options (re-request satisfiable-but-unfulfilled ranges, re-request the whole thing, error). Here's a new straw suggestion that (I think) doesn't change the required behaviours, just formalising and clarifying exactly what's required: An explicit MUST on servers returning enough of the representation to fulfil all satisfiable ranges. An explicit SHOULD on clients checking the Content-Ranges sent back, just in case a server is misbehaving, and list the three sensible options after detecting misbehaviour with MAY.\nIn normal operation, a server will try to send all of the requested information just for the sake of avoiding another request, unless it doesn't want to for its own reason that we have no need to describe. A 206 response is self-descriptive. A client cannot interpret the response differently than how the response describes itself. The client cannot assume it has all the requested ranges it requested. There is no need to require that a server send all requested parts. There is no reason for a client to assume it would receive all of the parts without looking at the response. Hence, this is not subject to an interop requirement. The text I'd consider adding is a note at the end of that paragraph, like\nAlternatively, just add that requirement to the section on 206.\nI think the most compliant server response, if it doesn't support multipart\/byteranges, is to respond with the range 10-6000. Servers are allowed to send different ranges than the client asked for, but I think a server omitting some of the requested ranges can't really claim to be \"successfully fulfilling a range request.\" The server's response needs to include the bytes the client asked for. That said, I think something like NAME suggested text would be helpful. I've seen clients get tripped up before when the server produces a different set of requested ranges, either by extending a requested range or by coalescing ranges. It looks like a rewording of the text at the end of 10.3.7.2, which would be appropriate to either replicate in 10.3.7.1 (you can't expect exactly the range you requested) or move up to 10.3.7."} +{"_id":"q-en-http-core-a837e70d1e7805ca678a253c8a290a85ae9b842f8cdfa1ec274d1c9ade5d570d","text":"We need to discuss this; prohibition at least semantically, and probably also strongly urge against accepting on the wire for security reasons (although somewhat version-specific).\nDiscussed; just need to align 1xx language with others."} +{"_id":"q-en-http-core-b3f9dc01c9f5fa78e794cdc42062c4c6dbd91fa5e2141375310bc60751b19143","text":"IIUC httpbis-semantics changes the field name registry, thus updating BCP 90 \/ RFC3684. Is that correct? Should httpbis-semantics mention that? NAME\n\"obsoletes\": no, because BCP 90 is also for email. \"updates\": yes, I think so. NAME\nHm; I suppose so, given that it's taking some of 3684's turf away.\nNow does that make SEMANTICS part of BCP90?\nI'm inclined to say no -- it modifies the scope of BCP90 to exclude HTTP header fields. 
Worth discussing more broadly, though."} +{"_id":"q-en-http-core-f8631c94034ad0f385f40b0fe2e213808fb9c279ccc68c70ced129403f28cb12","text":": >A Last-Modified time, when used as a validator in a request, is implicitly weak unless it is possible to deduce that it is strong, using the following rules: I'm hearing that this 60-second window is problematic for videos, where clients often request a partial video segment within a minute of it being published (thanks to e.g., CMAF). Since those requests use , they require a strong validator. Should we revisit this, given that it is an arbitrarily chosen value by our own admission? I agree that some caution is necessary here, but a one-size-fits-all approach doesn't seem viable.\nDiscussed on call. Sense is to make this much more based upon the judgement of the implementation, rather than just an arbitrary number.\nSuggestion:\nNAME NAME - this creates a nested unordered list which comes out really strangely in HTML, please check. Maybe it would be better to have the two statements (that are nested) in a single list entry."} +{"_id":"q-en-http-core-c3e1a249eaf02c5d1058c2946690115b0a203e171a39990354e6cff9781343f8","text":"I think this should be something like \"Message Framing and Incomplete Messages\", and should be after the headers \/ payload \/ trailers sections."} +{"_id":"q-en-http-core-afa70ba2483c1d756b4f9e565cefac9e58c215112882007356203c4fb153a226","text":"This seems out of place, although I understand that versioning is very message-based. I'd suggest putting it in 15. Extending HTTP?\nIt's not about extending HTTP. The version number literally exists as a field of the message, or at least inherited by the context of the connection, and must be retained by recipients because of version-specific requirements. That's why we need to define both cases as part of the message abstraction.\nFair enough. Can we open the section with a sentence or two that says that it's a field of the message, to give that context? The current text is pretty abstract."} +{"_id":"q-en-http-core-f8dcadbabb85721f03d7df54d4f8dffca36a801750157c1b26e876646f420902","text":"Now that we have three different places referring to the text on updating stored headers, it deserves its own section. In the process, I noticed that the current text forgot to mention that CC: private and no-cache can have arguments that affect header storage.\nsays: As per other discussions, this needs to be more nuanced; either a reference to 4.3.4, or new text.\nThe that no one is really doing this; is either converted to a GET on the backend, or it doesn't seem to touch the cache. I think the most sane thing to do is to refer to the rules for a 304 update, but make it clear that this is optional.\nCould you clarify how HEAD is relevant for \"Combining Partial Content\"?\nArg, sorry, wrong issue."} +{"_id":"q-en-http-core-69b98caa8830ce24b3b13242e8d64e672198f4f6da5fd19fd0535a3a5084ca6b","text":"\"The following core rules are included by reference, as defined in [RFC5234], Appendix B.1: ALPHA (letters), CR (carriage return), CRLF (CR LF), CTL (controls), DIGIT (decimal 0-9), DQUOTE (double quote), HEXDIG (hexadecimal 0-9\/A-F\/a-f), HTAB (horizontal tab), LF (line feed), OCTET (any 8-bit sequence of data), SP (space), and VCHAR (any visible [USASCII] character).\" Of these, only DIGIT is actually used."} +{"_id":"q-en-http-core-a0825fc7252c88ec260f8c5fc41bb367721fbdcfe84c70faabce95897a702b47","text":"seems very out of place. 
I could almost see it as content in 3, as a kind of introduction to the terminology. There are of course other places it could go, but the current location doesn't make much sense.\nI moved the ascii art diagram back to the origin server section (where it came from in 7230) since it leads into the other diagrams."} +{"_id":"q-en-http-core-206daf7aea4afecbbf6300d55599134aa544db3d1d65e529a668812d26ff58b3","text":"As discussed on call, we need to say that PUT requests can contain response headers, and what that means."} +{"_id":"q-en-http-core-7826e2b20096acf2ec5ebc78f26f9facbbf11a8dac6b9d991e67a8c36b0b63ef","text":"I think we discussed this last week and Julian has pending changes to be made before review.\nThis just moves the ABNF, not the definition of the transfer codings.\nThis is the remaining ABNF production that Semantics imports from Messaging (for the definition of \"TE\"). Maybe we should move the production to messaging, reversing the dependency.\nDiscussed on call; worth a try.\nURL out that there are a few more things to consider: Semantics currently claims there are regular parameters here, but transfer-parameter is special wrt BWS should we move the special case of \"trailers\" in the grammar into prose? In any case, what would it mean to have a ranking on \"trailers\"? Invalid, thus ignore? \"t-ranking\" seems to duplicate \"weight\"\nFine to merge, but shouldn't we also move the extensibility and IANA registry sections?"} +{"_id":"q-en-http-core-a07b375f069c9636c1c511c3dd6c09e7d72f6fe2e595d3872b084df3ff9cb378","text":"(again)\nWe say that we don't restrict how other caches (e.g., higher-layer in the browser, back\/forward) interpret cache directives, but I think it would be useful to give a bit more guidance here, as this causes a lot of confusion and interoperability problems (since the browsers still haven't sorted out the various other caches' operation). For example, we could give string guidance (non-normative) that should be respected by other caches.\nAll of these caches are in front of the networking stack and in theory might not even have all response headers if converted to a more optimal representation. If you consider JavaScript modules, they are de-duped based on their request URL in a module map. If the first response for such a URL had , we still wouldn't be making any further requests. The same goes for images, style sheets, and some other resources as I understand it and for some of these changing that would result in compatibility issues (or presumably functional errors in case of module maps). For back-forward in particular it might be worth reading URL There's a worry that an explicit opt-out could end up cargo-culted and make the feature unusable.\nNAME that makes sense, I agree the current wording is too strong. How about changing apparent to or easily controllable by the user, it is strongly encouraged to honour basic control mechanisms like Cache-Control: no-store, as they indicate the resource's intent regarding caching. to: apparent to or easily controllable by the user, it is strongly encouraged to define its operation with respect to HTTP cache directives, so as not to surprise authors who expect caching semantics to be honoured. 
For example, while it might be reasonable to define an application cache \"above\" HTTP that allows a response containing Cache-Control: no-store to be reused for requests that are directly related to the request that fetched it (such as those created during the same page load), it would likely be surprising and confusing to users and authors if it were allowed to be reused for requests unrelated in any way to the one from which it was obtained.\nI think that's good. Some of the caches I mentioned do cross page loads (e.g., images and style sheets). However, I think the statement about those is correct and we should be better about explaining how they fit in and make sense (and perhaps adjust if they don't).\nIt seems to me that cross-page loads are exactly the type of thing that a sender of no-store is trying to avoid. It is the type of flag sent on highly sensitive or volatile data. Images, stylesheets, etc which have cross-page relevance should not be using it."} +{"_id":"q-en-http-core-de2acd845086c454706cadb08c9232be043d389312971d30e67533d2188741bf","text":"and only define the versioning there; leave message version requirements in control data. (no content changes)"} +{"_id":"q-en-http-core-bc2ab1b3881a5e2cd53b489a3a0cc21f4fe8ec37af4c4140cef662ef2fa430d2","text":"This eliminates confusion with the message body protocol element of HTTP\/1.1 and other usage of multipart body parts, and should make it easier to describe HTTP\/2 and HTTP\/3 data frames.\nexcept for multipart body parts or when referring specifically to HTTP\/1.1 message body.\nSee related comment: URL\nI believe this is good; but more eyes are needed :-) If we go ahead with that, we may want to call out the new terminolody in one of the \"changes from 723x\" sections."} +{"_id":"q-en-http-core-bbab2fa7f23c702b096ee5b87cc8f9657d446244e4f5c9372f865ea4cf611b95","text":"Recently I noticed that the host header field is being discussed in . I believe that we should move the definition of the host header to the HTTP\/1.1 messaging draft, and discuss about \"authority\" as a concept in the semantics draft, as the \"host\" header is specific to HTTP\/1.1 (except for handling of broken HTTP\/2 clients)?\nHi Kazuho, We discussed this on the editors' call. Host's semantics aren't specific to HTTP\/1.1, even if its requirements are different in other protocol versions. That's why we moved it to Semantics; it's not version-specific (and really very few headers should be). Closing; if you feel it needs more discussion please request re-opening and we'll do so.\nThank you for the consideration. While I do understand the argument that the semantics of the header field is not version specific, the use is. 
I've opened a PR that tries to clarify that.\nThank you for the changes!"} +{"_id":"q-en-http-core-40a8e38259315a87ffe87d8246c9d953701c75b70c01cfce4f16d3872b4d0b08","text":"by changing DL to a table and tweaking description of HEAD.\n\"HEAD: the same representation as GET, but without the representation data;\" Should that be \"payload data\"?\nNo, that's correct, but I will tweak it slightly: HEAD: the same representation as GET, but without transferring the representation data;\nNote that the context is a discussion of what representation is included in a 200 response, which is why we are talking about representation data here instead of payload data."} +{"_id":"q-en-http-core-29ee8df1ca7631929de4cc02254a0715670812490fa10abfd20b553d0f59c420","text":"\"The 203 response is similar to the Warning code of 214 Transformation Applied (Section 5.5 of [Caching]), which has the advantage of being applicable to responses with any status code.\" But then we have removed \"Warning\"...\nI think we can just remove this text; if it's important, we can refer to 7234 and mention that it's deprecated.\nI say just remove the sentence."} +{"_id":"q-en-http-core-8de650d3ac26296d8fbd62e4dcd7bfb16be803102adcba6bc8e9f26032ba8667","text":"\" For example, services that depend on individual user authentication often require a connection to be secured with TLS (\"Transport Layer Security\", [RFC8446]) prior to exchanging any credentials.\" This originates from RFC 7235. Nowaday, shouldn't we instead say \"require HTTPS\"?\nI think it should be ... authentication require a secured connection prior to exchanging any credentials (ref target=\"URL\"). and x:ref around the term \"secured\" (defined in that section)."} +{"_id":"q-en-http-core-0ec6731a9adac284712b7d2940160e4b7385c90c1fd2a344b4242e157fd7d032","text":"\" Considerations related to message syntax, parsing, and routing are discussed in Section 11 of [Messaging].\" Make clear that these are only the HTTP\/1.1 considerations. Optionally add links for HTTP\/2 and HTTP\/3."} +{"_id":"q-en-http-core-5f5ca535a2d58a9162cc7e17d6f5ac258148e0175438ea4d830d4cf20256d0e8","text":"\"This document defines the semantics of HTTP: its architecture, terminology, the \"http\" and \"https\" Uniform Resource Identifier (URI) schemes, core request methods, request header fields, response status codes, response header fields, and content negotiation.\" replace by just \"fields\"?\nI think it's useful to retain the current text, because those phrases are more likely to be recognised by readers coming to the document.\nNo real opinion, but it should be rewritten anyway: This document defines the semantics shared by all versions of the Hypertext Transfer Protocol (HTTP), including its architecture, terminology, and core protocol elements, along with the \"http\" and \"https\" identifier schemes. 
It could go on from there to describe the core elements, and maybe the core features as well, but shrug."} +{"_id":"q-en-http-core-aea05593e4917d20fc7a0f7c71107e6adde0197d8c697066f7c618118ea63cb1","text":"initial commit is just moving sections and updating headings; second commit will update some of \"the following\" comments to refer to the specific fields.\nI believe it needs to be under \"Proactive Negotiation\" , or be a sibling of these..."} +{"_id":"q-en-http-core-77b1bafddff864da5e8dd7be5c0dc0ccaa8461394a82c94fecee5c7f593cd283","text":"(this text is present in RFC 7233 as well) \"A valid entity-tag can be distinguished from a valid HTTP-date by examining the first two characters for a DQUOTE.\" Hm, no. If we want to properly handle weak etags, we either need to inspect three characters (actually position 0 and 2) for DQUOTE, or check pos 0 for DQUOTE or pos 1 for \"\/\". If we're not interested in weak etags, checking the first character is sufficient."} +{"_id":"q-en-http-core-5e1843a2e40ffea8be7a418ea8cca1cee12e8f70ae05d58982d54ac622e0811d","text":"\"The reason phrases listed here are only recommendations — they can be replaced by local equivalents without affecting the protocol.\" \"URL left out altogether...\" ?\n(need to check where the fix went)"} +{"_id":"q-en-http-core-5ba5a00b9b9b316992c8e124547c599e388312c47ce7ca8cd18b745f0efd1a05","text":"to be removed ... it was just leftover from the old org. Also, we should call it Request Context Fields."} +{"_id":"q-en-http-core-66b7ec0279e084ee23ef0df17efd97f0d7189f206036af7daa952e3e914405e6","text":"\"If the client has an outstanding request in transit, the client MAY repeat that request on a new connection.\"\nI don't think this needs to be changed (connection is a defined term), but if you can think of some other way to say that the 408 implies partial receipt of the last request (which means the connection gone broke) then go ahead.\nSuggestion:\n+1"} +{"_id":"q-en-http-core-abb78a1d58407ddbdc9b02ba19dc0fbc013819d1a368ed3088341ef84af92ac9","text":": \"A server MUST ignore a Range header field received with a request method other than GET.\" : \"Likewise, if the new method might have some use for partial response semantics (Section 14.2), it ought to document this, too.\"\nI can see why someone might want to use it with SEARCH as well, though they would more likely use a page parameter. I can't think of any other use case.\nFWIW, that's exactly how it was used by MS in Exchange SEARCH: We need to decide which of the statements in our spec is correct.\nHuh, that's news for me. We could have used that example for Range extensions.\nChange requirement to MUST ignore Range when not defined for that method or the method is unrecognized."} +{"_id":"q-en-http-core-7f80b5ee3916a025ad5043ae061b651051e1e33ce1d1bfbee5042a823959009e","text":"Note that I also cleaned up the surrounding paragraph and removed parentheses from around the statement that the response always terminates after the header section.\ncurrently links to \"Message Payload\", but that is not that helpful in order to find out what fields actually are \"payload header fields\".\nNote. 
We currently say: \"The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that the payload header fields (Section 6.4) MAY be omitted.\" URL would change this to: \"The server SHOULD send the same header fields in response to a HEAD request as it would have sent if the request had been a GET, except that header fields that describe message body encoding or transmission (e.g., Transfer-Encoding) MAY be omitted.\" (me confused by that)\nI should probably just list them all. Content-Length and Transfer-Encoding are the obvious ones, but I would have to go through the spec to be sure there are not more. The reason for this is that servers do not generate the body on HEAD responses, so the fields that are defined while buffering the first 8K of body (when they can still be sent as headers) are not sent on HEAD responses. Maybe we should just say that."} +{"_id":"q-en-http-core-2c62a06e8a89a16682fd5bd10b7eede9d7b96b616c73ecfaeda95acca7747c11","text":"\"This directive is NOT a reliable or sufficient mechanism for ensuring privacy.\" Should be lowercase. Could use .\nalso in 5.2.2.4"} +{"_id":"q-en-http-core-1cc8783221ecaff103283d30a6a332980bb692d81c7547c12d9b40d3e51b72e1","text":"\"Additionally, specific HTTP versions can use it to indicate the transfer codings the client is willing to accept in the response.\" If we phrase it this way, we'd have to say somewhere in Semantics what transfer codings actually are. Or we decide that they are specific to HTTP\/1.*, rephrase this, and link to Messaging."} +{"_id":"q-en-http-core-455f3f35332eb10c1c18062cb05ad17a38fe232d45c1d6d0ad5314efd06ca82b","text":"Good point.\nFor .\nURL be good to have tests...\ngoes back to URL\nsent mail: URL\nSee URL\nProposal: leave weight in ABNF for consistency with other Accept-* fields change second requirement (on recipients) not to rely on \"q\" being last"} +{"_id":"q-en-http-core-df46ab6b02083b651574254ead637fa01ca39d3ef594088772d351297d1a84f4","text":"Im ok with replacing the 0.9 reference (although having the link here acknowledges where 0.9 came from, right). The idea was to mention 0.9 in the context of RFC 1945 as well as that is an IETF document which actually describes what 0.9 is.\nURL out that RFC 1945 defines HTTP\/0.9 as well (see URL). Maybe we should add this reference, or replace the existing one.\nWe do reference 1945 for HTTP\/1.0. Revised proposal: just mention that that RFC describes 0.9 as well."} +{"_id":"q-en-http-core-2a718f702fa685f7da2c8315980f8ffb38492c71f3f5878365c9878cdb16ccf8","text":"This issue is from a related discussion on Apache httpd, which was trying to comply by sending a 504 even though the revalidation error was a 502. That's nuts. The current text for says: However, if a proxy\/gateway received a 5xx response while trying to revalidate, this requirement forces it to toss that information and supply a useless 504 instead. That isn't desirable. The entire purpose of this requirement is to make sure that the cache does not respond with its stale entry. Hence, any 5xx code is sufficient.\nTo fill in more detail, the change in question is this one: URL And covers the wide question of \"which 5xx response is returned in low level network failure cases, such as DNS error, connection refused, connection timed out, etc\". 
The commit message referred to RFC2616 14.9.4 Cache Revalidation and Reload Controls, but this seems to be a wider issue.\nYes, that MUST is too strong; I suspect it's there to make the mechanism reliable, not override other potential status codes. Will work on a PR."} +{"_id":"q-en-http-core-c7f8b98aed936ad8b669019c0b05e9655150c62290c07bda8f7c20218a187a16","text":"Note outstanding question on the issue regarding PUT.\nNAME any thoughts about whether we should mention PUT?\nI don't think we need to.\nI think PUT is already covered in PUT.\nHypothetically, you could be uploading a partial representation of an object, so I'm not sure it's worth prohibiting. However, I've not dealt with any systems that allow partial PUTs. Mostly you PUT (or POST) the blocks as individual resources, then make a final call with the blocklist to assemble the object from the existing blocks. Unless anyone has seen it in action, it's worth indicating it's not common practice. Perhaps a note saying that the behavior is dependent on the method and isn't defined for any known methods?\nThe essential problem is that it is not interoperable in general for PUT without private agreement between client and server. Furthermore, there is a risk of bypassing content filtering for security, etc. I could see it being usable with a new method.\nMy understanding has always been that you couldn't have a partial request. We've discussed this extensively before; would need to dig up the references.\n(it may have been in discussion of PATCH, which is the proper way to do this)\nProposal: ... to align with the language used in Range.\nmaybe since this case doesn't matter if the method is unrecognized.\nShould we mention PUT's requirement to generate a 400 if C-R is present?\nSorry... I just overlooked this, which is related to the ongoing work with digest NAME My understanding is that with a new method, this could be possible.\nFwiw my minor survey showed that most storage oriented services actually seem to offer some variant of request with Content-Range, see URL\nBut most of these use PUT, for which this is explicitly called out as \"MUST NOT\".\nI'll propose a RESUME method + Content-Range for that.\nI think the simplest possible approach would be PATCH with a payload format that just describes offset, length and payload. No need for a new method.\nWhich brings us back to the discussion about whether we're defining restrictions or describing usage, doesn't it? Lucas's survey demonstrates that PUT with Content-Range is widely used, even if currently prohibited.\nThe existing restriction is that PUT means make this payload the current representation. There is no way to remove that restriction without breaking interop for PUT, which pre-existed Content-Range by several years. We are talking about a dozen private implementations of partial PUT versus a billion deployed servers that will gleefully ignore the Content-Range or store it with the new representation (because it starts with Content-*). The only reason this isn't obvious is because almost all Internet-facing HTTP servers disable PUT, so we are left with a few private servers that don't care about Internet standards.\nBut if we look inside the firewall at internal servers that support WebDAV and other variants of HTTP PUT, they do not support the use of Content-Range to perform a partial PUT. 
They would be able to support a PATCH method.\nI just checked the code and it seems mod_dav has supported partial PUT on flat files since RFC2616.\nThis is a very big change at the last minute. At a minimum, it needs to be discussed on list. I'm also not very happy documenting a new feature that admits it's not interoperable or backwards-compatible, based upon one implementation in the wild (so far). IMO a much better way of doing this would be to slightly loosen the requirement to generate a 400 when PUT contains a content-range, saying that it can be allowed by an extension or an out-of-band arrangement. Then, if someone wants to, they can create a separate spec for partial PUT.\nWell, sure, but that's what the PR says. I don't see how removing the current requirement to generate 400 (which exists to preserve interop) is possible without documenting what should be done instead to preserve interop, even if that happens to be a \"Hail Mary\" pass more likely to be dropped than caught. I have no idea if this matches any specific implementation. I'd give it around a 50% chance, as opposed to the 10% that I previously assumed. It definitely needs to be discussed on list, but it seems to match the general intent if we allow Content-Range to be used at all. This doesn't prevent anyone from publishing an update in the future. In any case, I decided to propose it now since I did most of the arguing against this feature in the past.\nI'm proposing that we just say something like:\nIn the interest of getting the spec out, I slightly prefer Mark's proposal. That said, if we can get the description of partial PUT in (and that might be possible), I'm ok with that as well.\nI think the current implementation landscape suggests us to be open about partial representation in requests (and that's what is relevant WRT ). Which were the compelling reasons for forbidding ?\nNAME - see URL\nBTW, the Google API also uses a status code to indicate a not-ready-yet error.\nThey promised to get rid of the 308 years ago; maybe it's just the documentation.\nNAME do you still aim for this spec to go to full Internet Standard?\nIs this a trick question? Yes, I think we should try. 
But then, we did add other new things, and also did a lot of re-org."} +{"_id":"q-en-http-core-30c479b82348ac536074613a95b00be9e0fe3a2494944f34dab08ed9b32372d7","text":"Including a re-org of Considerations for New Fields\nI think it should be mentioned as an aside in the section on common rules for defining field values.\nmaybe wait until RFC is out..."} +{"_id":"q-en-http-core-66d3f8b7bdd8104e2ab52a9932e95b7bfe4e9d4af21f775b895ff435b5f4720d","text":"I haven't checked the errata yet, but I have to go now and this is worth reviewing.\nMaybe end the last paragraph with: \"Because of this default, senders in general should mark it as non-cacheable using ...\".\nsry, accidentally merged early; we still need to deal with the erratum\nI don't understand what happened here, but master does not contain the changes that github thinks have been merged.\nYes, I tried to undo my mistake.\nI think we need to copy the 421 status code definition from since it is generally applicable for rejecting a misdirected request.\ncurious if NAME has an opinion on moving 421 to http-core\nA slight preference for moving it, I think.\nSince we're doing h2bis and since it is applicable more broadly, I agree.\nWhen doing so, we need to consider URL\nHmm, the errata is about the cacheable paragraph (which is now heuristically) and I was going to remove that for the same reason but decided to import without changes first. I am just going to remove it now.\nThat might work as well; but then we'll have to mention in \"changes\" that the default changed.\nA much belated approval from me. This looks good."} +{"_id":"q-en-http-core-397f86574f06ea93e0a5fb251ce491656de6d2fa65e2f7866fa6f2f522a2f478","text":"This is an attempted recovery of the prematurely merged pull request that left us in a bad state.\n(we might want to link to the erratum in the change notes, let me know if I should do that)\nNAME - are you ok with this one?\nYes. I indicated my support for the pull request.\nI think we need to copy the 421 status code definition from since it is generally applicable for rejecting a misdirected request.\ncurious if NAME has an opinion on moving 421 to http-core\nA slight preference for moving it, I think.\nSince we're doing h2bis and since it is applicable more broadly, I agree.\nWhen doing so, we need to consider URL\nHmm, the errata is about the cacheable paragraph (which is now heuristically) and I was going to remove that for the same reason but decided to import without changes first. I am just going to remove it now.\nThat might work as well; but then we'll have to mention in \"changes\" that the default changed."} +{"_id":"q-en-http-core-f0c35f3c9b0871d86155a4abf94c1aff38d5315d74f01c47bdff3fab713165ff","text":"For\nIn the abstract: \"semantics... including its architecture, terminology...\" seems incorrect; suggest \"semantics... and its architecture...\"\nI always find possessive for \"HTTP\" a little hard to handle."} +{"_id":"q-en-http-core-7774b04d01bf8eb122d7234af6dcc0fb251c091df59cf5e3ced029ebef29842d","text":"... 
full stop.\nAlso 500f337 (committed instead of PR'd, sorry; if we don't do this that will need to be backed out)"} +{"_id":"q-en-http-core-d87de63e29ef33bf12fe4af897fa1de473d35dfe96b65e9a067bdc7ff5eb5624","text":"so it's adjacent to the other range-related requirements, rather than separated from them by the authenticated requirements."} +{"_id":"q-en-http-core-35c8b09ff419db2286990667e08dbb421958386e2a135fdd2b7b557b23ad9991","text":"refers to a captive portal as a type of intermediary; we should make sure this aligns with the more recent CAPPORT drafts (IIRC current usage of that term is for the origin that the user is redirected to when captured; it's not an intermediary).\nIt is probably enough to strike \"captive portal\". \"Interception proxy\" will do, if \"attacker\" does not. The term was never something that RFC 7230 relied on and as you say the modern understanding of the term has evolved."} +{"_id":"q-en-http-core-8d68d83e344566d0adea53cbc67ce41af7ae1630d953d97deac4428f42e7035e","text":"Considerations for new method should mention that the target URI has to be absolute. from URL\n+1, thanks for filing!\nWe currently say: Isn't that sufficiently clear?\nIt wasn't clear to me because I didn't read the whole doc when I was searching for this information.\nIt's weird to say this because a URI is always in absolute form (part of what makes it Uniform). Only a reference can be relative, like within the request-target of 1.1 (absolute-path) or anything called a URI-reference. I think what you want to add is a reminder that new method extensions cannot use the host:port or asterisk forms of request target. Note that this only requires a request target in absolute form if the scheme or host:port are necessary to define the target, which they are for CONNECT-UDP."} +{"_id":"q-en-http-core-f89628c2253f1e452f084d7d59fae8bf62a5c0a31b639c21dc3a618451a338cf","text":"This MUST is very fuzzy, and obviously isn't testable. Should it just be prose?\nNot convinced, but wouldn't object. Maybe \"needs to\"?\nThat's what makes the ABNF normative. I don't see a need to change it now.\nSorry, what does the ABNF have to do with this? The issue is the phrase \"MUST be able to parse any value of reasonable length\", where \"reasonable length\" is not defined.\nIt is defined as being reasonable. I have no problem testing that (subjectively). ;-)\nI meant the second half.\nHow about\nI like it"} +{"_id":"q-en-http-core-0df98ed9276a94f5dce934be67e680d3e34fb0752e45c94a30abee1a19671566","text":"There's still a difference when the response mixes strong and weak validators, right? If both types are present, the weak ones would get ignored. Is that intentional?\nYes, that is intentional.\nis silent about what to do when there's more than one validator in the message. Freshening responses with HEAD (just below) specifies that any validators must match. (use of 'match' there is a bit fuzzy, should probably link to where it's defined)\nI think this section would be better written in terms of entity tags and dates, rather than the abstract notion of strong validators. But I don't know if we have time for that. The basic idea is that a cache might have sent several strong validators in the request and is now being told that at least some of those validators indicate valid responses, so each validator in the 304 indicates a valid response. 
I don't think the origin server has a preference on which one, but in general the strong etag validators should be considered first."} +{"_id":"q-en-http-core-1827984bf301e72101693d3695a9140d9e907a7315edb61b559ded84b3195714","text":"... it's just deployments of very very old caches that would be of concern here. Given how unlikely this is in 2020, how broken they would be for other reasons, and the prevalence of HTTPS, it seems safe to modify these statements.\nIs this warning still relevant? If we want to keep it, I think we need to qualify it; e.g., \"A few old HTTP\/1.0 caches that are unlikely to be used on the open Internet...\" Similar text in cc: no-cache.\nI think the warnings can be removed."} +{"_id":"q-en-http-core-fbbb6c3f42d4f2d237e54a90a657f6bc0455b056162c47e8a947d8f22653f6cc","text":"It occurs to me that if we specify to always be used in conjunction with , and that its semantics are that caches that understand the semantics of the status code can ignore the , all current implementations should already be conferment (if not optimal). Currently, I'm not aware of any cache that implements (because it's new)."} +{"_id":"q-en-http-core-d763314de2a7fed7255009ad70f7a17f5895b90bdbd1154152af1f242dfa5eab","text":"They're still requirements, and without the changes, we don't explicitly say how to evaluate the conditionals; elsewhere (e.g., we refer to them as evaluating to either true or false, hence these changes.\nI see. Maybe a less intrusive change would be to say: \"... The origin server SHOULD NOT perform the requested method if the condition evaluates to false: the selected representation's last modification date is earlier than or equal to the date provided in the field value; instead, the origin server SHOULD generate a 304 (Not Modified) response, including only those metadata that are useful for identifying or updating a previously cached response.\"\nre-specifies precondition processing from . E.g., [...] Do we want to try to fold these into Semantics so that it isn't duplicated? That would require some small adjustments to Semantics AFAICT (e.g., to make it clear what the selected representation is in each case, etc.).\nWhy not just end each first sentence with \", as defined in Section X.x of Semantics.\" and remove the rest?\nEr, never mind ... having looked at the full section, I think this should stay as is because it is refining the requirements in terms of the cache's stored responses.\nI think we can close this. It isn't duplicated: Semantics already refers to this section to define cache handling.\nRe-reading the current text, I think I'm finding it most disturbing that semantics 13.1.3 If-Modified-Since doesn't say how to evaluate the condition, whereas caching 4.3.2 does (but just for caches). I think it needs to be removed from caching and made generic. This is necessary because semantics 13.2 refers to evaluation in their algorithms. INM and if-match define how to evaluate, but IMS and IUS don't.\nIt defines it near the end of the section because of the etag discussion. Anyway, we can improve that.\nI proposed a fix for that, but now that is proposed as well, some of the text in caching is clearly duplicative. Will adjust and push another commit or two; please review.\nThere is a larger problem here -- the section on evaluation was split into two sections (evaluation and precedence) but the conditionals only reference the first. 
This can be fixed by moving precedence back within evaluation with subsections for the other content in evaluation.\nOkay, I have updated the PR to make evaluation one section (with two subsections), fixed a few typos, made the requirements self-descriptive, and updated If-Range to be consistent as well.\n(but see comments)"} +{"_id":"q-en-http-core-7531d738b29307b1a4ec71eb68cc86c4aa5395f21a03b55bae9c870015460fcb","text":"Those changes are editorial.\nspecifies what to do when it isn't present (use the received time), but not what to do when it doesn't parse. The intuitive thing to do is align, and say that the received time should be used in this situation as well -- but probably not require implementations to replace it if forwarded. I might write some tests...\nOkay, I am still waiting on those new glasses. Too much Age. It's fine with me if we interpret an invalid Date as the received time, but that assumes we know the received time when we parse the Date. I think as long as the scope is specific to Caching, we should specify that a cache always record the time received (regardless of Date) and use the received time if the Date field value is invalid.\nis used in a number of places in caching, so that modification will be pretty invasive. Why do you think it needs to be restrained to just caching? I suppose it could be specified in semantics as something like 'If the Date field value does not parse, it MAY be replaced with the time the message was received. Caches SHOULD consider the message receipt time as the Date value when it cannot be parsed.'\nThe \"it\" in that last sentence has an unclear subject (is it the date, the arrival time, or the cache?), and \"consider\" -> \"use\".\nThe only reason I said it is okay for caching is because we already depend on the cache having a clock and storing time received. It seems reasonable, regardless of role, if the recipient has a clock. But I wasn't suggesting a new requirement."} +{"_id":"q-en-http-core-aef6e1b91e46964aff99cd2591afb6440bfe27196b4ad97392360a78190e3a20","text":"This is how it works in practice. RFC 6125 is due for an update, I guess. iPAddress subjectAltName has been supported for years and years, but it never managed to capture it properly. This is definitely a big change in that it now outlaws CN-ID. And the definition of IP-ID here is probably overreach, but I don't see how we can avoid that. I don't want to update RFC 6125 here (as much as it needs updating), so making the definition standalone seemed like the right way to handle it.\nDiscovered via a question NAME asked; it appears that 8aa1781d4fa7d25ab946f6cb1dd1b337435e0a17 made the change to require looking for URI-IDs in the certificate and I don't see any follow-up discussion on that in . However, I've never seen a certificate that offers a URI-ID for an HTTP endpoint. Shouldn't this be a DNS-ID to match current usage? Or perhaps either, with URI-ID preferred, if we believe that's the future?\nEr, right, my mistake -- I was trying to describe it as verifying a URI as input, not as looking only at the certificate field for URI-ID. It should either be left unsaid, because it is defined by that RFC, or possibly specify DNS-ID with maybe URI-ID as a backup?\nI am not entirely sure that there is need to mention URI-ID (I, for one, don't know of usage of it with HTTP; I barely know of usage of it at all). 
My understanding is that current practice prefers DNS-ID with a grudging tolerance for CN-ID as fallback (though there are efforts underway to fully deprecate use of CN-ID). Note also that when referencing the RFC 6125 procedures, the protocol document is expected not only to specify what name and type are to be used as input to verification, but also whether wildcard certificate are allowed. It is sometimes also useful to give instruction about what procedures are to be used for revocation checking, though 6125 does leave that explicitly out of its own scope. I am also trying to promote the definition and use of \"IPADDR-ID\" for the iPAddress SAN, but that's somewhat unrelated.\nDo we want to state a position on adoption of URI-ID? If not, then I think we say DNS-ID and MAY consider CN-ID, plus the wildcard language. I'm not aware of anything to reference for IPADDR-ID, and this doesn't seem like the place to define it.\nPlease don't use CN for anything. Nor rfc822Address nor domainComponent.\nI'll be happy to accept any change here that folks generally agree to, preferably in the form of a PR. I already tried just writing it myself and that's what we have now, but it's wrong because I haven't implemented this myself.\nSorry for the late comment; apparently my github notifications aren't set to send me email when a new PR references an issue I'm watching.I'll trust NAME :-)"} +{"_id":"q-en-http-core-bf613bf935d7657a7f7bac548e63c6660028bad5d65c76c016cb16a50920fa3e","text":"Without getting into the current politics and history of the term, there are two instances of this term in SEMANTICS that don't appear to contribute substantial value versus using a less loaded choice. : Here, \"list\" seems sufficient. Perhaps \"sites that have been explicitly permitted\"?\nGood catch. I thought I already checked for those when I replaced man-in-the-middle.\nFor a direct replacement both safelist and allowlist have seen some uptake."} +{"_id":"q-en-http-core-198eba9b38583cb32303f8acd8b0a10e18430327a7b4394fba9819246ef3e29e","text":"This isn't a place for equivocation. This could have been MUST except that that isn't good to use here. I considered just \"conforms to\" as well. That would also work, but that is incompatible with \"wishes to\" and I was less confident in removing that. I could do \"conforms to\" with \"gateway that interoperates with inbound HTTP servers\", but that's a bigger change."} +{"_id":"q-en-http-core-81cf3518f764af11e436b6efb7f6139464b8dd571b9cadaa0f4509f17f72eb69","text":"In the definition of the https scheme, the phrase \"strong encryption\" is used. This implies some sort of judgment. I think that the key is that both client and server need to agree on the adequacy of the protections. Noting that this might be contextual is somewhat obvious at that point, but I thought it might be worth pointing out that what is generally acceptable might change over time."} +{"_id":"q-en-http-core-551117ba2eb809dbfbca40ed0f353752222d20f1ea243a7fc053b4b732c710c1","text":"I know that this is pretty obvious and implied, but maybe it's worth saying.\nmaybe just s\/TE header field\/TE request header field\/?"} +{"_id":"q-en-http-core-dbe280769990383d06040f284a3e89c55272ea62a3f3cbeb2da85c40ad317772","text":"Though I think that this text is correct, an HTTP\/3 response only ends when the stream is closed, not at the end of the header section. 
I think that it's not necessary to say this bit anyway.\nThe related parts were done in so this bit can be applied now as editorial."} +{"_id":"q-en-http-core-8c91c35a547e190602870fa9c7e4bc7477e4597f5775ece0b7fe41aef5bec5c3","text":"The original text here sort of implied that it was enough for an origin server to just send 2xx. That assumed that the request was for the origin server itself and elided a bunch of detail. This is shorter and clearer, I think.\nI didn't intend anything here other than a clarification of what was written. I'm mostly guessing what was intended as I don't know what the original text truly intended to say. (I think that it was always possible to use CONNECT with a target of the origin server to do URL But I didn't want to highlight that here; it's a little bit of an odd thing to decide to do and so I wouldn't want to encourage it.)"} +{"_id":"q-en-http-core-a3114ce7f45576458c8d50dab2f2ae44f38ecc9b156871d722bc53df391bf2ab","text":"The first paragraph of the section 5.6.2 on Tokens, which describes delimiters, is misplaced. It should either be the last paragraph in that section or the first paragraph in the parent (Common Rules).\nMoving it up would be a bit strange because to the reference to tokens. Moving to the end of the section seems good though."} +{"_id":"q-en-http-core-d1a4e75acb3c2749f4108a987cfd71910d8c6889f958695ccdcf2db884d3ea0c","text":"In : This is probably overly broad considering that the ABNF says (I question the wisdom of including HTAB, but now is not the time to worry about that). Could this be changed to \"printable or visible US-ASCII octets, plus SP and HTAB\"? (Incidentally, why doesn't the rule use the ABNF rule?)"} +{"_id":"q-en-http-core-883241bf7cd61cdbd3f06b42d85e03636e27b1c286a612790751650b633d450d","text":"…levant when some of the acceptable ones are redundant (\nHowever, from item 4 implies otherwise. I think that this only applies if the server wishes to choose between codings, as opposed to apply them all. Compression codings are usually mutually exclusive, but other codings are not necessarily. Splitting this point into a paragraph might be the way to resolve this:\nSee also URL"} +{"_id":"q-en-http-core-edb70f3eeaa567a81fed837cdd629076bd88279677b865174a1453927b2415d1","text":"I wasn't sure where exactly to include the citation, but the end of the sentence seemed harmless."} +{"_id":"q-en-http-core-57088f88cb6fb7692697286ef6946b70051e824f1d747cd0f15f2641eaa65510","text":"The section was talking about server load as well, but it doesn't mention that being a cost of closing connections.\nI thought that the assumption was OK-ish: the expectation that clients will retry. That said, I agree that this entire thing could be better. I was looking for a minimal fix."} +{"_id":"q-en-http-core-6d78b63ab829cadb232ade81da66ff8497d83184986fb35ffcccf4e166f74190","text":"draft-ietf-httpbis-semantics says in the intro: the multipart\/byteranges media type. Is this truly something specific to 1.1? Are there difference in other versions?\nGood catch.\nokay (editorial)"} +{"_id":"q-en-http-core-453785135e7389b2b80c2d5e174fb499ab97f448557f4d457de37497cfe69ab3","text":"This text: URL a condition on the conformance requirement that suggests that clients that do know the server will support HTTP\/1.1 (or later) can do something else. But it doesn't really address what constraints then apply. I think that this was just to permit the use of the chunked transfer encoding for delimiting the message body, but that does not follow. 
I think that this might be rephrased as follows:\nIt reads the same to me, but +1."} +{"_id":"q-en-http-core-2ee13ecb15c3c54aa088682441d7fd0daee7607a605bae84f542f1e68017ae0d","text":"A response is incomplete if the TLS connection close was incomplete. Remove text about TLS reuse\/resumption, which was very inconsistent. On the first, I removed a case of \"partial closure\" rather than \"incomplete closure\". I also linked the incomplete message handling section with the TLS connection closure section.\nApparently I'm not allowed to link to after the fact. Sorry for not having done so in the opening comment.\nThis is okay, but the paragraph before your first change should be rewritten instead. I can fix that later."} +{"_id":"q-en-http-core-ed942b7e0e2be150116b83b186b5824a3c981f36a279321e727afc4165457bf9","text":"I didn't want to make this too disruptive, so I concentrated on the high points. Related to the changes in . Breakdown: integrity is important (existing text) HTTPS provides integrity, but you have to be careful with truncation (includes existing example) HTTP provides no integrity extensions can be used to provide additional protection, including managing the risk of intermediaries messing around (massaging existing text)\ntalks about message integrity in general terms. The introductory sentence is good, but it then seemingly intermingles several different factors that might be relevant: Accidental corruption of messages in the network. This is not something that applies to HTTPS, but it is relevant for the cleartext variant. The use of intermediaries that might not be trusted not to modify messages. Noting here that this is likely selective as intermediaries need to modify certain parts of messages. Attacks on the protocol, which break down further: a. Attacks on cleartext HTTP, which has no real defense. b. Attacks on TLS, which has good defenses on the whole, though less so with respect to truncation of responses. This is due to the way that HTTP\/1.1 allows connection termination (which is under attacker influence) to end a response and the way that TLS in practice is really poor at propagating close_notify in a way that is distinguishable from TCP closure. Overall, I think that this text could be made clearer."} +{"_id":"q-en-http-core-9a9469b21da196afc9b75522f055939b4571bde4e9609241cd98e48052982e3f","text":"In , Section 2 of -caching is cited. But it would seem like the entire document is a better reference here."} +{"_id":"q-en-http-core-5339c791552feacb98e78c7f613d7bc56c0e40e40b8de44aaa0238267f537b77","text":"Good point.\nI guess all the options are ugly. Should we leave the text where it was and just apply the proposed change in URL ?\nDoes better belong in -messaging?\nInto 6.2?\nSeems like the best place, yeah.\nNAME : URL\nBut I don't know what this accomplishes given the paragraph after this one also refers to Transfer-Encoding. This makes sense on its own (with Martin's adjustment). Still not thrilled about having transfer-codings in semantics. Thanks. Fairly simple fix here. I've suggested a tweak for Roy's concern (which I can't make a suggestion before because of GitHub limitations)."} +{"_id":"q-en-http-core-037d0779711c30afa20c610ffec9648c4cf45ad8eadbf361fd88fc624865f8cc","text":"is deprecated due to the ubiquity of UTF-8, but the section makes no mention of UTF-8. This might be worth a mention.\nIt's only used for though and that section already mentions UTF-8. As does the Media Type section.
Since is deprecated, perhaps the production should simply be removed and takes directly?\nAccept-Charset is deprecated primarily for privacy. It also happens to be less useful now because of utf-8. In contrast, the charset parameter is still used most of the time, mainly to reduce sniffing. We can add something about utf-8, but promoting it too hard will cause other (social) problems.\nNAME observation is correct. But then, charset is used in Content-Type as well, just not referenced directly as it is used over there as a regular parameter.\nIt's not clear to me at least that the production section also applies to .\nSyntactically, it does not (because that relies on a more generic syntax for arbitrary parameters). But I agree, this needs cleanup.\nProposal is to remove the ABNF production and to clarify the text (so that it applies to Content-Type as well)."} +{"_id":"q-en-http-core-1d5e0f7b4142e6b2b7ae22102cb7f4d91e95fa3b482b3b940b684dbd5b615889","text":"This might be obvious, but the -caching draft never concretely says that a cache key is the information that a cache uses to select a single cached response. It just : \"The cache key is comprised of, [...]\"\nLGTM"} +{"_id":"q-en-http-core-458ef26770cd10660aa4d9ce7fce2e0c406424cc1eb3152b188d58f01fb2df2b","text":"A private cache that can be manipulated so that it mistakenly creates a cache entry might not have the reach of a shared cache, but it can still be a problem. Cache poisoning occurs when a cache can be written to by an entity other than the one that the cache would normally recognize as being authorized to write to it. Take the websocket poisoning where a cache misread a websocket stream as HTTP queries and established cached records under arbitrary origins. This would allow for poisoning of entries normally not controlled by the host that was using the websocket. Even though that might have been a single user affected by the poisoning, the effect was serious enough and motivated the inclusion of masking in websocket.\nSimple :)"} +{"_id":"q-en-http-core-7d7474982d7fb2fe53b28c0387fa863ad7c26b507f765855fe8fe230042113fe","text":"The discussion of seems like it would be more appropriate under 'connections' rather than 'intermediaries.'"} +{"_id":"q-en-http-core-b89b1475ff25eade7d90c76396a7f8da352536520677ccf7bbad352d1d06c150","text":"After \"the server SHOULD respond with a 400 (Bad Request) response\" I'd add \"and close the connection\" as it's not necessarily obvious to everyone."} +{"_id":"q-en-http-core-0f848cb15f834109d2612a3d757d2c9256ed57bfef2039a69dcec34a80bc22cd","text":"Regarding the case of multi-valued header, we have this: I'm not seeing anything there which forbids the use of \"Content-length: 5,10\" anymore. I'd rather emphasize the need for extremely strict parsing of Content-length and keep the duplicate case as the only acceptable exception to this rule, approximately like this: \"The Content-Length header field plays a crucial role in message delimitation in HTTP\/1.1, and a different error handling between agents was shown to have important consequences in terms of security. As such, it is extremely important that agents do not accept Content-Length header field values that do not strictly comply with the ABNF, that they take care of rejecting values that they are not able to accurately represent internally, and that multiple occurrences of the header field are always checked for, and rejected as invalid. 
As an exception, if the header field is received multiple times with the same value, or is received as a comma-separated list of identical values, the recipient MAY replace all of them with a single valid Content-Length field containing that decimal value instead of rejecting the message.\"\nMessaging 6.3 says: I think that covers it for HTTP\/1.1 (if Transfer-Encoding and Content-Length are both present, that's encouraged to be handled like an error). HTTP\/2 says: HTTP\/3 says: ... so we could say something in semantics about invalid content-length fields when it's not being used for delimitation, if we wanted to.\nHmmm interesting. I haven't read enough of both parts yet but it's not that bad because I'm in the same situation as someone seeking a solution. I think in the semantics part, there should be an explicit reference to the messaging, for example, \"Certain versions of HTTP use Content-Length for message delimitation and may impose stricter rules\". This could be sufficient to make the reader check the respective messaging parts. What about the point about being careful about internal representation (i.e. no overflow nor loss of resolution) ? I know I placed it in the middle of the text because it flowed naturally, but this remains a valid item as well.\nAs a general rule, I think we should not add text to semantics that says some other document has a requirement unless that specifically enlightens the reader regarding the meaning of that protocol element. This is important because the Semantics spec is already \"too long\" for easy review. If the requirement matters, it will be read by readers of that other document."} +{"_id":"q-en-http-core-cdeb0ce532474247c44d99696c490ccce37f6b5b8d42a411d4fe4142c241c496","text":"Can you please factor the whitespace changes out into a separate PR?\nSorry, my editor did that without my knowledge. I didn't realize the draft was so messy to start with. Will do.\n: This is good, but I think that the phrasing here could be more direct. \"any semantic implied by the URI itself\" - when you read suggests that you might infer something from a URI about what it does. However, I think that this underplays the key point. The point that should be made here is that a resource cannot decide to change the semantics of a method to suit its purposes (I guess unless it is willing to assume all responsibility for doing so, the usual caveat we apply here for things like request logging). Separately, the specific reference to 9.2.1 might be read to imply that safeness is special, but resources don't get to override the semantics of idempotent methods either (or any other well-defined properties of methods). So I would instead phrase this more directly and use the citation as an example only:\nNAME can you PR that?\nSure."} +{"_id":"q-en-http-core-078287b8517afb01f3b007ba67d442e4c48c7c2391c4cf8839f211edc8cbea7b","text":"is saying something about using From for authentication, but I don't understand it: I think that this could be saying one of two things, maybe the latter: From contains only an unsupported claim about the identity of the generator of the message, so it can't be used for authentication. Including a From value that also contains a secret, such as a password, that could be used to authorize a request is unwise because recipients of the message won't necessarily treat the value as confidential and so the secret might leak. 
That latter is a bit strange, because an authorization could be bound to the specific request in various ways that might limit these risks. I can't tell because the logical chain between \"this is who made the request\" and use of the value for authorization isn't very well developed. It might be better to say nothing here instead.\nMy .02 - I think the intent was more the first -- just a warning that this is not an authentication mechanism to ward off the uninformed and optimistic. Given that this isn't widely used anyway, I don't think the text causes harm; I propose we just leave it as is.\nPlease, drop everything from \"since\" onward in that case. The reasoning makes no sense to me and the unsupported recommendation is clear enough.\n+1"} +{"_id":"q-en-http-core-b73c5c791381a47f7b95600dc54ca94a95c5caa2df03b609e3774beb98803d0c","text":"though I do wonder if HPACK\/QPACK needs to be explained as well, since the order in which messages are processed does impact their compression, but not their interpretation.\nI would probably say that as the semantics are still stateless, this document probably doesn't need to say anything about that. But that's a weak preference only.\nThe change looks good to me. That said, it might indeed be good to mention the impact of header compression.\nI clarified that this was about semantics. I'm hesitant to talk about header compression, as that's likely to just confuse the matter.\nI thought that was already clear. We might need to mention HPACK\/QPACK anyway because they are only stateful with regard to the in-transit encoding between peers, and thus deliver stateless messages at the HTTP layer. Or something.\nClarify if the request needs to be the 1st request after a connection is opened.\nIs there a reason why you ask? Are there problems when it's not the 1st request?\nI'm not aware of any such problem, and for example in haproxy we do support upgrades anywhere.\nJust a question I had .. which I feel the spec may want to call it out, either 1st request only or no such restriction.\nNothing in HTTP depends on where a request is placed upon a connection, because it's a stateless protocol. see last paragraph (this is likely to move to soon, which is more appropriate. Should we spell this out more clearly there?\nProbably, it never hurts to remind it.\nI see the case for proxies to reuse and upgrade an existing connection to the server. Mentioning this will encourage correct server implementations."} +{"_id":"q-en-http-core-eee066147695315efa680ce47bc147b186614bd19a697701a246a285f5e1760e","text":"I saw that too, but favoured compactness over avoiding parenthesis. Happy to take suggestions.\nTry that.\nbriefly touches this topic. More explicit descriptions are helpful, such as: responses can be sent at any time, successful for failed ones responses may be completed before requests are completed. This is described for but should not be limited to 100-continue. for 2), should a client that sees a failure response continues sending the remaining request content?\n3) is somewhat addressed in when the server also closes the connection\nFor (1) and (2), we could adjust to say (3) is situational. We could say something like 'A client that receives a response to its request while that request is still being sent SHOULD continue sending the request, unless some other signal indicates otherwise.' 
(with appropriate references, perhaps).\nOkay, I have tweaked this editorially so that the paragraph makes sense, IMO."} +{"_id":"q-en-http-core-ffad67ac8a9cb576d1f5d8ff232dffca217b853cbc7317ca420ed297b75232a9","text":"... RFC7230 + RFC7231? This would be noting an editorial change, but okay.\nPeople who are reading other specifications that use the old terminology may have questions.\nAlso worth putting 'body' in context, since many (e.g., the Fetch API) still use that term.\nI could do that directly in Fetch as part of URL I'll have to do some reading though as I've lost the plot a bit.\nWith or without Roy's addition."} +{"_id":"q-en-http-core-17291c7bdfac31efff064dc3dd41015dc3c3170e130af890fa4c67259ffc69bd","text":"implies a requirement: I would have said instead that HTTP itself doesn't care whether content adheres to this stricture, though some implementations could (or MAY) normalize line endings before sending or after receiving content that uses a media type.\nMaybe we can improve the spec by just killing the whole subsection? It refers to the canonical form in MIME, then say we don't neef that, and then makes recommendations for text\/* that AFAIU everbody ignores.\nThis was a huge deal back in 1995-96 and was necessary to bless deployed practice of sending native line endings and expecting the user agents to process them. I guess we can ignore that now? Don't be surprised if JK brings this up in IESG last call.Not seeing anything here worth saving."} +{"_id":"q-en-http-core-ac6ed2ac75a760af37b544bbd39bfd8f6ba71d283d6e43e488ef011cf47bbff7","text":"Note that if we say it has serious security hazards and we don't mention that in Security Considerations, someone might complain.\nPTAL\nThe text says I'd want to add: \"Implementations must be careful about accurately parsing large values or rejecting the messages, as failure to accurately represent the advertised value due to overflows or loss of precision may have serious security consequences.\" I think it is important to remind, because implementations seem to have progressively shifted towards 64 bit repressentations since 7230, 32-bit ones are still much present, and the risk of desynchronization by advertising more than 4GB is high. By the way, some languages do no provide more than 52 bits of accuracy because they use floats to represent any number... The risk remains low (1 injected byte every 4 PB) but its worth asking to be careful.\nTweaked proposal:\nWFM, but I'm still a bit concerned by the fact that we don't say what the receiver should do here, and it doesn't decide to receive too large a chunk :-\/ Should we explicitly mention \"a recipient must reject the message as invalid if it cannot accurately represent a received chunk size ?\"\nI think if we say something about that, we should align it with the language for Content-Length:\nOK, but then conversion overflows is not the only issue (storing in a float or double is a huge problem, and nowadays sometimes developers don't know, they store it into a \"number\"). Probably that we should use the same wording for both and slightly adjust the end to say . We can hardly do more but this needs to be said. 
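As a purely illustrative aside to the parsing concern above -- the helper name and the 2**53 - 1 cap below are assumptions made for the example, not wording from the draft -- a recipient-side check along the lines being discussed might look like this:

```python
import re

# Hypothetical helper illustrating the strict handling proposed above; the
# name and the 2**53 - 1 cap are assumptions for the example, not spec text.
MAX_SAFE = 2**53 - 1   # stay well inside what a 64-bit int (or a JS number) holds exactly

def parse_content_length(field_lines):
    """Return one validated length, or raise ValueError to reject the message.

    field_lines: every Content-Length field line value received,
    e.g. ["42"], ["42", "42"], or ["42, 42"].
    """
    members = [m.strip() for line in field_lines for m in line.split(",")]
    if not members:
        raise ValueError("Content-Length present but empty")
    if len(set(members)) != 1:
        raise ValueError("conflicting Content-Length values")
    value = members[0]
    if not re.fullmatch(r"[0-9]+", value):        # digits only: no sign, space, hex
        raise ValueError("malformed Content-Length")
    length = int(value)                           # arbitrary precision, no silent wrap
    if length > MAX_SAFE:
        raise ValueError("Content-Length larger than safely representable")
    return length
```

The same shape of check would apply to chunk sizes (with hex digits in place of decimal ones); the point made in the thread is that the value is parsed into an exact integer or the message is rejected, and is never routed through a float or a fixed-width type that can overflow silently.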
Using a float with 24-bit mantissa to represent a content-length is already a problem, but in a chunk, 48 MB are enough to miss 3 bytes and desynchronize on a final chunk!"} +{"_id":"q-en-http-core-f9d2a126e5120af583b0881cce4f01b17b0fb5ee9ff54d342055f9ab14eb08b9","text":"It isn't a normative change because the ABNF already requires port to be present in this form of request. The description is just inconsistent. I also intend to change the reference to 3986 to the right section of this document.\nThe as used for CONNECT describes the request target as comprising an authority, which consists of an IP or reg-name and, optionally, a port number. The example shows an explicit port of 80. What happens if an explicit port is not included? Does the proxy use the port number associated with the implied protocol (80 if the inbound connection is TCP without TLS, 443 if it is TLS)? Or is it an error? As far as I am aware, a port is always provided in practice. Maybe the answer is to explicitly require that the port number is always present and note that no default port can be supplied in this case as there is no scheme and therefore no associated default port.\ncc NAME\nNAME +1. This is the most natural outcome based on our established consensus in H2 and H3 that CONNECT requests is not accompanied by a scheme.\nIt's supposed to be a 400 error.\nI verified that Apache mod_proxy will respond with 400 if there is no \":\" or if the port is empty or non-numeric, and 403 if the port is not configured for use.\nIs this something that needs to be listed as normatice change?"} +{"_id":"q-en-http-core-30e8974f6650c60c485383965ccf1aa1167ead719962c239924c56eff853f121","text":"I'm disappointed that RFC 6919 doesn't list \"ARE EXPECTED TO\".\nTime for a bis?\nIn general, an unknown or new method can be handled by a resource by rejecting the request (with 406 for instance). However, some entities (intermediaries in particular) might be forced to engage with methods that are unknown to them. I couldn't find text in or Section 16.1 on this. However, the draft might benefit from advice on this. There are a few things that might be said, with reference to Section 9, which already says much of this: Intermediaries are expected to forward new methods. This enables the deployment of new methods. New methods might be safe, idempotent, or cacheable. Or they might not be. Actions taken need to consider this possibility and act accordingly. For instance, an intermediary that might be able to automatically retry a request cannot do so for an unknown method, which it does not know whether it is idempotent (9.2) or a cache can't cache a response unless the method is known to be cacheable (9.3).\nPTAL. I don't think your second bullet is actionable; that's already clear in 16.1, and intermediaries aren't required to do any of those things."} +{"_id":"q-en-http-core-1d2d6887f48cb2d4b566de12017308e464f7780c98e0bf9444fabd68c158ce9e","text":"It's written: Actually I'd rather say \"MUST send a version no higher than their own in forwarded messages\" (poor wording, I know, maybe someone can propose better). Indeed, if an intermediary receives an HTTP\/1.0 request and passes it as HTTP\/1.1, the server will wrongly assume that the client can deal with 1.1 (e.g. chunks) and the message may have to be degraded by the intermediary (such as de-chunking and rely on close only). Furthermore, seeing 1.0 for a server is often an indication of very limited (or possibly bogus) client. 
For example some intermediaries might avoid compressing or caching when facing HTTP\/1.0 messages, and as such it's preferable to let such versions be properly advertised in messages that could be considered as potentially unsafe. Maybe a different wording should be \"intermediaries... MUST NOT pass a message showing a version they do not support, and MUST make sure the message always conforms to the advertised version\".\nI agree with Willy: the underlying requirement is that intermediaries accept and send messages only in versions that they understand (unless acting as a tunnel, I guess). The original text basically means that, even if it is phrased in terms of what value is placed in a specific field rather than as the true requirement.\nDiscussed at Feb 21 interim; should work on a proposal to loosen this somewhat, at least for intermediaries. SHOULD + context?\nOK let's try with that.\nbut see feedback from NAME"} +{"_id":"q-en-http-core-c09fca7181842ed45d65bf2b94cd8fa63df9d0550345e833200ea1160b26fc92","text":"Apologies for the lateness, but going through , I think we need to adjust handling of invalid headers: Right now, it's that caches SHOULD treat responses with invalid values as stale. However: No cache tested treats a response with in this manner; all ignore the field. Only Fastly and Firefox treat a response with in this manner; the rest ignore the field. Interestingly, most caches do consider a response with to be stale, except for Apache and Chrome. We specify that the first field line should be used when multiple are present. That's almost universally adhered to (with one deviation by Safari), but we omit multiple values on the same field line from this rule (which are treated as invalids as per above). However: When that line is , all caches reuse the response. When that line is , only Safari and Apache reuse it. This suggests that we could align with current implementations and improve interoperability by: Specifying that invalid fields should be ignored, not make the response stale. This is nearly universal practice. Specifying that the first field value should be used when multiple can be parsed. This is also nearly universal practice."} +{"_id":"q-en-http-core-af7a0dc5fddea304310e4e7854112d3094a5cf5850e64e59a132d447147606b7","text":"has a number of caveats about date parsing that aren't mentioned in ; they should be referenced."} +{"_id":"q-en-http-core-863125301383789d145030c02feb9d4192130156113caf8aeb411de929db3810","text":"That's what this does; the text above already has that effect. It does raise the question of whether we should add CTL to , however. Thoughts?\nI believe the proposed text now is somewhat misleading, as it does not mention that CTLs are (and always have been invalid. No, that would be a breaking change.\nWe still need to rewrite or remove\ndraft-ietf-httpbis-semantics-latest currently says: However, when I implemented an HTTP\/1.1 parsing library that followed this rule, it turned out to broken in the real world. Specifically, users reported that sites using Google Analytics were setting cookies with values containing the ASCII character , which is a control character (URL). And in my investigations at the time, browsers and popular clients like curl all supported this just fine. It would be great if the next round of RFCs could look into this in more detail and define the production in a way that would be acceptable to e.g. the browsers. 
In my library we currently reject NUL and exotic whitespace (, , , ), on the grounds that those are high-risk for interop bugs and splitting exploits, but allow all other bytes to pass through, including control characters. I don't know if that's the best solution, but in our experience it's at least closer to being real-world compatible than what the RFC draft currently says.\nSee . URL is what browsers implement, which is slightly more lenient than what you have there. web-platform-tests has tests for that as well.\nWas this bug reported to Google Analytics?\nTo be clear, at Mozilla we found the same thing on routers and such. It's not an isolated incident.\nAs with the discussion we’re having on-thread about , is it worth us revisiting the language in this specification when we know that non-conformance is widespread?\nNo, control characters as a set being disallowed in header fields is a relatively new requirement, whereas HTTP has explicitly forbidden CR-only line breaks since 1993 because that is a well-known security hole. Allowing some of the control characters to pass through is not going to decrease the security of valid implementations, whereas allowing CR will only increase the number of insecure browsers subjecting their own users to cache poisoning attacks.\nHmmm - unless I'm missing something they weren't allowed in RFC 2616 either. What's new are the instructions for recipients how to deal with malformed values.\nRight, that makes sense, though more strict handling of field values in 7230 was part of the move away from lenient handling for robustness in 2616.\nAs an HTTP implementer, I'd like to respectfully plead for you to reconsider here, and align the RFC text with the WHAT-WG text for better real-world interoperability. YOLOing together a HTTP\/1.1 implementation that's interoperable but insecure is easy. Writing a secure but non-interoperable implementation is pointless, because no-one will use it. Writing a secure AND interoperable implementation is extremely difficult, because it requires that you find the narrowest possible rules that allow only the things that are required to interoperate, and reject everything else, and that's essentially impossible without browser-maker-level visibility into exactly what weird HTTP implementations are out there. The RFCs are an incredible resource for anyone who's trying to build a high-quality implementation, because they contain so much distilled wisdom on where this line is... except for bits like the field-value production, where if you follow the spec you'll get burned and end up having to YOLO something anyway. It's very frustrating to carefully follow the specs, and then be punished for it. And it makes it hard to justify following the spec versus using some YOLO implementation.\nWe can't align the text. HTTP has always forbidden bare CR as a line break. Almost all deployed implementations rely on that while parsing CR into field values and (CR)LF as line breaks. If we changed that now, all deployed implementations with a cache would have a security hole, instead of just the two evergreen browsers that broke their own parsers and can easily change them back. What servers have been doing (and slowly deploying) is active removal of bare CR in fields when they do character-by-character parsing, just to fix the new browser brokenness. This is a workaround.\nWith regard to other CTL characters, many are found in the wild, but it would be a stretch to say that they are interoperable when sent in HTTP. Some might be. 
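For illustration only (the function name and the `strict` switch are assumptions, not the WHATWG algorithm or any draft text), the policy sketched above -- always reject NUL, CR and LF, and decide separately how hard to be on the remaining control octets -- could be expressed roughly as:

```python
# Illustrative sketch of the policy described in the comment above; not a
# normative rule from the drafts or from Fetch.
ALWAYS_FORBIDDEN = {0x00, 0x0A, 0x0D}          # NUL, LF, CR: never acceptable
OTHER_CTL = set(range(0x01, 0x20)) - {0x09}    # remaining C0 controls except HTAB

def check_field_value(raw: bytes, strict: bool = False) -> bytes:
    """Raise ValueError if the field value contains octets we refuse to carry."""
    for octet in raw:
        if octet in ALWAYS_FORBIDDEN:
            raise ValueError(f"forbidden octet 0x{octet:02X} in field value")
        if strict and (octet in OTHER_CTL or octet == 0x7F):
            raise ValueError(f"control octet 0x{octet:02X} rejected in strict mode")
    return raw
```

Keeping the hard-reject set small matches the real-world compatibility point made above, while still closing off the request/response-splitting cases.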
I don't think we have time to validate such a change.\nThe WHAT-WG spec text that NAME linked above forbids NUL, CR, and LF inside field values, while allowing everything else. So I'm not suggesting allowing CR. Re: the other control characters: if you can't validate interoperability, then that's a tough position to be in for sure. But surely there's something that would be better than the current text? Currently, all general-usage http clients MUST actively violate the spec. Even if all you can say is \"you should definitely disallow NUL\/CR\/LF and for everything else we're not sure, use your best judgement\", then that seems like it would at least be more accurate and helpful than what we have now.\nIt really would be interesting to see what the effect of converting-to-SP instead of preserving the CTL characters would be. That is, are those characters sent intentionally (I see that this probably is ghe case for G Analytics). Unfortunately, even if we ran an experiment right now, it wouldn't help in this LC.\nFWIW URL suggests that the HTTP\/2 has a \"MUST reject\" rule. When we achieve consensus here, the H2bis probably should align.\nYeah, specifically the CTL characters from G Analytics occur in \/ headers, so it's an opaque value that the server expects to get back. I don't know for sure, but my impression is that someone chose as a \"clever\" separator character that they knew wouldn't occur inside the values being separated, so converting to SP would probably break it.\nI see. Another related question would be: is this something that's specific to cookies?\nNote also that the G analytics cookies refer to an old version of custom variables in a library that has been supplanted at least twice (URL --> URL --> URL). It's possible that they were removed because of lacking interop. URL\nDiscussed in Feb 21 Interim; align with Fetch for hard refusal of characters, strongly caution against CTL (minus NUL) for senders \/ field definitions. Don't require CTL (minus NUL) to be replaced with space. CR+NUL MUST be converted to space or rejected."} +{"_id":"q-en-http-core-bb7a1deec6ecd2ad6f55d420504eb50ad5619838844fbd216b5bae0e993904f7","text":"defines Referer and says: There are two other things that I think are relevant to mention here: Referer is often suppressed when the referring resource is \"https\" at a different origin than the request target. Referer can contain only an origin rather than the referring resource identity. might not be worth citing here as it is probably too specific to browsers, but these other constraints are worth noting. Especially the first as this has real security consequences. Though the text in is excellent as a high-level principle, the steps that are taken to avoid URI leakage are meaningful and very relevant to this section.\nIs that \"often suppressed\" or \"always suppressed by good user agents\"? I agree that we should update Referer to be consistent with current privacy guidelines.\nIt is \"always suppressed by good user agents, unless the referring origin explicitly asks for it not to be suppressed\". These go beyond privacy guidelines into security. Despite the general acknowledgments of the position stated in Section 17.9, there are still resources that leak security-sensitive information through their URL. 
The notion of a capability URL remains a powerful, and widely used, tool."} +{"_id":"q-en-http-core-3678b8ea2a0cac5a06ebd7e50abef6302776f72e10eece912b96f7cccc9034ee","text":"When the text suggests that credentials can be automatically applied to all requests made within a protection space, that begs the question: how do I know what requests are in the same protection space? The answer is \"well, you don't really\". Authentication schemes might define something, but otherwise you are left to guess. This PR says that as directly as I could manage. I considered adding another sentence here that says \"In the absence of specific information about the extent of a protection space, clients &MAY; assume that the protection space extent is the origin of the server.\" I'd like thoughts on whether that is helpful. I think that it might be, as otherwise this isn't really implementable.\nPlease factor out whitespace changes...\nUgh, it's like picking up toys after children.\nNeither of you have comments about saying what the extent of the protection scope is in the absence of information?\nI'd rather leave that out.\nAFAIU, it depends on the authentication scheme. At the end of the day, we can only document here what we inherited from earlier specs, and what is actually done in practice. If you have information on the latter which helps us making this better, great. Do you?\nI proposed maybe adding: Which I believe matches what clients do when they produce credentials and the authentication scheme doesn't define a clear scope. Do you think that helps?\nNot sure. What problem are we trying to solve here? Auth scheme definitions that fail to define the protection space? In which case, shouldn't we say something about this in the \"Considerations for new schemes\" section?\nas: This says that it is a parameter, but does it appear on challenges or responses? only establishes that authentication parameters parameterize authentication schemes, there is no mention of how those relate to what is sent. I think that this first problem only requires a mention of WWW-Authenticate and Proxy-Authenticate. The second problem is in : Clients would seem to have no way of knowing whether reuse is likely to be successful. A protection space is defined as the tuple of origin and realm, but there is no acknowledgment that how a protection space might correspond to the URI space is only known to the server. (The next sentence, which I omitted from this quote acknowledges that the client needs special knowledge in order to understand that a protection space might span origins; that's very useful information.) I think that this requires only that the text acknowledge this uncertainty and note that clients could decide to provide authentication information on every request made to the origin, without knowledge of the extent of the protection space. It might also note that particular authentication schemes might define mechanisms that allow clients to decide where to use credentials. RFC 7616 defines , which allows for scoping; RFC 7617 has a section on .\nre the first point: we are defining the framework here; whether a scheme uses realm, and how exactly it is used, is out of scope here. What we do is reserve the parameter name \"realm\", and say what it is for. Could an auth scheme use it in credentials? I believe the answer is \"yes\".\nI think both aspects are up to the authentication scheme in question. Martin, do you feel strongly about this, or can we close?\nAs you say, it's kinda up for grabs. 
The main spec isn't very satisfactory though in that it doesn't specify, but doesn't really admit to why that is. Maybe if I can be given a chance to think of some text for this and you can see if you like it."} +{"_id":"q-en-http-core-4a221a54713c14c02e772ba3ebf463b9b6e8ba344973a5c57c5bef5deb797f71","text":"-- URL This doesn't seem to be enforceable in any way. This isn't really talking about whether the syntax is valid, but about the semantics, so it's easy to disavow knowledge of falsehood (my browser isn't necessary aware that me typing \"the sky is pink\" is false). I think that it might be better to simply observe that it is contrary to function - and possibly interests - of a sender to generate protocol elements that convey semantics that are incorrect or contrary to the interests of the sender. Normative language implies interoperability failure if it is ever violated, and I don't think that is the case here.\nIt is enforceable. If a sender communicates a syntactically valid construct that is deliberately misleading, then the recipient's behavior in response will be equally confusing to all parties. We saw exactly that both in the deployment of HTTP\/1.1 and in the early deployment of DNT. Both resulted in interop failure, and both failures were only corrected after the standards were enforced by recipients refusing to accept invalid signals.\nWe might be talking past each other. NAME - could you give us a concrete example that this text is trying to rule out?\nOr, for example, P3P, where people deliberately set false and misleading policies so that browsers wouldn't complain. Roy, it's a nice thought, but this is the kind of regulation that's done by law, not architecture.\nBoth laws and RFCs are defining standards of conduct. The difference with laws is sovereign enforcement, not social versus technical. In any case, this has been true of HTTP implementation (Apache httpd, for example) since 1996. A server will check for compliance based on User-Agent version match and ignore protocol signals that are known to be false or at least unreliably implemented by that sender. Browsers do the same for some servers that deliver default Content-Type settings by mistake. We consider the workarounds to be standards-compliant because of this MUST NOT. And, as I said, we enforce that ALL OF THE TIME.\nDiscussed at Feb 21 interim; add context to help readers understand why this requirement is here \/ important. NAME to PR.\nContext... ? Then add an example, probably a fictitious one out of deference for the sensitivities in all the truly good examples.\nThis works very nicely. Thanks Roy. You can point very clearly at the DNT problems, but it's very subtle. And it doesn't detract from the general utility of the text."} +{"_id":"q-en-http-core-e7bbb3e6b0e25e8a36d090d4386465ccb1124193cebc92b16d81354e3da3f527","text":"I think it would be better to separate this into the purely editorial changes (all except TRACE), and then give the change to TRACE the full treatment separately (it's a normative change after all). Will work on the former first.\nSorry for this being a question. I answered the same question for -caching by searching through and realizing that while caching might be a version-independent extension, a practical consequence of the structure of documents is that it is inextricably linked to -semantics. That's cool. Having the definition of HTTP\/N rely on -caching is eminently sensible. There is a normative reference to -messaging though. Why would HTTP\/3 care about that? 
If this were software, I'd be horrified by pulling in a dependency like that. Most of the references to -messaging are strictly examples. I could only see two cases where there was a potentially normative dependency: recommends \"message\/http\" and . These seem like they could be broken (the second easier than the first, certainly) and the reference made normative.\nI agree that this would be good.\nA SHOULD-level requirement to use message\/http. Moving the media type definition over is not going to help, because it essentially defines the HTTP\/1.1 wire format as media type. It seems the only way to fix this would be to relax the SHOULD. That one could probably be replaced with a reference to the transfer coding registry (which is what counts here), and the pointer to the spec text would then become informative.\nHistory in and linked issues.\nConcrete recommendation for 1: a server (intermediary) responding to TRACE MUST generate a representation of the request that it received in any format. The message\/http is one possible way to represent a request message, so that can be an informative reference instead."} +{"_id":"q-en-http-core-1c86d204d4f5c9d0ab61c5375ad6d3eb45c2a6d5dafde27c95c881a5eed2b0a8","text":"Trailers that come after the body has well-defined semantics (i.e. metadata applicable to the entire body). Interleaved, mid-stream trailers are considered and implemented as extensions, and maybe we should delay adding them to the core semantics. This is however an important topic for HTTPAPI, IMO.\nSee related mailing list thread at .\nDiscussed at Feb 21 interim; sentiment is to remove it; NAME to create a PR.\nWFM, thanks."} +{"_id":"q-en-http-core-0029fecc40de432d78e318c31dd82ef9864eec4f92aa8d2f8b24a0a1130243ac","text":"Make sure that all references to MESSAGING which are not normative are clearly informative. (This ticket is not about those references which right now are indeed normative, see )."} +{"_id":"q-en-http-core-9ec8acc2bd30974a5fde856bb6f36c624f4cf90284f227fb8399126beb313526","text":"… in that the request must be understood in order to parse and interpret the response.\nWell, the simple version is a normative statement in sheep's clothing. Let's just make it normative.\nSection 6 claims that . It uses the word \"intended\", which is an intention that is broadly achieved, with only one exception that I'm aware of at the syntactic level. Having recently implemented message\/http, I discovered that some response messages cannot be processed without knowledge of the corresponding request. This is pretty annoying and might be worth noting as a possible exception here. As noted in the response to a HEAD request might include Content-Length, but the content of the response is always empty. At a semantic level, I don't think that any response can be considered without knowledge of the request, so I don't think that self-descriptiveness can possibly apply there. Besides, this section is clearly intended to provide an abstract definition of the syntax and not semantics.\nSee also"} +{"_id":"q-en-http-core-93b0f7b3da3ea19a8dd7446ff8cfdcfe03c932f74ec7395902908732c415ea2c","text":"in Section 12.2 uses \"representation of the response\", which I can't find a definition for. I think this should be \"representation data\" or \"representation data contained in the response\".\nMuch simpler, thanks.This is fine, but \"a preferred kind of content\" doesn't read well."} +{"_id":"q-en-http-core-58da226d4ffc3f0638ad057d8d67e06dc55592a0cb938b6dbda3ab037740a45a","text":"... 
components, depending on the form of request-target, and when authority might be empty. This is an alternative to\nIn determining the request target, the following text is included after failing to find an authority in configuration, the authority form target, or the Host header: This is a security risk because it means that servers might not agree with clients about the identity of the resource that the request is directed to. This is especially bad as a TLS certificate is good for every port and often good for multiple names, which can mean a fairly large scope over which an attacker can exploit this confusion. The origin form description doesn't use 2119 language, it says \"A Host header field is also sent\", citing . And that section is a bit weaselly in that regard, using a \"URL\". If this were a \"MUST ... unless\" in -semantics, this problem would not exist (at least in theory). Just to confuse things, the absolute-form says , which might be read to imply that Host truly is mandatory: The asterisk-form is the only other option that naturally lacks an authority, and that says nothing, so that might be fixed up similarly.\nKeep in mind that the first text is what to do when receiving a message without Host or absolute-form, whereas the other texts are requirements on generation. Yes, it is a security risk to not send Host, and I assume it would be one to provide service for an https resource without verifying that the client's request indicates one of the server's host (somehow). So, PR welcome, but the final text still needs to accept non-TLS HTTP\/1.0 requests without a Host (for ass-backwards reasons).\nBTW, the reason it says SHOULD here instead of MUST is because this requirement about the ordering of fields in the header section, not about whether or not Host needs to be sent.\nOK, that's good. It's not a genuine problem, we just need to be clearer about generation requirements (and we might be able to carefully condition the inclusion of a default).\nbetter again"} +{"_id":"q-en-http-core-af85e3829ee62288f5c5a1914bb9a587bdc3c559fdc3c0757b044496b5f63b64","text":"In 3.2.3. authority-form it says Whereas it needs to specifically require host:port, as in the definition of :\ndoes this merit a change log entry?"} +{"_id":"q-en-http-core-9f9433cd7dafe004dd275677a7b3d039f00c2593c0412f781115763f877ff736","text":"The spec currently says this about data sent along with a CONNECT request: Is there a demarcation between data \"within\" a request and data to be sent on the tunnel if it opens successfully? HTTP\/2 and HTTP\/3 are fairly specific that all DATA frames are data for the tunnel. Do we need to say anything about the client prospectively sending data before the proxy responds?\nWe should decide what we want to allow. If HTTP\/3 is going to send CONNECT data within HTTP\/3 frame payloads after (or before) the recipient accepts the CONNECT, then we should specify it differently than how we specified CONNECT in 1.1. I am fine with that.\nHm. Does this imply we special-case 1.1, and require the HTTP\/2\/3 behaviour for new versions?\nI think it is already a special case for 1.1, so that's not a problem.\nOK, let me rephrase this then :-). 
Are we trying to make it consistent for all \"new\" versions?\nGiven that CONNECT is specified on a version-by-version basis anyway, I think I'd rephrase it with something like this: CONNECT is one of those weird edges of the protocol where each version will have to respecify its behavior; we don't necessarily need to prescribe what it will be, just make allowances for versions to differ."} +{"_id":"q-en-http-core-e56ff438cc9b02f880dd62a71d002e47040e0feb4b2f1e39fa0a02c6f2c0792d","text":"The text on doesn't assign any behaviours to roles; it just passively states that http(s) URIs 'are' normalised and compared. This leads to some confusion about what roles are responsible for normalising. E.g., can a server consider and as two different target URIs? I think we all agree on the answer to that, but it isn't clearly stated in the specs AFAICT.\nI'm not sure about that example (as a browser would never emit the latter), but they can certainly rely on some of the other variants and have been known to do so.\nWe currently just point to URL - so \"path segment normalization\" is something that may or may not happen, depending on who's doing the comparison. My answer would be \"a server could do that, but it would be stupid\". What's your take?\nI agree with your answer, but think we should say something about it -- perhaps going as far as saying a server should\/must not do that (as it's not interoperable).\nSecurity is probably a stronger reason not to allow in request targets. Another reason that specific example, as opposed to '+' vs '%20', might be a bad idea.\nsee also URL\nI think the comment is more confusing than the spec. The request target is whatever is provided by the client. If the server responds to the second example target with a 301\/302 to the first, then it does consider them to be two different target URIs for the same resource. In practice, that is the best way to respond because it forces the client to change its own copy of the target URI before it makes the \"right\" request. IIRC, that what we do in Apache httpd. This type of response is not canonicalization -- it's reasonable handling of an unsafe target without breaking the intended request. We might call that responsible handling after the client fails to canonicalize a reference, but keep in mind that a \"..\" path segment only has meaning for relative references (not references that are already in absolute form). In any case, a client is not required to canonicalize beyond resolving of relative form to absolute form. Some other servers might want to respond with 403 (e.g., an authoring server responding to a link checker). That's okay too.\nOn the face of it, that conflicts with : Regardless, the question is whether we give any advice about servers considering these to be genuinely separate resources (putting aside corrective actions, as you illustrate), as it's not practically interoperable with most HTTP implementations. Even something in the normalisation section to the effect of:\nThere's clearly no consensus whether the current relevant specs (RFC 3986 and RFC 7230) allow it or not. This is indeed an obscure case, modern browsers don't request such URIs (not even programatically, from content scripts), neither does cURL with default options. 
But it should still be settled, for the sake of implementors of web servers and other backend tools, frameworks and authoring tools which help with IA by emitting appropriate server configuration (the question being whether they should allow, possibly with a discouraging message, to create different resources at such URIs, or even assume, given some site not created in them, that they must be the same) and also for defensive coders who believe, as Sir Tim does, that cool URIs don't change, and want to tightly control the set of URIs they respond to with representations of resources and thus admit that the URI is valid – identifies some resource. (I do. Therefore I forbid (default to 404 if they're encountered) trailing slashes, empty path segments, query strings containing only a question mark and so on, except when they are after explicit consideration added to a whitelist.) For all the reasons above I ask you to either declare request with dot segments in targets invalid (some handling may and probably should be defined, but if a client fails to conform to standards by sending malformed requests and gets inconsistent responses, so be it, some bets may be off, the semantics is preserved on the condition of conformance) or specify (MUST level) that the URIs are equivalent (and in thas case it's the servers which are non-conforming if they serve different resources at them). There are also purely practical reasons to do either of the two (as opposed to a SHOULD contemplated in some comments above). Being an uncommon case, it presents a problem for implementations, causing numerous bugs, and confusion (around bugs or otherwise) even among knowledgeable people. Cf. a sample of links resulting from my research of the subject: URL URL URL URL URL URL URL URL\nAs for the percent-encoding, section 6 of RFC 3986 seems to have the answer that servers aren't allowed to interpret URIs differing only in them as identifying different resources. However, I have 3 more remarks: I wish the MUST requirement were stated explicitly to avoid confusion among implementors, content authors, server administrators and other audiences. The following excerpt seems to erroneously (because absolute examples are present in the following section) limit the whole chapter 6. to relative URI references, thus excluding e.g. the of s (RFC 7230, section 5.3.2.):The difference, introduced in RFC 3986, between URIs (which are always absolute and don't undergo resolution) and URI references (which may be absolute or relative and chapter 5. defines how to resolve them to obtain a URI) is subtle. Extra care to pick the right one each time should be exercised when using the terms. Particularly with normalization. Currently it's not easy to find and definitively interpret text in the standards clarifying whether both are normalized, when, and how normalization rules for them differ.\nURL is useful to the extent it provides a way to talk about this. Would it be sane to say that servers MUST do the things described in 6.2.2 of that document? When I'm a client implementor I would be shocked, and my caching software might well break, in dealing with a server that didn't. Section 6.2.3? I think so probably, but it's not nearly as clear and concise as 6.2.2.\nNAME I concur, make the language of 6.2.2 and 6.2.3 more precise and with uppercase MUSTs instead of present shoulds. On the other hand, 6.2.4 is bogus in this context, may be moved somewhere and rephrased to be clearly just informative advice. 
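To make the 6.2.2/6.2.3 reference concrete, here is a rough, non-authoritative sketch of that normalization (it leaves the query untouched, ignores userinfo, and deliberately omits the more contentious dot-segment removal step of 6.2.2.3):

```python
# Sketch of RFC 3986 Section 6.2.2 / 6.2.3 style normalization as discussed
# above; illustrative only, not a complete implementation.
import re
from urllib.parse import urlsplit, urlunsplit

DEFAULT_PORTS = {"http": 80, "https": 443}
UNRESERVED = ("ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "abcdefghijklmnopqrstuvwxyz0123456789-._~")

def _norm_pct(text: str) -> str:
    # Decode %XX only for unreserved characters; uppercase the hex otherwise.
    def repl(m):
        ch = chr(int(m.group(1), 16))
        return ch if ch in UNRESERVED else "%" + m.group(1).upper()
    return re.sub(r"%([0-9A-Fa-f]{2})", repl, text)

def normalize(uri: str) -> str:
    parts = urlsplit(uri)
    scheme = parts.scheme.lower()                      # case normalization
    netloc = (parts.hostname or "").lower()            # userinfo is ignored here
    if parts.port is not None and parts.port != DEFAULT_PORTS.get(scheme):
        netloc += ":%d" % parts.port                   # drop the default port only
    path = _norm_pct(parts.path) or "/"                # empty path becomes "/"
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))

# normalize("HTTP://Example.COM:80/%7esmith/") == "http://example.com/~smith/"
```

Two URIs that compare equal after this step are the kind the thread argues a server should not treat as identifying different resources.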
A trailing slash does make a URI semantically different, at least with the and schemes, unless it's immediately preceded by an authority.\nSaying servers MUST do anything has the effect of requiring intermediaries to modify URIs on the way through. That's clearly not going to happen, especially given how many different places URIs occur (request target, , , , etc.). OTOH there are some security implications if policy is applied before normalisation. Caches are less efficient as well (not an interop problem, just an efficiency one). Roy, you say 'The request target is whatever is provided by the client.' Do you mean the bytes on the wire (if so, what about h2\/h3)? or do you mean after some amount of processing? I think we're just talking about the level of processing that's necessary before the protocol element is consumed (but not necessarily forwarded).\nIs there a notion of a server which isn't an intermediary for a given request? If so, how about requiring it only of them? If not, can creators and administrators of resources (people minting URIs for them, either by hand or directing some tool to do it, possibly dynamically) be distinguished as an audience subject to its own set of requirements and have them MUST conform? The advantage of this approach would lie in abstracting from authorities, origins, websites, servers and intermediaries – just the assumptions about the semantics embodied in returned representations and metadata (responses in the case of HTTP) would count as the measure by which to judge conformance. (And they could achieve it with a single server, a server cluster, a CDN, proxies, including non-transparent ones, or whatever else.)\nSee PR.\nThank you, LGTM mostly. It still does allow minting URIs for different resources which are equal after normalization. It's a SHOULD NOT. I assume that the reluctance to go with MUST NOT results from the concerns raised above, but still it's such a corner case that support for it should IMO be trumped by clarity of semantics and ease of use and implementation. Though users and implementors are, by a MAY, explicitly allowed to conflate such URIs, making some resources unavailable via URI from their perspective, so it's a very strong SHOULD NOT and groups of people wishing to violate it for a good cause (whatever that may be, can you give a use case?) have to ensure there's nothing in either their systems or any intermediaries that would thwart their endeavour by performing normalization (which is always legal per the proposed spec). Would it be feasible to spec that equivalent URIs simply do identify the same resource, and if different representations are returned for different forms, it's a strange but unambiguously identifiable resource? (That's exactly already the semantics for URIs resolved by servers using their file systems,which is the most popular method. If there's an RSS feed at , then the file gets deleted, and after some time a new webmaster (possibly even not knowing that there used to be an RSS feed) places an icon (like ) there (and configures the appropriate ), the resource doesn't change. It's just a weird resource whose representation (independent of other factors, thus no ) is an RSS feed for some period, then it has no representation ( or ), and still later it's represented by an icon.) 
It doesn't break backward compatibility with implementations doing crazy things, those things remain technically spec compliant, it just pinpoints the semantics, saying that, despite possibly different representations, some things are the same resource (which current specs leave unclear, so some might have assumed otherwise, but, as I wrote above, it's such a corner case that I'd be surprised to learn that somebody actually not only held this interpretation, but also relied on it). The added burden of compliance to the stricter semantics is on people minting URIs for resources. And if they're unaware of normalization, we'd actually do them a favour by defaulting to the anyway likely desirable thing – having all those other equivalent forms of the URI identify the same resource (and yield the same representation, unless they put enough effort to circumvent normalization and have different representations for the same resource depending on which of the equivalent URIs was used, but doing that indicates they do know of this stuff). One more nit. Is normalization all or nothing? Or are HTTP components allowed to e.g. only omit port 80 and keep dot segments? I think it should be specified and prefer the former (which also follows Postel's law), though I realize there may be popular implementations doing otherwise; if so, a SHOULD is probably the most that could be counted on. (For completeness: are they allowed to denormalize, e.g. introduce case variation in the authority part?)"} +{"_id":"q-en-http-core-0b9b7b290c054f79fb6a76225e03030f1ed52a54bf4c7da5b9880eaa812ad8bc","text":"The close connection item used to be defined in Connection as whereas now its effect is described in Messaging without definition. I think it should be defined somewhere.\ngets pretty close to that text: ... with text below that elaborates. What do you want to see here?\nlooks good except as noted by NAME the bare close strings look weird.LGTM, except for 'the close' being a bit weird (but I don't feel that strongly about it; suggestions for replacement inline)."} +{"_id":"q-en-http-core-7809041ec724bd47498e74c96bae49a84bb6ecb66567bb406f93625fdbd0c9c4","text":"without changing section numbering\nimplies that a precondition failure is a good safe mode. However, a precondition that isn't recognized by a server will not share this property. A server that does not recognize a header as a precondition will ignore it and likely proceed with handling the request even if the precondition expressed should have failed. Indeed, the same is true if a server ignores preconditions, though the specification effectively . is the only text I could find on defining new preconditions. (One thing I really want to avoid is server's applying special heuristics to headers that start with , because that is one of many poor outcomes if this is left.)\nThere is also the example in the section above that I suppose we could make a new section on precondition extensions, but they aren't a separate category (like If*). For example, my precondition proposal for 1.1 was an all-inclusive Unless that enclosed a logic bag (structured field value). Note that they have always depended on compliance-when-implemented, since I added the first (If-Modified-Since) in early 1994.\nDoes this need to be in this specification?\nI think we're almost in the extension territory (I like the idea of a header field that lists the applicable precondition fields). 
As far as this specification goes, simply noting that new preconditions won't share properties of the core preconditions, in particular the expectation that it is understood by servers.\nOne grammatical nit."} +{"_id":"q-en-http-core-9db87aab5d55c4574ed73006eb247574243e2cf2abf115ae39fdf1d3113f46cc","text":"to clients that attempt to guess (editorial)\nI have no idea what this is trying to say and it is 1.1-specific. I suggest we delete it.\nIt was added as part of URL, adressing .\nHow about\nMay end up encouraging this kind of implementation, by providing an example here ... ? Maybe just leave this as \"Some clients take a riskier approach and attempt to guess when an automatic retry is possible because the original request may not have been received by the server.\" ....\nI don't see any problem with the original text, but NAME your replacement is fine too (although the second sentence is a fragment).\nHow about"} +{"_id":"q-en-http-core-a3aa2bb936ea87043f9a2f3aebfcae5c3b8321c0292c37c5234269d30b78ddf9","text":"I am seeing a lot of warnings about lines being too long (longer than 72 characters), but they are unrelated to these changes.\nThis changed \"previous authors, editors, and Working Group Chairs\" to \"current and previous authors\"; that's misleading, as it actually does not contain any current authors. Originally posted by NAME in URL\nThis is what I had in mind in terms of bringing back the original acks."} +{"_id":"q-en-http-core-e69fea51f13f1d0d4e0a861c3e0db1e07f9dded63c40bfc42b2ff5ba8c191fec","text":"Nits on semantics flags this: -- The draft header indicates that this document updates RFC3864, but the abstract doesn't seem to mention this, which it should.\nCan you point to a document that states this?"} +{"_id":"q-en-http-core-48a054d95405bbe6b5d612df89eb90142f7281472d61cff5c1bb4665eff19c20","text":"Should we leave these open until after IESG last call, or just apply them now?\nGood Q. NAME ?\nI think it'd be preferable to not cut another version until we get through the IETF last call. However, if we want to merge this and just keep a note of the changes, I don't see any issue.\nI think it would be preferable, just to avoid people raising the same issues again, and also to get the Index into the last call version.\nHEAD says whereas the new text for GET is and similarly for DELETE. We should update HEAD as well to be\nDigging in the change logs: the text for GET comes from URL (with subsequent tuning)."} +{"_id":"q-en-http-core-5d27391fbb5215be647b38b44fc1af8914970908cf180d16da773baf5216949e","text":"Used in 5.6.1, defined in 5.6.3. Minimally make the forward reference explicit.\nAnd 5.3."} +{"_id":"q-en-http-core-a3cf4e4dc49ab9ef39b2b05bf7c972cc153b59c47e9c372dc3663115dcab6e67","text":"Hi! I post here my combined comments for draft-ietf-httpbis-semantics-16, as I don't think they deserve one issue each :) I hope that's fine. My comments range from minor to very minor to nit, and are mostly editorials and clarifications. As this can possibly be me, I'll leave it up to you to decide if they can be useful to improve readability, or not. Thanks for your work! Francesca [x] 1. ----- Additional (social) requirements are placed on implementations, FP: I am not sure I understand the meaning behind the term \"social requirement\". Editors: URL (URL) [x] 2. ----- When a major version of HTTP does not define any minor versions, the minor version \"0\" is implied and is used when referring to that protocol within a protocol element that requires sending a minor version. 
FP: I had a hard time parsing this sentence (starting at \"and is used...\"), maybe breaking it up with commas would be enough. Especially the two \"that\" are unclear to me. Editors: URL (URL) [x] 3. ----- RFC 3986, which allows for an empty path to be used in references, FP: very minor, I'd prefer \"in URI references\". Editors: URL, change applied in URL [x] 4. ----- The process for determining that access is defined by the URI scheme and often uses data within the URI components, such as the authority component when the generic syntax is used. However, authoritative FP: I had to read the sentence above 3 times before realizing that the subject of \"is defined\" is not \"access\", but \"The process\", and understanding it. Editors: URL, change applied in URL [x] 5. ----- A client MAY discard or truncate received field lines that are larger FP: I assume and It might have been good to clarify that \"truncate\" a field lines is equivalent to dropping a member of the list of values. Editors: URL [x] 6. ----- Note that double-quote delimiters are almost always used with the quoted-string production; using a different syntax inside double- FP: I am not sure I understand \"quoted-string production\" Editors: added missing fwd reference - URL [x] 7. ----- content, it is often desirable for the sender to supply, or the recipient to determine, an identifier for a resource corresponding to that representation. FP: It might have been good here to give a hint or example on why such identifiers could be desirable. Editors: URL (URL) [x] 8. ----- Fields (Section 5) that are located within a trailer section are are referred to as \"trailer fields\" (or just \"trailers\", FP: s\/are are\/are Editors: URL [x] 9. ----- All responses, regardless of the status code (including interim responses) can be sent at any time after a request is received, even if the request is not yet complete. A response can complete before FP: I think at this point in the text there is no definition of what qualifies a request or response being \"complete\". Editors: actually, the definition is earlier in the text; added a reference anyway: URL [x] 10. ----- An intermediary MAY combine an ordered subsequence of Via header field list members into a single member if the entries have identical received-protocol values. For example, FP: Is there any requirement on the intermediary to be able to collapse the ordered sequence into a single member (such as storing and being able to expand this information)? Might want to expand a little. Editors: Declined because there is no requirement to do so. We already explain that one might want to do so for information hiding of intermediaries within a private network (URL and URL) [x] 11. ----- This definition of safe methods does not prevent an implementation from including behavior that is potentially harmful, that is not entirely read-only, or that causes side effects while invoking a safe method. What is important, however, is that the client did not request that additional behavior and cannot be held accountable for it. For example, most servers append request information to access FP: I might be missing the motivation for it, but this paragraph seems strange to me. Why is it important that the client cannot be held accountable for it? The concern I have is that the text seems to be saying \"safe is essentially read only\/not harmful\/..., but not really\". (And same for idempotent) Editors: URL [x] 12. ----- referring to making a GET request. 
A successful response reflects the quality of \"sameness\" identified by the target URI. In turn, FP: I am confused by this terminology: quality of \"sameness\". Editors: URL [x] FP: The text could add something for when it is acceptable or expected to make sense (so why this is a SHOULD). Editors: Because the protocol has historically preferred to separate semantics from parsing requirements so that the semantics can be extended over time without breaking parsers. It is a SHOULD NOT for GET because the original protocol did not define a meaning for such message content and some implementations took advantage of that leniency to implement and deploy features that work among consenting implementations but are non-interoperable with intermediaries and other implementations. Hence, it's a bad idea, but the protocol does not prevent it from happening. See for why this is impossible to change. [x] 14. ----- A server SHOULD NOT use the From header field for access control or authentication. FP: Same as the previous comment, when is it acceptable to use the From header field for access control or authentication? Editors: URL [If it is acceptable to the server that does it, then HTTP doesn't care in terms of interop, but it is still a bad idea because the protocol has other mechanisms for doing access control and authentication which are less likely to be leaked by accident (URL)] [x] 15. ----- FP: I don't know if I missed it, and please correct me if I did, but I was hoping to find a sentence stating something of the sort for the header fields: \"If not otherwise stated, these fields can be used with any method or response code\". Editors: URL [x] 16. ----- FP: Most of the header fields contain examples, which I find helpful. I think it would be good to have examples for all of them (including TE, Trailer). Editors: URL [x] 17. ----- | Accept. Future media types are discouraged from registering | any parameter named \"q\". FP: I wonder if it would be worth making a not out of it in the media type registry (so that it's easily remembered by experts) Editors: URL [x] 18. ----- Note that an If-None-Match header field with a list value containing \"\\\" and other values (including other instances of \"\\\") is unlikely to be interoperable. FP: what does it mean for a header field specific value to be not-interoperable? I am also not sure how \"*\" between the list of values would be processed. 2. above does not specify it, only stating: If the field value is a list of entity-tags, the condition is false if one of the listed tags matches the entity-tag of the selected representation. FP: I would possibly interpret this as \"search for tags matching the entity tag - ignore all others\". Editors: URL and URL [x] 19. ----- senders might send erroneously send multiple values, and both FP: remove one \"send\". Editors: URL [x] 20. ----- the String data type of [RFC8941]), or a field-specific encoding). FP: remove the first parenthesis. Editors: URL [x] 21. ----- FP: This is more of a check, but I see that all references only point to sections. I hope this will be translated into \"x.x.x of RFCthis\", is that correct? Editors: URL\nYou mean in the tables for IANA registrations? That's for brevity; IANA knows how to deal with that.\nThe media type registry is not owned by \"us\". I don't think we have the authority to change it.\nI will look into this with IANA and the DEs and get back. 
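Going back to item 18 for a moment (If-None-Match with "*" mixed into a list): here is a minimal sketch, in Python and purely my own illustration rather than proposed spec text, of the reading "'*' matches any current representation, otherwise compare each listed entity-tag and ignore everything else". The helper name and the plain-string tags are assumptions for the example only.

```python
# Sketch only: one reading of If-None-Match evaluation from item 18.
# Not proposed specification text; tags are plain strings here.
from typing import Optional

def if_none_match_allows_action(field_value: str, current_etag: Optional[str]) -> bool:
    """True if the precondition is met and the request may proceed."""
    members = [m.strip() for m in field_value.split(",") if m.strip()]
    if "*" in members:
        # "*" matches any current representation; mixing it with entity-tags
        # is the non-interoperable case the quoted note warns about.
        return current_etag is None
    if current_etag is None:
        return True

    def opaque(tag: str) -> str:
        # Weak comparison: ignore a "W/" prefix on either side.
        return tag[2:] if tag.startswith("W/") else tag

    return all(opaque(m) != opaque(current_etag) for m in members)

# A stored representation with ETag "abc" makes the precondition fail:
assert if_none_match_allows_action('"abc", "def"', '"abc"') is False
assert if_none_match_allows_action("*", None) is True
```

Under that reading, a value that mixes "*" with entity-tags simply behaves as "*", which is exactly the ambiguity the quoted note warns against.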
Just to be clear, the document states \"future media types are discouraged...\" but what we really mean is \"experts should be conservative with assigning ...\" , is that accurate? Yes, in theory we'd like to discourage media types to be registering \"q\", but the practical way to make sure that doesn't happen is to make the DEs react to it if it does.\nmaybe \"...when referring to that protocol version...\"? NAME ?\nYes, thanks!\nI don't think it's restricted to that.\nFor instance, to provide a bookmark\/make something accessible for a subsequent GET... NAME - do you want to add that?\nI don't get that question. The spec allows collapsing. It doesn't require it. Could you please clarify your question?\nThe motivation is that if something is defined as \"safe\", I can URL do that operation without being held accountable. For instance, I can always GET a URI (such as for prefetch\/preload\/indexing) and if the server does do something stupid with that request (such as deleting a resource, or adding something to a shopping basket), it's their problem. Re idempotency: this makes it possible to repeat a request.\nThe text states that intermediaries are allowed to collapse \"an ordered subsequence of Via header field list members into a single member\", and they can only do so \"if the entries have identical received-protocol values.\" So I read 2 rules: the order matters, i.e. if (A,B) collapses to C, (B,A) does not map to C (A,B) must have identical received-protocol values I was wondering about any additional rules that the intermediaries have to follow in order to collapse the list into one: for example, is it ever expected for the intermediary to remember that this single member was this collapsed ordered sequence? Are they ever requested to un-collapse?\nOrder matters for the purpose of collapsing. If you have \"D, E, F\", and E has a different protocol than D or F, you can't collapse. If \"A, B\" can be collapsed, so can \"B, A\" (because they have the same protocol) We are not saying that, so there is no expectation that the collapsing can be undone.\nOk, so yes I understand - the idea of this text is to say: \"this is the intent, BUT be aware that implementations might do something stupid and not comply with this intent.\" I guess what I got stuck with was \"the client can't be held accountable\", as it wasn't clear to me what it would mean for a client to be held accountable, as clients in general are not aware of what's the action on the server their request will result in.\nThis might require a separate issue. A field value containing more than one list element and including \"*\" is invalid, and the spec really doesn't say what that means. The intent of saying \"interoperable\" is (IMHO) to prevent people from assuming the different servers will do the same thing with it. I believe we should clarify that a value like that is invalid, and thus MUST NOT be sent. NAME , NAME ?\nThat refers to URL - I guess we should cite that? NAME ? Edit: opened URL\nOn the other hand, people complain about the spec being too long already. I don't think we should add examples unless they are needed to make things easier to understand.\nOk! (As I said, these are so minor that they are definitely up to you. Same with references and terminology comments)\nI believe the simplest possible fix would be to remove \"to be used in references\", making this: NAME ? 
Edit: -> URL\nI think that we should probably convert this one (and similar) to MUST NOT -- there isn't any situation where we recommend this behaviour, as it has interoperability and security implications.\nNot convinced at all. The \"SHOULD NOT\" requirement is new and was controversial already. There is code out there using GET with body (in controlled environments) which just works.\nThat's not HTTP; that's a private protocol.\nTell that Google :-) Anyway. This is a normative requirement and we are past WG LC. If you want to change it, this absolutely needs to be confirmed explicitly by the WG (and would deserve a separate issue).\nMaybe \"The process for determining whether access is granted is defined...\"? Edit: -> URL\nYou mean generally, for the fields we define? I don't think that's true; they're defined for the semantics they specify. If a use isn't specified, it might be extended in the future, so we rarely prohibit such a use. Saying that they can be used with any method or status code would imply that they have meaning in each circumstance, and that's often not true.\nauthentication. This isn't important for interoperability, it's just a bad idea. Maybe 'The From header field is not intended or suitable for use as a means of access control or authentication.'?\nFWIW, the requirement is from RFC 7231, and widely accepted (I guess), so I don't think we should change it.\nFor 4, proposed rewrite:\nWRT -- I don't think I'd use the word \"widely\" in conjunction with this header field. Again, if we use SHOULD NOT, we should explain the qualification.\nIn the sense that it's an interoperability requirement on a sender (not a recipient), I agree.\n\"MUST NOT be generated\" maybe?\nMUST NOT be generated, because recipient behaviour is not interoperable.\nURL already says \"MUST NOT generate ABNF-invalid messages\" in general. So maybe we don't want to repeat BCP14 terms here... -> URL\nI've reviewed the most recent commits, think this can be closed now.\nThanks for addressing my comments. I checked the commits and your answers: I can see everything was addressed, either by changes or by answers clarifying what I found confusing, except for: the one which is now issue point 14: giving context around the \"SHOULD NOT\" for the 'From' header field. Note that I am not asking you to make any change (such as removing the BCP 14 text), but following what BCP 14 defines: it seems important that the reader understands, or at least gets a hint of what these reasons and particular circumstances are, in order to be able to understand the full implications. This is why I think it's important to clarify the context around the SHOULD \/ SHOULD NOT, and was also my comment for the GET requirement, but I see that that comment has now transformed into a bigger issue the wg needs to find consensus on.\nI disagree. You are presuming that we know those particular circumstances, and that is almost always false when SHOULD is being used. When we have a detailed list of conditions, we use \"MUST NOT unless ...\". The SHOULD is literally saying that the reader needs to take care and understand the risk\/benefit for their own context, not that we need to explain to them based on our own imagination. It is fine to suggest additional text to address a specific concern, but I personally think that this one is self-explanatory. 
If we added a general description about why auth parameters shouldn't be sent in fields that are not expected to be treated as auth parameters by every recipient, such as within a security consideration, then we could reference it here.\nI don't think we should know all possible circumstances, but I presume we do have an inkling that implementations going against the RECOMMENDations exist, for whatever reason, right? As a reader, putting myself in a new implementer shoes, I personally would have found it useful to get that hint of what to consider, but if you think that this one is self-explanatory, I won't insist. The type of sentence I was thinking about would be something like: (Just trying to explain myself here, again, feel free not to take this suggestion if you and the wg don't think it helps the reader)\nOkay, I have added URL because I think we should explain it if it isn't obvious to others.\nThank you NAME !"} +{"_id":"q-en-http-core-370ee389de1ae621242a465c3241054f14bb5b74465db456733f5e7f7f1e5d84","text":"It is spelled as Acknowledgments in every RFC. Fix the one inconsistent case instead.\nFWIW, I checked the style guide. Will check again. That doesn't support what you say. The current text uses the slightly minority spelling which is also the one used in the RFC Styled Guide. Feel free to flip to the other spelling if you have a strong preference.\nWell, FFS, the style guide also says to use either US spelling or British spelling, but not both. It is spelled Acknowledgments in the US and all RFCs prior to that style guide. The RFC editor deliberately changed my text to Acknowledgments the last time this came up. I don't care which way it is spelled.\n[x] Section 9.1. , paragraph 2, comment: I'm not sure I understand what is meant by \"transport link\" and how connections would \"apply\" to one. Editors: URL\n[x] Section 9.4. , paragraph 4, comment: Using larger number of multiple connections can even cause side effects in otherwise uncongested networks, because their aggregate and initially synchronized sending behavior can cause congestion that would not have been present if fewer parallel connections had been used. Editors: URL\n[x] Section 13.1. , paragraph 14, comment: This can become an informative reference, so to not create a DOWNREF, since it's only used in the description of an IANA codepoint. Editors: URL\n[X] Section 13.2. , paragraph 12, comment: I think it is common practice to normatively cite an RFC that is being obsoleted. Editors: URL\nAll comments below are about very minor potential issues that you may choose to address in some way - or ignore - as you see fit. Some were flagged by automated tools (via URL), so there will likely be some false positives. There is no need to let me know what you did with these suggestions.\n[X] Section 9.3. , paragraph 3, nit: This got garbled, the suggestion was to rephrase to: A recipient determines whether a connection is persistent or not based on the protocol version and Connection header field in the most recently received message (Section 7.6.1 of [Semantics]), if any: Editors: URL\n[X] Section 11.2. , paragraph 2, nit:[X] Section 12.3. , paragraph 3, nit: Editors: this is intentional. Leave it to the RFC Editor?\n[X] Section 7.1.1. , paragraph 6, nit: Consider shortening this phrase to just \"whether\". It is correct though if you mean \"regardless of whether\". Editors: URL\n[x] Section 9.5. , paragraph 2, nit: Do not mix variants of the same word (\"acknowledgement\" and \"acknowledgment\") within a single text. 
Editors: URL\n[x] Section 9.8. , paragraph 2, nit: The abbreviation is missing a period after the last letter. Editors: URL\n[X] Section 10.2. , paragraph 17, nit: Consider using \"many\". Editors: this is in 11.4. URL\n[x] \"Appendix B. \", paragraph 2, nit: Did you mean \"ought to stop\"? Editors: URL\n[x] \"B.3. \", paragraph 2, nit: Do not mix variants of the same word (\"acknowledgement\" and \"acknowledgment\") within a single text. [x] \"B.4. \", paragraph 1, nit: Do not mix variants of the same word (\"acknowledgement\" and \"acknowledgment\") within a single text. Editors: URL\n[x] \"C.2.2. \", paragraph 4, nit: This abbreviation for \"identification\" is spelled all-uppercase. Editors: URL\n[x] Uncited references: [RFC7231]. Editors: URL\n[x] Obsolete reference to RFC2068, obsoleted by RFC2616 (this may be on purpose). Editors: this is indeed intended, as we're citing something that is in 2068 but not 2616\n[x] These URLs point to URL, which is being deprecated: URL URL Editors: these are added by xml2rfc; you may want to open a ticket for that tool :-)\n[x] These URLs in the document can probably be converted to HTTPS: URL Editors: URL\nIt is used normatively for the definition of the \"compress\" coding. (also, this is no change from RFC 723x).\nSorry? We are obsoleting that spec, thus we will not have a normative dependency on it.\nwould \"apply\" to one. Suggest:\notherwise uncongested networks, because their aggregate and initially synchronized sending behavior can cause congestion that would not have been present if fewer parallel connections had been used. With some light editing, I think this would be useful to add.\nI think we should commit this now and let the RFC editor decide on spelling."} +{"_id":"q-en-http-core-0b83a35124c6d04ddf024df945024ff5394b817604fb6eb241abb6ea3ff77122","text":"The first is an incorrect change -- \"advance configuration\" means configured prior to the request, not \"advanced\". The second is indeed a typo.\nRoman Danyliw has entered the following ballot position for draft-ietf-httpbis-semantics-16: Yes When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: [x] ** Section 2.2. Per “Additional (social) requirements are placed on implementations …”, what’s a “social” requirement? Editors: removed in (URL)\n[x] ** Section 4.2.2. http + https origins on the same host uniformly. Editors: added text in\n[x] ** Section 5.1. Convention question. Per “Field names … ought to be registered within the …” HTTP field name registry, I have a question about the strength of the recommendation based on the use of the verb “ought” – is that RECOMMENDED? SHOULD? Section 8.3.1, 8.3.2, 8.4.1, 8.8.3, etc use a similar construct. Editors: we intentionally do not to use normative language for registration requirements.\n[x] ** Section 7.2. Does this text allow for the possibility for both a Host and :authority be included? Editors: it's syntactically possible in H2 and H3. It's up to those specs to describe what that would mean.\n[x] ** Section 9.2.1. In addition to example of the access logs files filling the disk, there could be significant CPU load if the target is a script. Editors: URL\n[x] ** Section 17.1. 
The text provides helpful caution by stressing that “… the user's local name resolution service [is used] to determine where it can find authoritative responses. This means that any attack on a user's network host table, cached names, or name resolution libraries becomes an avenue for attack on establishing authority for \"http\" URIs.” The subsequent text highlights DNSSEC as improving authenticity. It seems that the integrity provided by DoT or DoH would also be relevant. Editors: added text in (see also URL)\n[x] ** Section 17.2 people who run them; HTTP itself cannot solve this problem. No disagreement with the sentiment, but I would recommend not framing it in term of the trustworthiness of the people (i.e., intermediaries with poor security or privacy practices is not necessarily due to the lack of trustworthiness of the engineers operating the service; perhaps these services are also run in of jurisdictions where confidence in them should be reduced). NEW Users need to be aware that intermediaries are no more trustworthy that the entities that operate them and the policies governing them; HTTP itself cannot solve this problem. Editors: URL\n[x] ** Section 17.5. More generically describe the attack OLD NEW Editors: d466398e\n** Editorial\n[x] -- Section 3.5. Should s\/advance configuration choices\/advanced configuration choices\/? Editors: no, advance has the same meaning as \"configuration choices made prior to request\"\n[x] -- Section 4.2.2. This text goes from referring to “secured” in the first sentence to “[acceptable] cryptographic mechanisms” in the second sentence. To link them, perhaps s\/are secured\/are cryptographically secured\/ Editors: \"secured\" is a defined term for this spec.\n[x] -- Section 6.5. Typo. s\/section are are\/section are\/ Editors: URL\n[x] -- Section 11.1. The text (at least when read as a .txt) isn’t showing RFC7617 or RFC7616 as references. Editors: it does in HTML: URL\n[x] -- Section 14.1.1. Typo. s\/gramar\/grammar\/ Editors: URL\ndisk, there could be significant CPU load if the target is a script. Yes. But this is the definition of \"safe methods\", not a discussion of all potential attack vectors of a server.\nuser's local name resolution service [is used] to determine where it can find authoritative responses. This means that any attack on a user's network host table, cached names, or name resolution libraries becomes an avenue for attack on establishing authority for \"http\" URIs.” The subsequent text highlights DNSSEC as improving authenticity. It seems that the integrity provided by DoT or DoH would also be relevant. Understood. Q: do we attempt an exhaustive list here, or just mention examples? If we did mention DoT and DoH, woulkd that be exhaustive?\nBy nature I don't think we can be exhaustive here. If anything, I'd replace DNSSEC with DoH, as it's getting more adoption (both in implementation and deployment) on the web.\nhttp + https origins on the same host uniformly. We already say that in several places, and I don't think repeating ourselves helps the document become more readable or useful.\nThere are technical interop requirements for the protocol and social requirements to keep people safe on the Internet (or the Web safe from the people abusing it). 
This is just a short-hand version of old IETF lore, and probably isn't worth repeating in the spec."} +{"_id":"q-en-http-core-9debff6ac29f981ce68a7801c9ce187865123c4a6638b5adab0ceaacb55ec629","text":"LGTM.\nNeed to explain or reference explanation.\nLGTM.\nokay, I think this reads better, or we can split them into separate PRs if needed"} +{"_id":"q-en-http-core-d269bae758624ed092495cab5f254b84cc75db19e488f0872ae0137f59de3056","text":"I'm a little uneasy that the majority of headers that are prefixed with are actually scoped to the selected representation. E.g., , , , . The only registered, current headers that actually about the content (using the current definition in Semantics) are and , AFAICT. Not sure about ; it's defined in terms of the selected representation, but could equally be defined in terms of content, I suspect. It might help to add a small note to the that explains that the prefix doesn't necessarily imply that the header is strictly about the content. Also might be worth mentioning in the .\nReading semantics I have some doubts on C-L wrt partial representation this seems to be the full representation data (eg. see C-L + HEAD). But then what happens with value in a range response? The complete representation-data or the actual content?\n\"A Content-Length header field present in a 206 response indicates the number of octets in the content of this message, which is usually not the complete length of the selected representation. Each Content-Range header field includes information about the selected representation's complete length.\" So the case of a 206 response is special-cased.\nIt seems to me that C-L is more often used to convey \"messaging\" information, than the actual metadata (which happens only when HEAD is used)."} +{"_id":"q-en-http-core-35c8ba2bcc7d1ebd7384513779034173ee4ba89c25aa8ca51273e58a6f03344a","text":"Source:\nIt's completely irrelevant to the recipient what port number was used for an upstream intermediary, which is why it says MAY here.\nThere are two issues here: 1) The protocol-name doesn't reflect the scheme in use, for HTTPS, it's still 'HTTP'. The practical implication of that is that when HTTPS is used, the port needs to be explicit, even if it's the default for HTTPS. A clarifying note might help here. 2) HTTP\/3 uses UDP, not TCP. Omitting 'TCP' from the MAY requirement might help.\nThis is for Via. HTTPS is not a separate protocol. \"https\" is a scheme that, among other things, doesn't allow an intermediary to forward HTTP messages until they are beyond the origin secured connection (and hence no longer HTTP\/TLS). Whatever the post-TLS origin does to forward its messages via CDN or downstream origins is beyond the scope here.\nYes, we can just remove TCP from that sentence.\nThat would be acceptable."} +{"_id":"q-en-http-core-46e95fe65a19dfbbbfd35ff17396f8aa60ccb2530d2f08caf2d0e02291d23b9a","text":"For\n[x] (7.6.3) Via \"If a port is not provided, a recipient MAY interpret that as meaning it was received on the default TCP port, if any, for the received-protocol.\" So if received-protocol is \"3\", it's a UDP port. If received-protocol is \"1\" or \"1.1\", is the default port 80 or 443? IIUC the scheme isn't included to determine this. Editors: URL [x] (7.7) Message Transformations \"A proxy that transforms the content of a 200 (OK) response can inform downstream recipients that a transformation has been applied by changing the response status code to 203 (Non-Authoritative Information)\" Why not an normative word, instead of \"can\"? 
Editors: URL [x] (12.5.3) Is it correct that \"identity\" and having no field value for Accept-Encoding are synonymous? Editors: URL [x] \"Servers that fail a request due to an unsupported content coding ought to respond with a 415 (Unsupported Media Type) status\" Why not s\/ought to\/SHOULD ? Editors: URL [x] (14.3) Why can only origin servers send \"Accept-Ranges: bytes\"? Why not intermediaries? Editors: URL [x] (15.3.7) \"A sender that generates a 206 response with an If-Range header field\"... (13.1.5) leads me to believe that only clients can send If-Range. So how can there be a response with If-Range? Editors: URL [x] (15.3.7.2) The last instance of THISSTRINGSEPARATES has a trailing '--'. If this is intentional, it ought to be explained. Editors: URL [x] (16.3.1) says field names SHOULD begin with a letter, but (16.3.2.1) says they SHOULD begin with \"an alphanumeric character\". More broadly, the \"Field name:\" description in (16.3.1) should probably refer to (16.3.2.1) unless I'm misunderstanding the scope of these sections. Editors: URL [x] (17.13) s\/TCP behavior\/TCP or QUIC behavior Editors: URL [x] (B) It would be good to mention here that accept-ext has been removed in (12.5.1), and accept-charset is deprecated in (12.5.2), if that is new to this spec. Editors: see URL and URL\nBecause it isn't required, nor is it widely implemented (if at all). We try not to make requirements that we know won't be honoured.\nBecause the header is information about the capabilities of the origin server, not other members of the request chain. Intermediaries can change (e.g., with proxy configuration). Note that a gateway (e.g., a CDN) is effectively acting as an origin server, so it can do this. The language in this section needs to be tweaked, however; there are a few instances of 'server' that should be 'origin server'.\nfield\"... (13.1.5) leads me to believe that only clients can send If-Range. So how can there be a response with If-Range? It looks like the phrase 'to a request' was dropped somewhere between and now.\nthis is intentional, it ought to be explained. This is how multipart works; see . I don't think there's value in re-specifying it here.\nAs per the spec: ...\nBecause we avoid making existing servers that predate this spec non-compliant (unless there's a very good reason, such as security)"} +{"_id":"q-en-http-core-38ef1699db1e7938d1913b12a7c287de95eb05d1c485966bc5ac64a47a125be7","text":"says: [...] says about : [...] Because we're saying that validation occurs against all responses for a given URI (see ), this has the effect of allowing a response with a header to update all of the responses sent for that URI in that second (provided the requirement about is met) -- which would lead to e.g., response with updating other languages. That probably isn't desirable. The obvious way to fix this is to avoid promoting to a strong validator when it's being used by a cache. However, see -- it's necessary for it to be strong for . It might also be worth refining caching 3.4.3 to differentiate between entity-tag updates and last-modified updates.\nIn 4.3.5, updating is done like this: Note that it's specified in terms of responses that could have been selected for that request, rather than \"stored... 
responses for the applicable request URI.\" If we modified 4.3.4 to specify that responses that could have been selected for the request sent are updated (still subject to further filtering down by any validators in the 304 response), I think it should work."} +{"_id":"q-en-http-core-8ecd15d19d9f3d9a58cecee00acaeaac15a6bf5262c70a5587e9dfa0962809cd","text":"Note that if we go in this directions, there are some editorial improvements in that we should pull in separately.\nI find Roy's proposal more precise. The overall text does slightly help me find how to figure a scheme in H1.\nFrom NAME - Let's discuss whether the currently specified procedures for reconstructing the target URI from a request-target in absolute-form provide adequate security properties, at the origin server. I'm specifically concerned about taking the scheme directly from the request target, i.e., making the distinction between the \"http\" and \"https\" schemes. The simple procedure of \"take the scheme from the request-target\" would seem to allow for the client to cause the server to engage processing for the \"https\" origin without receiving the protection that https is supposed to provide. (The converse case does not immediately seem to present much risk but is probably worth preventing as well on general principles of retaining consistency.) I don't remember seeing any text that would require the server to validate the scheme from the request-target against the actual properties of the transport (or the configured fixed URI scheme as might be provisioned with a trusted outbound gateway, etc.) While we do reference §7.4 of [Semantics] with a note that reconstructing the target URI is only part of the process of identifying a target resource, that part of [Semantics] does not mention scheme validation as part of rejecting misdirected requests. Does the origin server need to validate the scheme from an absolute-form request-target? What is the scope of consequences if it fails to do so?\nI said: In 3.3 of 1.1 I was skimming by the initial text: The target URI is the request-target when the request-target is in absolute-form. ... whereas you obviously picked up on it. I agree that there's an issue here. The decisions about absolute-form requests were made way back when. My reading of the archive (circa September and October 1995 -- I'm sure those that were there will correct me) is that Host headers were added to address the multiple-hosts-on-an-IP problem in a way that was backwards-compatible with HTTP\/1.0, but because some folks wanted to enable the use of URNs, proxies were required to support and use the absolute form, so that URNs could be (theoretically) resolvable through them. That didn't happen, but it is possible to use e.g., FTP through a HTTP proxy as a result (last I looked). I think the solution here is to restrict the statement above so that it only applies to proxies, and to add a requirement for origin servers (including gateways) to specifically check absolute-form URIs for alignment regarding the scheme.\nIt seems to me that the ability to do cross-protocol requests is a deliberate feature of HTTP, and not one we should interfere with now, even if it's rarely used. In fact, RFC 8164 explicitly relies on the ability to request \"http\" resources over an \"https\" connection. I agree that servers should not respond to requests that assume a secure channel over an insecure one. 
While Sections 4.2.2 and 4.3.3 seem to put most of the burden on the client not to make such requests, the server should probably consider such a request misdirected even if it is authoritative. Perhaps the least disruptive solution would be text in 7.4 adding https:\/\/ URIs received over insecure connections as another instance of a potentially misdirected request?\nBefore I get into the text, I need to clarify that this question is misplaced. The procedure for reconstructing the target URI is about reconstructing what the client sent. It is not about security (yet). It has nothing to do with ensuring the connection is secure. It is about understanding the request as received. AFTER the request is understood, the server needs to determine whether it can answer the request in that context. How it determines that is an internal configuration issue, not a protocol standard (because HTTP requests can arrive via any underlying transport), but there are things we can add to make the process clearer without assuming that all HTTP requests are made by general-purpose browsers."} +{"_id":"q-en-http-core-a1a0e55ed6f23a87231ad09e8a4c8fbf2270cfe79505bdb5c724d0976822e485","text":"replaces with my own edits. Note that I added \"without revalidation\" to the description of Vary since the cache can use the stored response, regardless of Vary, once it has been revalidated by the origin server (e.g., 304 with etag).\nFirst changeset is pretty wordy..."} +{"_id":"q-en-http-core-60a589cb02e615eccd37f033a628a2effb29ade4a3764db5d2b3d5c098484a35","text":"This is related to the suggestion in but provides the modern reason why all servers work this way.\n'This allows servers to be deployed in non-traditional configurations, such as within a custom service mesh or content distribution network, where the proxy\/origin distinction is handled upstream.' I don't really buy this; suggest just removing this sentence."} +{"_id":"q-en-http-core-e0a354bccb47bd1c3c1bda4e0c6f95cb16299895399ff3940b8664dc3719777d","text":"Clarify that Accept-Ranges can be sent by any server, remove \"none\" from the ABNF because it is now a reserved range unit, and allow the field to be sent in a trailer section while noting why that is much less useful than as a header field For this is an alternative to\nIMHO... allowing non-origins to service range requests, but disallowing them to advertise that is really really weird. At the end of the day, any \"Accept-*\" field value received is always a statement about the present, and not a promise for future requests. This is not different from other fields. Do you believe it would actually be harmful if a client was mislead to believe it can do a range request, and then it receives the full content with a 200?\nA header field is defined by the intent of the people who minted and deployed it. That was Lou Montulli. Its original meaning was confused in the pre-RFC2068 draft because of an editorial mistake, not because the header field ever changed its meaning as deployed. The main use case was restarting a big download, and that is not specific to origin servers. Today, this field is largely ignored because of the poor specification. We can improve such things without harm.\nI don't think this is worth too much more time, I'll just say I disagree -- it's not very useful in this form, and it differs from every other bit of similar metadata we put into headers (e.g., , , ). 
I'll make a suggestion to try to make the ambiguity here a bit more clear.\nYou are starting with an assumption that this is an origin characteristic and saying that this doesn't fit that assumption. I agree that the assumption does not fit the header field's definition, nor is it a fit for purpose. The field's purpose is to tell the client that range requests may help if there is a problem receiving the whole content. That's not limited to origin servers and has nothing in common with Allow or Vary. If a proxy cache can process range requests on large cached items (e.g., a company training video) and its clients can benefit from that, why would we not want to advertise that fact to the client? The theory that this may cause a problem with multiple paths just doesn't hold for proxies (they are client chosen) and wouldn't matter anyway for range requests because the worst case is a 200 response. This was explained in 1996 and the definition was fixed in the next draft. It just wasn't fixed enough. This doesn't change the fact that the header field isn't very useful, simply because it is advisory only and most clients will ignore that advice when it doesn't matter to them if 200 is the response. That's okay."} +{"_id":"q-en-http-core-563b82fec40d2a7b383d9e8f9c02be4cb8466dfafd7049bbf60b8962729e9d8a","text":"… might be allowed by consenting adults\nMy previous comment:\nNAME you pushed back on MUST. My position is that our task is to define what's interoperable in HTTP, and bodies on GET clearly aren't. Using SHOULD here is a concession to a vocal minority who want to use the protocol in this way because they have pairwise agreements; I understand that if we use MUST here it'll be violated, but that puts the responsibility squarely and unambiguously on their shoulders. NAME point is valid -- if we use SHOULD, that ought (ha!) to be contextualised. To do so here, we'd need to write something like: The problem is that the message can still be handled by software that isn't party to or aware of that negotiation -- especially, intermediaries. So this isn't really something we should be encouraging. Hence, MUST.\nI continue to disagree for the reasons you cited me above. Explaining the \"SHOULD NOT\" would work for me. If you want a change here, we might want to get those involved who actually use it in practice.\nNAME please disambiguate 'above'.\nThe 'people who use it in practice' are not using HTTP -- they're building private protocols based upon it that don't interoperate with normal HTTP implementations. What they're doing breaks HTTP and HTTP implementations.\nRFC 7231: \"A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request.\" I disagree that makes an implementation sending content \"not HTTP\". You are pushing for a change here, and I really believe you need better arguments to support that. If the intent is to discourage things like that, I'm all in, but using \"MUST NOT\" is not likely to achieve that goal.\nWe got here because the 7231 language was ambiguous enough that people misinterpreted it, so we added a RFC2119 requirement. Our AD is rightly pointing out that if we use SHOULD, we need to contextualise it with when it's acceptable to violate the requirement. 
What text do you propose to perform that function?\n\"A client SHOULD NOT generate content in a GET request, unless support for content is negotiated (in or out of band).\" sounds good to me\nAs I explained above, that won't work. Out-of-Band negotiation isn't available to intermediaries. In-band negotiation is the SEARCH method (or whatever it ends up being called); it would be strange for the WG to allow other methods of negotiation when it's working on how to accommodate this properly. Also, while we place the requirement on the client here, realize that the impact here is really on the server -- it must not require a request to have a body in order to interoperate, because that precludes the client from using otherwise conformant HTTP implementations and infrastructure.\nHow so? Could you please clarify that? Are you saying that intermediaries are allowed to drop the payload? Where do we say that?\nThis discussion is not about future code, but about existing, deployed code which currently works for people, and which IMHO is not strictly forbidden by the existing spec. In past discussions, we agreed on strengthening how we discourage this, but a \"MUST NOT\" is very different from a \"SHOULD NOT\". For instance, do you expect HTTP libraries that currently support request bodies on GET to stop doing that, although there's production code relying on it?\nNo, I'm saying that the pressure we see to allow bodies on GET is being placed upon clients by servers who create services that use it, doing so because they read the current ambiguous language and find enough wiggle room to think it's allowed and a good idea. This is because the power dynamic between a client and server is often unbalanced, especially for APIs. It does not work in an interoperable fashion on the scale of the Internet - which is what we're documenting. Those are private agreements. The fact that it's not strictly forbidden is how we got here. Not any time soon, but those libraries aren't supporting HTTP in doing so; they're effectively supporting the creation of new, HTTP-derived protocols. How about:\nNot following. It does work for some people (maybe because they use HTTPS or control the request path by other means). Saying that if something does not work \"on the scale of the Internet\" then it's not \"HTTP\" or shouldn't inform our spec does not work for me. If this was the case, we really would need to start removing many more pieces from the protocol. Also, telling libraries that they should stop supporting a use case that was allowed (with care) in RFC 2616 and 723x and which is used in practice doesn't look right to me. We are past WG LC and IETF LC; if you really really want to change this normative requirement now (instead of explaining the SHOULD), then you should get support for that on the mailing list.\nWhat I mean is that if you create a resource that uses a body on GET, you can't just use any HTTP implementation - you have to find one that does bodies on GET. Here, 'implementation' includes the framework in use, the server software, reverse proxies, CDNs, description languages, etc. Likewise on the client side; it's no longer a HTTP service, in that I can use any HTTP client; I have to find one that supports this extra constraint. Furthermore for proxies I might want to use, etc. Effectively, it's HTTP+GETBODY, not HTTP. It's not an extension, because it's not backwards-compatible. 
You say it's \"used in practice\", but as far as I see it, it's a few large services using their popularity \/ market power to convince people to break the protocol. That ignores the security and interoperability issues that this practice brings. Practically no one uses , but that doesn't matter. This does. If you can suggest a qualification for the SHOULD that doesn't compound the problem, I'm all for it. Absent that, I'll take it to the list.\nAlso, looking at the text in situ, I think the current second sentence doesn't need modification, leaving the proposal as:\n... and since we've tried to align this language in HEAD and DELETE, the change would also be made there.\nI really disagree with changing this. There are enough mentions on the net about \"what happens if I send a body with a GET\" that it's pretty sure we're going to declare a few in-house deployments non-compliant. For example we could imagine that some local deployments use a request body for an auth scheme. We could even more imagine this with H2. And with H2-to-H1 gateways, the risk of seeing a request body with a GET is even higher. The real problem with this spec is that from the very early days we only say how to emit but much less what to receive, and that many implementors on the receive path tend to take the server's \"MUST NOT\" for granted and ignore corner cases. Marking a MUST NOT here would certainly re-open some smuggling issues just because some implementors would be less careful about this and would never check the presence of a content-length. So I definitely agree with Julian here, I'd rather keep the \"SHOULD NOT unless negotiated by other means\" at least to make sure everyone remains careful about this. Last, I'd rather not make GET such a special method after all the work we've done in 723x to try to uniformize processing for all of them! And as seen in a recent issue, at least both Apache and HAProxy deal fine with bodies in GET requests.\nMaybe we split the difference -- there are certainly plenty of instances of \"MUST NOT... unless...\" in IETF specs.\nI'm fine with this approach.\nOn 16.07.2021 at 16:03, Mike Bishop wrote: Hmmm. A \"MUST NOT, unless...\" really seems to be a \"SHOULD NOT\", no? Best regards, Julian\nI think there's a difference, but it's subtle. \"SHOULD NOT\" implies that there are valid reasons you might choose to violate the spec, and describes what you should consider in making that choice. \"MUST NOT unless\" is an absolute prohibition outside of the described circumstance. I think Mark makes an excellent case for an absolute prohibition in the absence of prearranged support.\nWith that said, that will still declare all 723x-compliant gateways non-compliant, because they will emit this if they receive it. I'm really embarrassed by this last minute change considering the time spent on this during 723x before reaching that situation :-\/\nThat is the very definition of a different protocol though. Why not just \"MUST NOT\" then?\nIt is HTTP. It has not been forbidden in HTTP. It works in certain scenarios, and these are deployed. We can't undo that, even by \"yelling\". If we want to discourage people from doing this in new protocols, a \"SHOULD NOT\" with explanation is just right.\nRight now you could use curl as the client, haproxy as the gateway and apache as the server and all of them would happily process this without a sweat because it properly respects the messaging. Regarding the semantics, it's up to the application making use of that to define it. 
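To make the scenario concrete, here is a minimal sketch of the kind of GET-with-content request being discussed, using Python's standard http.client; the host, port, path, and JSON body are hypothetical, and this is an illustration of current practice rather than an endorsement. The message is framed legally via Content-Length, which is why generic clients, gateways, and servers will carry it even though only consenting endpoints give the body any meaning.

```python
# Minimal sketch of a GET request carrying content, as discussed above.
# The host, port, path, and body are hypothetical; this illustrates what
# deployed code (e.g. search APIs) does today, not a recommendation.
import http.client
import json

body = json.dumps({"query": {"match": {"title": "http semantics"}}})
conn = http.client.HTTPConnection("search.example", 9200, timeout=5)
conn.request(
    "GET",
    "/things/_search",
    body=body,  # framed with Content-Length, so the message itself parses fine
    headers={"Content-Type": "application/json"},
)
response = conn.getresponse()
print(response.status, response.read()[:200])
conn.close()
```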
Actually it is even already used and documented: the broadly used ElasticSearch explicitly documents how to perform JSON search requests using : URL So please let's stick to \"SHOULD NOT\" with Julian's \"unless negotiated via other means\" or something like this, instead of suddenly adding dangerous exceptions to the protocol and breaking interoperability between already deployed and working components."} +{"_id":"q-en-http-core-09aa9eb9f51da983bd3dc9d5cf115aab8df4c051559c3ec9a4fdf5ac8f031d67","text":"[x] 10.2 note In Section 10.2 we had some good text cleanup (I think, prompted by one of my comments -- thank you!), but the removed text included a note about how the semantics of a response header field might be refined by the semantics of the request method and\/or the response status code. That seems like it would be useful to have mentioned, and I'm not sure if this text was replicated elsewhere. Editors: The problem with that wording was that a field has defined semantics that include variations based on the context in which it appears, and those variations are included as part of the field definition (not refined by other parts of the specification). Ultimately, we decided that this doesn't need to be said in the section intro. [x] updating 3864 This document updates RFC 3864, which is part of BCP 90. However, this document is targeting Proposed Standard status, which means it cannot become a part of BCP 90 as part of that update. Did we consider splitting out the RFC 3864 updates into a separate, BCP-level, document, that would become part of BCP 90? Editors: The intended status for this document is full standard. The reason it updates RFC 3864 (which previously defined the registry for HTTP header fields as part of a registry for all application-level IMF-like protocols) is that IETF thinking has changed since 3864. Having a single IMF-wide definition of fields was unsuccessful and led to more confusion when fields diverged. Hence, this document is obsoleting only the HTTP parts of RFC 3864 by moving them back to the standards track. It is an update only because there is no status for partial obsoleting. [x] Section 1.2 the existing TLS and TCP protocols for exchanging concurrent HTTP messages with efficient field compression and server push. HTTP\/3 ([HTTP3]) provides greater independence for concurrent messages by using QUIC as a secure multiplexed transport over UDP instead of TCP. My understanding was that h2 and h3 also use non-text-based headers, in contrast to HTTP\/1.1's \"text-based messaging syntax\" that we mention earlier. Is that non-text nature worth noting here? Editors: No. This is not an overview of the differences between the protocols; just a brief introduction. [x] Section 3.7 organization's HTTP requests through a common intermediary for the sake of security, annotation services, or shared caching. [...] The term \"security\" can mean so many different things to different audiences that its meaning in isolation is pretty minimal. I suggest finding a more specific term for the intended usage, perhaps relating to an auditing, exfiltration protection, and\/or content-filtering function. Editors: That is often how proxy-based products are sold\/positioned in the market. [x] Section 3.7 as a transparent proxy [RFC1919]) differs from an HTTP proxy because it is not chosen by the client. Instead, an interception proxy filters or redirects outgoing TCP port 80 packets (and occasionally other common port traffic). 
Interception proxies are commonly found on public network access points, as a means of enforcing account subscription prior to allowing use of non-local Internet services, and within corporate firewalls to enforce network usage policies. Is this text still accurate in the era of https-everywhere and Let's Encrypt? Editors: They are still deployed, yes. On a public access network, the first TLS request will fail. User agents recognize such failures and fall back to a plain HTTP access to a common URL, which is then intercepted by the filter and the user agent is directed to login for Internet access. You can see this in every hotel, cafe, and convention center. [x] Section 3.9 As Éric notes, OpenSSL 0.9.7l supports only SSL and TLSv1.0, which per RFC 8996 is no longer permitted -- I concur with his recommendation to update the example (potentially including Last-Modified). Editors: already addressed. [x] Section 4.2.x the target resource within that origin server's name space. Would a BCP 190 reference be appropriate here (emphasizing that the name space belongs to the origin server)? Editors: Not really. This section is defining what those components are for. BCP 190 is advice for specifications that assume certain hierarchies within applications. Most readers would find that to be an unnecessary distraction at best, or a circular down-reference at worst. [x] Section 4.2.2 within the hierarchical namespace governed by a potential origin server listening for TCP connections on a given port and capable of establishing a TLS ([RFC8446]) connection that has been secured for HTTP communication. [...] Is \"capable\" the correct prerequisite, or does the server need to actually do so on that port? (Given the following definition of \"secured\", though, the ability to successfully do so would seem to depend on the trust anchor configuration on the client, which is not really something the server can control...) Editors: \"capable\" is correct. As stated above that in 4.2, the server does not need to exist. [x] Section 4.3.3 authoritative response, nor does it imply that an authoritative response is always necessary (see [Caching]). Is it intentional that this paragraph diverges from the analogous content in §4.3.2 (which also mentions Alt-Svc and other protocols \"outside the scope of this document\")? Editors: Yes, it is intentional. It isn't necessary to repeat the Alt-Svc example, and the last sentence (for alternative access to \"http\" resources) is encompassed by the definition of \"https\" authority by certificate match. [x] Section 5.3 | Note: In practice, the \"Set-Cookie\" header field ([RFC6265]) | often appears in a response message across multiple field lines | and does not use the list syntax, violating the above | requirements on multiple field lines with the same field name. | Since it cannot be combined into a single field value, | recipients ought to handle \"Set-Cookie\" as a special case while | processing fields. (See Appendix A.2.3 of [Kri2001] for | details.) The reference seems to conclude only that the situation for \"Set-Cookie\" is underspecified, and doesn't really give me much guidance on what to do if I receive a message with multiple field lines for \"Set-Cookie\". (It does talk about the \"Cookie\" field and how semicolon is used to separate cookie values, which implies that \"Cookie\" would get special treatment to use semicolon to join field lines, but doesn't really give me the impression that \"Set-Cookie\" should also have such treatment.) 
Editors: Handling for Set-Cookie and Cookie are not defined by this specification; this is just an informative note. [x] Section 5.4 than the client wishes to process if the field semantics are such that the dropped value(s) can be safely ignored without changing the message framing or response semantics. Is it worth saying anything about fields that the client does not recognize? (Per the previous discussion, the server needs to either know that the client recognizes the field or only send fields that are safe to ignore if unrecognized, if I understand correctly...) Editors: That's not relevant here. [x] Section 6.4.1 method and the response status code (Section 15). For example, the content of a 200 (OK) response to GET (Section 9.3.1) represents the current state of the target resource, as observed at the time of the message origination date (Section 10.2.2), whereas the content of the same status code in a response to POST might represent either the processing result or the new state of the target resource after applying the processing. Doesn't the last clause mean that there is some additional (meta)data that can affect the content's purpose (e.g., a Content-Location field)? Or how else would one know if the 200 POST response is the processing result vs the new state? It seems incomplete to just say \"is defined by both\" and list only method and status code as the defining factors. Editors: URL [x] Section 7.6.3 [I had the same question as Martin Duke about default TCP port, and the interaction with the scheme. I see that it has been answered since I initially drafted these notes, hooray.] [x] received target URI when forwarding it to the next inbound server, except as noted above to replace an empty path with \"\/\" or \"\". I found where (in the discussion of normalization in §4.2.3) we say to replace the empty path with \"\/\" for non-OPTIONS requests. I couldn't find anywhere \"above\" where it was noted to replace an empty path with \"\" (presumably, for the OPTIONS requests), though. Editors: [x] Section 8.3 to disable such sniffing. \"encouraged to provide a means to disable\" could be read as also encouraging implementation of the (sniffing) mechanism itself. Is it actually the case that we encourage implementation of MIME sniffing? Editors: URL [x] Section 8.8.1 strong validator is unique across all versions of all representations associated with a particular resource over time. [...] My understanding is that, e.g., a cryptographic hash over the representation and metadata would be intended to be a strong validator, but for such a construction the \"unique\" property can only be guaranteed probabilistically. Are we comfortable with this phrasing that implies an absolute requirement? Editors: yes. [x] Section 8.8.4 SHOULD send the Last-Modified value in non-subrange cache validation requests (using If-Modified-Since) if only a Last- Modified value has been provided by the origin server. MAY send the Last-Modified value in subrange cache validation requests (using If-Unmodified-Since) if only a Last-Modified value has been provided by an HTTP\/1.0 origin server. The user agent SHOULD provide a way to disable this, in case of difficulty. I'm failing to come up with an explanation for why it's important to specifically call out the HTTP\/1.0 origin server in the latter case. What's special about an HTTP\/1.1 origin server that only provided a Last-Modified value and subrange cache validation requests that makes the MAY inapplicable? 
(What's the actual expected behavior for that situation?) Editors: [x] Section 9.2.2 the server of multiple identical requests with that method is the same as the effect for a single such request. [...] I sometimes worry that a definition of idempotent like this hides the interaction of repeated idempotent requests with other requests modifying the same resource. A+A is equivalent to A, but A+B+A is often not equivalent to A+B... Editors: The definition of idempotent is about a single user agent's intent being repeatable (automatically retried on failure). The user's intent does not depend on the resource state unless the user makes it so using the conditional request mechanism defined in Section 13. We could add a forward reference here, but it is already discussed in 8.8. [x] Section 9.3.5 deactivated or archived as a result of a DELETE, such as database or gateway connections. In general, it is assumed that the origin server will only allow DELETE on resources for which it has a prescribed mechanism for accomplishing the deletion. The specific phrasing of \"only allow DELETE [...]\" calls to mind (for me) an expectation of authorization checks as well. In some sense this is no different than for POST or PUT, and thus may not be worth particular mention here, but I thought I'd ask whether it makes sense to mention authorization (and authentication). Editors: Not worth particular mention. [x] Section 9.3.5 received in a DELETE request has no defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request. We had a similar paragraph earlier in the discussion of GET and HEAD, but those paragraphs included a clause about \"close the connection because of its potential as a request smuggling attack\" -- is DELETE not at risk of use for request smuggling? Editors: this has been fixed in a prior issue [x] Section 10.1.1 A server that responds with a final status code before reading the entire request content SHOULD indicate whether it intends to close the connection (e.g., see Section 9.6 of [Messaging]) or continue reading the request content. The referenced section seems to cover the \"close\" connection option, which is a positive signal of intent to close. Is the absence of that connection option to be interpreted as a positive signal of intent to continue reading the request content, or is there some other positive signal of such intent to continue reading? Editors: It is version specific. For example, HTTP\/1.1 is persistent by default, so the absence of close is a positive signal. [x] Section 10.1.2 authentication. It seems that the level of security provided by the From header field is at most that of a bearer token, and that the natural choice of such token is easily guessable (though unguessable choices are possible). I'm having a hard time coming up with an IETF-consensus scenario where it would make sense to use From for access control or authentication (i.e., could this be MUST NOT instead?). Editors: already discussed as part of Francesca's feedback. [x] Section 10.1.3 denying links from other sites (so-called \"deep linking\") or restricting cross-site request forgery (CSRF), but not all requests contain it. I think we should say something about the effectiveness of Referer checks as a CSRF mitigation mechanism. Editors: That's a moving target that is often browser or organization-dependent. If we could get anyone to agree on a common opinion, let alone a common implementation, it might make an interesting BCP. 
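On the idempotency caveat above (A+A is equivalent to A, but A+B+A is generally not equivalent to A+B), a toy sketch may help; this is only my own illustration, with an in-memory dict standing in for the resource state:

```python
# Toy illustration of the caveat: A + A == A, but A + B + A != A + B.
store = {}

def put(resource: str, value: str) -> None:
    """PUT is idempotent: repeating the same request yields the same state."""
    store[resource] = value

put("/doc", "A")
put("/doc", "A")             # a retry of A changes nothing
assert store["/doc"] == "A"

put("/doc", "A")
put("/doc", "B")             # another client updates the resource
put("/doc", "A")             # a late retry of A silently clobbers B
assert store["/doc"] == "A"  # not the "A then B" outcome either party expected
```

The point is that idempotency only makes a single client's retry safe; it says nothing about interleaving with other clients' requests, which is what the conditional request mechanisms in Section 13 are for.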
[x] Section 10.1.3 Referer header field when the referring resource is a local \"file\" or \"data\" URI. A user agent SHOULD NOT send a Referer header field if This seems like a curious statement. Are we expecting future general-purpose user agents to emulate this behavior? If so, then why not recommend it explicitly? Editors: Referer policy is more under control of the W3C's WebAppSec WG; these are just general guidelines. [x] Section 10.1.3 request target has an origin differing from that of the referring resource, unless the referring resource explicitly allows Referer to be sent. A user agent MUST NOT send a Referer header field in an How does a referring resource indicate that Referer should be sent? Editors: Out of scope for this document. W3C defines referer-policy, but that's browser-centric. [x] Section 10.1.4 optional parameters (except for the special case \"trailers\"). Should the prose mention the 'weight' part of the t-codings construction (the \"weight\" production itself does not seem to be defined until §11.4.2)? Editors: URL [x] Section 10.1.5 For example, a sender might indicate that a message integrity check will be computed as the content is being streamed and provide the final signature as a trailer field. This allows a recipient to Please pick one of \"message integrity check\" and \"signature\" and use it consistently; these are both cryptographic terms of art (with different meanings). Editors: URL [x] Section 10.1.5 also be used by downstream recipients to discover when a trailer field has been removed from a message. It seems that this usage is only possible if sending the Trailer field is a binding commitment to emit the relevant trailer fields; otherwise the recipient cannot distinguish between a removal by an intermediary and a sender declining to generate the trailer field. Editors: 6ee7338d [x] Section 10.1.6 request unless specifically configured not to do so. (I assume that a reference to client-hints (or UA-CH) was considered and rejected.) Editors: yes. [x] Section 10.1.6 needlessly fine-grained detail and SHOULD limit the addition of subproducts by third parties. Overly long and detailed User-Agent field values increase request latency and the risk of a user being identified against their wishes (\"fingerprinting\"). client-hints might even be more appropriate as a reference here than it would be URL just in §17.13. Editors: CH is Experimental. [x] Section 10.2 It seems like it might be worth listing the fields already defined in the previous section (as request context fields) that can also appear as response context fields. Editors: Good catch. Trailer and Date are bidirectional fields, so it would be better to make a separate section for them, which would be either 10.1 or 10.3 (depending on references in HTTP\/3). [x] Section 12.2 Reactive negotiation suffers from the disadvantages of transmitting a list of alternatives to the user agent, which degrades user-perceived latency if transmitted in the header section, and needing a second request to obtain an alternate representation. Furthermore, this specification does not define a mechanism for supporting automatic selection, though it does not prevent such a mechanism from being developed as an extension. I'm not sure that I understand how an HTTP extension would help specify a mechanism for automatic selection in reactive negotiation; isn't this just an implementation detail in the user-agent? Editors: Perhaps we should just remove \"as an extension\", since this isn't specific to HTTP? 
The URI and Alternates fields were proposed long ago for that purpose but did not attain sufficient implementation to remain in the standard. It is commonly implemented today using JavaScript in non-uniform ways. Likewise, HTML was extended to include the attribute on . [x] Section 12.5.1 | Note: Use of the \"q\" parameter name to control content | negotiation is due to historical practice. Although this | prevents any media type parameter named \"q\" from being used | with a media range, such an event is believed to be unlikely | given the lack of any \"q\" parameters in the IANA media type | registry and the rare usage of any media type parameters in | Accept. Future media types are discouraged from registering | any parameter named \"q\". This note seems like it would be more useful in the IANA media-types registry than \"some random protocol specification that uses media types\". Editors: same as URL [x] Section 12.5.3 Are these supposed to be multiple standalone examples or one single example with multiple field lines? (I note that they appear in a single element in the XML source.) If they are supposed to be one single example, I would have expected some remark about the combination of \"\" and \";q=0\" (my understanding is that the q=0 renders codings not listed as unacceptable, even despite the implicitly q=1 wildcard). It seems that in other instances where we provide multiple examples in a single artwork, the prefacing text is \"Examples:\" plural, that makes some effort to disambiguate. Editors: URL [x] Section 12.5.3 | Note: Most HTTP\/1.0 applications do not recognize or obey | qvalues associated with content-codings. This means that | qvalues might not work and are not permitted with x-gzip or | x-compress. This wording implies to me that there is a normative requirement somewhere else that qvalues cannot be used with x-gzip and x-compress, but I'm not sure where that would be. (It's also a bit hard to understand how x-gzip would be affected but not plain gzip, given that lists it as an alias for gzip ... additional restrictions don't quite match up with an \"alias\" nature.) Editors: This note reflects historical practice in 1996. Removed in URL [x] Section 12.5.4 an Accept-Language header field with the complete linguistic preferences of the user in every request (Section 17.13). This leaves me wondering how to improve on the situation and pick which subset of requests to send the header field in. I would expect that a blind random sampling approach would not yield privacy improvements over always sending them. Editors: this comment does not appear actionable. [x] Section 12.5.5 for selecting a representation varies based on aspects of the request message other than the method and target URI, unless the variance cannot be crossed or the origin server has been deliberately configured to prevent cache transparency. [...] I don't think I know what it means to \"cross\" a variance. The example (elided from this comment) about Authorization not needing to be included gives some hint as to what is meant, but I still don't have a clear picture. Editors: [x] Section 13.2.2 When the method is GET and both Range and If-Range are present, evaluate the If-Range precondition: if the validator matches and the Range specification is applicable to the selected representation, respond 206 (Partial Content) Otherwise, all conditions are met, so perform the requested action and respond according to its success or failure. 
I think that if the If-Range doesn't match, we're supposed to ignore the Range header field when performing the requested action, which doesn't seem to match up with this unadorned directive to \"perform the requested action\" (which would include the Range header field). (We might also change point (5) to use the \"if true\" phrasing that the other items use in the context of evaluating the precondition.) Editors: [x] Section 15.4.9 | Note: This status code is much younger (June 2014) than its | sibling codes, and thus might not be recognized everywhere. | See Section 4 of [RFC7538] for deployment considerations. This document obsoletes RFC 7538; if we believe that content is still useful we should probably consider incorporating it into this document. Editors: nope. We already hear again and again that the spec is too long. Readers who care about these deployment issues can easily navigate to the reference spec. It's not required to understand the protocol. [x] Section 16.3.1 (appointed by the IESG or their delegate). Fields with the status 'permanent' are Specification Required ([RFC8126], Section 4.6). I would have expected IANA to ask for the phrase \"Expert Review\" to be used for the general case (if they did not already), since that's the relevant registration policy defined in RFC 8126. Editors: And yet, they did not. [x] [...] Reference to the document that specifies the field, preferably If the registration consists of \"at least\" a group of information that includes a specification document, doesn't that mean the policy is always \"Specification Required\", not just for permanent registrations? [x] Section 16.3.1 consultation with the community - the Expert(s) find that they are not in use. The Experts can change a provisional entry's status to permanent at any time. (The ability to freely convert a provisional registration to permanent seems to also require a specification document to always be present, even for provisional registrations.) Editors: no action evident. [x] Section 17 A few potential considerations that don't seem to be mentioned in the subsections: Implementation divergence in handling multi-member field values when singletons are expected, could lead to security issues (in a similar vein as how request smuggling works) Though ETag is formally opaque to clients, any internal structure to the values could still be inspected and attacked by a malicious client. We might consider giving guidance that ETag values should be unpredictable. When the same information is present at multiple protocol layers (e.g., the transport port number and the Host field value), in the general case, attacks are possible if there is no check for consistency of the values in the different layers. It's often helpful to provide guidance on which entit(ies) should perform the check, to avoid scenarios where all parties are expecting \"someone else\" to do it. Relatedly, the port value is part of the https \"origin\" concept, but is not authenticated by the certificate and could be modified (in the transport layer) by an on-path attacker. The safety of per-origin isolation relies on the server to check that the port intended by the client matches the port the request was actually received on. We mention that in response to some 3xx redirection responses, a client capable of link editing might do so automatically. Doing so for http-not-s responses would allow for a form of privilege escalation, converting even a temporary access into more permanent changes on referring resources. 
We make heavy use of URIs and URI components; referencing the security considerations of RFC 3986 might be worthwhile Editors: It is very late in the process to introduce such substantial text, especially when it would need additional review due to security impact. As this is a COMMENT, not a DISCUSS, we will not act upon this. [x] Section 17.1 For example, phishing is an attack on the user's perception of authority, where that perception can be misled by presenting similar branding in hypertext, possibly aided by userinfo obfuscating the authority component (see Section 4.2.1). [...] We might also mention \"confusable\" domain names here as well (which are possible even without resorting to IDNs). Editors: Same as above. [x] Section 17.5 Should we also discuss situations where there might be redundant lengths at different encoding layers (e.g., HTTP framing and MIME multipart boundaries), in a similar vein to URL ? Editors: Same as above. [x] Section 17.16.3 establishing a protection space will expose credentials to all resources on an origin server. [...] There's also not any clear authorization mechanism for the origin to claim use of a given realm value, which can lead to the client sending credentials for the claimed realm without knowing that the server should be receiving such credentials. Editors: This doesn't appear to be actionable. [x] Section 19.2 Should RFC 5322 be normative? We rely on it for, e.g., the \"mailbox\" ABNF construction. Editors: URL [x] Appendix A [Just noting that I did not attempt to validate the ABNF, since the shepherd writeup notes that they have been validated] [x] without a concept of modification time. (Section 13.1.4) I couldn't really locate which text was supposed to be providing this clarification. [x] Section 3.1 identified by a Uniform Resource Identifier (URI), as described in Section 4. [...] HTTP relies upon the Uniform Resource Identifier (URI) standard [RFC3986] to indicate the target resource (Section 7.1) and relationships between resources. Are these two statements compatible? (What is used for the non-URI resource identification scenarios?) Editors: the \"most\" is referring to the fact that some resources don't have explicit identifiers (as explained elsewhere). [x] Section 5.5 We seem to use the obs-text ABNF construction prior to its definition, which is in Section 5.6.4. Editors: URL [x] generate empty list elements. In other words, a sender MUST generate lists that satisfy the following syntax: Are the two formulations equivalent without some restriction on 'element' itself? [x] Section 6.4.2 If the request method is GET and the response status code is 200 (OK), the content is a representation of the resource identified by the target URI (Section 7.1). If the request method is GET and the response status code is 203 (Non-Authoritative Information), the content is a potentially modified or enhanced representation of the target resource as provided by an intermediary. If the request method is GET and the response status code is 206 (Partial Content), the content is one or more parts of a representation of the resource identified by the target URI (Section 7.1). If the response has a Content-Location header field and its field value is a reference to the same URI as the target URI, the content is a representation of the target resource. I count two \"target resource\" and two \"resource identified by the target URI\". Is there an important distinction between those two phrasings or could we normalize on a single term? 
Editors: c5db347 [x] Section 7.3.3 routine, usually specific to the target URI's scheme, to connect directly to an origin for the target resource. How that is accomplished is dependent on the target URI scheme and defined by its associated specification. This document is the relevant specification for the \"http\" and \"https\" URI schemes; a section reference to the corresponding procedures might be in order. Editors: [x] is later than the server's time of message origination (Date). If I suspect some relevant details for this clock are covered in §10.2.2; maybe a forward reference would be useful. [x] Section 10.2 the target resource for potential use in later requests. I didn't see a previous enumeration of fields such that \"remaining\" would have meaning. (Also, the whole toplevel section seems to contain multiple sentences that are nearly redundant.) Editors: That text has been rewritten based upon other comments. [x] Section 10.2.2 [...] Are we using \"with a clock\" as shorthand for \"have a clock capable of providing a reasonable approximation of the current instant in Coordinated Universal Time\"? It might be worth clarifying if this different phrasing than above is intended to convey different semantics. Editors: [x] Authentication-Info, except [...] Is it worth calling out again that it can be sent as a trailer field, in case someone specifically goes searching for trailer fields? Editors: [x] Section 13.2.1 MUST ignore the conditional request header fields defined by this specification when received with a request method that does not involve the selection or modification of a selected representation, such as CONNECT, OPTIONS, or TRACE. We do say \"can be used with any method\" regarding If-Match, earlier, which is not very well aligned with this \"MUST ignore\". Editors: [X] Section 15.4 content-specific header fields, including (but not limited to) Content-Encoding, Content-Language, Content-Location, Content-Type, Content-Length, Digest, ETag, Last-Modified. The discussion in §8.8.3 seems to indicate that ETag is only used in responses, not requests, so I'm not sure in what scenarios it would need to be removed from the redirected request. Editors: [x] | Note:* In HTTP\/1.0, the status codes 301 (Moved Permanently) | and 302 (Found) were defined for the first type of redirect | ([RFC1945], Section 9.3). Early user agents split on whether | the method applied to the redirect target would be the same as | the original request or would be rewritten as GET. Although | HTTP originally defined the former semantics for 301 and 302 | (to match its original implementation at CERN), and defined 303 | (See Other) to match the latter semantics, prevailing practice | gradually converged on the latter semantics for 301 and 302 as | well. The first revision of HTTP\/1.1 added 307 (Temporary | Redirect) to indicate the former semantics of 302 without being | impacted by divergent practice. For the same reason, 308 | (Permanent Redirect) was later on added in [RFC7538] to match | 301. [...] I had to read this text several times to find a way to understand it that seems to make sense to me (but might still be wrong!). I think part of my confusion is that the word \"former\" is being used in two different senses (the first of the two choices, and the historical\/earlier version). Perhaps it's more clear to just talk about \"method rewriting\" (and not rewriting) instead of using the overloaded term. 
Editors:\nRe: Appendix B.4 -- This text originally referred to the resolution of , but that was subsequently overwritten when we aligned the way we specified conditionals. It probably needs to be re-introduced (and perhaps looked at for the other conditionals).\nClosing, as all remaining issues have been split out.\nSince I put the effort in to track it down, I'll note for posterity that Francesca's corresponding feedback was item 14 at URL and bde626d7cdfd2a6221e1d2f4ec6c87f40dd8c7be is how it got addressed. (I had gotten confused about the relative timing of the respective reviews which prompted me to actually look at the history.)"} +{"_id":"q-en-http-core-6f8cb3e253d0ac883d66b62726f9508997a4bb4fa707e9d909715b1f08ae531e","text":"…Control (\nSee URL\nI think we might re-visit this advice, since CC is so widely implemented, but the errata is correct, so yes."} +{"_id":"q-en-http-core-dc920596a25b0b7528d92f38f9900326499bda7212ac1c5604a80e5698004499","text":"[x] Section 9.4 Furthermore, using multiple connections can cause undesirable side effects in congested networks. Using larger number of multiple connections can also cause side effects in otherwise uncongested networks, because their aggregate and initially synchronized sending behavior can cause congestion that would not have been present if fewer parallel connections had been used. nit: \"Using larger number of multiple connections\" doesn't seem right, and possibly in more ways than just the singular\/plural mismatch. Editors: URL [x] trailers and chunked The only general comment I have is that [Semantics] did such a good job of portraying the trailer section as a generic concept that I was surprised to see it presented as specific to the chunked transfer-encoding in this document. It seems to me (naively, of course), that when the content can accurately be delimited, whether by Content-Length or the chunked transfer-encoding, a trailer section could be read after the request or response and clearly distinguished from the start of a new request or response. I recognize that we have a significant deployed base to be mindful of backwards compatibility with, and so do not propose to recklessly add trailer sections everywhere. It might be worth some more prominent acknowledgment that in HTTP\/1.1 the trailers section is limited to the chunked transfer-encoding, and discussion of why trailers are not usable in other HTTP\/1.1 scenarios, though. Editors: The HTTP\/1.1 message format does not allow adding trailers as a section following the message body. Trailers is allowed in chunked because the content itself is encoded into a format that concludes with an optional trailer section (within the message body). There is no possibility of generalizing this further for HTTP\/1.x. [x] 2.2 trick a server into ignoring that field line or processing the line after it as a new request, either of which might result in a security vulnerability if other implementations within the request chain interpret the same message differently. [...] Given the previous procedure that gives as a permitted behavior to \"consume the line without further processing\", it seems like an attempt to get the server to ignore the field line would have succeeded if this procedure is followed? I suppose the important difference is that the field line is completely suppressed from any version of the message transmitted downstream, thus avoiding the opportunity for a different interpretation. 
Regardless, though, it seems like the text of the guidance as written (not quoted above) reads like it is setting us up for vulnerabilities in the presence of non-compliant (or HTTP\/1.0?) implementations in the request chain. We might want to put in a bit more explanation of how the stated procedure avoids the vulnerability. Editors: [x] Section 3.2 400 (Bad Request) error or a 301 (Moved Permanently) redirect with the request-target properly encoded. [...] (I assume 301 rather than 308 was an intentional choice for maximum compatibility with old\/broken clients.) Editors: This requirement existed long before 308 was minted. In any case, 301 would be preferred since we don't want to assume too much about why a non-GET request method is being used with an invalid request-line (usually meaning that the client failed to encode spaces in the hypertext reference). [x] Section 3.3 secured connection is inherently unsafe if there is any chance that the user agent's intended authority might differ from the selected default. A server that can uniquely identify an authority from the request context MAY use that identity as a default without this risk. Is the contents of the TLS SNI extension sufficient request context to uniquely identify an intended authority? Editors: Maybe. Using a certificate that is specific to that authority (one that can't be shared by multiple origins and thus was definitely chosen by the client) would be sufficient. We don't need to define this further. [x] Section 5.1 does not include any leading or trailing whitespace: OWS occurring before the first non-whitespace octet of the field line value or after the last non-whitespace octet of the field line value ought to be excluded by parsers when extracting the field line value from a field line. I have in general tried to refrain from commenting on the extensive use of the phrase \"ought to\" in this group of documents, but this particular scenario seems like a strong candidate for a BCP 14 keyword. Editors: [x] Section 9.8 closure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] This is written as if it's imposing normative requirements on generic TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... Editors: Rewritten by URL [x] Section 9.8 This SHOULD only be done when the application knows (typically through detecting HTTP message boundaries) that it has sent or received all the message data that it cares about. ...whereas this SHOULD does not have an obvious analogue in RFC 8446, and thus it would make sense to retain the BCP 14 keyword for. Editors: None of that reads well (it was imported from RFC2818). This is better rewritten as a factual statement of when it knows, as committed above. [x] Section 4 interpreted in light of the semantics defined for that status code. 
See Section 15 of [Semantics] for information about the semantics of status codes, including the classes of status code (indicated by the first digit), the status codes defined by this specification, In some sense it seems that the referenced status codes are defined by [Semantics], not \"this specification\". I was initially going to propose (in my PR) a change to \"defined for\", but that seems incorrect and I don't have a better proposal handy. Editors: URL [x] Section 5.2 (i.e., that has any field line value that contains a match to the obs-fold rule) unless [...] Since we don't include the obs-fold production as a component of any other production, and field-value excludes CRLF, it seems that any such field line value would already be in violation of the ABNF and thus forbidden. I don't really want to advocate for including obs-fold in the field-value production in -semantics, though, so maybe accepting this nit is the least bad choice here. Editors: I believe this is more than a nit. Opened URL [x] Section 9.2 MUST maintain a list of outstanding requests in the order sent and MUST associate each received response message on that connection to the highest ordered request that has not yet received a final (non- 1xx) response. \"Highest ordered\" implies some numerical rank-list of ordering, but we don't seem to clearly indicate whether older or newer requests receive higher numerical indices. It seems simples to just say \"oldest\" (or \"newest\", if that was the intent) rather than applying numerical ranking. Editors:\nclosure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... I'm not a TLS expert. That said, if this is correct, would be lowercasing the two keywords be sufficient?\nURL is for section 9.8\nChange for 9.8 LGTM (as what might pass for a TLS expert)."} +{"_id":"q-en-http-core-0b09c8135dcb4bfe142dd265834f6c151f13c8409ada5f5a115b510bb1b8bd6d","text":"Please do :-)\n[x] Section 9.4 Furthermore, using multiple connections can cause undesirable side effects in congested networks. Using larger number of multiple connections can also cause side effects in otherwise uncongested networks, because their aggregate and initially synchronized sending behavior can cause congestion that would not have been present if fewer parallel connections had been used. nit: \"Using larger number of multiple connections\" doesn't seem right, and possibly in more ways than just the singular\/plural mismatch. Editors: URL [x] trailers and chunked The only general comment I have is that [Semantics] did such a good job of portraying the trailer section as a generic concept that I was surprised to see it presented as specific to the chunked transfer-encoding in this document. It seems to me (naively, of course), that when the content can accurately be delimited, whether by Content-Length or the chunked transfer-encoding, a trailer section could be read after the request or response and clearly distinguished from the start of a new request or response. 
I recognize that we have a significant deployed base to be mindful of backwards compatibility with, and so do not propose to recklessly add trailer sections everywhere. It might be worth some more prominent acknowledgment that in HTTP\/1.1 the trailers section is limited to the chunked transfer-encoding, and discussion of why trailers are not usable in other HTTP\/1.1 scenarios, though. Editors: The HTTP\/1.1 message format does not allow adding trailers as a section following the message body. Trailers is allowed in chunked because the content itself is encoded into a format that concludes with an optional trailer section (within the message body). There is no possibility of generalizing this further for HTTP\/1.x. [x] 2.2 trick a server into ignoring that field line or processing the line after it as a new request, either of which might result in a security vulnerability if other implementations within the request chain interpret the same message differently. [...] Given the previous procedure that gives as a permitted behavior to \"consume the line without further processing\", it seems like an attempt to get the server to ignore the field line would have succeeded if this procedure is followed? I suppose the important difference is that the field line is completely suppressed from any version of the message transmitted downstream, thus avoiding the opportunity for a different interpretation. Regardless, though, it seems like the text of the guidance as written (not quoted above) reads like it is setting us up for vulnerabilities in the presence of non-compliant (or HTTP\/1.0?) implementations in the request chain. We might want to put in a bit more explanation of how the stated procedure avoids the vulnerability. Editors: [x] Section 3.2 400 (Bad Request) error or a 301 (Moved Permanently) redirect with the request-target properly encoded. [...] (I assume 301 rather than 308 was an intentional choice for maximum compatibility with old\/broken clients.) Editors: This requirement existed long before 308 was minted. In any case, 301 would be preferred since we don't want to assume too much about why a non-GET request method is being used with an invalid request-line (usually meaning that the client failed to encode spaces in the hypertext reference). [x] Section 3.3 secured connection is inherently unsafe if there is any chance that the user agent's intended authority might differ from the selected default. A server that can uniquely identify an authority from the request context MAY use that identity as a default without this risk. Is the contents of the TLS SNI extension sufficient request context to uniquely identify an intended authority? Editors: Maybe. Using a certificate that is specific to that authority (one that can't be shared by multiple origins and thus was definitely chosen by the client) would be sufficient. We don't need to define this further. [x] Section 5.1 does not include any leading or trailing whitespace: OWS occurring before the first non-whitespace octet of the field line value or after the last non-whitespace octet of the field line value ought to be excluded by parsers when extracting the field line value from a field line. I have in general tried to refrain from commenting on the extensive use of the phrase \"ought to\" in this group of documents, but this particular scenario seems like a strong candidate for a BCP 14 keyword. 
Editors: [x] Section 9.8 closure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] This is written as if it's imposing normative requirements on generic TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... Editors: Rewritten by URL [x] Section 9.8 This SHOULD only be done when the application knows (typically through detecting HTTP message boundaries) that it has sent or received all the message data that it cares about. ...whereas this SHOULD does not have an obvious analogue in RFC 8446, and thus it would make sense to retain the BCP 14 keyword for. Editors: None of that reads well (it was imported from RFC2818). This is better rewritten as a factual statement of when it knows, as committed above. [x] Section 4 interpreted in light of the semantics defined for that status code. See Section 15 of [Semantics] for information about the semantics of status codes, including the classes of status code (indicated by the first digit), the status codes defined by this specification, In some sense it seems that the referenced status codes are defined by [Semantics], not \"this specification\". I was initially going to propose (in my PR) a change to \"defined for\", but that seems incorrect and I don't have a better proposal handy. Editors: URL [x] Section 5.2 (i.e., that has any field line value that contains a match to the obs-fold rule) unless [...] Since we don't include the obs-fold production as a component of any other production, and field-value excludes CRLF, it seems that any such field line value would already be in violation of the ABNF and thus forbidden. I don't really want to advocate for including obs-fold in the field-value production in -semantics, though, so maybe accepting this nit is the least bad choice here. Editors: I believe this is more than a nit. Opened URL [x] Section 9.2 MUST maintain a list of outstanding requests in the order sent and MUST associate each received response message on that connection to the highest ordered request that has not yet received a final (non- 1xx) response. \"Highest ordered\" implies some numerical rank-list of ordering, but we don't seem to clearly indicate whether older or newer requests receive higher numerical indices. It seems simples to just say \"oldest\" (or \"newest\", if that was the intent) rather than applying numerical ranking. Editors:\nclosure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... I'm not a TLS expert. 
That said, if this is correct, would be lowercasing the two keywords be sufficient?\nURL is for section 9.8\nChange for 9.8 LGTM (as what might pass for a TLS expert).\nsuggestion"} +{"_id":"q-en-http-core-883122380a65ef8c38cc8165629e325a27974d1cda40d80dbe0f8ed02ee1f7a7","text":"…to explain better for\n[x] Section 9.4 Furthermore, using multiple connections can cause undesirable side effects in congested networks. Using larger number of multiple connections can also cause side effects in otherwise uncongested networks, because their aggregate and initially synchronized sending behavior can cause congestion that would not have been present if fewer parallel connections had been used. nit: \"Using larger number of multiple connections\" doesn't seem right, and possibly in more ways than just the singular\/plural mismatch. Editors: URL [x] trailers and chunked The only general comment I have is that [Semantics] did such a good job of portraying the trailer section as a generic concept that I was surprised to see it presented as specific to the chunked transfer-encoding in this document. It seems to me (naively, of course), that when the content can accurately be delimited, whether by Content-Length or the chunked transfer-encoding, a trailer section could be read after the request or response and clearly distinguished from the start of a new request or response. I recognize that we have a significant deployed base to be mindful of backwards compatibility with, and so do not propose to recklessly add trailer sections everywhere. It might be worth some more prominent acknowledgment that in HTTP\/1.1 the trailers section is limited to the chunked transfer-encoding, and discussion of why trailers are not usable in other HTTP\/1.1 scenarios, though. Editors: The HTTP\/1.1 message format does not allow adding trailers as a section following the message body. Trailers is allowed in chunked because the content itself is encoded into a format that concludes with an optional trailer section (within the message body). There is no possibility of generalizing this further for HTTP\/1.x. [x] 2.2 trick a server into ignoring that field line or processing the line after it as a new request, either of which might result in a security vulnerability if other implementations within the request chain interpret the same message differently. [...] Given the previous procedure that gives as a permitted behavior to \"consume the line without further processing\", it seems like an attempt to get the server to ignore the field line would have succeeded if this procedure is followed? I suppose the important difference is that the field line is completely suppressed from any version of the message transmitted downstream, thus avoiding the opportunity for a different interpretation. Regardless, though, it seems like the text of the guidance as written (not quoted above) reads like it is setting us up for vulnerabilities in the presence of non-compliant (or HTTP\/1.0?) implementations in the request chain. We might want to put in a bit more explanation of how the stated procedure avoids the vulnerability. Editors: [x] Section 3.2 400 (Bad Request) error or a 301 (Moved Permanently) redirect with the request-target properly encoded. [...] (I assume 301 rather than 308 was an intentional choice for maximum compatibility with old\/broken clients.) Editors: This requirement existed long before 308 was minted. 
In any case, 301 would be preferred since we don't want to assume too much about why a non-GET request method is being used with an invalid request-line (usually meaning that the client failed to encode spaces in the hypertext reference). [x] Section 3.3 secured connection is inherently unsafe if there is any chance that the user agent's intended authority might differ from the selected default. A server that can uniquely identify an authority from the request context MAY use that identity as a default without this risk. Is the contents of the TLS SNI extension sufficient request context to uniquely identify an intended authority? Editors: Maybe. Using a certificate that is specific to that authority (one that can't be shared by multiple origins and thus was definitely chosen by the client) would be sufficient. We don't need to define this further. [x] Section 5.1 does not include any leading or trailing whitespace: OWS occurring before the first non-whitespace octet of the field line value or after the last non-whitespace octet of the field line value ought to be excluded by parsers when extracting the field line value from a field line. I have in general tried to refrain from commenting on the extensive use of the phrase \"ought to\" in this group of documents, but this particular scenario seems like a strong candidate for a BCP 14 keyword. Editors: [x] Section 9.8 closure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] This is written as if it's imposing normative requirements on generic TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... Editors: Rewritten by URL [x] Section 9.8 This SHOULD only be done when the application knows (typically through detecting HTTP message boundaries) that it has sent or received all the message data that it cares about. ...whereas this SHOULD does not have an obvious analogue in RFC 8446, and thus it would make sense to retain the BCP 14 keyword for. Editors: None of that reads well (it was imported from RFC2818). This is better rewritten as a factual statement of when it knows, as committed above. [x] Section 4 interpreted in light of the semantics defined for that status code. See Section 15 of [Semantics] for information about the semantics of status codes, including the classes of status code (indicated by the first digit), the status codes defined by this specification, In some sense it seems that the referenced status codes are defined by [Semantics], not \"this specification\". I was initially going to propose (in my PR) a change to \"defined for\", but that seems incorrect and I don't have a better proposal handy. Editors: URL [x] Section 5.2 (i.e., that has any field line value that contains a match to the obs-fold rule) unless [...] Since we don't include the obs-fold production as a component of any other production, and field-value excludes CRLF, it seems that any such field line value would already be in violation of the ABNF and thus forbidden. 
I don't really want to advocate for including obs-fold in the field-value production in -semantics, though, so maybe accepting this nit is the least bad choice here. Editors: I believe this is more than a nit. Opened URL [x] Section 9.2 MUST maintain a list of outstanding requests in the order sent and MUST associate each received response message on that connection to the highest ordered request that has not yet received a final (non- 1xx) response. \"Highest ordered\" implies some numerical rank-list of ordering, but we don't seem to clearly indicate whether older or newer requests receive higher numerical indices. It seems simples to just say \"oldest\" (or \"newest\", if that was the intent) rather than applying numerical ranking. Editors:\nclosure alert is received, an implementation can be assured that no further data will be received on that connection. TLS implementations MUST initiate an exchange of closure alerts before closing a connection. A TLS implementation MAY, after sending a closure alert, close the connection without waiting for the peer to send its closure alert, generating an \"incomplete close\". [...] TLS implementations (not placing restrictions on what TLS implementations are suitable for HTTPS). Fortunately, these \"MUST initiate\" and \"MAY close without waiting\" requirements seem to already be present in RFC 8446... I'm not a TLS expert. That said, if this is correct, would be lowercasing the two keywords be sufficient?\nURL is for section 9.8\nChange for 9.8 LGTM (as what might pass for a TLS expert)."} +{"_id":"q-en-http-core-044702f581fac8495ffdcf3b8ad805ce71717fa7c2f1568a716792ed5f0ae6e2","text":"In semantics 7.7: I found where (in the discussion of normalization in §4.2.3) we say to replace the empty path with \"\/\" for non-OPTIONS requests. I couldn't find anywhere \"above\" where it was noted to replace an empty path with \"*\" (presumably, for the OPTIONS requests), though.\nThis is a reference to text that was moved to HTTP\/1.1 because it is version specific. I suggest we replace it with\nLGTM"} +{"_id":"q-en-http-core-ee2ac06614bf06b98e49783f68cbec6040fb1edfe8a0fb042650d325f254cd3a","text":"This works well on multiple levels because these fields are not about the request or response context, but rather about the message itself. Date is now defined early and close to the date value definitions of 5.6.7. Trailer is now defined just after the description of trailers. Also, it avoids changing any xrefs from HTTP\/3. Only the section heading and intro are new -- the moved text has not been changed.\nOops, that wasn't intentional. The Trailer field is the main one that goes both directions. It should really be in a different section. Date is the other.\nI think we should move it to 6.6 (in the message abstraction, just after trailers are defined).\nI also forgot that Date can be sent in requests. How about a 10.3 section on \"Generation Context\" that would contain Date and Trailer? I am a little worried that this might change the section numbering for HTTP\/3, so we should check that first.\nI checked HTTP\/3 and there are no references to section 10 of [SEMANTICS].\nLooks good, want to see it in situ tho"} +{"_id":"q-en-http-core-cff0c6f5aa2d67a20b827e709c22c56623374d11b4b8121a6977cb849169d843","text":"Note that this also rephrases the SHOULD requirement, but doesn't change its effect.\nAs mentioned in ... 
>Section 12.5.5"} +{"_id":"q-en-http-core-a53cd80b34710ed8e1579ce1224189b327aef438b350f6d4db9a3fb6f3664917","text":"From : without a concept of modification time. (Section 13.1.4) I couldn't really locate which text was supposed to be providing this clarification.\nRe: Appendix B.4 -- This text originally referred to the resolution of , but that was subsequently overwritten when we aligned the way we specified conditionals. It probably needs to be re-introduced (and perhaps looked at for the other conditionals)."} +{"_id":"q-en-http-core-c5561685831c0424c3b4e099483baf8a1abf9ce4414464749c6d9bf3984d683e","text":"In Section 7.3.3 (as noted by ) If no proxy is applicable, a typical client will invoke a handler routine, usually specific to the target URI's scheme, to connect directly to an origin for the target resource. How that is accomplished is dependent on the target URI scheme and defined by its associated specification. This document is the relevant specification for the \"http\" and \"https\" URI schemes; a section reference to the corresponding procedures might be in order.\nThis was already implied, but it does read better if we close the loop on routing by sending the request message with the identified request target."} +{"_id":"q-en-http-core-a94caf3d46d910a396787c2f68ad4aa2d9f0d6d1d3544512555c1f79cb5c18ac","text":"For\nRegistration requests consist of at least the following information: [...] Specification document(s): Reference to the document that specifies the field, preferably If the registration consists of \"at least\" a group of information that includes a specification document, doesn't that mean the policy is always \"Specification Required\", not just for permanent registrations?\nI think the intent here was to encourage but not require a specification for provisional registration. Will PR."} +{"_id":"q-en-http-core-c12f4ab4b606d402811ece1673987675446447c4abb92261c66111b4d3dd22b8","text":"This goes back to URL\nSimplest possible fix would be to mark it as allowed in trailers as well."} +{"_id":"q-en-http-core-1fc55bd19c5fde9ad7f68ce182ba9e8b0df10debca956c5ec67deb0be7eb42e1","text":"Me confused. Which PR is relevant now? Both?\nThe other one is an evolution of this one.\nWe use the phrases 'selected representation' and 'selected response' in Semantics and Caching (respectively) to refer to two completely different concepts. Can we do better?\nNo\nI think the two concepts match pretty well (and are used correctly in the four sentences). However, I also think that \"chosen response\" would be okay given it is only used in four places. I really don't think it needs a separate definition, since it means what the two words mean. So, maybe.\nIt was actually used in a few more places as the response selected. I think I found all of them.\nSee ."} +{"_id":"q-en-http-core-d79595d678140de751474da1d699fd07c112cd3a169c038e02f515952b70c72f","text":"… (now entirely handled by HTTP\/1.1 messaging). Include LF in discussion of what characters are invalid in field values. Require obs-fold to be processed when consuming message\/http data.\nCiting Ben Kaduk (URL): (i.e., that has any field line value that contains a match to the obs-fold rule) unless [...] other production, and field-value excludes CRLF, it seems that any such field line value would already be in violation of the ABNF and thus forbidden. I don't really want to advocate for including obs-fold in the field-value production in -semantics, though, so maybe accepting this nit is the least bad choice here. 
In RFC723x, field-value allowed obs-fold. We need to: tune the prose above, and somehow properly allow it in message\/http (maybe by modifying the field-value for that specific case???)\nsee also URL\nThis is a good change. A few things that might be worth considering, when comparing this to recent HTTP\/2 changes. HTTP\/2 prohibits additional things in field names: SP (0x20), COLON (0x3a), 0x01-0x1f, and 0x7f-0xff (inclusive), specifically. The other things in h2 are specific to that. That includes leading and trailing space in values (you just say \"A field value does not include leading or trailing whitespace\") and uppercase field characters are version-specific."} +{"_id":"q-en-http-core-1e0b6d8200da133ff6d658e5bd89b8c9fe2d0cab1ded5a2dd62be7dfe2dd90c3","text":"Reviewer: Marco Tiloca Review result: Ready with Nits Thanks for this document! I have found it very well written and I believe it's basically ready. Please, see below some minor comments and nits. Best, \/Marco [X] [Section 1.2] As to \"absolute-path\", it is more precise to point to Section 4.1 of [HTTP]. Editors: URL [X] [Section 3] \"HTTP does not place a predefined limit on the length of a request-line, as described in Section 2 of [HTTP]\" This can better point to Section 2.3 of [HTTP]. Editors: URL [X] [Section 3.2] \"A client MUST send a Host header field in all HTTP\/1.1 request messages.\" This sentence can be expanded to point to Section 7.2 of [HTTP]. \"... excluding any userinfo subcomponent and its \"NAME delimiter ...\" This should point to Section 4.2.4 of [HTTP]. Editors: URL [X] [Section 3.3] \"Alternatively, it might be better to redirect the request to a safe resource that explains how to obtain a new client.\" Is \"client\" actually the intended word here? Or is it about using redirection to explain the client how to obtain something else (e.g. a proper authority component for a follow-up request) ? Editors: client is the intended word here: this case is only possible if the client sent an HTTP\/0.9 or HTTP\/1.0 request without a Host header field, which should not be in any deployed client since 1995. [X] [Section 7.1.2] I believe it's better for the reader if the last paragraph is split into 2 or 3 sentences Editors: [X] [Section 9.8] \"When encountering an incomplete close, a client SHOULD treat as completed all requests for which it has received ...\" Shouldn't this be about received responses? Or does it refer to the completion of the exchanges started by the mentioned requests? Editors: yes, it's about the completion of the exchanges. [X] [Appendix A] The first paragraph can better point to Section 5.6.1 of [HTTP]. Editors: URL The reference for \"absolute-path\" should be Section 4.1 of [HTTP]. Editors: URL\nLong indeed. NAME - do you want to give that a try?"} +{"_id":"q-en-http-core-1f1a6defb55b37706079b0d86af8c9fc9a4d50b6ed1ae7ecbc13f2e97d44a6f3","text":"The text in the document says: The Media Type Designated Experts' suggested to make that stronger, and have the following note on the : Let's agree on either \"discouraged\" or \"prohibited\", and align the text in the document and in the IANA registry note.\nI believe what the IANA registry says is good, so what's left to do is to align our spec text with that. My understanding is that no new text in the IANA Considerations is needed, as the change to the registry already happened.\nIf we agree to add the reference to this RFC in the IANA note, once the RFC is published, I can ask IANA to do that.\nGood point. 
We could add that to the IANA Considerations. Should I?\nI think that would make sense. Let's hear from others."} +{"_id":"q-en-http-core-d624e2dcf87b6c022e062339d9bcc3d3515a7f85df21d6a2ae1414c1f2e56f02","text":"Inspired by my ballot comments ( ) I started trying to write something about how it's required to send \"close_notify\" in normal connection shutdown, but didn't like how that worked, and ended up here. I am not sure if I regressed in some other axes while improving on this one, though."} +{"_id":"q-en-http-core-9ba7334e6fd7574a6c239cf669ddd54dcdbb16cb672fe85e4eb8b43af768d0d2","text":"The follow-up discussion on indicated that the nature of the updates to RFC 3864 is essentially a \"partial obsoletes\", which means that we do not want to be part of BCP 90 even if we could -- we're carving off a piece of it and branching out on our own. This PR attempts to reword the description slightly to indicate that we're not just tweaking part of RFC 3864, but rather replacing part of it (the parts that relate to HTTP), hopefully without bloating the text too much."} +{"_id":"q-en-http-core-a543292387219ce7b91cd181af46c3170abe915b7a918d43fd3317735d25d365","text":"Inspired by some discussion on : while just the word \"security\" is indeed used in marketing literature for proxies, it's meaning to different parties is so varied so as to not really convey much useful information. In the IETF we try hard to be clear about what security provides we need and\/or provide, avoiding the vague catch-all term. However (as I am reminded out of band), the audience of this document is not exactly security experts, and it is easy to go overboard trying to be precise, at least in this instance. [actual commit message body retained below] The mere act of inserting a proxy into the chain does not, in and of itself, do much of anything for the security of the system (and a badly implemented proxy can make the security of the system much worse). A proxy can, however, provide security services such as auditing access, annotating content from untrustworthy sources, exfiltration avoidance, etc. It is these services that are the security-related motivation for using a proxy, so say that \"security services\", rather than just \"security\", are being provided by the proxy."} +{"_id":"q-en-http-core-0e9cd2030ecc1db6d81002a1da93dce1414d291242bc3b35ada2161c8451a8f2","text":"It may or may not be useful to have an HTTP extension in order to perform reactive negotiation -- one possible approach that does not require an extension would be policy in the user-agent. In the spirit of \"less is more\", and to not overly constrain our forward-looking statement, just remove the phrase \"as an extension\". (inspired by follow-up discussions on )\nI almost made the same edit earlier."} +{"_id":"q-en-http-core-47e2fae8c073e79f912c4df9b13fbcd5501f92725e7f0ac7402b5cf6605d7e88","text":"The current text of \"ought to ... where possible\" is easy to read as saying \"if you have the technical capability, do it\". There are, however, some subtleties, such as if you get the 308 response over http-not-s, in which case the question of whether or not to re-link is not so clear-cut. Change from \"where possible\" to \"where appropriate\" to hint that there is some logic needed here beyond \"blindly accept\". (inspired by follow-up discussions from )\nI like Roy's version better than mine :) I guess I was too conservative...\nI think that begs a follow-on question about when it is appropriate. 
I would be more comfortable with changing the entire sentence to be The change itself is good, but it needs to be done for 301 as well."} +{"_id":"q-en-http-core-fd8c1a581af787c453b0cde81fc05946062fe9ea6b8fdc4fbc784a83825a7fc5","text":"Inspired by follow-up discussions on , make a concrete proposal for how the RFC 3986 security considerations might be mentioned in our own security considerations. I understand the editors' stance on adding substantial new text at this point, and will defer to them on whether this constitutes \"substantial\"."} +{"_id":"q-en-http-core-9dde8ae30e22a5719eba7d8358082320d013592b1d0e9ed24a72abf98c6f4e5b","text":"note in message-headers registry above move of HTTP registry in message-headers registry, remove obsolete note about HTTP field name registry note in new http field registry"} +{"_id":"q-en-http-core-7000e8591d5cd3d6b6ff4fc5cc33729d62d86f63f79580fcaddd6d04b82c30c3","text":"Should we mention this on the mailing list?\nSure.\nspecified that identity content-coding could be made unacceptable (e.g., indicating very strong preference for response compression) by means of an Accept-Encoding request header field value like or without an identity-specific preference, and then recommended that the server SHOULD respond with 406 (Not Acceptable) when it cannot send an acceptable response. But and the update that last point to \"SHOULD send a response without any content-coding\", presumably primarily intended to avoid sending 406 (Not Acceptable) when the acceptability of identity content-coding is unspecified. The text still includes a description of when identity content-coding is unacceptable and still accommodates a \"compression is mandatory\" interpretation of , but seems to strongly discourage it in the sense of (\"there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course\"). I suspect this to be unintentional, and would propose clarification in semantics: In the absence of such clarification, the text describing when identity content-coding is unacceptable seems to be largely pointless.\nI agree that this is not optimal, but the proposed text would change the protocol from what is has been since RFC 7231. Given that the current draft is approved and in the publication queue, I don't think that a change is possible or even desirable.\nThe move away from sending 406 was intentional in RFC7231. There is no implementation support for identity;q=0 overriding basic interoperability (200 response without encoding), unlike Accept for unwanted media types, because encodings are often applied late in the request handling process based on the server's analysis of whether they are efficient\/possible. Hence, clients simply list the acceptable encodings and variance in the field values cause unwanted cache misses. The feature only made sense for non-compression encodings. This does not prevent a server from responding with 406, but doing so is less interoperable (without some additional knowledge about why the client is asking for that) than sending no encoding.\nIf it is indeed intentional for absence of content coding to always be considered acceptable, then what is the purpose of keeping (emphasis mine)?\nI meant that the move away from 406 was intentional. Suggesting that \"identity;q=0\" not be respected was not intentional, but is also not implemented AFAIK (It only makes sense for encryption-related encodings). 
I would suggest a less normative fix, like and just send that now to the RFC editor as a clarification.\nGreat, thanks!\nNAME - are you going to create a PR for this?\nLGTM"} +{"_id":"q-en-http-extensions-10a034cf6bd057c4cf56d94334724b8b1d9280b9428128bb95546b81812d8064","text":"NAME - can you review?\nThe client hint spec does not seem to specify if there's a maximum lifetime for the origin opt-in preferences. There seems to be spec text that the data should be cleared when site data, cookies, cache, etc. are cleared, but barring that should there be some sort of reasonable limit?\nI'm new to this process; I'm not sure how to add the \"client-hints\" label\nNAME thoughts? You were the one pushing for an implicit lifetime. Do we want to cap it?\nI think that what you should say is that the browser can discard this state, but it should do so only when it drops all other state for the origin (e.g., when clearing cookies). Modeling this as part of origin-bound state like that is far easier to reason about and more reliable for sites.\nWe already have \"Implementers SHOULD support Client Hints opt-in mechanisms and MUST clear persisted opt-in preferences when any one of site data, browsing history, browsing cache, or similar, are cleared.\" Do we need to also add something around origin-bound state to that?\nAs Safari is already capping script-set storage lifetime to 7 days (Brave does w\/ cookies, plans to do for all storage too), and some other vendors have hinted at the same, I suggest a max lifetime of 7 days. Especially given the possibility that these values can be tracking vectors (discussed in the fingerprinting conversation).\nThe text above means that browsers that choose to set max lifetime to storage\/cookies will MUST do the same for Client Hints. I don't think we need to bake in the specific max age that some UAs chose into an IETF draft.\nI hear what you're saying, but the purpose of a standard is not to say \"some people will act X and some folks will act Y\", it's to bring X and Y as close as possible. What is the use case for a lifetime of > 7 days? Like any cache value, its perf utility decreases as lifetime increases, but the privacy risk stays constant. Put differently, what is the concern w\/ a maximum value, to at least partially constrain the privacy risk?\nSounds like the restriction you're talking about is of larger scope than Client Hints, and applies to origin storage in general. I suggest you'd try to enforce those specific values e.g. on session cookies, and Client Hints will follow out of that. At the same time, I was under the impression that when we , we tied the lifetime of Client-Hints persistence to session cookies, but looks like we did not. I'll submit a PR on that front."} +{"_id":"q-en-http-extensions-f774e802ed9ebe718c1504bf72e7c58032419a3fe791307bc832169eae461622","text":"An alternative to - removes HTAB everywhere, replaces with .\nRight now we use OWS, and suggests moving from SP to RWS for inner list separation. Should we allow tabs in SH? Error on them? If we want to allow them, 983 makes sense, as it's more consistent. However, we'll need to test for tabs...\nWhat's the reason to allow tabs? Compatibility with current headers? Do actual tab characters show up in the wild that often?\nWe had them in there because we were reusing OWS, not purposefully. AFAICT they don't appear in the wild much at all. So +1 for .\nLGTM\nThank you for working on this. 
LGTM."} +{"_id":"q-en-http-extensions-b1f883d220c28df51c47505eb20d5791368bc47fe946a0a34f3fdbb695c991f2","text":"... and allow token to start with \"*\".\nI'm running my SH implementation against the HTTP Archive dumps (400,000,000+ HTTP requests) to see how the from BSH parse in something like the real world. It looks like some headers are failing because AcceptAccess-Control-Allow-Origin`. If we were to use a different character to delimit Byte Sequences, we could include in Token (necessarily as a first character, as well as in the rest, I assume). Worthwhile? If so, what should we use for Byte Sequences?\n(via ) already allows , but not as a first character. '%' is short in HPACK and unlikely to trigger funny handling, but maybe it will conflict with pct-encoding to start. '=' has the virtue of being already allowed. It has the drawback of causing byte array parameters to look funny ( for instance). Interestingly, it makes removal of padding easier (base64 padding is silly).\nHow about ? I don't' think is a good idea; if an implementation enforces padding (which we caution against, but do not prohibit), it's confusing.\nChecking with NAME NAME - any issues changing byte sequence delimiters from to ?\n(the only conflicting use that comes to mind ATM for is that if someone wants to refer to H2 pseudo-headers like in a field that uses tokens, they'll be out of luck. I think that's OK, because existing fields that take header names as a list-of-tokens don't refer to pseudo-headers, and new fields can be defined as lists of tokens-or-strings.)\nis free"} +{"_id":"q-en-http-extensions-4dc5e44f2b81763bc6e13245281352387f94d74c671eeea91b15d9bb196831b6","text":"(Related to now-closed ) Should implementations be able to store parsed floats in IEEE754 binary64? And if so, should we expect that they round trip between parsing and serialization? It seems like the current limits were designed to support that notion, but there are cases where they do not. (Note, this isn't about being able to serialize arbitrary C++ doubles; the spec sensibly only supports a tiny subset of all possible 64-bit floats. This is about whether should always be true) IEEE 754 guarantees that if a string with no more than 15 digits is converted to a double and then back to a string, it should result in the original string, but that appears to require rounding in some cases; truncation, which the serialization algorithm calls for, can change the string representation. Parsing the string \"12345678901234.1\" as a float, for instance, results in a binary64 representation with value 0x42a674e79c5fe433, whose exact decimal expansion is 12345678901234.099609375. Following the text of the serialization algorithm, will be 1, and so the number will be serialized as \"12345678901234.0\" It appears, as an implementer, that the only way to ensure that a float parsed from a valid string can be serialized back to the original string is to store it as a fixed point decimal in two parts, or else use a float with >64 bits of precision for storage.\nWould it be adequate to change truncation to rounding in step 9 of Serializing a Float?\nThat's a can of worms. Which rounding? (nearest-or-even, nearest-or-away from zero, to zero, up, down?)\nI know nearest-or-even is the recommended rounding technique. Can we specify that?\nRound, nearest-or-even, to fractionaldigitsavail digits? While we're at it, should the spec be stricter about trailing zeros? 
My implementation currently strips any trailing zeros after the first position past the decimal point (so 1.230000 is serialized as 1.23, but 1.000000 is serialized as 1.0, as is 1.0000001, after rounding\/truncation), which passes the test cases in URL The spec just says \"at most fractionaldigitsavail digits, but that allows multiple \"correct\" serializations of a given float.\nIt's also weird to only require roundtripping for floats that can be represented in binary64. The simplest might solution here might be to just accept that floats are lossy. If an implementer needs precision, use a string. This trade-off seems to work well enough for JSON parsers.\nThe spec already restricts floats to those which can theoretically be round-tripped into a binary64 and back; this is just about ensuring that that actually happens, if a binary64 is used as the backing store (which is natural for at least C++ and JS implementations)\nnearest-or-even SGTM. Stripping trailing zeros as well as leading ones in the integer component sounds good as well. I'll do a PR...\nPTAL at the commit. I've added a test for this (\"tricky precision float\") -- NAME are you using the tests?\n\/me reads above - yes you are, sorry for the noise.\nI do not think we should make stripping trailing zeros decimal digits a strict requirement, the number of decimals presented can be out-of-band metadata about the actual precision involved. For instace a timeinterval serialized as 3.0 vs 3.000 vs. 3.000000 might be used to communicate the precision of the measurements as tenths of seconds \/ milliseconds \/ microseconds respectively. But we should probably recommend stripping trailing zero decimal digits for bandwidth reasons, if there is no communicative reason to retain them.\nIf someone wants to use the precision of measurement that way, it'd need to be available in all implementations to be interoperable -- i.e., we'd have to require implementations to preserve that precision. I'm pretty dubious on that being possible and correctly implemented across the board. E.g., requiring every implementation to parse \"0.42000\" and make the trailing zeros correctly available.\nIf we're going to round, we probably need to do it before serializing the integer part, since the rounded fractional part may round up to 1.0, as well.\nFor floats, I think there are two things that we want from this spec (and please let me know if that's not right): Any string which parses as an float can have that float's value stored in a binary64, and serializing that value should reproduce the original string. (modulo some number of 0's at either end) Any real value (regardless of storage implementation) in the range -(1e14-0.05) to (1e14-0.05), exclusive, should serialize to its closest valid sh-float representation. (rounding ties towards an even least-significant-digit) Is that reasonable?\nDiscussed in Singapore: will discuss on list, but sense in the room is to have a fixed point. Hum in the room prefers fixed point. (Need to still determine number of decimals; q-value uses 3).\nSpecifically pinging NAME NAME NAME for thoughts re: moving float to fixed-point (straw-man: three digits). This seemed to have strong support in the room, but wanted to make sure you're comfortable with it.\nWould fixed point be materially different that what is currently specced? 
Currently, the spec implies a fixed-point with 6 digits of precision, and a maximum of 15 total digits, which (not at all coincidentally) is the same range that is guaranteed to round trip through an IEEE-754 binary64 representation, which is useful for actual implementations. Edit: I see the three-digit difference now; would the integer portion be restricted to 12 digits in that case?\nYes.\nI don't have a strong opinion on switching to 12+3 fixed; I think it would cover all of the use cases that I can currently imagine (and I'm trying hard to discount any sunk costs from implementing the previous draft :stuckouttongue:) If implementation complexity (or perceived complexity) factors in to the decision at all, Chrome's current sh-float serializer, which passes the public test suite, is\nIf people think it's useful to have a fixed-point number, and can settle on the optimal number of digits for most of their use-cases, then\nThe (only!) two \"real\" reason we have float\/decimal in the first place is values and subsecond resolution time-{intervals,stamps}, and the one \"unreal\" reason is that JSON offered that. I think has three decimals, either by norm or convention. Time-intervals such as \"process-time for this request\" are often sub-millisecond, and that has by and large been the argument for \"at least six decimals\". Time-intervals do not convincingly overcome Gettys 1st rule[1] or Phil Karltons rule[2]. so 12.3 would be fine with me. [1] Do not add new functionality unless an implementor cannot complete a real application without it. [2] The only thing worse than generalizing from one example is generalizing from no examples at all.\nThese days at least, it's by specification. It's a number in the range 0.000 to 1.000, and all algorithms I know of that use it would be just as well served by an integer from 0 to 1000. So the only reason to include the dot is if we want to backport existing headers as-is. Not backporting existing headers was a justification for some decisions earlier on, so if that's been reversed, should we revisit some of those other decisions?\nAlthough thinking more about it, most of the earlier decisions are sort of OBE, so it's maybe not such a big deal.\nPTAL"} +{"_id":"q-en-http-extensions-d24d3fca53ac8f99d15a1b4c2a807e2b74d34db62176584bc0b7b8c802a91d3b","text":"Missed in\nI'm running my SH implementation against the HTTP Archive dumps (400,000,000+ HTTP requests) to see how the from BSH parse in something like the real world. It looks like some headers are failing because AcceptAccess-Control-Allow-Origin`. If we were to use a different character to delimit Byte Sequences, we could include in Token (necessarily as a first character, as well as in the rest, I assume). Worthwhile? If so, what should we use for Byte Sequences?\n(via ) already allows , but not as a first character. '%' is short in HPACK and unlikely to trigger funny handling, but maybe it will conflict with pct-encoding to start. '=' has the virtue of being already allowed. It has the drawback of causing byte array parameters to look funny ( for instance). Interestingly, it makes removal of padding easier (base64 padding is silly).\nHow about ? 
I don't' think is a good idea; if an implementation enforces padding (which we caution against, but do not prohibit), it's confusing.\nChecking with NAME NAME - any issues changing byte sequence delimiters from to ?\n(the only conflicting use that comes to mind ATM for is that if someone wants to refer to H2 pseudo-headers like in a field that uses tokens, they'll be out of luck. I think that's OK, because existing fields that take header names as a list-of-tokens don't refer to pseudo-headers, and new fields can be defined as lists of tokens-or-strings.)\nis free"} +{"_id":"q-en-http-extensions-c0c92e63b48d7aecacc164a4bf9f0bf83f29882e158000f9391e857e6c827541","text":"Explicitly disallow thirteen digits before the dot.\nNAME I think this is the right way to validate the number of digits in a decimal."} +{"_id":"q-en-http-extensions-a78febcd5e9e98aef7dafcad9ebef6b13b558bb6eccdb5033895f2a7bc3fc239","text":"This patch replaces with , and corrects inadvertant usage of non-example domains in the hopes of avoiding confusion. Closes httpwg\/http-extensions.\nNAME mind taking a look?\nLanding this. NAME if you object, let me know.\nThere are still com TLD remaining: and\nis real domain. (currently not owned) seems better to use example domain in rfc.\nShould we use the .example TLD or stay with URL It's easier to contextualize with .example, in this case webmail.example and projects.example.\nSGTM in projects.example, webmail.example etc also URL are fine too."} +{"_id":"q-en-http-extensions-50daa676591d27d530c490b691cd72cbc1577bced731133cb6d77e1d20442a05","text":"This patch alters the cookie parsing algorithm to treat as creating a cookie with an empty name and a value of \"token\". It also rejects cookies with neither names nor values (e.g. and . Tests in web-platform-tests\/wpt. Closes httpwg\/http-extensions.\nNAME Ping?\nLanding this based on the discussion in . NAME If the wording needs more cleanup, I'd appreciate feedback!\nThe cookie-pair ABNF rule still requires presence of . Is it intentional?\nAs a part of discussion in whatwg\/html I've made a of the modern browsers compatibility with the RFC 6265. It appears that all browsers nowadays allows cookies without key or (in case of Safari) without value, thus making it de facto standard. However, it's debatable how cookies like should be treated: as the cookie without value or without key. Thinking of cookie jar as some kind of key\/value store makes it more logical to treat such cookie as cookies without value, but on the other hand, currently most implementers treats them as the cookies with the special empty key.\nDiscussed at Berlin IETF; Julian points out that we should check previous discussion in cookie WG. Martin points out that we should talk to the implementation that has divergent behaviour.\nContext: around URL, ticket in URL\nFor clarity, are we saying Chrome, Firefox, and Edge treat such a cookie as key: \"\" and value \"foo\", and Safari treats it like key: \"foo\" and value: \"\"?\nPer the table linked in OP Safari serializes input as whereas others serialize as . Firefox attempted to be more strict on cookie setting lacking and found it broke a router: URL\nURL is the relevant test, in particular the following subtests: sends . Chromium stores a cookie with an empty name and value of , serialized as . Firefox does the same. Safari drops the cookie entirely. Given Firefox's experience with and related bugs above, this seems like it might be baked into some user-visible sites that are unlikely to update. 
I'd suggest that we change this test expectation to , and update the spec accordingly. sends , , and . Chromium stores three cookies: , , and , serialized as . Firefox and Safari store two cookies: and , serialized as . The latter is clearly correct and matches the spec; filed URL to change Chromium's behavior. sends , , and . Chromium stores three cookies: , , and , serialized as . Firefox does the same. Safari stores two cookies: and , serialized as . If we change 0004 above, then we'd update this test as well to match Chromium and Firefox. sends , , and . Chromium stores three cookies: , , and , serialized as . Firefox does the same. Safari also stores the same three cookies, but serializes them as . I think this ordering problem is similarly represented in and , and is something that can be dealt with separately from this issue. sends and . Chromium stores , and then overwrites it with . Firefox stores and ignores the empty cookie. Safari ignores both (as per 0004 and 0021 above). I think Firefox's behavior is most correct here as a logical consequence of the spec change to fix 0004, and I believe Chromium will match it once the bug filed for 0020 above is addressed. sends and . Chromium stores , and then overwrites it with . Firefox does the same. Safari ignores both. If we change 0004 as above, then Firefox and Chromium's behavior will match the spec, and the test should be updated. I'll put up a PR against the spec if folks are on board with the suggestions above.\nPutting together a , I ran into more excitement: 0024 (, ), 0025 (, ), and 0026 (, ) show that Firefox overwrites with , and serializes it as \"\". 0028 would show the same, but it's incorrect in the WPT repository; it should contain a tab character, and the expectations are simply wrong (also in the original repo). I'd suggest that this behavior doesn't make much sense, and that it would be better for browsers to align on requiring cookies to have either a name or a value, and to ignore headers that parse with empty names and empty values. That would change the expectations for these four tests to expect a serialization of \"Cookie: foo\". WDYT, NAME NAME\nFor clarity: https:\/\/chromium-URL shows the test expectations I'd suggest we end up with.\nTl;dr: I think changing the spec to match current implementations for to produce (i.e. 0004 case) makes sense. I think disallowing (0020 case) and (0023 case) would make sense too, but that might break things. In the 0004 case: We've been parsing as an empty name with a non-empty value for a long time, afaict. Since Firefox seems to do the same thing, I think we should definitely update the spec to match the implementations (i.e. for a Set-Cookie line without a but with a valid token, treat the token as the value and, for cookies without a name, don't put a when serializing). In the 0020 () and 0023 (empty ) cases: Judging by Chrome metrics for Cookie.HeaderLength, there is definitely non-zero usage of empty cookies, so I'm concerned that rejecting empty cookies would break things... (In the 0020 case, one could also argue that explicitly sending a Set-Cookie header of indicates intent to create a cookie, even if it is empty.) I agree it's a bit silly to allow cookies with both empty name and empty value, though. I think that's a reasonable change to make to the spec, and maybe we don't care about potentially breaking things.\nThanks, NAME I agree with you for the 0004 case. NAME would Apple be willing to change CFNetwork's behavior to match the suggestions above? 
For 0020 and 0023: Looking at the stable channel on for the 28 days ending on Dec 31st, 0.0025% of headers were empty, spread over 0.12% of users. This is a pretty small number in total, and a pretty small number per user; certainly within the range of changes we've been successful with in the past. I wouldn't be surprised if it broke something (because everything we do with cookies breaks something!), but I would be surprised if it broke anything critical enough to retain the behavior. It would signify an intent to create a cookie without a name and without a value. What would that intent mean? I'd like it to mean \"Don't create a cookie.\" :)\nI'm okay with your proposed changes. cc NAME NAME NAME\nURL should cover the proposed changes above. (I kinda want to rewrite the whole parsing algorithm in terms of Infra to be a bit more precise, but that's a task for another patch. :) )\nI’ve pinged the relevant folks to take a look at this thread. Like me, many won’t be back until Monday.\nNot sure if it's the same thing, but I've seen facebook send an empty header frequently. For example visiting URL loads a 1 pixel gif with the following URL: URL (replacing XXXX with your unique tracking reference) with the following response headers (note the blank header, third from the bottom): This then gets set in the Chrome Cookie jar with the following: ! I can't seem to find this showing in Firefox's cookie jar (even when turning off the default Enhance Tracking Protection) nor in Safari. Anyway, I'm sure a good few of us wouldn't care if this \"cookie\" died a death, or if Chrome changed to ignore this \"cookie\", and not even sure if it DOES anything (it doesn't seem to be sent in follow up HTTP requests, but that's not to say it's not looked at by some local JavaScript). However, given the prevalence of Facebook tracking I'm really surprised these sorts of cookies are only seen by 0.12% of users so thought I'd comment to note this in case those estimates are not accurate? Perhaps the stats are not picking up these cases? I've seen this on many sites and only picked the Guardian as a well-known example and guessed first time it would be on there (news sites are good ones to pick when looking for tracking pixel examples!). URL sets the same if you prefer something US based and URL sets the same (but not on the main domain) for those of you in that side of the world.\nThe number of 0-byte Cookie headers is only a lower bound on how many no-name-no-value cookies are in use. If such a cookie were sent along with some other cookies, it would not be logged as a 0-byte header. I don't think we have any metrics on per-cookie length, just the total length of the Cookie header. So it might be worth collecting more data? Also, that metric doesn't include cookies accessed via .\nRegarding NAME comments on Facebook; I pinged someone there who will forward this to the appropriate team. It sounds like a bug to me, and not something that I think should stop us from making a decision here on the underlying question of whether an empty cookie is a thing we should support. NAME I read your comments above as generally approving of the change suggested in URL, but cautious about how to ship it in Chrome? That feels like an important part of an eventual \"Intent to Ship\", rather than a discussion of what we'd like the spec to say. 
If we decide to accept the suggestions above, then I'm pretty confident that we can work out how and when to ship it (especially given that there's no real agreement among browsers today, so the status quo is already trifurcated as per URL above). Indeed! You're exactly right, and I should have been more clear. The 0.0025% is the number of requests (not page views, FWIW) for which we'd shift from sending an empty header to sending no header at all. That seems to me to be a change more likely to invalidate an application's expectations than shifting from to . I think this is the easier case, actually, as the serialization of an empty cookie is indistinguishable from a empty cookie jar.\nThanks for sending on NAME . I more meant it as a concern that the 0.12% of users figure sounds way too low to me given Facebooks prevalence. Note you don’t need to be a Facebook user or visit a Facebook site to get this cookie - just a site that uses Facebook advertising pixels (of which there are many!). So I still have that concern that this figure sounds way too low to me. Which then begs the question as to whether there are more cases than we think there are? Now I agree that particular example sounds like a bug and I doubt this empty cookie is being used and so don’t think blocking it off will cause a breakage, but still, I’m uneasy that the stats seem wrong and that we are using this (potentially incorrect) stat to justify the low chance of breakage with this change. There might be other cookies on other sites that will cause breakage. To be clear, I don’t actually think it will cause a lot of breakage, but would rather than was based on a stat we are comfortable to stand behind than one that we have potential concerns with. Is there any way to double check that 0.12% stat given this example? Maybe a lot less users than I think are getting this cookie but given I’ve spotted it on 3 major news paper sites in 3 distinct countries, without having to look too far for these examples, makes me suspect that is not the case. says Facebook assets are being used on 2.38% of the most used 5.7 million sites and I’d guess they all come with some cookies for free. And that stat treats each of those 5.7 million sites equally when actual usage will be heavily skewed to the sites at the top end than the tail so usage will likely be more than that as top end sites are more likely to use Facebook advertising in my opinion. And again, I’m not concerned with this Facebook example - but more that it seems to suggest the 0.12% stat feels wrong. Or maybe it’s not being included since this empty cookie doesn’t seem to be sent in follow up requests, and that’s what that stat is measuring? If that’s the reason, and we’re comfortable with that explanation and the risks that entails (I am for what it’s worth), then that’s fine, but thought it worth asking the question.\nAs NAME notes above, we don't have statistics on cookie size generally; just aggregate numbers on header length. The \"0.12% of users\" statistic is the set of users who have ever sent an empty header (e.g. the only cookie they have for a given request is empty). If nameless\/valueless cookies are set for other sites, they'll be serialized as , which we don't track specifically. I noted above that I'm less concerned about this case than I am about the \"We used to get a header, but now we don't get anything at all!\" case, but I'm honestly not terribly concerned about either given the existing diversity in the way browsers handle .\nOK makes sense. 
Thanks for answering.\nYes, that's right.\nThe CFNetwork team approves of the change and is tracking the work in rdar:\/\/problem\/58358759.\nGiven that, I'd suggest that we land something like URL along with the tests from https:\/\/chromium-URL Thanks! If you'd like to improve compatibility with IE and Firefox, you might consider representing these as nameless cookies instead of valueless cookie. You can observe the difference via test 0027: Set-Cookie: foo Set-Cookie: bar"} +{"_id":"q-en-http-extensions-12c2d8c3be71db4275e476a35137c0d99536796b8bebf2477138008cafce4304","text":"NAME thanks for the review, see latest commit. is talking about the abstract model for dictionaries, so they are required to be unique. I've removed \"required to be\" so that people don't misread this.\nallows a sender to generate multiple header fields with the same name, if the field is defined as a comma-separated list [i.e., #(values)], and that such duplicates are semantically equivalent to them concatenated into a single header field. I think that this property has been useful for us, when annotating header fields. Can we allow such duplicates for Structured Headers too, when a list or a dictionary is being used? Note that we have merged based on the assumption that it is okay to require the consumer of SH header fields to have the knowledge on what the top type is.\nThat's always been the intent; are you saying that we should have text specifically enabling this for serialisation of dictionaries and lists?\nGood to know that that has been the intent. Yes, I think we should clarify that explicitly for dictionaries and lists, because RFC 7230 can be interpreted to allow duplicates only for those using notation (which we do not use in SH). Also, it might be a good idea to revisit the rule for duplicate dictionary values, as I'd assume that people would try to append a header field without checking the values that already exist. IMO, the rule should be that the dictionary element that appears later overwrites the one that precedes.\nPTAL\nNAME Thank you! cb76bf1 looks good to me. Is your intention to retain the forbidding of duplicate dictionary values, even when the dictionary members are split into different header fields? For example, in the hypothetical world where cache-control is sent using SH, all the cache-control directives in the following example would be considered invalid and ignored. I am not strongly opposed to that, though my weak preference goes to requiring decoders to use the dictionary member that appears later when there is a collision.\nNAME the problem is that ignoring duplicates can cause security issues for some headers (the headers that WebAppSec in the W3C are particularly sensitive about this). The only way to address it would be to add a flag to the API to control what to do with dups -- but that adds complexity (more to specifications than to implementations, but still). I'm not crazy about that; what do others think?\nNAME Thank you for the explanation. But I wonder if it is actually the case that what those specifications forbid is having duplicate header fields, rather than forbidding duplicate dictionary members (or maybe both). By quickly searching on URL, I was able to find the prohibition on having duplicate header fields, but could not find a rule that forbids having duplicate policy tokens. 
Assuming that at least such specifications forbid having duplicate header fields, and based on the fact that SH draft does not forbid that (when a list or dictionary is used), I think there is no reason for the SH draft to forbid having duplicate dictionary members.\nNAME NAME any comments re: the above? Specifically, if SH swallowed duplicate dictionary values rather than raising an error, would that be a concern from a security standpoint?\nGiven a header like , it seems like we have a few options: Fail parsing. Return . Return . Return . I think I can come up with arguments for any of these options, though I'd find 4 a bit surprising as a spec author, insofar as I'd have to deal with unexpected lists for all my dictionary values. of URL skips duplicate CSP directives. That is, it picks 2 out of the list above, choosing to retain only the first value for a given name. I do wonder whether that was the right decision, but it is consistent across browsers and provides developers with predictable behavior. We throw a warning into the console in Chrome for this case, and that's probably Good Enough.\nThis sounds like a bad API for working with structured header values. I'd hope that eventually we offer better APIs for operating on (what Fetch calls) header lists. I think historically there were some arguments for always using first or last based on likelihood of where header injection would happen. But I'm not sure how sound any of that rationale was. I don't think a client can reasonably protect against it. Therefore, what's most important is that the result is unambiguous and that it doesn't matter how you \"spell\" multiple values (commas or multiple fields or a combination) in the traditional header syntax.\nAnyone have preferences between Mike's options 2 and 3?\nNAME I think option 3 is preferable, because that would allow server administrators to override parameters by appending header fields. I agree that NAME that it's a hack, though sadly, I do not think we can get rid of such practice through API design on the server side. People are so used to dealing with each line of header fields than working through a API that deals with a set of header fields as a whole.\nThere are CDNs that offer JS or Wasm APIs effectively using et al. Once structured header values are more baked and have dedicated on-the-wire support I'd definitely like to add first-class support to objects.\nI weakly prefer \"first one wins\" (2). It's only an aesthetic preference, however: I don't have any principled reason to pick 2 over 3 or vice-versa. I can live with either for the reasons NAME spells out.\nIf I had pick, I probably agree with NAME reasoning.\nPTAL\nThank you for the changes. Looks good to me now. FWIW, I was bit surprised to learn that we do not mention how a duplicate is processed alongside the type definitions in . I checked and now I understand that the formal definition is provided in section 4."} +{"_id":"q-en-http-extensions-599e4d7c45111601f03cb1ee6b621219a558e8c46e01d22aea32a190a8578e67","text":"URL was incomplete, leaving a number of inadvertant references to sites, or the TLD. This patch, hopefully, is less incomplete. h\/t NAME and NAME for pointing this out.\nWDYT, NAME"} +{"_id":"q-en-http-extensions-0718a51eb8ffcf3a6530210b760f761a689c700d531b1bc18f50cae1aec409e1","text":"Allow \".\" in . Also consistently reorders character lists for key.\nI'm getting a high failure rate on parsing (but with a very low number of occurrences; it's not exactly a popular header). 
The failure seems to occur because of keys like this: Allowing in key would address this, but I'm not sure it's worth the change. Additionally, key currently only allows lowercase alpha; this is mainly to avoid confusion between and , etc. So far this seems to be pretty supportable in cache-control, accept- and elsewhere, with the vast majority of headers seeming to choose the lowercase form. However, it does cause issues in a couple of cases: In , the parameter isn't compatible, leading to a high failure rate for that header In cookies, this makes a number of things incompatible, but it's not clear that we can treat them as a direct mapping anyway.\nI have no opinion about dots. As to case, though; I think the options are: say lower-case only – the current option we should probably instruct individual headers' specifications to define how to get into\/out of SH, which is harder for extant specs open it up to anycase and be case-sensitive this means that parses to and if someone wants to collapse \/ they have to do it after the fact open it up to anycase and be case-insensitive still requires instructions for individual headers' specs to define how to get out of SH Incidentally we don't say anything about case for tokens, not that it matters as far as SH is concerned (we never have to compare them.) It might not hurt to say something to the effect that \"tokens are case-sensitive so you aren't allowed to change them, but an individual specification that uses sh-token may have special rules for comparing tokens that only differ in case\".\nWRT lower-case -- I think your analysis is correct; my inclination is to either stay lc only, or to force to LC when parsing textual headers (much like H2 does for field names). WRT instructions for existing headers -- that's out of scope for SH \"officially.\"\nYour patch for looks good. Force-lowercasing makes me a bit nervous, because the wire form for h1 will be materially different than what appears in the data model. We'll have to be careful to distinguish between the h1 wire format and the abstract model for keys (which we currently are pretty sloppy about). More seriously, even if we did force-lc, I realise it wouldn't be possible to round-trip things like includeSubDomains, so it probably isn't going to be useful on its own; there will always need to be some application knowledge \/ mapping. So, I'm inclined to leave casing in keys as-is.\nI don't know what you mean by \"force-lowercasing\". Is that different from leaving it as-is?\ni.e., forcing LC when parsing textual headers, as per above.\nOh, do you mean converting \"FooBar\" to \"foobar\" as part of some algorithm? Right, yeah, no, I don't like the idea of coercing data into the model if it can't be round-tripped back out. If an individual header field's spec supports a different data model (e.g. strings), then it's up to that header field's spec to define how to convert their string to an sh-key, and back again. I.e. don't do anything in SH, and punt it to RFC6797bis (or draft-mnot-bsh).\nAgreed. Could you make the change into a PR?\nIt is."} +{"_id":"q-en-http-extensions-411acea6335432ca664406d0a6d4f4ead003ab1db071b810b83b99bc8659edce","text":"NAME Fetch Metadata will add a bunch of headers that are supposedly structured headers, but given their simplicity can be compared using equality on servers. If servers did that it would likely prevent adding parameters in the future however or making other supposedly-compatible extensions. 
Should we accept this for headers clients transmit to servers or should we do something akin to TLS's GREASE?\nIn general, greasing HTTP headers is interesting -- although I'm a little wary, given the extent of the grease-related discussions we've had in QUIC. As a baseline -- I think that the goal of grease should be to prevent casually incomplete implementation (e.g., someone using regex or splitting on commas and extracting the raw values). It can never assure complete and correct implementation overall. I think the interesting thing to grease in SH is parameters, since that's a primary extensibility mechanism, and is (now) uniform across values. Ideally, grease parameters would contain values that frustrate the casual approaches above; e.g., strings with commas in them. A couple of options I can see; We could advise \/ require sending implementations to generate extra parameters sporadically (not super-often, so as to limit bandwidth consumption, but not so rarely that it creates debugging nightmares). To be recognisable as grease to the receiving application, their names would have to have some convention. We could advise \/ require headers using SH to define their own greasing mechanisms when appropriate. I suspect that truly automatic greasing might increase friction against adoption of SH, since some people's reaction will be \"what is this thing doing to my header?!?\". Invoking in on a header-by-header basis (perhaps by adding an appendix with guidance) might help.\nI think the main benefit is derived from clients doing this so the main party to convince would be the user agents (who are already generally on board with SH).\nSo, should anything in the spec change? At this point the most I can see is adding something like:\nI guess I'm mainly interested if implementations are willing to do this uniformly. NAME NAME NAME\nI think NAME suggestion is reasonable to recommend (though it risks a quagmire when we attempt to ship a new header defining a quotient of quietly quixotic quilting), but I'll defer to NAME or NAME to weigh in from Chromium's networking team's broader perspective for headers generally.\nI am, unsurprisingly, in favor of GREASEing things on general principle. :-) Although the network stack only produces a few core headers while others, like Fetch metadata, mostly come from outside anyway. GREASE would want to be applied where the header is produced, most likely. (Well, we could implement some generic post-processing scheme, but then all the headers would need compatible mechanisms and we'd need to know which they are.)\nI also think this sounds reasonable. One problem is reproducibility - if we modify a header 1 in 100 times, then a developer would have trouble reproducing the issue. If we apply the same modification per browser session, it would become a new cross-site tracking vector, unless we also took NIK into account when deciding whether\/how to modify the header.\nFor TLS, we've never fed state in as input to GREASE, which is the safest option privacy-wise. But you're right that you then need to worry about reproducibility. To that end, if we GREASE some TLS codepoint, we always GREASE it, with the randomness coming in only in the particular values, as a low-pass discouragement against hardcoding values. 
But if the implementation is intolerant to only values that start with 5, it is indeed unlikely to be reproducible enough.\nIf we always grease something, couldn't that result in problems around servers always expecting an extra value?\nIt could, but that'd be a really weird bug for servers to have, whereas not properly skipping over unknown keys is much easier to do on accident. (Maybe I just parse the whole thing with a giant regex and make assumptions about the order.)\nI don't think it's that weird - if I want to test parsing something with a variable number of fields, I'd test on 0, 1, 2, and possible something much greater than 2 (like 3!). I think it's very easy to mess up on any one of those cases, and get the others right. Could assume an initiate search for a delimiter will always succeed, or use a regexp with \"+\" instead of \"*\", etc.\nBeyond the text above, we could add a character to those allowed to start keys, and reserve it for greasing. E.g., . I think I'd do that purely at the parse\/serialise layer, so a grease value never appears in the actual data model. Is that attractive?\nNAME I'm not sure if that is beneficial. I think our goal is to let every HTTP extension that uses a SH list or a SH dictionary to be extensible. To achieve that, greasing has to carry a value that can be recognized by the extensions. Assuming that that would happen for a fair number of HTTP extensions, the SH codec shared among those HTTP extensions would encode \/ decode arbitrary number of undefined values. I think that would be sufficient.\nMaybe you can use \"X-\". One of the reasons we avoided reserving a range of values was to make grease values a little harder to separate from genuine values. The point being that they need to be seen by the code that processes them without filtering. From that, I take two conclusions: A prefix won't work as it is too easy to distinguish. Having a generic processor strip grease values would mean that the code we want inoculated would never get the opportunity to benefit from our attempts at inoculation. Therefore, I suggest that we find a subset of valid strings that we will reserve. Maybe that is all strings of 8 characters or more that fit a certain pattern. We don't have any established pattern for strings, but anything should do as long as new values can be generated cheaply and it doesn't take too much of the available space.\nI'm going to disagree on pretty much all points, NAME The target here is assuring that a consuming implementation isn't grepping, string-mangling, etc. If they're aware of it being a SH and actively NOT using a SH parser, there's not much we can do, reasonably. To that end, is just as effective as, and hiding it from the application is a benefit, because we don't pollute their data model, we don't make them think about it, and once it's parsed into the data model, they CAN'T interfere in the ways we're trying to prevent.\nNAME what does \"recognised by the extensions\" mean? Are you saying that a recipient has to be aware of grease's presence AFTER parsing to serve the purpose?\nNAME Let me use an example. Consider the case of the Priority header field. The header would be generated, encoded, decoded, and used in the following manner: We would definitely want to grease (4), so that it would be possible to define new keys. Therefore, there is no benefit in letting (3) absorb the keys used for greasing. We should prohibit (3) doing such thing. That is my argument. Who adds the greasing keys is debatable. 
That could happen purely in (1) or in cooperation between (1) and (2). Or we could define a common vocabulary, so that it could be done purely in (2). I do not have a strong opinion here.\nOK, so you want to protect against an additional case -- someone using an SH parser (rather than grep, etc.) but raising an error if any parameters are present. Personally, I'm not as concerned about this case; someone is going to have to go out of their way to write that code. What I'm concerned about is lazy programmers doing some quick string hacks.\nWhile I might agree that we might not need to be concerned about this case, I wonder if there would be a benefit in requiring an SH parser (3) to absorb certain keys, assuming that most, if not all of the extensions that use SH dictionary or SH array would allow future extensions. How about saying that \"a SH encoder MAY add a greasing entry (i.e. a key that starts from \"\"), unless the extension using SH forbids that\", and that \"a SH decoder MUST NOT absorb the greasing entries\"? That would give us end-to-end greasing by default with least hassle.\nI'm not against that. The only thing bothering me is that \"unless the extension using SH forbids that\" means that APIs will need another bump on them...\nYeah we can remove \"unless\", if the key names used for greasing is not going to be a valid key name in terms of the spec. E.g., grease keys added by SH encoder starts with \"\", while applications are forbidden from using a key starting from underscores.\nThat's what I was suggesting up-thread :)\nYeah you stated \"do that purely at the parse\/serialise layer,\" and my complaint was absorbing the values in the parsing logic. Sorry I wasn't clear about what I am complaining about.\nSGTM - anyone else have thoughts?\nLooking at how to specify this, having grease parameters appear in the data model don't make me very happy; we either have to make them valid but tell people not to use them (which is likely to be abused), or we have to make them invalid but say they might appear anyway, just to be ignored, which is weird. Either way, it's confusing to write\/read the spec, and I suspect some people will use the grease prefix for some purpose because it works (most of the time). Stepping back, I think the scenario we're talking about is the possibility that a consumer of a specific SH header (i.e., post-parse) is likely to erroneously reject headers that has unrecognised parameters, even when the header allows them (keeping in mind that some header definitions might disallow them, even though that's not a great extensibility strategy). To me, that's not the purpose of greasing, because there are lots of other possible erroneous constraints that a consumer (rather than an SH implementation) might put on a header, and we don't try to prevent that. If someone wants to purposefully reject something in a way that's contrary to the header's spec, they're going to do it. Rather, greasing (to me) is about making sure that someone who's ignorant of the spec \/ too lazy to read it doesn't just do the \"natural\" thing and e.g., feed an integer into a parser, or split a string on commas. So, to me adding parameters \/ dict members and \"disappearing them\" at parse is adequate. I'm not against addressing the case NAME points out, but I don't want to complicate things to achieve it. Do other folks on this thread have any leaning in one direction or the other?\nI wouldn't say that that is mutually exclusive with greasing each user (e.g., priority). 
Obviously it is true that the SH spec would be simpler if we grease only SH parser. Though, the flip side of that would be that each user (e.g., priority) will have its own greasing scheme. That's the reason why I tend to think that the total cost might become smaller if we recommend: SH serializers add members using keys that users would never generate require them to be passed through by the SH parser so that each user would be required to ignore members using unknown keys PS. Besides, assuming that we want grease the SH parser, I think it might be a good idea to allow ordinary character to use the escaped encoding (e.g., \\x4A), so that parsers incapable of unescaping those characters can be detected at an earlier point.\nI'm not at all sure that header authors are going to find greasing desirable across the board; it's very intrusive into the application (unlike other forms of greasing). And, as I said above, it's not at all clear that all header authors are going to be happy to ignore unrecognised keys. I'd rather be conservative here -- we haven't done this kind of greasing before (into the application space, rather than just within a protocol), and we're not sure how it will work out (especially considering the potential performance overhead; for small header values, it might seriously inflate their size, even if just in the dynamic table). But, again, folks other than us have an opinion, it'd be good to hear it.\nI've said my part, you disagreed, but I haven't really changed my mind. If we're building something with must-ignore-unknown semantics, and if we do grease (and we don't have to), then it needs to reach as far as possible.\nSH is only a data model and a serialisation, it can't say anything about semantics of values (including must-ignore or must-not). I mean, you could introduce weird sentinel values, but that seems like a bad kind of kluge. (And adding to my lazy regexp isn't the intended outcome.) It is up to individual header fields' spec authors to define the semantics of values in the context of that header field, including which SH types are allowed, what parameter names are ok, what to do about unrecognised ones, etc. That includes greasing their header field_ for extensibility. Maybe if we had a way of discussing a schema – for example a formal grammar an application could pass to the parser to allow it to fail on valid-but-disallowed data – we could talk about greasing there. But in the data model itself, it doesn't make much sense.\nNAME that's the thing; we're not enforcing must-ignore-unknown, merely suggesting it as a default. Whether that's the semantic or not is up to individual header definitions. I'll do a PR shortly based upon the text above.\nLate to the party... I've been doing some thinking on GREASE and request headers in the last few days as part of URL As part of the feedback that we received on that, there are . If the GREASEd request headers are used for content adaptation, each new GREASE variant may create another copy in the cache, at least if is involved. Not saying that this means we should avoid GREASEing - just saying that it's a consideration that will apply to UA Client Hints, and is probably applicable in the general case as well.\nI'm going to merge the PR and (since this is our last open issue) cut a new I-D."} +{"_id":"q-en-http-extensions-f0cf6afb7998fa63984229da2c94d235be4129c70521ed6d7406525e3da88d13","text":"Round to three decimal places first, then check the domain, then do all the actual serialising. 
It's potentially not very l13n-friendly; feel free to change \"decimal point\" or \"to the left\" etc., if something is more appropriate.\n... still isn't right, as Kari noticed. E.g., a value of won't get rounded to .\nStep 1 of all serialisation algorithms could be related to sanitising\/casting\/failing-on noncanonical data. Turn your random number into a Decimal, including rounding to three digits. Fail if it's or no worries from here on Actually, Key, Integer, String, Token, Byte Sequence, and Boolean already do this, so Decimal is the only scalar type that doesn't.\nWith the current algorithm, if it is allowed as input, -0.0004 will get serialized as , rather than . corrects that as well."} +{"_id":"q-en-http-extensions-1384b2c13a805c0f2f83b427a5dfdbca7f23707363791d7471faf302d36fa954","text":"Small editorial fix. I'm also not comfortable with \"a integer\", maybe remove \"a\"? Anyway that's a nitpick.\nThanks NAME\nLGTM"} +{"_id":"q-en-http-extensions-583919dec824724eee2fc125bf318d1d8b5506d60e19ae735e4c5dfc090719ec","text":"NAME I'm just going to land this (as it's trivial), and roll an -05 today.\nThis should use a named section reference, no? Otherwise it can break again when section numbers change.\nIn URL under sub heading # 1 there is a paragraph, \"Attackers can still pop up new windows or trigger...\", which references section 2.1 for more information on that topic. However section 2.1 is \"Conformance Criteria\" so that doesn't seem the intended target.\nThanks!"} +{"_id":"q-en-http-extensions-66737046cc557fb4e9912a914d21792ddbcc05bef74be55d4783024ba2779a04","text":"In URL under sub heading # 1 there is a paragraph, \"Attackers can still pop up new windows or trigger...\", which references section 2.1 for more information on that topic. However section 2.1 is \"Conformance Criteria\" so that doesn't seem the intended target.\nThanks!"} +{"_id":"q-en-http-extensions-45493f962034be9e5c28b55d971ca68a372df215b795aa56091d7cfd40fff708","text":"The term COMMA is not defined anywhere in this document, or in any of the documents it imports. But we can just use the literal instead.\nThanks.\nThe structured headers draft imports and uses a number of productions from RFC 5234 for typographically awkward characters and character classes (, , etc), but there is no definition of provided anywhere. Similar to the way that , , , , and are used without special treatment, can probably be replaced with wherever it occurs."} +{"_id":"q-en-http-extensions-9b1eae13121e5dc1dde8f93478a7f2a97dad99342b5c03e484bd6fa6f9c445f7","text":"When merging the different PRs, the \"Cost of adding hints\" section was misplaced and it wound up in the middle of the security section. This fixes that (and bumps the revision number)."} +{"_id":"q-en-http-extensions-89c8dba9054477d48003f4d273af2cc12cc28620cccf046aed97232614af18d6","text":"We still had usage of the term override, so while thinking how to address that I took the opportunity to state explicitly that no guidance is provided for how an intermediary might merge.\nLGTM!"} +{"_id":"q-en-http-extensions-334aee4d6a1f0bcda19d8b541f74e7929e4ba428a494e37fecf210e79a2bbf1f","text":"cc: NAME\nThank you for the fix.\nRFC8586 is very clear about other uses of : The current priority draft uses it to detect whether there upstream intermediaries, not loop detection. 
This should be removed."} +{"_id":"q-en-http-extensions-6dd2a29f972be4c1ece9a13a442ea32bc84fb9cbf75c1c1d2e1aac106664addb","text":"The intent here is to say that your requirements for trusting a secondary certificate should be at least as strict as for trusting additional names in the primary certificate.\nAdd text noting the DNS check \/ not-check guidance in RFCs 7540 and 8336, respectively. Explicitly side-step the issue by saying that if you would do a DNS lookup for a hostname in the primary cert, you MUST do the DNS check for a hostname in a secondary cert as well; if you would omit the DNS lookup for a hostname in the primary cert, you MAY omit the DNS check for a hostname in the secondary cert as well."} +{"_id":"q-en-http-extensions-9849f0c5e6e8d8360e10e667ae7e8f83f361186cb6c76b5de99ede7ffd29e009","text":"I believe that NAME comment on that issue is already covered by the following section on Denial of Service.\nIf a malicious server claims support for a variety of common hostnames for which it doesn't actually possess certificates, browser clients with a long-lived connection to the attacker will probe for certificates when the user attempts to access those sites. This allows the server to monitor user behavior during the lifetime of the connection, even without valid certificates. One obvious strategy is to probe everything \"interesting\" ASAP after the ORIGIN frame, but that obviously leaks what the client considers interesting, which is its own privacy leak. Note that a DNS lookup of claimed hostnames prior to requesting the cert prevents this attack. Should probably recommend that clients which omit the DNS lookup never solicit secondary server certificates, but probably don't need to mandate anything.\nIn addition to monitoring user behavior, depending on client behavior (do you race two connections? fallback on error?), the malicious server could also deny access to other hostname or degrade performance."} +{"_id":"q-en-http-extensions-0d618a5295fa6799b3909d8726a442c2f0bc5e898f9a982c790f1d898106f2ff","text":"We landed a change to remove the semantic meanings of urgency levels but neglected to remove a considerations section that discussed this. This PR moves some of the still relevant text out of the section and nukes the stuff no longer needed.\noops, this will need a rebase..."} +{"_id":"q-en-http-extensions-01e3cb292d4e7b3a67a0212419238ad1acb6b5f2522ba98ba02524f6d5b0048a","text":"Address some build warnings and fix typos.\nand\nThank you for the fix. Merged.\nChanges in structured headers means that boolean parameter values are reduced to their parameter name: -- URL\nQ: should this be labeled ?\nNo, this issue addresses the minor editorial problem that needs fixing in the priorities spec e.g. URL just needs some tweaks to be compliant with structured headers.\nThe lowercase headings in the document imply that they represent field or parameter names, but these are concepts. For instance \"incremental\" doesn't appear on the wire, \"i\" does."} +{"_id":"q-en-http-extensions-1271b3263ebd055f08b78eed15c730c00060b0b0b027f88d8e83e6977583a581","text":"This change to the key syntax allows dictionaries and parameters to use \"\" as a starting character for names, allowing keys like value mygoditsfullofstars*\nLGTM. Do you want to add the change note, or shall I do that after merging?\nBTW - would take a PR with an example key that looks like the last one above :)\nIn URL, NAME asks for as a dictionary member name. 
I think this just means adding it to valid start characters in key."} +{"_id":"q-en-http-extensions-5ea1116a46814196ed184e2a3ad494b8621cdbb4697af138b290ebc3f3dc1480","text":"NAME I was tired of the warnings of line too long, so I've reflowed a lot of the examples and also most of the text while I was at it. The Python code example was tricky, please pay attention to that as I may have mangled it.\nLGTM"} +{"_id":"q-en-http-extensions-29e061ddbe7592b53ae0d77bb8274dce0ef8a7adaada1458718701d43f6d9e99","text":"Ping NAME\nLGTM from here, thanks!\nJulian asked in e-mail why we can't have a dictionary with a default True value that is parameterised:\nI think we could replace: With:\nNAME NAME NAME thoughts?\nExample-Field: ;param=on-default-boolean Is this a dictionary? What is the key here? (I see that the value is supposed to be implicit true, but I'm not grokking the implied syntax) If I ignore that (assuming I've misinterpreted something), I like the idea of allowing implicit True. The serialization algorithm needs a bit more surgery to support this as the canonical form, probably replacing: with something like: (unless we also modify Serializing an Item to optionally omit True values, but we can't just do that unconditionally, or else a simple Item header would be empty if True.) This is the same issue as URL, I think -- back in December you thought that it would take more drastic changes. If this is everything that it needs, though, it seems reasonable. None of the canonical tests break with this (except for a couple of serialization tests, which just need to be updated), and I don't see any obvious cases that would be parsed incorrectly.\nSorry, that was unrelated, accidentally copied from the email. Ignore. I'll work up a PR and see how it looks.\nAwesome. Now, how about also collapsing the \"false\" value a bit. So instead of having a=?0 just say !a ?\nIMHO explicit is better than implicit: I am not sure that is a good choice. I am not even sure that defaulting is a good choice.\nTantalizingly close to making Dictionary a superset of List. I can't think of anything that would break."} +{"_id":"q-en-http-extensions-0ca9c2af308332921bd142a5d51d5b076f7b47a12231c812625a85e8c5e3a98a","text":"LGTM! thx\nSome of this is very nit-picky; feel free to do what you like with it... ^ There's something weird going on with tense in this sentence. \"Such techniques are expensive... are not portable... and make it hard...\" The third doesn't fit with the first two. \"Client Hints mitigate the performance concerns...\" probably just \"Client Hints mitigate performance concerns...\" Likewise for \"the privacy concerns\" below. \"This document defines the Client Hints infrastructure, a framework that enables servers to opt-in to specific proactive content negotiation features, which will enable them to adapt their content accordingly.\" --> \"This document defines Client Hints, a framework that enables servers to opt-in to specific proactive content negotiation features, adapting their content accordingly.\" as a normative reference probably isn't going to fly, given its status. Why is it normative? maybe instead: Be aware that HTTP doesn't require presence of when proactive conneg is in use; if the response is truly uncacheable, it isn't necessary. maybe instead: instead: BTW, nothing is said about precedence between the two different mechanisms; separate issue? The reference to 6454 is spurious, and it's pretty indirect. Perhaps: It's kind of weird to have this list just appear. 
Perhaps turn these into prose? Put \"For example\" above the example to introduce it. Limits CH to just \"optimisation.\" Probably should just be:\nThanks for reviewing! :) PR underway The here is conditional on cacheability. Is that OK? We're now considering removing http-equiv support for this: URL Maybe it's best to simply remove that reference here?\nAh, I missed that, sorry. WRT http-equiv; that might be best, then."} +{"_id":"q-en-http-extensions-f9603d342b7dde2dec35643ed2a19ac90fd35aee7bcdf490a2767ae3abb5c947","text":"A few editorial tweaks to improve readability, including some x-refs to the Header \/ frames. The most contentious change is at L262, the sentence read a bit weird so I juggled it around to make the intent (as I understand it) clearer.\nThank you for the changes. L262 reads better now. Thank you."} +{"_id":"q-en-http-extensions-af6d0cf73de97c5d321d0d732d8784961d65b4edd598f7d7a44a8a5ada17a0e5","text":"Moves Digest Algorithms down\nI verified the formatting here URL\nThis looks good but note the similar comment I made elsewhere about the trailing spaces after colons e.g. we need or else the formatting seems to get screwed up"} +{"_id":"q-en-http-extensions-56e441f0eaa1e1482860b99bf38ac7c1a78ed4137fa1d678dc34222a6bf71a73","text":"Reorder Digest section. Cumulative PR of: NAME this patch ends the reordering and achieves: Digest header defined at §3 (pag. 7) Non-normative section moved in Annex Replaces\nNAME WDYT about this roadmap? I'd address each one in a separate PR. If you agree, I can cherry-pick the first ones from your PR so you won't have to do it twice :) Digest Headers to be reordered: [x] move the \"deprecated algorithms\" security consideration directly in {{digest-agorithms}} [x] merge the {{digest-algorithm-encoding-examples}} in its parent section {{representation-digest}} [x] move Digest Header and Want-Digest header outside Header Field Specifications [x] move {{representation-digest}} under {{digest-header}} [x] move {{digest-algorithms}} down [x] move {{resource-representation}} examples to Annex B [x] ensure people start reading from {{representation-digest}} [x] refactor Digest to be more direct Optional: [ ] integrate examples from into {{resource representation}}\nThat sounds like a good plan, thanks. Please action it and I'll review promptly. This is a set of editorial changes that I think would add value to the specification readability. My jumbo PR () had some problems, so lets focus on landing the easy-to-agree items and not try to rush everything through before the submission deadline if we are uncomfortable. We can continue working on the thornier editorials even after the submission deadline.\nHere's a first bunch of PRs. PTAL :) thanks for all the support!\nNAME I'm trying to refactor and I have this proposal: URL we need rewording but just look at the sections ordering and let me know. The next PR to merge is\nThis section outline looks good. I approved, so let's merge it. What's next?\nNow we have and which should end the refactoring. I think that's a reasonable cutout for IETF107 but we are clearly not over :)\nAll the PRs look good, even the optional one. Thanks for your work on this. Let's get them merged, then take a once-over review to catch any basic errors that might have crept in. Then one of us can cut the release."} +{"_id":"q-en-http-extensions-dd439a5a561826f36cc1d3fb949e06017f56b7276f84602e1c3b0d27bb10a715","text":"Some of this is rewording. 
There are two new paragraphs talking about extensions: one deals with dictionaries, the other deals with extensions to a SH definition. I also removed the \"starts with 'h'\" example for greasing. We know that X- lead to all sorts of mess, so choosing a bad example here could be just as bad. I'm happy to split that out or be talked into backing it out, of course."} +{"_id":"q-en-http-extensions-70ce5adb66207994e1539754d8143b6b6f2ab6ccbeb24aa04712f2f6b3798078","text":"This is a simple editorial change of the \"changes\" section, to add the latest PRs.\nLanding as this change is not substantive"} +{"_id":"q-en-http-extensions-1afd716767bc936e3e83d8303c7429936f5a752314018d8e7c72f4eae80ce903","text":"Non substantive change to improve the list's style, after feedback on URL\nLanding, as this is non-substantive"} +{"_id":"q-en-http-extensions-c095e78537304099ed1920ea139f23513914368e5bf6e86dd41cab7fbf5f564b","text":"Apparently the removal of one appendix now has moved all remaining appendics into the main section. They need to be moved back.\nLGTM! Thanks for catching that :)"} +{"_id":"q-en-http-extensions-0120fddfe0ecefef07119ddacd791695cc0d88e2bc232e21c770c6aab9b3d6f9","text":"\"validator\" refers to validator fields used in preconditions. The term validator fields seems more faithful to the original cache-validators in RFC3230. URL and URL"} +{"_id":"q-en-http-extensions-7945bb7b3f49fb8b5af1640068adc955f6c8bd6aa640b71c18eee440afc91045","text":"Nameless cookies serialize as . Valueless cookies serialize as . No, this isn't what we would have chosen if we were designing cookies from scratch, but it's Chromium and Firefox's shipping behavior, and it's reasonable to just accept. This specification change is covered by 0021 and 0022 in the WPT repository (see URL). Closes URL\nWDYT, NAME \/ NAME ?\nStep 4 says: However looking at WPT tests (for example [1] \/ [2]) I can see that when we have (empty name with value ) the expectation is to have back, so perhaps these instructions should include a special case when the cookie name is empty. [1] URL [2] URL\nMinor suggestion, OK for me either way."} +{"_id":"q-en-http-extensions-455071d3bac5fc2a28acce98ac99b24dacd16744c79e1eca1e6c926601c26890","text":"The 'secure-only-flag' is never \"not set\". It is either \"true\" or \"false\". This patch fixes two places where we should have been checking for falsity. Closes URL\nNAME \/ NAME WDYT?\nGood. I should have sent a PR. :-)\nURL Step 8 says: But then Step 12 says: It's impossible to reach step 12 with the not set because it's always set in step 8. I believe a better wording would be ."} +{"_id":"q-en-http-extensions-4e2eadd9e8f913ae40d72a65bacee8aef125a1ddfd29507ce4223691d19dbfd0","text":"Editorial: add one reference to Structured Trailers in a list of the types, and correct what appears to be a typo of Structured Field --> Structured Trailer.\nHappened to spot these when perusing the structured headers spec; apologies if I've misread and the original text was correct.\nThanks!"} +{"_id":"q-en-http-extensions-127e8d288b8ad817be16e72b90a81e07352871860148c7b890003734ac4f8704","text":"Instead of a four-byte exporter, uses an eight-byte exporter and splits into two values. Support for exchanging server certificates and client certificates is now indicated separately.\nThe current advertisement scheme does not allow for a client to decide it is willing to implement secondary server certificates but not client certificates or vice versa. 
It may be worth having two SETTINGS values, one for whether you're willing to do either of the two flows."} +{"_id":"q-en-http-extensions-287b14c7ef3496abcb79381d9c11cba045b4c2760ff16bc57c83a184d566eff7","text":"though I'm not sure it's yet precise enough. There are three general classes of errors: The peer did something wrong. There are relatively few of these possible -- indicating more certificates to use than were requested, or more than one unsolicited certificate; sending CERTIFICATE_NEEDED to a peer that didn't indicate support for this extension. The authenticator is corrupt. This might indicate a problem with the TLS connection. The certificate isn't acceptable. There are lots of reasons this might happen. The first two should be at least stream and possibly connection errors. The latter should be handled at the HTTP layer, but previously had error codes defined for them. This PR removes several error codes and recommends handling at the HTTP layer if the certificate isn't accepted.\nIt is very unclear from the draft how exactly the client should handle a secondary server certificate it rejects. Presumably a bad secondary certificate should not be fatal to streams which don't use it. The draft seems to handwaive at this (though not very clearly, which means uncertainty for the server) in section 4. At the same time, the draft says: This is inconsistent with that idea. Suppose the server pushes some CERTIFICATE frame along with some resources it thinks the client would want. If the client rejects that certificate, this text implies the client must tear down the entire connection, interrupting unrelated requests. A useful heuristic for reasoning through all this would be to consider a client which blindly rejects every secondary certificate. This is especially relevant if the draft does not act on .\nThe restriction on the server is that it not push unless it has first sent the certificate. If the client rejects the certificate, clearly it will discard the push; the question is whether the client will be able to differentiate failure-to-send from cert-not-accepted. That depends on whether you assume that clients will look at the certificates, even if they aren't accepted; otherwise, I suspect that it can't. However, I think you're right that this is an overreaction. RFC7540 requires that a PUSH_PROMISE for an unproven origin be treated as a stream error, but doesn't specify on which stream. The parent stream still seems excessive, since the server can't know for which origins the client considers it authoritative."} +{"_id":"q-en-http-extensions-1f797f4e3ae51a2acf283d90050d38eb4cf38c6790fda56806abaf27d419aac4","text":"Automatic use of client-proffered certificates was removed a while back; clients can indicate which certificate (if any) they want the server to consider by sending a frame with the flag set. There was text talking about automatic use of the server-proffered certificates, but there wasn't corresponding text disallowing that behavior for client certificates, simply the absence of permission. This PR forbids considering \"available\" certificates until the client indicates which certificate to use.\nThe secondary certificates draft leaves the authentication state of exchanges like the following unclear: In stream 1, the client is unambiguously using certificate A in the request to . Is it also using it for ? If the client believes the answer is no, while the server believes the answer is yes, secondary certificates introduces a . 
Some examples to consider: HTTP\/2 cross-name pooling means one connection may service multiple origins. The client may choose to use a client certificate with some origins but not others. The web platform makes both anonymous and credentialed requests (see CORS). Keeping those straight is critical to avoid CSRF attacks. For now, browsers do not pool them due to various problematic connection-level auth mechanisms in HTTP\/1.1, but there is discussion on relaxing this for the vast majority of connections that don't use these. This puts a heavier dependency on protocols getting these kinds of details right. This is especially important to make unambiguous because the original renegotiation-based hack for reactive client auth did rely on ambient authority. Once you renegotiated with a certificate, the unwritten interpretation was that all subsequent requests on this connection pick up the certificate. A server implementation may then be tempted to apply a similar logic to client certificates. The spec needs a MUST-level requirement not to do this, and make it clear that this is a server security vulnerability. Instead, client wishing to approximate the original renegotiation-based hack would send proactive frames on subsequent credentialed requests. This is especially important for HTTP\/2, where there is both larger scope of pooling from cross-name bits and where multiplexing means we requests overlap temporally."} +{"_id":"q-en-http-extensions-f4d5cc5e8d880283bc327309030a4e54bebdbd29d4a36f09d8166df3bff379c1","text":"Deprecates SHA-1 as it's now vulnerable to collisions.\nSHA-1 to be deprecated in because of https:\/\/sha-URL It's NOT RECOMMENDED When we decided to NOT RECOMMEND sha-1, we knew that collisions were possible. Now researchers states that The latest paper is here URL\nSome feedback I've had is that we should deprecate. I'd like to hear from the WG if there is any pushback to doing so.\nI'm in favour of deprecation.\n3GPP TLS Profile 6 also makes SHA-1 a \"must not\" for TLS 1.2 [1]. [1] - URL\nlgtm"} +{"_id":"q-en-http-extensions-b67a1ddb69e7ef73ae8967e834f18e72dacc2f9f3d26c704dca8e508c666e697","text":"adds an example of a request with no payload body (eg. HEAD) mention an use-case in the Signature section.\nNAME do you think this is consistent with content-coding like aes128gcm?\nAn example of using Digest in a request with no payload body. There is a misconceptions that Digest does not apply to request with no payload body. As it's not a special case URL for sha-256 we can just use Other algorithms are free to define a different computation, and the examples will only be tied to sha-256"} +{"_id":"q-en-http-extensions-67ac357dd9bbd69fc40b2f318d2571200c7856aeff039e787a8e8cdd23f1c419","text":"References I-D.httpbis-semantics using an inline referece to {{SEMANTICS}}. This will ease the transition once SEMANTICS is updated to RFC."} +{"_id":"q-en-http-extensions-bf37ee4aa64cde75fb9b66648f5953406ed0bfa2f6a766681e6c5311bc29955b","text":"NAME noticed that is messed up. I think the fix is to indent the top-level 2-5 -- but I want to have a good look at it, compare to implementation, and I'd like others to verify, as I'm wary of making changes at this late stage. Blame seems to be , which I vaguely remember as being... surprising. That will teach me to slow down when something weird happens with Git...\nPing NAME NAME NAME to look over my shoulder...\nLooking at URL the problem is that what became top-level 3 and 4 should have been steps inside top-level 2. 
Further changes have happened so now that applies to top-level 5 as well. (Top-level 2 itself seems good as that initiates the loop.) (Other for each statements do appear to have correct indenting.)\nLike that?\nLooks good to me."} +{"_id":"q-en-http-extensions-6bd734f4bde8ba251a3ca2c0bf442816b99b2954d25d696d5e84d5b1449c546b","text":"When we discussed removing the semantics of urgency levels, we also thought it would be a good idea to change the default from 1 to 3. We agreed to do this and landed a change that removed the semantic text but unfortunately forgot to also change the default. This change simple fixes the oversight.\nSee URL for background\nThank you for the fix!"} +{"_id":"q-en-http-extensions-62ce1f667b76689d252d03d01c38d9d4511832d82643b7dfbab84b72c16b815c","text":"From Magnus's feedback\nalso NAME PTAL\nThis does not instill much of a panicked reaction in me, which is probably a good sign. I trust it is intentional to not apply a similar treatment to inner lists (which also has a prose description I had commented on previously) on the grounds of them not resembling existing top-level header field structures? That's certainly defensible, though I have to wonder whether someone will simply assume that it must be the same (not that we have much hope of changing such a person's behavior).\nHow does this vibe with ? I know we spent a bit of time on it, but it's long enough ago that I can't recall detail.\nAh, right, I think I remember. Doing it this way means inner-list is different: The original motivation to remove tabs was because the above was going to be but there was consternation about RWS. Most of the discussion is in\nNAME was where my mind went first too, but that seems to be just about in inner lists (which makes sense). I think this is separate; intermediaries that don't know about the semantics and syntactic limitations of a given field shouldn't be meddling inside it (which inner list delimitation is).\nNAME I think so -- that's the only reason to allow this. Existing servers may already legitimately be using TAB when combining multiple field lines. And I suppose it makes sense, given the recent discussion about not breaking up individual list or dictionary members over multiple lines, that we don't need to replace *SP with OWS everywhere. I'm okay with this change -- does it warrant a note in the prose somewhere that this special case only exists to accommodate field concatenation? Let's hope we don't need to add VTAB or FF to this, too :)\nNAME I don't know that that's necessary; we already pretty clearly. In SF, I looked at changing \"whitespace\", but in each case the ABNF and algorithms make it clear what we're talking about, so I don't see a strong need to disambiguate.\nI'm ok with the change over here; that said, the Semantics spec could be clearer. Opened URL\nNAME and I have been talking about this, and have come to the conclusion that this change went too far. Specifically, steps 1 and 5 in \"Parsing Structured Fields\" should just be SP, not OWS, because there isn't any risk of an intermediary inserting TAB before the first field line or after the last field line; it's only between List and Dictionary members that we need to be concerned. Reverting that part of the change also makes writing the test cases considerably simpler. See 6842793e.\nThis is so that field line combination as per the Semantics spec can happen, right? I note that currently says: Note that it says \"comma an optional whitespace\", not \"OWS\". 
Depending on how you read it, it might include other whitespace characters. I believe this needs to be fixed in the Semantics spec.I dont think I ever saw a TAB in a HTTP header in real life, but I am OK with this."} +{"_id":"q-en-http-extensions-1d11d19737f17573edb09d909531554070fa3049fd544ce6e31fc120a4866239","text":"Add what seems to be a missing word in the __Host example text.\nThanks!\nThank you!"} +{"_id":"q-en-http-extensions-e2ed4abb90b2950ec76af727429c8b6fd680eafa1598fd3d11a9835af0c461c4","text":"and . I've left Security Consideration as TBD, I'll fill that out in a separate PR.\nThe text in about the client using priority to inform request scheduling is a good one, but it belongs elsewhere.\nThe whole discussion about fairness is worth having, but it doesn't really go to the usual security concerns. Security concerns might cover things like DoS through over-use of prioritization (see ), or use of priority to starve others (which might cross-reference the fairness text). But I think that the fairness text should be bumped up to the top level."} +{"_id":"q-en-http-extensions-6147bcd82a8c7458b734df8da03ca3289f1988fee84028b804ba7904bb47cf9b","text":"Since fairness is being moved out of security considerations, (URL) it makes the section empty. Seems reasonable to add some prose about the considerations of this scheme as compared to HTTP\/2. However, this also depends on the outcome of the reprioritization discussion, so I'm being a little weasly with TBD and TODOs. Might want to just hold onto this for a while but WDYT NAME ?\nThank you for working on this. LGTM modulo the point below."} +{"_id":"q-en-http-extensions-268ab9126b009312d403a1c724bb6d013d0b043e67209b3492f1c45530d07977","text":"Strengthened requirement for content identifiers for header fields to be lower-case (changed from SHOULD to MUST). Relaxed guidance on processing Creation Time and Expiration Time to only require verifiers to examine them and test against the verifier’s requirements. Minor editorial corrections and readability improvements.\nThe current text allows mixed-case header field names when they are being used as content identifiers. This is unnecessary – header field names are case-insensitive – and creates opportunity for incompatibility. Instead, content identifiers should always be lowercase.\nThe processing instructions for Creation Time and Expiration Time imply that verifiers are not permitted to account for clock skew during signature verification.\nThis can safely be fixed adding a -like claim, so that the signer remains in control of the signature validity. This is especially important if the signature is created with the signer's private key. imho this kind of document should not define a clock-skew tolerance threshold, and implementors shoud be suggested to fix their clocks. There may be corner cases (eg. IoT\/disconnected devices ) that we should address properly\nI'd split the clock-skew part in a separate PR. The rest LGTM."} +{"_id":"q-en-http-extensions-7a017f910d7bb19bd40fda0c5bd8cd7e66bc807072867c496e5686a7e4174ddc","text":"Fixes .\nDone\nCurrently according to the spec, any cookies set with a header should be ignored if their name and value are empty. However, this check is not applied in section 5.4 of the spec, and as a result the CookieStore had a allowed for sites to work around this check. To me, this is a symptom of a bigger issue: 6265bis treats non-HTTP cookies as an afterthought, . 
I think it's worth opening a separate issue just to track whether we add the empty-name-empty-value check to 5.4 of 6265bis, and if there are other checks on the header in 5.3 that are implied in 5.4 but not made explicit.\nCC NAME\nAs you've noticed, the document currently assumes that 5.4 is only going to be exercised after 5.3. I think this particular issue is distinct from the issue you raise in the other bug, as the only non-HTTP APIs that existed before the async API you're working through used cookie strings, which were parsed just as the header was. Given that new mechanism, it does make sense to move \"If both the name string and the value string are empty, ignore the set-cookie-string entirely.\" from 5.3 to 5.4. Skimming through, I think that's the only restriction that causes cookies to be ignored at the parsing stage.\nMinor change so that we won't break \"links\" if ordering changes, and to explain the change."} +{"_id":"q-en-http-extensions-1a118da8b6627dfb7bc851aadbead9059ecb755c367bf8fe3783515e5f707392","text":"PTAL NAME NAME NAME\nLooks reasonable to me!\nNAME PTAL; I'll merge tomorrow if I don't hear anything.\nNAME how do you distinguish between a proxy certificate and the certificate used for the next hop?\nBy \"proxy certificate\" I mean the certificate sent by the CDN (when acting as a client) to the origin server for the purpose of mutual authentication. In contrast, would apply (at least in my mind) only to the certificate presented by the origin server.\nSo, I think the argument is that is either a (i.e., the server you're connecting to gave you an error to the effect that you need to present a client certificate), or a (i.e., you realised you need to present a certificate before connecting, but don't have one) depending on exactly how the error occurred. Is that sufficient?\nAs someone who spent countless hours debugging connection issues to various origin servers, my main concern is that alone doesn't provide enough information as to why the connection failed, it only narrows it down to a TLS certificate issue, but for anything definitive that you would want to share with the customer, you'd still need to go and dig further into logs, or try to replicate the issue. Yes, I know that you could put more information in the \"details\" field, but I'm not a big fan of implementation-specific free-form text. Having said that, I'm not going to oppose if you prefer to have only those 3 generic fields.\nThanks. As Ryan points out, the problem is that implementations have different terminology and granularity of error reporting, and it's not the place of this specification to impose that on them.\nLooking at what to map x509 errors (as returned by into, the most obvious one is . However, there's also , which is one of these errors. It seems odd to have these two, but not any of the many other verification errors. At first glance, I suspect it may be better to have a single error, and let the convey the reason - whether it's expiration or something else.\nUltimately, there is a fine balance between providing a generic \"catch-all\" error, and enumerating all possible errors. The linked lists 69 possible reasons for success or failure, and including all of them clearly doesn't make sense, especially when only a few of them will happen in practice. 
However, having only a generic \"catch-all\" error with the real reason \"hidden\" in a free-form text in , that's going to vary from implementation to implementation, is going to make this less useful for any kind of automated processing and\/or analysis. Ideally, I think, we should have specific enums that cover 95%+ of reasons happening in production, and only use the generic \"catch-all\" error for things not covered by them... So, playing the devil's advocate, perhaps we should add a few more? (Note that there is already raised on mismatched identity, SHA256, SPKI or other missed expectations.) Also, if you can get your hands on errors happening in production, that could help us make more informed decision here.\nI agree we shouldn't do a 1:1 mapping (especially with a library), but the hierarchy needs to be clear, and mapped to terminology used in the standards. I.e., there should be a generic catch-all 'cert error' code, and then more specific ones for the major classes of problems.\nTLS itself also contains a pile of certificate-verification-related alerts. Although I think the set is mostly just carried over from SSL. URL NAME may also have some thoughts on certificate verification error-reporting.\nDespite what libraries may provide, for their developers, is fairly clear: the only defined result is a I think coupling any protocol-level expectation about diagnostics for errors is fundamentally misaligned. It should not matter, for example, whether a particular library checks the validity period of the certificate before the signature. We know, empirically, that different libraries do very different things there: macOS vs Mozilla NSS vs Microsoft CryptoAPI each verify something like validity at different points within their verification algorithms. We know it's deeply flawed to view there being \"the\" certificate chain for a server, because by definition, the chain is defined by the client's policy and set of trust anchors. As a consequence, we know that clients SHOULD implement certificate path building, to explore possible certificate chains, and we know that TLS 1.3 was explicitly updated to reflect those decades of running code and the emergent rough consensus that the message, beyond the first , is an attempt to aid\/influence that. is a huge collection of different ways verifiers can process and examine errors, during forward path building and reverse verification, and notes that the trade-offs are heavily dependent upon client and use cases. While I can understand the appeal for providing 'informational' statuses, any suggestion beyond a level of is probably too normative, and unsupported by the underlying specifications and best practices.\nNAME the context here is a CDN or other reverse proxy reporting back errors in HTTP response headers, for debugging. There are no requirements around sending them (and senders are cautioned against leaking information that could aid an attacker). So, at one end a Boolean doesn't really help, and at the other end, the full list of errors by any one library (or all of them) is impractical. The list of alerts from NAME seems promising, and I note that we already have , whose value can be any TLS Alert string. However, That's intended to convey alerts received (and perhaps we should communicate that more clearly). The desired semantics here are that the problem indicated by the alert is generated in the client itself, not received by it. 
That leads me to believe that there are two viable paths forward: Rearrange the errors so that they mirror the alerts above (leaving 's semantic as stated above); e.g., we'd have , , and so forth Reduce everything into two errors, and or similar. Both would carry the TLS Alert value as an additional payload item. I think the difference is mostly stylistic, so NAME and I can figure that out. The bigger question is whether this is the right set of errors to focus upon.\nBut I think this is missing an important detail: from the perspective of a client, the only stable output is the boolean. For example, OpenSSL's reporting of errors is symptomatic of deeper flaws within its certificate verification; fixing those errors necessarily means some existing errors can't effectively be reported. This is covered at some length in . Ultimately, the set of errors that are relevant are, to an extent, subject to local policy: it's rare for certificate validators to think of errors in the ontology laid out by the TLS layer. Typically, a client just does a \"best guess\" (or just maps straight to ) when trying to map to the TLS layer, in part so it doesn't disclose extra information. This is certainly true when you think about local policy, such as limits on validity or the requirement of associated metadata, potentially delivered over TLS, such as Certificate Transparency or OCSP Must-Staple. I think the core question is whether you expect the client and the proxy to be in the same management domain. If they aren't, and the ED seems to be oriented around this scenario (where server and proxy are likely same administrative domain, but proxy and client aren't), then it seems better served as an opaque diagnostic string. As RFC 4158 calls out, beyond the 'boolean' is really just an administrative\/diagnostic capability, not something meant to be semantically rich, meaningful, or stable, and certainly not to allow next-hop policy decision making.\nJust to be clear - the user agent isn't involved in this at all, beyond receiving the header so as to make it available to someone diagnosing a problem (Likely someone from the origin, from the CDN\/reverse proxy operator, and\/or a help desk talking to an end user). What I think I hear you saying is that we should define a generic error and allow implementations to convey textual detail in it. Does that make sense? I also still see some value in distinguishing a TLS alert received from the origin by the proxy (in its role as TLS client) from this case (a problem it encounters as a TLS client).\nYes. This matches with Criterion 2 of URL and the dual mode trade-off in URL , which perhaps more clearly highlights the local policy aspect (\"Ultimately, the developer determines how to handle the trade-off between efficiency and provision of information\"). 100% agreed here. I was only commenting to the extent at which the proxy validates the peer certificate independent of the TLS layer framing. Perhaps poorly stated, but my comment in URL was trying to capture that Option 2 from URL () would likely just reduce to a boolean anyways, in a well-behaved TLS client."} +{"_id":"q-en-http-extensions-a0af4fcae0601c8a4ef2f303e685672d8f59ee269dd7169cd900493a68a22d75","text":"HTTP 1.1 messaging is not normative replaced \"message body\" with a periphrases\nDigest to be indepentent from http\/1.1 (messaging) See URL\nNAME iiuc the only MESSAGING-depending parts is: URL The spec references the term \"message body\" 3 times too, which is defined in MESSAGING. 
Do you think we should replace it? How would you do it?\ntrailers: you may want to reference URL message body: this might be \"payload\" (URL)"} +{"_id":"q-en-http-extensions-e6a8da9e3625d2f52de440cb8ec503902dde8d23408bbfb6d8f7a9870569b05a","text":"is preliminary to Consistently use instead of . In case we need to further focus on the value, we can say or . Looking at \"content-coding\" vs \"content coding\" in URL I think that this choice makes the document more readable."} +{"_id":"q-en-http-extensions-099ee8e33afbab93932b657833ff90897b9449bfc46e3b4dbb8aeacfd2b9aaff","text":"Aligns a part of the doc with the SHA-1 deprecation we decided in We correctly deprecated SHA-1 but in this part we left it as NOT RECOMMENDED."} +{"_id":"q-en-http-extensions-21b97547fa198f8a7ffdb944a3bf9f746cd5097a37ff6c166f3f3e23c5e14c26","text":"cc NAME since it closes your issue.\nI believe this is now good to go, please approve or speak up if you disagree.\nIn there is a rather firm recommendation to erase priorities at servers that are served by coalescing intermediaries: This seems wrong to me. The Priority header field provides information about what the User Agent thinks of priority. This might be adjusted by intermediaries on the path, and it might not be globally consistent, but that does not mean the information should be discarded. Instead, I would prefer that we say that when an intermediary coalesces requests from multiple clients, servers need to be aware of that and account for that in their processing. The remainder of this text is fine in the sense that it identifies information the server can use to more fairly allocate resources to different requests, but - again - this is just information the server uses when making prioritization decisions. The model we operate on should not assume be that Priority is an instruction to the server, but that it provides the server (or intermediary) clues about how it might decide to allocate resources.\nNo one responded to this for a while, sorry. The text is mostly untouched but I did change the example to just . My read of this today, I'm concerned that the requirement is a bit specific. It's not clear if a server applying this would also discard any additional priority parameters. I don't know if NAME original point was along those lines or not, but I'll try to make a PR to address what I think needs fixing"} +{"_id":"q-en-http-extensions-f8e19427fb62d92a51beca7bc51b1de5d4b1cf30fb84f4a3247c43d53a21f4d6","text":"NAME correctly pointed out that the draft contained a made up error code. On inspection, we need to define specific error codes for both H2 and H3. This change does that.\nThe latest version of the draft states: There is no such error either in or in . Perhaps HTTP\/3's H3FRAMEERROR was meant?\nDang, I worked from memory and should've spent 2 minutes for checking. I'll get this fixed up thanks."} +{"_id":"q-en-http-extensions-52dab21f40c62c2df66791f8818cfc2e2ce4d21772e788db4909067681047677","text":"details encryption quirks: encryption functions may produce different results with the same data this means that if I GET an encrypted resource twice, I could get two different digest values so the digest value is just good to validate the ongoing message and not for further processing of the data at rest, unless I store the encrypted data or use an algorithm which in turn might disclose information. 
This\nlooks good, just some nits."} +{"_id":"q-en-http-extensions-b5e9933f7f13fb24c5490550b0b11c96978b28dfe8ad5be722db4c0e1a1e0ac0","text":"[x] avoid ambiguous statement on digest protecting from compressed content-coding URL i[x] t can protect from intermediaries altering representation metadata (content-type, content-coding)"} +{"_id":"q-en-http-extensions-994fb12896834e69846ebf4db98947459993ab9f3e13d93bc3f93d566c549de2","text":"Replaced unstructured header with and Dictionary Structured Header Fields. Defined content identifiers for individual Dictionary members, e.g., . Defined content identifiers for first N members of a List, e.g., . A signature in a request now looks like this:\nI like this PR! We just need some tweaks, but it greatly improves the way the spec can be used.\nThis PR also fixes . Should we remove that appendix in this PR?\nRedefine the Signature header field to use . This will simplify both the definition and the parsing of this header field value.\nThe Covered Content list, Algorithm, and Verification Key identifier should all be part of the Signature Input, to protect against malicious changes.\nThe Covered Content list contains identifiers for more than just headers, so the parameter name is no longer appropriate. Some alternatives: \"content\", \"signed-content\", \"covered-content\".\nI like the direction this PR is taking: spec is more focused because it outsources spec of field format to draft-kamp-httpbis-structure dictionary format is more flexible and allows multiple signatures (important)"} +{"_id":"q-en-http-extensions-a2266195408c4b4c04c2f512502798d186f2de5dff2fdd4f966d7960225f1254","text":"separates MESSAGING and SEMANTICS concepts (Eg. fields)\nThanks! :tada:\nThe spec relies only on HTTP Semantics and not on HTTP\/1.1 messaging. [ ] improve wording [ ] switch to http-core Check all references to Messaging and consider replacing them using Semantic. See URL\nNAME NAME NAME I was re-reading the spec after the more recent filed issues, and I think that switching this draft to HTTP Core terminology (eg. I-D.ietf-httpbis-semantics ) we could have a more consistent background for taking the right decisions about things like transformations, intermediaries.\nThis should be taken care of with , NAME please re-open with comment if there are more things to be done."} +{"_id":"q-en-http-extensions-ec1060f1650446cc2b2651cb930a1ed0ff50e7a7fb1401edf4a276d349f6b51e","text":"reuses existing definitions for S-F and Unix time.\nDoesn't make this PR redundant? (or regressive, really, because references the exact definitions to use - and from structured headers)\nThis probably"} +{"_id":"q-en-http-extensions-169b12bb890628b685670bf27d7bcf750b7c871136b7d3dd9b14c6077e9858ee","text":"removes algorithm specific rules for content identifiers Current layout is not backward compatible with http-signatures, so it's safe to do it.\nThe rules that restrict when the signer can or must include certain identifiers appear to be related to the pseudo-revving of the Cavage draft that happened when the algorithm was introduced. We should drop these rules, as it can be expected that anyone implementing this draft will support all content identifiers."} +{"_id":"q-en-http-extensions-8fffec994eb7622d2141aff0c427c2b0a34bc8a82edc6c9311844e433c973c07","text":"includes security considerations from\nNAME probably those considerations are redundant: implementers know that computing digest has a cost. 
Do you think it's worth spending more time to land this PR or we can close it?\nI'd go with a simpler statement such as or say nothing.\nNAME this PR contains your proposal.\nA threat model for . is this useful\/required? should this go into Security Considerations or in another document?\nUser User-Agent: see RFC7230 Intermediaries that do not do TLS termination Intermediaries that do TLS termination.Application: the application actually running the service Intermediaries can alter HTTP messages Intermediaries that do transport-layer termination can alter HTTP messages The application can erroneously or purposedly alter the message User can forge messages with the same checksum values An intermediary can alter an HTTP header by chance or purpose. [x] Use an end-to-end transport-layer security mechanism. A TLS-terminator intermediary can alter representation-metadata by chance or purpose. [x] Use a signature mechanism that covers both and the representation-metadata. A TLS-terminator intermediary can replace the representation-data with another one having the same digest-algorithm. A malicious User can forge two messages with: different representation-data same Digest header values send an HTTP message with the first representation-data then pretend he sent the one with the second representation-data. A malicious Server could implement a similar behavior. [x] Use a digest-algorithm that is not subject to collision (eg. sha-256), or refuse to use digest-algorithms subject to collision. A User could exhaust server resources by chance or purpose via: 1- sending a huge representation 2- digest calculation of the complete representation 3- forcing the server to evaluate the digest of the included representation [x] To consume huge representation-data, you may consider mechanisms sending progressive integrity checksums in the payload body and just use to provide the last checksum value. A malicious User could exhaust server resources via: 1- reconstruction of the complete representation 2- digest calculation of the complete representation 3- forcing the server to evaluate many different digest-algorithms for a given representation If you are exposed to this kind of attack because reconstructing complete representations of your objects is computationally intensive, avoid using [ ] Digest with mechanisms like Range-Requests and PATCH. If you support multiple digest-algorithms: [ ] limit the supported digest-algorithms to the ones you really care for; [ ] establish a policy for validating only the digest using the stronger digest-algorithm. The recipient may skip Digest header validation by chance or purpose and still be compliant to the specification: as explained in {#digest-header} the recipient MAY ignore any or all the received representation-data-digests. This is because Digest was originally conceived in RFC3230 as a tool for the recipient to validate the received representation, but not for the sender to receive an integrity proof of the communication. [ ] You should not rely on the recipient having correctly verified the Digest value of the selected representation. If you want the recipient to validate the : [ ] agree on processing with your peer, like it is done in\nLet's see if we can incorporate some of this into the security considerations.\nall considerations should be inside the I-D once is merged\nThank, this works for me. 
Let's merge and we can iterate later if anyone thinks of something to add."} +{"_id":"q-en-http-extensions-006e18f5ba5e13c5a8888555e82a9d52e3315859128be6a5f3a1e56b8bcb9382","text":"It looks like this was forgotten in , when unknown values were switched from None to Default. \/cc NAME this caused us some confusion over in Go, if you have time I'd appreciate hearing what you think we should do in golang\/go. Thank you!\nThanks! LGTM."} +{"_id":"q-en-http-extensions-9d45a5a24d7f6ce205c29c6c65f02007f438d19079d708774053d234d6e0501d","text":"Fixes field ordering reference\nIn 12.6: \"Any mangling of Digest, including de-duplication of representation-data-digest values or combining different field values (see 5.3.1 of [SEMANTICS]) might affect signature validation.\" But 5.3.1 is \"Request\". (note that I didn't catch this earlier because it doesn't use the \"Section X of [Y]\" pattern; this should be gixed as well)"} +{"_id":"q-en-http-extensions-33c7be0074d24f2f5c6b262b4a5358a1ffff60ca8399b495d10925958da92b3d","text":"This ended up being more changes than I first intended but I think most of them are uncontroversial and simply improve consistency through the document.\nLGTM. Thanks NAME and Happy New Year!"} +{"_id":"q-en-http-extensions-d4c3eae8412d74e754d98bc02ed543750083eaa4a5ae47f183ebdd9189d5aa83","text":"This is a proposal to address issue \/cc NAME\nI've updated the graph in section 2 to note the new range. PTAL.\nEach record currently includes between 1 and 256 octets of padding, while the default record size is set to 4096 octets. This is in many cases not sufficient, for example in the pad-to-next-power-of-two case. Extending the padding length to two octets will allow up to 65537 octets (65KB) of padding. I raised this on the ietf-http-wg list here: URL Martin about an upgrade path for existing users. I'll reply on the mailing list to not split the discussion.\nThis was fixed in and ."} +{"_id":"q-en-http-extensions-ae74dd44b67333a1ac28e6faf876688bf56b5831b27b16295f34766798e4239c","text":"The I-D contains the following TODO we might add a SETTINGS parameter that indicates the next hop that the connection is NOT coalesced (see URL). I'd like to track that discussion here. Namely, with all the changes in the meantime, does NAME think that there is still a problem that needs fixing? OR can we close with no action?\nI don't think anything additional is needed. We've got the two standard header options to indicate an intermediary is involved, that was the main point of the original discussion in Singapore IIRC. I think this won't matter all that much in practice, given warmed up edge-to-origin connections, pops being able to serve cached resources in between, etc. Origin owners really concerned with performance will know if their setup is coalescing or not (then the question is if origin server software allows tweaking prioritization behavior, but that's a general concern). CDN owners seeing bad origin behaviour will probably just overwrite the priority information instead of forwarding in practice. 
In short, I think the current text should suffice here, as I don't think many servers would support switching to RR when seeing an explicit \"warning: coalescing intermediary\" signal either way.\nThanks NAME In that case I've created to simply remove the TODO text, which will act to resolve this issues."} +{"_id":"q-en-http-extensions-b2cf497014fb257f5340d4f3d9041da9c7ee48614c34c4d0af8dff8a7044fd86","text":"In order to allow existing applications to update from one octet of padding per record to two (PR , Issue ), this PR renames \"aesgcm128\" to \"aesgcm\" as the value for the Content-Encoding header, as well as the parameter name in the Crypto-Key header. This also affects the context used for deriving the nonce and the content encryption key. Examples have been updated. What I did not update is the JWE mapping (currently \"A128GCM\"). Do we wish to change that to \"AGCM\", which may be less descriptive?\nNAME - for your consideration I updated the hashes, but would like you to verify those when you've got a moment (they include the uint8 -> uint16 change). I wasn't able to reproduce the output of the explicit example using the current spec.\nAwesome. I'm still awaiting the return of my implementation (I'm annoyed that I put it on a boat), but I will verify when able. It's entirely possible that one (or more) of the examples are in error.\nI still need to double-check the examples, but I won't be able to do that for a while now. Stupid travel. Might as well make progress at least. Thanks for putting this all together NAME\nEach record currently includes between 1 and 256 octets of padding, while the default record size is set to 4096 octets. This is in many cases not sufficient, for example in the pad-to-next-power-of-two case. Extending the padding length to two octets will allow up to 65537 octets (65KB) of padding. I raised this on the ietf-http-wg list here: URL Martin about an upgrade path for existing users. I'll reply on the mailing list to not split the discussion.\nThis was fixed in and ."} +{"_id":"q-en-http-extensions-69ea8ad1848b0de4a0e4ada12d3832eb13b179029fc7e93bab53d2fe609f270c","text":"Condensed version of .\nThanks, this looks good to me. It provides a more formal scaffold for the intent that the editors (and design team before them) had in mind for this scheme's extensibility. I do wonder if we might want to name the registry as the slightly more verbose \"extensible priority parameters\" but since we are not yet establishing a registry it is easy to iterate on that."} +{"_id":"q-en-http-extensions-880e72d6265755946647d19fc756be3e58cb8266ea3a14eb49508aa2b951cdba","text":"Clarifies . 1- + HEAD is used to get a resource checksum 2- no Digest in request for clarity 3- only use sha-256 to focus on the actual example.\nDigest can be sent in requests and response Nope, in the example Digest is both in the request and in the response. See the hidden 607 line URL When used in conjuction with signatures, the client that want to certify that no-payload was attached will: compute the Digest of an empty string sign that Digest header with some signature mechanism attach the signature and digest to the request send the request with no payload data The server will then: verify the signature according to the chosen mechanism if no payload is provided, the Digest is the one of an empty string eventually store the signature to prove that the request had no payload Clearly the signature mechanism should provide enough information to define the semantic of the request (eg. 
the method and all relevant headers). HTH, R.\nWell. So the Digest is for the empty request payload. Maybe it would be good to state this in the spec? (That said, this is IMHO a very misleading example, as the HTTP spec is very clear that GET\/HEAD request bodies (\"A payload within a HEAD request message has no defined semantics; sending a payload body on a HEAD request might cause some existing implementations to reject the request.\"). It would probably better to have this example on a request that actually can take meaningful payload...\n+1\nNAME PTAL :)\nI'll repeat what I said earlier: \"That said, this is IMHO a very misleading example, as the HTTP spec is very clear that GET\/HEAD request bodies (\"A payload within a HEAD request message has no defined semantics; sending a payload body on a HEAD request might cause some existing implementations to reject the request.\"). It would probably better to have this example on a request that actually can take meaningful payload...\"\nIn order to make progress here, I suggest we simplify the example to remove the Request digest. That keeps the example focused on it's name \"Server Returns No Representation Data\".\nNAME removed Digest from the HEAD request.\nNAME PTAL ;)\nNAME PTAL :)\nYAY :rocket:\nIn 10.2 Server Returns No Representation Data: what does the Digest request header field on the HEAD request refer to, and what is the recipient supposed to do with it? there's only one digest in the response, but the prose talks about two\nRequests without payload data can still send a field applying the digest-algorithm to an empty representation. The recipient can validate that the digest-value matches the one of an empty representation or not. True. this is confusing, should reference the previous example.:"} +{"_id":"q-en-http-extensions-863ff2050598ec5fc987a565dd09316b184e7fd092e3ec261179dfcbdeea1883","text":"references RFC6515 and 6194 for deprecating md5 and sha1 Since there are RFCs, we could even skip explaining that those algorithms are vulnerable to collisions.\ncc: NAME\nNAME feel free to PR for the rest of the algorithms :P\nIn s12.2, should refer (informatively) to and for IETF references that deprecate the use of md5 and sha1, respectively.\nThanks for the pointers, sgtm\nYou beat me to the PR!"} +{"_id":"q-en-http-extensions-c28d451880f47e3ba09cd2bc7254647f2e2fcc128e9de749aa14d887fa3120de","text":"As discussed; a quick pass to trim down RTCweb stuff, etc.\nThis looks ok to me, NAME NAME does this look OK ?\nContent looks good to me. , Martin Thomson EMAIL wrote:\nAlso looks good to me. Should I now merge the pull request and submit the draft?"} +{"_id":"q-en-http-extensions-88172470fc093ac7da27cbce42cbce6a12e75923ffee94861e6a0cdd1e4d3ab1","text":"Signed-off-by: Piotr Sikora\nSo, what's the difference to , then? Should we just remove perhaps?\ncan be also sent if the connection is terminated before or when sending HTTP request. is more generic and can be sent in response to other events as well (e.g. read\/write timeouts). But more importantly, either can be sent in response to the same event (i.e. connection terminated when receiving HTTP response), depending on the existing proxy implementation or various policies, i.e. 
\"proxy should be as specific as it wants to be\" from your last email."} +{"_id":"q-en-http-extensions-416661731e191b934aa7d97d07d48c02ada1b6f94b702f49e68d3c734e80bfec","text":"This removes the in-document \"issues\" list and background introduction note paragraph, in line with the document's status as a working group draft backed with an issue tracker. All removed items should have corresponding issues tagged with in this repository's issue tracker (some have already been closed), so keeping them in the document was redundant and artificially inflated the page count.\nNAME good idea, note added to the top, following the pattern in other drafts\nLGTM. You might want to add a note at the top directing folks to the issues list, etc.; see other drafts."} +{"_id":"q-en-http-extensions-f8e2458d91d1cc1b0be56b04ee5437de4352805191120d0c7778c7982040174d","text":"Use \"content\" instead of \"payload data\".\nto reference semantics-14, including \"payload data\" --> \"content\"."} +{"_id":"q-en-http-extensions-f740bdb3a7eb728a0289fd342286a0217803b969471f80db46e7a46f7f26eb4a","text":"I think this is a matter of taste, so I'm inclined to leave it as is. Could you please update your PRs to make the style consistent and back out these changes?\nConsistent with what? There are two suggested forms of references (\"Section x of [FOO]\" and \"[FOO], Section x\"). Both have their legitimate uses. What doesn't work for me is using the second form inside a sentence without any delimitation, because it introduces a comma in the sentence structure where it does not be belong."} +{"_id":"q-en-http-extensions-4094e9d0ee802813aec16fc4522cd6028ef0786ec0a6ee0c0a284790ad576800","text":"clarifies that you can always ignore you can ignore when using SRI if you validate both, it is nice to provide useful errors\nMe neither, but this comes from RFC3230.\nWell. This is a standards track doc developed in an IETF WG. If we want to change things, we can.\n\"There is a chance that a user agent supporting both mechanisms may find one validates successfully while the other fails. This document specifies no requirements or guidance for user agents that experience such cases.\" This makes it sound almost as if it could be ok to ignore an error on one level if there's no error on the other level. I think the spec should say that a user agent supporting both mechanisms must implement all requirements of both specs, no matter what.\nThanks Julian. I agree and will make a PR to tighten up the language.\nNAME should we really specify what a SRI implementor should do? RFC3230 explicitly states that URL and we are not changing this.\nThat doesn't seem to be producing a list in the actual Internet Draft; see URL - maybe an indentation problem."} +{"_id":"q-en-http-extensions-006a083acd555da140af2647640f3f36cf12e0624602b921dff470d9b2bc4e5f","text":"content-type \/ media type header \/ header field\nThis should include updating the core spec references; I will update this PR accordingly unless you get to that before me.\nSee"} +{"_id":"q-en-http-extensions-02317514ba155a7696739d62be13d5d5654c18afb776244d32f89d1059990756","text":"This change is done while ignoring the fact that we might want a bigger document reshuffle when bringing in a \"content-digest\" header. But it helps illustrate how the abstract \/ intro could be editorialized to address some off-list feedback.\nLGTM :)"} +{"_id":"q-en-http-extensions-71f3322d484cdeadffd8c79ddb2ef31a737a3082f69572789d76bfc9e7623778","text":"[x] 3. 
It's probably worth mentioning that they're comma-separated in prose. [ ] 3. I'm uncomfortable with relying on examples to specify how this spec works normatively; by their nature, examples are not 'comprehensive.' While this spec shouldn't re-specify HTTP, the header's definition should be clear enough that a reader doesn't have to dig around in the examples and other specs to understand what's going on. [x] 3. Disagreeing plurals in 'an incremental digest-algorithms'. [x] 4. Is Want-Digest a request field, a response field, or both? In either or both cases, what's the scope of the assertion? [x] * 4. A reference to SEMANTICS 12.4.2 is necessary."} +{"_id":"q-en-http-extensions-f3ca894770516a07314b29302975bc3b908045e50461e464ff4f0e3b1e9d79a7","text":"[x] 5. I think you should come out and say that digest-algorithm values MUST be compared in a case-insensitive fashion. [x] 5. 'The IANA acts a registry' reads oddly; it maintains the registry. Also, saying 'the registry contains the tokens listed below' is misleading; these are initial registrations, which can be modified and added to after the RFC is published. [x] 5. There should not be RFC2219-MUSTs in IANA registry text. [x] This can be addressed by having a requirement similar to \"Deprecated digest algorithms MUST NOT be used.\"\n[ ] 5. Also, is the entire checksum value case-insensitive, or just the token identifying the value? NAME the checksum value is defined by the digest-algorithm, which defines its case. [ ] 5- Also, these might be better placed in IANA Considerations. NAME can we just drop any mention of IANA in this section? I like it. [x] * 5. The ability to have a digest algorithm that ignores content-codings is something of a surprise after reading section 2; it needs to be at least mentioned up there. NAME it is stated in the introduction URL . how would you specify it?\nlgtm but keen to see mnot's approval too"} +{"_id":"q-en-http-extensions-dfb15a689489ea0295f95ebd547dd987b89e240bc49ff6f6c3bcf8673484fc2e","text":"[x] 3. It's probably worth mentioning that they're comma-separated in prose. [x] 3. Disagreeing plurals in 'an incremental digest-algorithms'. [x] 4. Is Want-Digest a request field, a response field, or both? In either or both cases, what's the scope of the assertion? [x] 4. A reference to SEMANTICS 12.4.2 is necessary. See"} +{"_id":"q-en-http-extensions-e28c50c95c949b3e30952542db7b5c934cc12b4b1ae009c0b6d1ed2724fc7ff9","text":"moved normative parts in warn that trailers can be dropped by intermediaries.\n[x] 12.7. If this is mergeable into headers when it occurs as a trailer, that needs to be said earlier in the spec, not in security considerations. [x] Also, you can't prohibit intermediaries from dropping the trailer; they're explicitly allowed to do so for all trailers."} +{"_id":"q-en-http-extensions-61f19f306f332984f88ea1788b589eb6436f6f8fe391e693041ea7487a70fe5a","text":"Define deprecation of H2 priority as action, rather than defining it as a concept of \"deprecation\" then using that term to define the action. Probably resolves the first half of .\nMerging this as an improvement, even if might have more to play out.\nWhile I remain annoyed by the polarity of this setting for primarily aesthetic reasons, the PR is fine."} +{"_id":"q-en-http-extensions-393843685df3b9129c486f2b4075ef6f3d0bf849bcaf540cad2b48686117c3b0","text":"The IANA Cookie Attribute Registry does not exist yet. It is requested that IANA create it. Clarify wording to reflect this. 
Addresses\nThe section titled \"Cookie Attribute Registry\" has a link to the attribute registry maintained by the IANA. This link is broken. Moreso, a cursory look through the IANA site doesn't turn up any similar pages suggesting that this section is obsolete.\nThe registry does not exist yet. The text about it was added in URL The prose should actually tell IANA that the registry needs to be created, not \"updated\".\nAddressed by"} +{"_id":"q-en-http-extensions-789a9b05dbd11e0520570c133617ce7af2a8125e1b06d443305b67ac376073a9","text":"This was languishing in a PR within my repo, but since we've merged the actual draft, we can promote this to a PR in the main repo as well. (Yes, I'll rename the tag if we merge this as-is.)"} +{"_id":"q-en-http-extensions-ae7ae29ace453f7b6a1f89125192a85a2d1e8a63754b464b516033d01ecd9666","text":"I've opened issues for the things that were previously TODO comments in the draft; this removes them from the draft itself.\nIssues and for posterity\nNAME wrote: Public Keys is maybe worth considering. This too is this is an area for further discussion and consideration should this draft progress.\nMy take: keep the scope of this narrow. If raw keys are needed - or other certificate types for that matter - then we can define another field.\nI concur with Martin's take. The TODO made its way into the individual draft (and subsequently became this issue) based largely on this brief suggestion in one part of an off-list email on the topic, \"While I do not know of a use case yet, it might be worth keeping raw public keys in mind as a use case.\" Which is not unreasonable but I'm going to favor narrow scope here over speculative need.\nNAME wrote: might be needed\/wanted by the backend application (to independently evaluate the cert chain, for example, although that seems like it would be terribly inefficient) and that any intermediates as well as the root should also be somehow conveyed, which is an area for further discussion should this draft progress. One potential approach suggested by a few folks is to allow some configurability in what is sent along with maybe a prefix token to indicate what's being sent - something like Client-Cert: FULL \\ \\ \\Client-Cert: EE \\ as the strawman. Or a perhaps a parameter or other construct of {{?RFC8941}} to indicate what's being sent. It's also been suggested that the end-entity certificate by itself might sometimes be too big (esp. e.g., with some post-quantum signature schemes). Hard to account for it both being too much data and not enough data at the same time. But potentially opening up configuration options to send only specific attribute(s) from the client certificate is a possibility for that. In the author's humble opinion the end-entity certificate by itself strikes a good balance for the vast majority of needs and avoids optionality. But, again, this is an area for further discussion should this draft progress.\nI would propose a separate header for the chain that would use the structure List value format.\nI'd also argue strongly for enabling the certificate chain (even if as a list of sha256 checksums of entities in the chain). End-entity certificates are effectively namespaced by the rest of the chain, so not including the full chain and\/or not tying the end-entity cert to the chain are some of the top foot-guns I've found with people using this.\nI like NAME suggestion: separating the EE certificate out is sensible. 
The assumption here is that the terminating proxy validates the signature from the EE cert when establishing the connection. That makes it different. That said, the origin server might need to assemble and validate the chain, so including all the data presented in TLS is likely necessary. (I don't think hashes are as generically useful; space savings can be achieved with header compression, assuming that you use VERY large tables.)\nFrom a header compression standpoint, I suspect we should note that multiple instances of the header will compress more efficiently than a single instance with a long list, assuming there are multiple possible intermediate cert paths.\nNAME wrote: http-core. This should be pretty simple to add, but for the purposes of having a clean PR pulling out the TODOs, I'm moving it to an issue for now anyway.\nURL points to draft-ietf-httpbis-semantics for this\nNAME wrote: possible with HTTP1.1 and maybe needs to be discussed explicitly here or somewhere in this document? Naively I'd say that the header will be sent with the data of the most recent client cert anytime after renegotiation or post-handshake auth. And only for requests that are fully covered by the cert but that in practice making the determination of where exactly in the application data the cert messages arrived is hard to impossible so it'll be a best effort kind of thing."} +{"_id":"q-en-http-extensions-2351f8fb4b7874155bfa0fe5180e7338684fbbfd6985f3c3337027dda851f8cd","text":"This text should be more precise about the definition of different signature parameters, including inputs to PSS. All examples have been updated. Examples for ECDSA and HMAC have been added."} +{"_id":"q-en-http-extensions-4940a57e9cf1844b167269546088655aac9c420dd0aaf287b122b92e15bd22bb","text":"Remove the list prefix parameter operation on structured headers. This was previously added to allow for signing of a subset of combined headers, but since the combined header value cannot be considered a structured header value, the prefix functionality did not ultimately serve this purpose. However, common cases such as adding additional headers to a request could still be covered by a different mechanism. Note that dictionary key indexing is still allowed and facilitates multi-signature use cases.\n(emphasis added by me) That sounds as if a subset of a list is not a valid list? How so?"} +{"_id":"q-en-http-extensions-438c11c450c9a86a241347cb2fe0f60ae9feb235708034418047725b79b286ce","text":"This can be relaxed to XML with a document element in the \"DAV:\" namespace, or even to the two element names mentioned in Section 2.2.2 of [RFC5323]."} +{"_id":"q-en-http-extensions-417ec9faf3c7eecb02aa0b401a6662d69ffb13586b286afed8943537d5e22e86","text":"says: 'deprecated' isn't the right word here; perhaps 'disabled by a peer'? Also, here and a few other places refer to 'the HTTP\/2 priority signals' without enumerating them. To be precise, that should be done somewhere. I think that means: The PRIORITY flag in HEADERS and its associated Exclusive flag, Stream Dependency field, and Weight field The PRIORITY frame type\nHow about instead \"once a client learns that the server supports this priority scheme, it SHOULD stop sending HTTP\/2 priority signals\" After all, this spec can only really have something to say about the stuff it defines and the h2 priority stuff. (Plus what Mark says about being very clear about what \"HTTP\/2 priority signals\" means.\nNAME I like the suggestion but that does not work as is. 
The draft defines two separate concepts: an HTTP\/2 SETTINGS flag that allows an endpoint to deprecate the use of H2 priorities, and a different prioritization scheme. While the two are expected to be used together on HTTP\/2, they aren't required to. defines the SETTINGSDEPRECATEHTTP2PRIORITIES flag as a way to opt out of using the HTTP\/2 priority scheme, in favor of using an alternative such as the scheme defined in this specification_. That said, I tend to wonder what NAME and NAME are suggesting is that the current draft is a bit verbose. At the moment, it uses to define what \"deprecation\" means, then the term to define how the client acts when the peer \"deprecates.\" Hence . I would appreciate your reviews. Makes sense.\nThat raises a different issue. The setting is phrased in the negative, where it could be positive. That is, I support priority mechanism X. That works equally well and doesn't have issues in the case that we have to design a third scheme.\nYes, I think I would prefer a model of advertising support for (potentially multiple) schemes and then saying that the client SHOULD NOT send more than one scheme on a given connection.\nNAME NAME While I do not oppose strongly, I think there are number of things that we have to consider: Technically, we can create a mechanism that allows the two sides of a hop agree on one prioritization scheme. However, that's going to be a bit complicated, because SETTINGS is a frame that is sent by the two endpoints before receiving one from the peer. Therefore, the minimal mechanism would be something like each endpoint sending a list of schemes that it supports, and then both endpoint choose a scheme from a intersection of the two lists using a predefined logic (e.g., client's list of schemes are ordered by its preference, and the most preferred one also supported by the server will be selected). But before designing a negotiation scheme, I think it's worth asking one question: do we want a negotiation scheme for HTTP\/3? If the answer is no, I'm not sure if we need something more than a deprecation scheme for HTTP\/2, when what we are trying to do is a back port from HTTP\/3. Even with HTTP\/2, there's isn't a necessity to negotiate the prioritization scheme. IIRC, the two reasons we introduced the deprecation flag (which IMO is an optimization) are: to reduce the burden of setting up the h2 prioritization tree on the server-side to conserve bandwidth All that said, I think my argument is that having deprecation flag is a good way to punt to the future the cost of designing a solution that we might not need. Because the deprecation flag makes h2 and h3 on par. If we find that we need to replace the new prioritization scheme in the future, then we could introduce a negotiation scheme that works for both H2 and H3. Note also that the new prioritization scheme is extensible.\nNot needing to fully specify how they're selected right now is certainly a valid point. But the only difference between a deprecation flag and an indication of support is the orientation of the default value. In HTTP\/2, the orientation is that the peer might support that feature, but many don't. 
So there's value in being explicit both directions; the honest choice for default is \"don't know.\" Indicating support for each scheme gives us more leeway in the future, and we can recommend that implementations which don't support the old scheme explicitly disavow it.\nRegarding default, I tend to believe that it should be \"supportsh2priorities,\" because clients should continue sending h2 priorities to h2 servers that do not implement the new extension. That leaves us to the discussion of orientation. I do not have a strong opinion, though IIRC, some argued that they prefer default value being 0 rather than 1. That's how we ended up with a \"deprecate\" flag than \"preferred\" flag.\nI agree with all of Kazuho's points. We explored something different in the past (e.g. URL) but ended up with what we have today, and I don't think there has been significant new information that would change my opinion.\nNAME PTAL at this PR that addresses your comment about describing the signals URL\nThat looks good, but I wonder if 2.1 should explicitly reference that for the definition of h2 priority signals."} +{"_id":"q-en-http-extensions-51b35e126208437c3b721dffb4318c46cdaa752abadd56b0df9f75df0470928d","text":"This minor nit was the only thing that jumped out at me during a read-through before the interim."} +{"_id":"q-en-http-extensions-831eda37204e5087dff66115e7b592cd17912dbc7d68d411a49a91d6ca653c29","text":"This PR talks about the server-side recommendation discussed in URL\nThis PR really just documents an existing consideration that was quielty overlooked, so I think it's good to merge whenever we feel like\nAgreed. Let's merge and see."} +{"_id":"q-en-http-extensions-855d1155d6430af6d3c0b76ba30a88098b20e18d11e85df78c61f9d4ee87f2e5","text":"This adds several new specialty identifiers for requests and responses to cover the method, authority, scheme, origin, path, query, and status code values of the HTTP message. It also edits the identifier's associated value to remove the verb and the string construction rules and instead align with HTTP Semantics.\nNAME I haven't added references in yet but I tried to move the language in the direction you are suggesting. NAME I added the target-uri field we discussed, please review the text.\nResponded to comments and updated normalization rules, please re-review.\nFolding in this PR to the main document, discussion can continue on specifics of identifiers and their values as needed."} +{"_id":"q-en-http-extensions-90864b1681a93bc9e129c983b7c537f1931939a56315b39806dd2557aede7134","text":"This creates a mechanism to cryptographically bind a response to its associated request by letting the responder sign the signature value of the request, and the requester validate the signature based on their original request's signature."} +{"_id":"q-en-http-extensions-78d85abe55a23af2857b43ffce55e4d7963825c0df1c4d56ec24342743662c58","text":"Second round on this one, small tweaks and a small amount of restructuring.\nLGTM, made a few smallish comments"} +{"_id":"q-en-http-extensions-7c8c159f0203fe8e219431a3478e28b7de2fafdb7a3fbfd1480a42346975af92","text":"We're keeing reprioritization, no need for this TBD any longer.\nThank you for noticing it."} +{"_id":"q-en-http-extensions-e3a965945c60d7ccd5d822af99602c76d02fde0c91de4429b45fd61e69d5c1a3","text":"Addresses\nNAME NAME PTAL\nRFC6265bis mentions the cookie length constraints in relation to parsing the Set-Cookie headers but doesn't in the more general Storage Model section. 
From chatting with NAME about this, it'd be desirable to have these constraints be listed in that section as well. Since there are other ways cookies can come in to the cookie store (Ex: Cookie Store API), having these constraints be enforced when adding cookies from any source to the cookie store would better ensure cookie format consistency."} +{"_id":"q-en-http-extensions-03eecafb2402c7521359e73c77f2be5b2b76e2cd245736ab3c2b118d9718c67b","text":"Thank you for the fix.\nFWIW, I would prefer s\/MAY\/can\/ - there is no normative effect of a choice. Originally posted by NAME in URL"} +{"_id":"q-en-http-extensions-6ee11d9663819753a089f0c42af5509c213c8bd6e6aeaca05d9a2379a6b99819","text":"This excludes HTAB (%x09) from the CTL characters that normally cause cookie rejection. HTAB is considered whitespace and is handled separately at a later step in the algorithm. Related to\nNAME NAME PTAL.\nLGTM\nIn URL, RFC 6265bis was modified to specify truncation of set-cookie-lines at the first {CR, LF, NUL} byte. This is consistent with Chrome's current behavior. NAME that this may enable an attack where an attacker may inject a CR, LF, or NUL byte into a cookie value to cause its truncation, thus changing the value of the cookie. Investigate interop\/web compatibility and consider rejecting all cookies containing any control character (rather than truncating).\nAddressed by and .\nThanks NAME Do you know if tests have been updated as well and implementations bugs have been filed against browsers that do not do this (yet)?\nNAME I just submitted for review an update to the tests, and will soon file bugs based on the findings from running those. I'll post back here with links to any tickets I open. Thanks for asking about this For reference: URL\nNAME FYI, here are the corresponding bugs: URL ​​URL URL\nThanks NAME for your work on this!\nlgtm, thanks"} +{"_id":"q-en-http-extensions-d0ac246649570782c3831b1de925cc3fbc245f891b3609a59c70908e794348ad","text":"Minor editorial cleanups, including: added client cert example updated multiple signature example prepare for next revision"} +{"_id":"q-en-http-extensions-b491e6dbe61ef61b2440d8f7544592224ab577ba89f108c5a27f553d2857ecb1","text":"wouldn't the \"Proxy-Status\" header field require an action in the IANA considerations section to register the header field itself? currently, the only things in this section are the creation of new registries required for the header field.\na la rewriting, invalidation controls, etc. See"} +{"_id":"q-en-http-extensions-d41b0b903b218ec5b521d5fe66df34093e76eaf3bc83c019caf71481450ba6d0","text":"I think that I caught everything in the first round of discussion. There aren't any substantial changes here, but it's a PR so that others can confirm."} +{"_id":"q-en-http-extensions-b7a575a56d65ae3657ca0c4ebf5bdf0469941183377e4cc353b9be312376c380","text":"I noticed that parameters are generally defined in terms of what happened to generate this response, whether or not the next hop was actually contacted. The exception is , which says: I think it's potentially valuable for a response to indicate what protocol it was fetched over even if it comes from cache. The remedy would be to remove that statement and align this with the other parameters by talking about 'this response'. Probably also making that more prominent in . Thoughts, NAME\nThanks!"} +{"_id":"q-en-http-extensions-1e06d6125a1bc61cfe34bc1139ace4c3c8eba6ca0928beb75cef969f26e03641","text":"The changes here are relatively small, but please ensure I didn't miss anything. 
Because not all HTTP\/3 streams are framed, I restated \"on any other stream\" to specify \"request and push streams,\" language which I believe remains appropriate for HTTP\/2. In order to keep the frames as identical as possible, I haven't restructured the H3 version with varints, but that's certainly an option. It would require defining the frame payloads separately, but that's possible.\nLGTM. Nice and simple. Suggestions are all things you can ignore."} +{"_id":"q-en-http-extensions-ed13410e3907c71304f007a771a5903fa800c249aac1a17dfcc8d27f5783eb6a","text":"add registration policy cc: NAME and NAME to URL Using RFC policy avoids the issues we had to face to refresh the current table. See\nNAME now it's Specification Required :)\nThis specification obsoletes 3230 and updates the IANA registry to point to it, but doesn't specify what the registration policy actually is.\nRelated to NAME says\nSee\nNAME do we need ? [x] 5. 'The IANA acts a registry' reads oddly; it maintains the registry. Also, saying 'the registry contains the tokens listed below' is misleading; these are initial registrations, which can be modified and added to after the RFC is published. Also, these might be better placed in IANA Considerations.[x] 5. There should not be RFC2219-MUSTs in IANA registry text. This can be addressed by having a requirement similar to \"Deprecated digest algorithms MUST NOT be used.\"[ ] 5. The ability to have a digest algorithm that ignores content-codings is something of a surprise after reading section 2; it needs to be at least mentioned up there.\nI'm sure I made this comment somewhere but I've lost it. I don't think we should mandate RFC required, that a very high bar for algorithms (and their users) that most likely developed the algoithm elsewhere already. Instead seem like a better option here\nNAME imho registering new algos should be done via an RFC, this doesn't imply that those algorithms should be defined in an RFC. While it is fair to delegate the registration to an external specification, I think the approach of an RFC (eg URL ) is a lot more clean and improves the overall interoperability. It is important to note that any type of RFC is sufficient (currently Standards Track, BCP, Informational, Experimental, or Historic). In any case, I don't think this should be a game-stopper, so if there's general consensus on \"Specification Required\", that's ok.\nI don't think running an RFC process can provide any meaningful value to the registration requirements. I mean, that's going to add like a year of overhead to something as simple as \"use algorithm $foo with token , the output is encoded using base64\".\nThe key to Specification Required is the Designated Experts. If they do they job you basically get what you need. If there's any question from the DEs they can always kick it back out for further review.\nNAME Seems that NAME is on the same page URL ;) So let it be Specification Required! Move on :)"} +{"_id":"q-en-http-extensions-89725e36b3b2b428f1557af0d572c78f022f7714fe5c6d0d7689b107788a7a83","text":"This strengthens the language around Structured Fields so that it's clear they're preferred, and targeted fields are now primarily defined as structured fields. It still leaves using a Cache-Control parser as an option. If we want to go further, we could remove the paragraph starting with 'However,...' 
and the two bullet points below that.\nReverted in URL for the record\nRight now, parsing of targeted Cache-Control is specified like this: The idea here is that while we'd like the interop benefits of SF, we recognise that some -- perhaps many -- implementers will have a strong motivation to reuse their Cache-Control parsers. This seems like it might be the worst of both worlds, in that we now have two possible implementation strategies. Given that there are several now, I wonder whether should reconsider this. E.g., 1) Strengthen the first MAY to a SHOULD, making the field sf-first, or even 2) Require SF, don't allow reuse of CC parsers\nReopening to double-check that we still want to allow CC parsers.\nTo be clear: the remaining issue (as discussed at the interim) is whether we want to remove the text allowing parsers to be reused for this field.\nThis seems like the right balance: this is a profile of SF that a CC parser will accept, assuming that a CC parser won't choke on parameterization of \"no-cache\" or similar."} +{"_id":"q-en-http-extensions-7c110d5bf79f059d27c9b9f35251fd8f7113a603bca29a49301b3fa24770b5fa","text":"The \"Weak Integrity\" section correctly notes that non-secure cookies can be set by network attackers, and will be sent to secure origins. This commit notes that the prefix is a mitigation against that attack.\nNAME WDYT?\nLGTM, thanks!"} +{"_id":"q-en-http-extensions-f9cc03f1d52afaa07e164c40c2d056dd473df21b5a69f89e8d8e8af896fc3321","text":"Fully editorial; \"Application-Layer Protocol Negotiation\" instead of \"Application Layer Protocol Negotiation\". URL\nThis fills me with joy"} +{"_id":"q-en-http-extensions-aeeaf7e8599af64cdca2fe39a557fa2be343c0f45ea5a3e9dac8ab24a85d21c5","text":"To be fair, RFC 8441 says . Didn't you say you liked consistency? ;)\nRFC 8441 is wrong. Consistency with wrongness should be avoided."} +{"_id":"q-en-http-extensions-af669cf0e862c7383ba37c35827647b658734567b763fa25705433fd6b9269ab","text":"Tweak the phrasing of the abstract to make it a bit more clear that the document is adapting the mechanism not defining a mechanism.\nThere's nothing wrong with them per-se but in the space of 2 paragraphs the doc says This document describes the mechanism for HTTP\/3. Appendix A.3 of [HTTP3] describes the required updates for HTTP\/2 settings to be used with HTTP\/3. So is it a new mechanism, a setting update, a new setting, or what?\nGood point! That is a bit awkward. Check out and let me know what you think?"} +{"_id":"q-en-http-extensions-28b775742a585eab87a4bcbf3f6cb332255a19897756fa24d7f21ddc3d8fa84f","text":"The context here is that the document defines independent elements a) the means to deprecate RFC 7540 stream priority using a setting b) the extensible priorities scheme. This text removes conflating those elements and making it appear that Extensible priorities is mandatory. Where someone chooses to use Extensible priorities (good choice!) there is some specific guidance about how to process settings and inform signal setting.\nAbove, you indicate any scheme can be used, with this being just one option. 
This does make guidance about PRIOUPDATE frames\/header field more difficult later, so maybe some other textual adjustment is needed earlier: e.g., for the remainder of this text ,we assume the use of the scheme defined in this document as the alternate Or, move the following text from below up and adjust a bit send this scheme's priority signal Originally posted by NAME in URL\nThe text isn't amazingly clear if endpoints have to use Extensible priority when they explicitly disable H2 priority. I suggest we state they do. If anyone really wants to invent a new scheme, they will need to deal with negotiation.\nYeah I see two somewhat contradicting statements: Endpoints are expected to use an alternative, such as the scheme defined in this specification. Until the client receives the SETTINGS frame from the server, the client SHOULD send both the HTTP\/2 priority signals and the signals of this prioritization scheme (see Section 5 and Section 7.1). I would argue that the former is precise, and that the latter is a bit misleading in recommending endpoints send Extensible Priorities rather than any alternative to H2 priorities. The rationale is as follows. In principle, negotiation is a hop-by-hop mechanism and therefore cannot be used for negotiating the use of an end-to-end signal, which includes the header-field variant of Extensible Priorities. To paraphrase, in a world where end-to-end signals are used for prioritization, multiple signals are expected to co-exist. IMO, SETTINGSDEPRECATERFC7540_PRIORITIES is no more than an optimization, that allows clients to conserve bandwidth (by omitting the PRIORITY frames), servers to reduce complexity (by ignoring PRIORITY frames being received). These benefits are stated correctly in the section.\nThanks Kazuho. I tend to agree. The thing I also wanted to avoid is endpoints disabling RFC 7550 stream priority and then not providing any suitable replacement by accident. I think there is agreement that not doing scheduling is pretty bad. But we also highlight that other inputs, beyond this scheme and beyond anything that would be standardised are fine to use too. I'm getting an idea of how we could change the text to address the points made on this issue.\nReading again the current text send this scheme's priority signal. Handling of omitted signals is described in {{parameters}}. This is effectively a differently worded restatement of the guidance for implementers of Extensible Priorities receiving an HTTP request that does not carry these priority parameters, a server SHOULD act as if their default values were specified. That is, imagine both endpoints send SETTINGSDEPRECATERFC7540PRIORITIES and a client wants the default prioritization for a request (u=3,i). It can omit the Priority header field and have some expectation that a server implementing Extensible priorities will interpret that as a signal to apply the default. The simple action here is to make SETTINGSDEPRECATERFC7540PRIORITIES independent from any advice to use a specific alternative.\nLGTM. I like how you've split the section."} +{"_id":"q-en-http-extensions-30db47c24b2c64b7ee6c6a5ffa5386e6b8627ba4747969e54e1a7ab84a22aa7d","text":"RFC 7540 priorities are already deprecated.\nThat's the kind of thing I expect reviewers to question if we made a typo :) How about just changing the text to \"using a setting to disable RFC 7540 priorities was inspired by I-D.lassey-priority-setting? 
Alternatively, we could just drop the mention altogther, although I think the context is useful.\n\"disable\" WFM.\nProposed changes look good. In addition, I think we might want to change : For this case, maybe changing deprecate to depreciate would be the minimum required change."} +{"_id":"q-en-http-extensions-03f7bc9a1ff4a1c97140abdf4910ed57afce32686afe8db530823baa8bb48b0b","text":"This sentence in the abstract might be relevant for those who are caught up in the way that h2 originally signaled priorities, but for those who will read this document later, it adds nothing. It's a distraction. Stick to what the document does rather than what it doesn't do.\nI agree, thanks for the PR"} +{"_id":"q-en-http-extensions-9bbb5683cebc25b34a4ec9ced9d378ea19f5020e0c884f9aa184849f6f07c7ab","text":"by taking NAME suggestion of MAY and bolts it to an \"if you can detect\" caveat\nSince this is a change of normative requirement, lets solicit for some more WG input before landing\nI noticed the same while reading the document and thought that MAY would be more suitable. Or in more precisely, \"MUST ignore any change and MAY treat it as a connection error\".\nI couldn't see this being discussed in an issue so far. This seems unnecessary. While there is no value to changing the value, it creates a special rule for handling of the setting. In particular, endpoints that don't distinguish between SETTINGS-in-preface and other SETTINGS frames might have trouble generating the special handling necessary to police this rule. While I can see the value to an endpoint in terms of being able to defer creating the whole priority tree mess for new connections until they receive the preface, and maybe avoid creating that stuff entirely. But as long as priority signals are advisory, you get most of the value from a strong recommendation to use the preface, noting that making a change later might be too late. No special exceptions would be needed. I'm sure that the editors will disagree with me here. That's OK, but it's worth noting why this special handling is included in the specification. It's not immediately obvious.\nWould it be sufficient to make this a \"MAY be treated\"? That provides a disincentive to change without requiring special processing."} +{"_id":"q-en-http-extensions-e1f61c41da5b68faf0a20a819812c27d7fb178f3f415b6d700b56963ea8f8ef9","text":"by taking MT's suggestion. This is the right thing to do editorially even if it might make NAME a little sad\nIt is not necessary to make this claim, or any claim, about the range of values. Such a claim would need to be supported with evidence. I don't think we have such evidence. Internal documentation I have lists about 30 different types of requests that a web browser might make that were put into about 11 different categories; you could make arguments that 30 or 11 is sufficient on that basis. And you might still be wrong. The good thing here is that you don't need to justify the choice of 8 different urgencies. Strike this sentence."} +{"_id":"q-en-http-extensions-c6e2b73412599ef1e4713357322b847c1a362bad5417d93d61a06e56e75ae187","text":"The change switches the order so that there is a more natural progress from presenting the incremental default of false, to discussing how a server could act on non-incremental requests, then how they might act on incremental requests.\n(Emphasis mine.) I think that the goal of this text was to establish an expectation. 
That is, if a request is marked as incremental, there is no expectation that a server will attempt to send the entirety of a response before starting on other responses. It's a negation of the standard expectation that exists for : responses are transmitted in their entirety where possible. Consider rewording this bit.\nSome suggestions, but I think that this is an improvement.Thank you for the PR. LGTM modulo the points below."} +{"_id":"q-en-http-extensions-6386a09594c81d9ff6d5e78deb9c1288dca3a274d954f4e980ed6de4a46136d0","text":"The issue discussion also mentioned the ability for the server to indicate a GET-able URI for the returned results, presumably using Content-Location. It looks like the definition in -semantics is probably sufficient, but it's probably worth highlighting that there are caching implications to having that header present in one of these responses.\nContent-Location doesn't have any effect on caching. It would be odd to add caching implications to it.\nNAME I was recalling this: URL thinking that the response should be cached for the indicated resource. But I had missed this: And indeed, the Caching spec talks only about invalidating any existing entry for the indicated resource, rather than giving any permission to cache the response under that URI. The application is allowed to update its local copy based on Content-Location, but apparently caches aren't.\nWe should figure out\/document: 1) Establishing the cache key - a cache needs to know how to compute a cache key for the request. The obvious (and very inefficient) way to do this is to use the entire request body as part of the key. 2) Request canonicalisation - we should allow caches with knowledge of canonicalisation algorithms for a given format to use them. 3) Indicating canonical form in requests - if a client (UA or intermediary) canonicalises, it would be nice if they could assert that in the request, so a downstream cache can assume it's there and just look for a byte-for-byte match, rather than re-canonicalising. This would allow a client to manufacture cache misses, but that's already possible in URLs... 4) Indicating cache key in requests - E.g., a hash of the request body. Needs security analysis, though. 5) Indicating cache key in responses - If the server can instruct caches as to what future requests would match it, that would be very advantageous. Necessarily requires knowledge of the request format, but we could define common ones for url encode, XML, JSON.\nI don't think that (4) is generally feasible. There might be cases where a cache trusts other entities sufficiently that a hash (Digest field?) can be used, but it will need to work this out for itself. This is probably the big cost associated with caching this method. As for the general case of c14n, it's unfortunate that this is necessary, but it is probably better than relying on having some shared understanding of an abstract model for the resource representation. I would not build this on any assumption that XML c14n is possible. A complete solution there is likely impossible. I know less about the warts of JSON, but even that is entering into dangerous territory. You might get more traction with schema-aware c14n with the understanding that it is not generally applicable to any document. That is, define tools so that specific XML- or JSON-based formats can opt in to the use of c14n. 
People might then apply that c14n if the format allows it; people might also apply the c14n if the format does not, but we have not promised them anything and so guarantees might not apply.\nnod there's going to be a tradeoff between the amount of knowledge\/processing necessary for c14n and cache efficiency. It might be good to explore what conventions a format needs to follow to make it easily canonical; I can't help but think that most use cases for this are going to be defining a new query language, not reusing an existing document or even data format.\nI think that caching is a \"must have\", though probably we could just provide minimal hints and leave clients the onus of hitting the cache (eg. clients c14n-izing requests in some may increase cache hits). I won't mess with the request media-type (json? xml? x-www-form-urlencoded? ...) and just use the content checksum (eg. ). I think that supporting media-type aware caches is really hard to implement (eg. see rfc8785 and rfc7493 only for json): there's room for experimental specs though... That's ok, though see (2) about supporting media-types is hard Agree wrt Security Considerations IF we want to support a caching based on a response cache key, this could be implemented with some caveats without re-sending the at all: Authentication information should be processed; MUST NOT contain authnz information\nCaching will be tricky for \"similar\" requests sent by different clients. On the other hand, caching could be simple for requests that are repeated by the same client (like in a refresh operation). For this use case, we just need a way for the server to also point to a GETtable resource (maybe with a lifetime). If we solved that problem, would we still need full cacheability?\nNeeds to sync with ."} +{"_id":"q-en-http-extensions-94dc3250229b402349630b88c0061875db2ccc0be1c5d4ed06ef306f297b2681","text":"(opening this so it has a discussion venue) Currently, the spec uses \"SEARCH\" (for the reasons given in the spec). Alternatives: 1) pick a new name 2) use GET with payload (probably unrealistic, just listed for completeness) 3) use POST and find a way to label it as safe 4) re-use another existing safe method that takes a body (PROPFIND, REPORT) (just listed for completeness) The obvious drawback in \"pick a new name\" is the forseealke bike-shedding.\nJust my .02. Using is initially attractive because it's familiar; effectively we'd be extending the semantics of the method with a header. However, I'm not sure it's a good idea. implies certain guarantees to clients, servers and intermediaries -- e.g., write-through. If an extension can change those guarantees, it's important that they're visible to all components, and an extension (e.g., a header) by nature can't guarantee that. It seems to me that one of the critical things to do is to get a positive indication from the user agent that the request they're sending doesn't have write-through, so a extension would need to carry an indication from the client. However, the origin server is also likely to need to convey a cache key and perhaps additional information, which leads this approach to something that's looking complex; the client has to send a certain header, the server has to send a certain header, and an intermediary still might not understand what's going on (which might be good or bad). So that makes me lean towards the status quo, (1), or (4) as being more straightforward. 
Reusing an already-deployed name has been discussed elsewhere; I'm not convinced that it's going to add a lot of value, considering that most traffic is HTTPS now and so intermediaries with fixed lists of supported methods are much less common. If we had data that they'd get through e.g., an enterprise virus scanning proxy whereas a new method wouldn't, I'd be interested to see that. Still, if fits the bill adequately, I think it's fine. The real test will be whether folks (especially HTTP API developers) find it compelling enough to use; I'm not sure quite gets there, but it's in the ballpark. If we're going to consider new names, I'd suggest .\nIf the point of this exercise is to make safe-ness visible to intermediaries, then a method is the right place to put that signal. I'm comfortable enough with , though an entirely name would be equally good. (I think that we should continue discussion about cache keys separately.)\nI value a lot the ability for intermediaries to filter out write requests before they reach an application. Reusing won't have this advantage.\nInterim discussion: switch document to use , which has good support from the attendees, and see how it seems to people who would be adopting.\nQue? This is far more than the simple rename I was expecting. I had a similar double-take on first read, but most of the additional changes appear to be excising the WebDAV references, which is reasonable to do along with the rename."} +{"_id":"q-en-http-extensions-b7a27ea0bdb4243d689bbbdec88b842657e6833c877ffd06c9f33137dae49a8e","text":"There is limited value in some of this information. I might avoid the detail and limit that text to something like \"RFC 7540 priorities had several issues that this design avoids.\" The purpose of the section is to describe the security consequences of the mechanism that is defined in the document. There are some DoS concerns in previous sections that might be worth noting. You might also refer to RFC 8941 considerations, the size of state commitments required to buffer PRIORITY_UPDATE, and the starvation risks involved with strict application of priorities. Most of that already exists in other sections, so this section could just be a summary of the issues with pointers."} +{"_id":"q-en-http-extensions-3921c5851d9f93bcbc460046e018b2e9f85c1cc16c095316784e49d063f8fca0","text":"We accept that a strong suggestion to use round-robin isn't ideal. And look for other ways to phrase the suggestion of distributing or timeslicing. Fortunately we already had some text in the next paragraph that explained the ways that timing of delivery of incremental resources parts is important to clients and the sorts of things that servers need to consider. So this PR rearranges the text a bit and gives it dust off to make sure it is in line with our understanding now.\nThis recommendation is too strong. There is still value in allowing requests to complete. This suggests that pieces of all responses be provided concurrently. That leads to more context switching than necessary, or even inefficiencies in transmissions (one byte from each at a time would have excessive overheads). I would instead say:\nI agree that \"round-robin manner\" is poorly defined and could be interpreted as 1 byte of everything. That would be bad. However, the full context of that SHOULD is in relation to the other requirements in that paragraph RECOMMENDing respect with no guidance on what that means seems a bit iffy to me. Your suggestion might get misinterpreted as relaxing the RECOMMEND. 
Rewriting that paragraph risks kicking the hornets nest of one the most divisive issues we had (URL). I sort of inclined to keep the SHOULD but expand on what we intended to say with \"round-robin\"\nLGTM, some suggestions only."} +{"_id":"q-en-http-extensions-3c2626f509c04394d0b7b97fa774d90644e5fe8fbf8f0f50d97af3a69a11f799","text":"Putting the meat of the document in Section 1.2 was a bit odd. Also, the fields descriptions were glommed together.\nWeird; didn't happen on local copies."} +{"_id":"q-en-http-extensions-c0a399b32cc8ee7a9a219724cec4bd28d94cd6d6b3a3bb20f7a81b5febe75288","text":"and This is an editorial set of changes to the intro text. The first paragraph in the \"motivation for replacement\" section is actually broadly applicable motivation for any priority scheme. So we can move that up and highlight that H2 and H3 are multiplexed protocols that have the need of something. This helps us to clarify that mention of HTML document loading is only an example. We also give greater exposition about the actors involved in signalling, highlighting early that servers treat signals as a suggestion and they might follow some of the guidance provided later in the document.\nTo avoid bitrot, I'm merging.\nThe introductory material tries to be general, but routinely fails to be, falling back on talking about rendering web pages. If this is the best example, it could be more direct about that. Or, if this is general, it could try to be more careful about limiting its use of \"document\" and so forth to examples. For example, this lacks sufficient context: And this launches into talking about HTML without saying HTML:\ngood catch. I do think this is the best example and therefore will take your first suggestion. I'll follow up with a PR.\nI have an unpushed branch for this that conflicts with other things, but I believe solves the matter.\nsays in part: In the first sentence, the endpoint is the one that is applying priority based on signals (or lack thereof) from its peer. In the second, the endpoint is the one providing signals when the peer is the one that acts upon it. This is confusing.\nI see what you mean but I don't have a quick fix to hand. Would \"one side operates in ignorance of the needs of the other\" work?\nPossibly. In the first you might say a sender might operate in ignorance of the needs of a receiver; the second might then say that a receive can communicate its view of priority to inform sender behaviour. Not that prioritization is exclusively about prioritization of sending, of course.\nthe terms in the second sentence were written intentionally fuzzy to accommodate intermediaries in the general statement. IIRC Robin elsewhere suggested just saying client and server but that risks over simplifying or misleading.\nI think it might be a good idea to using terms \"sender\", \"receiver\", as suggested by NAME\nTo me, that risks conflating signal sender vs data sender. Someone want to propose a PR?"} +{"_id":"q-en-http-extensions-2d71f204c23c105e731c35925b3e56c2c5068e684f2385923472d2c2e9fbeaac","text":"We want registration policies that encourage extensions. We also need to bear in mind the existing guidance about how the base scheme, urgency and incremental, are used\/applied by clients, servers and intermediaries. The scheme is malleable but it can't implementations are unlikely to be able to swap out prioritization wholesale. 
This puts reviewers into an awkward position, so we should document clear guidance on registration review and how or when it might be rejected (airing on the side of permissiveness). Fortunately, we had some of these considerations already written down. So this change makes those criteria more specific and actionable. The other important part is that keys are strings, so clashes are less likely. However, given the constraints above and the attractiveness of short key lengths for repeated information, there might be pressure on the finite supply of the 25 remaining single-character keys. So split the IANA registry in two, and place additional Specification Required policy on those precious keys.\nFirst an easy one: Designated Expert is not a standard RFC 8126 rule. I think that you are looking for Expert Review. More substantively, placing a designated expert in a position to consider community feedback is not something we do generally, because it places a lot more responsibility on that person. If the goal is to insist on community review, then you might be better forcing registrations into the standardization process. That said, doing that has historically not worked out well and I would strongly recommend against doing so uniformly, though you might do that for some values (like one-character parameters, for which we have a limited supply) while making policies for other parameters more permissive. It is best if experts are given enough concrete guidance that they aren't ever placed in a position of having to make difficult decisions. Good decisions might cause some people to get angry, for instance. That the decision is good doesn't mean that experts will avoid consequences. Clear, unambiguous guidance to experts is how you avoid those awkward situations.\nthank you! I might rewrite this as \"future you will thank you for writing down the assessment criteria\". We should do this change. We probably want to start thinking about who the expert(s) would be NAME NAME\nThat's ultimately up to NAME I'd suggest you and\/or NAME :)\nI am happy to voluteer for that job and if we can share the pain all the better\nSame as NAME"} +{"_id":"q-en-http-extensions-c4914ca6ae417201cd65d16cc77121f3f67b0b93102134a0442c1c3f14c6da1c","text":"Editorial change related to . This may not close the issue on its own, so lets resolve that separate.\nSimplification; but WFM either way."} +{"_id":"q-en-http-extensions-53bd9fb40652435ab70abdb3eddceca38323352ada267451d71f818181e6a25b","text":"This lets servers be more permissive if they want to treat it like the Header but allows them to be strict if they want. Since this is not an error in the frame itself, change the code to a general error; reason phrase is available if people feel it important to share.\nIn . NAME highlighted our treatment of structured fields parsing failures for the Priority header was a bit naff. We should just defer to structured fields, and that's what PR does. It struck me that our requirement for similar failures while processing PRIORITYUPDATE frames seems a bit overzealous in light of . From URL We introduced the current text as part of addressing . I think perhaps I misread the comment at URL There's a difference between a badly formed HTTP\/2 or HTTP\/3 frame (imagine one that is shorter than it claims to be, that's already described by those documents), and a frame that contains contents some other specification deems invalid. It now seems to me like it would be ok to defer to structured fields rules here. 
This would manifest as the receiver having processed the frame, but ignoring the value altogether (and treating it like an omission, which we explain how to deal with elsewhere). Thoughts?\nWhile I do not have a strong opinion, I'm not sure if we need to revisit our previous decision. FWIW, IIUC, this is choosing between two principles: Frames are hop-by-hop signal, therefore the connection should be closed when receiving a malformed signal. Syntax of the value of the PRIORITY_UPDATE frame is the same as that of the header field and therefore malformed values should be ignored.\nIn the quiche library H3 layer, we handle all of the frame handling recommendations inside the library. When we process received HEADERS frames, we enure frame and QPACK requirements are satisfied and that we can construct a field section but defer all processing of the field section to applications. We don't process PRIORITY_UPDATE frames yet but if we did, we would likely pass the Structured Field Dictionary up to the application to deal with. With the current text we would either need to embed structured fields parsing into the h3 library code, or make the application responsible for adhering to the requirements. Both of hose deviate from our general implementation design (that's not an excuse, we'll have to do what the WG consensus tells us to do). My thinking is that the contents of a frame's Priority Field Value is not a property of an HTTP\/2 or HTTP\/3 connection. If the Dictionary cannot be parsed, it does not indicate a problem with the connection. It indicates a malformed Dictionary and there are procedures defined on how to handle that in Structured Fields. Since we allow for omission of parameters in this frame, it doesn't seem that different for receivers to parse a Dictionary, fail it and treat it like omission.\nThank you for elaborating. Reading your argument, I'd be supportive to relaxing the current requirement (MUST). Maybe changing MUST to MAY or SHOULD would be the most desirable outcome, as I believe there's no reason to prohibit endpoints from employing a validity check against hop-by-hop signals.\nI see you point, thanks. I think this is a case for , I can make a PR for that. I don't think that is the correct code though. URL says: requirements or with an invalid size was received. And IIUC what you want is to communicate that a problem occured at a different layer of abstraction. I think is ok unless you think we need to define a new error code just for this.\nFrom purely editorial perspective, I'm not sure if we need to say \"if it can detect\" when the keyword is \"MAY.\" I think the choice here might be between SHOULD\/MAY or MUST-if-detected. Regarding the choice of the error code, I think my mental model was that the use Structured Field is a layout requirement. But I agree that it's border line, I'm perfectly fine with using H3GENEALPROTOCOL_ERROR. We do not need a new error code for this - HTTP\/3 stacks can always use the Reason Phrase field to communicate fine-grained information."} +{"_id":"q-en-http-extensions-21123676b2a91fed505722e194614287897d10faa8935acedf623f52f42137c3","text":"Thanks, nice solution.\nThe latest draft of the Digest Fields RFC [0] deprecates all non-cryptographically secure algorithms (notably, MD5 and CRC32c). 
Given that the RFC is \"not intended to be a general protection against malicious tampering with HTTP messages,\" I'd like to propose not deprecating at least these two algorithms, or changing \"deprecated digest algorithms MUST NOT be used\" to \"should be used with caution\" or similar. CRC32c is a very useful algorithm because it is composable (crc(A+B) == combine(crc(A), crc(B))). See [1] for an explainer and [2] for an implementation of combine() in zlib. MD5 is ubiquitous. AWS S3 and Google Cloud Storage both use it (see [3], which has similar syntax to this RFC except that the crc32c is base64 instead of hex). [0] URL [1] URL [2] URL [3] URL Sorry if this is the wrong forum. I saw some links from the w3c mailing list archive that looked like more discussions were happening here than there.\nHi NAME this is the right place, thanks for opening the issue. Do you have a use case where you'd like to use these algorithms with the Digest or Content-Digest header? They are pretty old and pretty bad at their intended purpose - what caution can we offer? The statement comes from URL, which is really about making it clear that applications do care about protecting messages also need to apply signatures that protect HTTP header or trailer sections. Otherwise, an attacker could just change the header at the same time as changing the representation.\nThanks for the fast reply. My use case is for detecting file corruption in internal HTTP communication, where we trust the network\/server\/client. Since it's internal, we don't have to obey a standard, but I was hoping to provide the CRC32C and MD5 that we already store to our end users -- provide something instead of nothing. Those algorithms provide an adequate level of error detection by our rough calculation (considering that TCP and other layers also checksum). Intel's implementation of CRC32C [0] runs at >19 GB\/s, whereas SHA256 runs at only ~360 MB\/s. We also take advantage of the ability of CRC32C to be combined, so we can concatenate files in O(1) time and get their CRC32C without calculating a SHA256\/512 or MD5 over the whole content. We serve some millions of files. Google Cloud Storage and AWS S3 probably store many trillions of files and provide CRC32C+MD5 and MD5, respectively.1 It would be pretty impactful for servers to be able to pass along these digests and might make it worthwhile for browsers to start verifying digests for downloads (e.g. revive [1]). I might be under-appreciating the security intents of this RFC, but there is clear mention of the other benefits of the RFC (below), which are ones that non-cryptographically secure hashes are often good enough for. [0] URL [1] URL Edit: 1 Clarification: The S3 object is an MD5 in many scenarios, and S3 uses for integrity checks during uploads. They do not explicitly store the MD5 for all objects though.\nCorruption in the network does happen, but HTTPS is a far better way of managing that risk. I know that you say that you trust them, but I would be worried about the network\/server\/client being - or becoming - untrustworthy. Obviously, if you are into high-volume transfers of bulk data, you are going to do what you need to do; specialized applications have specialized constraints. But that doesn't make unsecured transfers any less inadvisable; I'd stand by the deprecation here as a general statement.\nHi NAME Thanks for joining the discussion! Let me clarify one point: This just means that the part protected by Digest (the representation) is only a subset of an HTTP message. 
For the rest, while I agree on some of your considerations, eg. we had a very long discussion on that URL Maybe, since crc32 & co - while deprecated - are still defined in the digest algorithm table, I think that you wouldn't have particular interoperability issues if you need to use them in a specific environment.\nThanks NAME for the pointers to the earlier discussions. I'd thought I'd searched but somehow missed those. I'll yield, although I remain in favor of saying these \"SHOULD NOT\" instead of \"MUST NOT\" be used at least because of their ubiquity, performance and the composability of CRCs. One last question: does the Digest header provide better security than, or any additional beyond, what TLS provides? My reading of the RFC is that TLS is the typical primary protection layer (since doesn't protect headers; indeed, itself could be altered) and is primarily useful for finding ~bugs. (PS: I really like the digests, nice idea.)\nNAME you've made some good points. Adding accommodation for these algorithms, given the proliferation of e.g. MD5 for this kind of integrity checking could be better than nothing, or hoping that everyone would run a SHA-algo over all of their existing assets. The thorny part is how to square the circle on relaxing the rule and providing clear, meaningful guidance to implementers. The MUSTNOT helps set a clear bar, this means its unambiguous what the spec thinks about the relationship of Digest to secure usage scenarios. Anything else needs careful wording and is hard to get right. The current MUSTNOT is slightly odd in so far as we don't state the ramifications if it is used. Part of that is a product of how Digest has been defined and how apps use it\nNAME The actual work on digest was driven by its usage in signing http messages (eg URL and URL ) This is at a different layer wrt TLS. So you can see Digest as a building block for creating non-repudiable HTTP exchanges without altering the payload (since signatures are conveyed via http headers). Hth, R.\nPersonally -- I think there's something here worth considering. AIUI the primary use case for headers is not adversarial -- it's end-to-end (i.e., through various intermediaries, toolkits, perhaps a data store) integrity. If someone has lots of content they've calculated CRC or MD5 for, it seems reasonable to allow them to use it in ; not only is it more efficient, but it saves them recalculating a value for all of that content (and possibly getting it wrong -- that's effectively \"end-to-end over time\"). OTOH, there are adversarial use cases for too -- mostly, signing the digest as a proxy for signing the content. Here, using CRC or MD5 can be bad, and shouldn't be encouraged (or perhaps allowed) by the spec. This leads me to think that deprecating them isn't the right thing to do; instead, what about marking them as insecure in the registry, and putting language into the spec to explain how to use that? If folks think that's a good idea, I can do a PR.\nThanks for the background NAME . NAME NAME I think that sounds reasonable to state something like \"insecure digests {MUST NOT be used|are not suitable} for protection against malicious actors.\" The final part of this I think would be allowing for the insecure digests.\nI like NAME idea of introducing \"insecure\". It's not trivial how to state that insecure algorithms must not be used in signatures\/adversarial use cases. I'm waiting for Mark's proposal :)\nMaybe \"Checksum Only\" to imply a more narrow purpose. 
\"Insecure\" is not the opposite of \"secure\".\nLGTMA pragmatic approach."} +{"_id":"q-en-http-extensions-0d05e4b7171e25c5bb7ab1d443163322a6f32bdca17c3c93b53f7bd1d15dccf7","text":"NAME noted a bunch of minor editorial issues, so this fixes them.\nThank you for working on the changes. They look like great improvements to me modulo the concern below."} +{"_id":"q-en-http-extensions-55928862c7f6352c9cdd3e73c5f3262877cc6d2c3bbed7df2b158f940c3d776a","text":". By replacing \"sender\" with \"client\", we make the urgency and incremental section more consistent with how they discuss these concepts.\nParaphrasing NAME feedback, in S 4.1 we say we might consider clearer statements about the actors are i.e. who sends this parameter, and who actions it.\nFor clarity, it's mainly the term \"sender\" in \"sender's recommendations\" that I find unclear here, since the server can also \"send\" Priority information, where it doesn't always reflect the server's \"recommendation\" when doing so (e.g., sometimes it's just echo'ing the client's value).\nWe don't really need to hammer home the \"recommendation\" concept, that's explained elsewhere in the document. I'll propose an editorial change to this text to focus on the thing we care about"} +{"_id":"q-en-http-extensions-b0377f05247a62efbb08cbc46fbc2b0a69cd3ab7dfedd3d253642daf8736c632","text":"Paraphrasing NAME in S 4.2 we say Resources are not sent in parallel (implying \"at the same time\"), but rather multiplexed vs sequential. We don't want to leave the impression that setting i=1 would give better perf due to actual \"parallel\" sending."} +{"_id":"q-en-http-extensions-325b8ded7868f6b12ba63eb5aa47cc6a36c0ec909a3606ade737249ab43584b7","text":"As described on the issue, there are many permutations of this and we'll encounter heat death if we try to explain everything. This change pulls a Captain Kirk.\nThank you!\nParaphrasing NAME S 10 is still somewhat ambiguous on what to do if you have both non-incremental and incremental resources at the same urgency level. There is no single correct answer here, and the \"starvation\" text partially addresses this, but I'm not sure that's enough. Maybe explicitly pointing out the situation and saying no guidance is given? Editors suggestion here is NOT to add any more normative guidance but to provide a better linkage between the the paragraphs that touch on these matters.\nI still personally feel it would be good to call out this specific situation, since it's an edge case you probably -will- encounter and for which you need to (in my opinion) make a conscious decision. The starvation text already calls out other examples, so I feel this edge case could also use some text."} +{"_id":"q-en-http-extensions-8f88da53ffde82a6ab80dbfeb02e6584e76261822669f54c40a6182b8e427233","text":"strip out id-* algorithms from this specid- are complex to address in this document they will be better addressed in URL\nFix typo. Thx NAME I'll wait a couple of days to allow further feedback :)\nLGTM modulo the comment I made"} +{"_id":"q-en-http-extensions-8ffd357ae232f847854b4f25656a1992e38df6a01bbd45d7985234b3bba57d7c","text":"I have trouble parsing this sentence, specifically around \"a signal that overrides for the next hop\". Overrides what? Is a word missing here, or can we restructure the sentence to be clearer?\nOriginal context was a proposal from MT here URL So I think we are missing a that_, e.g. . 
I made to fix that that.\nThanks!"} +{"_id":"q-en-http-extensions-c18f4e3ccf5bdf60c525be478ecf7a2b2564e05807d49d884096690123ae2809","text":"Not a complete solution, but hopefully helpful. More to come for those who are less cautious than I.\nThank you, this looks clear and accurate to me."} +{"_id":"q-en-http-extensions-af7e58ec60bb03651a44bd89cc644c9f285e810271e838659fb5ebc1132ef4b0","text":"This editorial change combines most references to how this document relates to RFC 3230. By dropping the appendix, it makes the flow of the document more smooth, which is reflected in the ToC.\nLGTM"} +{"_id":"q-en-http-extensions-c1b8968477c7bc210be11823925b7ffdaa0de3ad307cae69c683d6a47c342c85","text":"only one \"on\" value is recognized today allow parameters on token Discussion: URL NAME NAME can you please do a quick sanity check?\nNAME thanks, updated! getting warmer?\nThink so. NAME is the Keeper of the ABNF, though.\nNAME NAME thanks. Updated, do you want to do a last sanity check before I merge?\nLooks good to me.\nI understand that the current method has the save-data aspect of client hints as a simple on\/off switch. I wonder, though, about the willingness to consider allowing for some extensibility. Right now, someone saying \"save-data=on\" is signing up for pretty much anything. Separating acceptance of data saving for different elements, e.g. agreeing to smaller video resources but not lower fidelity audio, seems like it may be useful. I suggest that the working group consider making this an extensible field with \"save-data=all\" as the default, with semantics matching the current \"save-data=on\". Then later extensions can be specified for more granular statements.\nI think we can already achieve this with current definition: We can define other tokens We can define parameters - e.g. \"on;video-only\", etc. Changing from \"on\" to \"all\" doesn't really change anything. Also, we already have both client and server implementations that are relying on \"on\". p.s. some background: URL\nNAME - any further comment?\nI think the parameter method would be quite useful, and if the doc describes that it would be useful. I'm less sanguine about introducing parameters later, but I certainly wouldn't fall on any sharp objects if that were the decision of the WG.\nI guess we should clarify that it is extensible and define \"on\" as one of its recognized values... Does that make sense? I'm sure my ABNF is off.. :)\n, Ilya Grigorik EMAIL wrote: regards, Ted\nResolved via .\nIs there a reason to use semicolon here instead of the more typical comma? Also, the grammar does seem off: it does not allow whitespace (because the “implied ” rule of RFC 2616 no longer applies), and it uses instead of . Assuming you really want semicolons, I think it should be: (I can file a separate issue if you want)\nI wouldn't say comma is \"typical\" -- it depends a bit on the use case. For instance, alt-svc uses a list of comma-delimited directives, each of which can have semicolon delimited parameters. In any case, if this does not use commas, the spec ought to state what happens when commas turn up in a field value, or if a field occurs multiple times in a message (this is stated for the other client hints).\nThat, indeed, was the intent - e.g. \"on;video-only\". Re, OWS: good catch. Addressed in URL"} +{"_id":"q-en-http-extensions-6a63c40bffdd6450a0c114c3e58106eda1905bcbffc2d8ab924483465b49089f","text":"It looks like SHA-512 was intended here, as that is what the description and name use. 
Maybe someone had in mind to add an additional rsa-pss-sha256 method.\nGood catch, thank you!"} +{"_id":"q-en-http-extensions-6ffe4201c4a8653e85b240b7bc5b1e95dcccccbbd799ce5f1915da46bdd8f111","text":"During IANA review of the document it was noted that the instructions for populating the new HTTP Priority Parameters registry with the urgency and incremental parameters was not that clear. So this change lists both and to make it clear what we want.\nLGTM. Thank you for the quick work."} +{"_id":"q-en-http-extensions-fa1b8b1613446d6807a3a2aeea8416954c23e87ef1bddc2d30c0c2a821e49d1e","text":"Addresses Ea of Bob correctly highlights that absence of a replacement might not be the only condition under which the prior signals might be used. So lets be more accurate.\nThank you for the PR. Looks good."} +{"_id":"q-en-http-extensions-838dad539ba0a7e04a5f2ea0d44a72dec208f198f1e7c8bd3c6799d9028eb92d","text":"Addressed Ea of Reference the HTTP terms we are using. Be clearer that we focus on \"normal\" responses and that there are other possible uses. But avoid making the entire document a piece of abstract art."} +{"_id":"q-en-http-extensions-98ab0cb2ac6aad9f1ab3398dbf5a594df0ab3d727908604fdc7ef06105dd9979","text":"Addresses Eb of . Bob highlights the actor is unclear. This attempts to address that in a minimal way without going too far down the rabbit hole of intermediaries."} +{"_id":"q-en-http-extensions-1bebd93dd27a5e5685dd6a385ebe09429c13ec8150b227d00466e7cd1db2a3b9","text":"We make this decision explicitly, and we don't want to imply that just because you don't have a fresh \"http-opportunistic\" resource, any commitment is shortened as a result (which is a reasonable interpretation based on the current text)."} +{"_id":"q-en-http-extensions-92117f1ec1dc3459b3d32f2164adc244350d82d206b9a4a55430ae122cf377c9","text":"Addresses Eb, Ec, Ed of This cuts some text to reduce the waffle factor ;)\nThis is nice condensation."} +{"_id":"q-en-http-extensions-3290dfdf27abbeeecdd7bbef0c3921344eecf5820516c8e04fdf513a7187321d","text":"Addresses Ea of Bob comments that the \"no guidance is provided\" sounds like it is in conflict with the rest of the section that includes some normative recommendations. The intent of this text was to highlight there is no (or at least very little) about how servers are supposed to combine priority parameters with the million other things that they have to worry about. This change shuffles the sentences around to try and make the original intent clearer.\nThis is much better than the original. Thank you for the PR."} +{"_id":"q-en-http-extensions-6cb2e9859292c235fd81fa1f47c2a083dfd8ceae49f14f5e6bd2d7fb8d955330","text":"As discussed in URL, to avoid confusion, we can stop short of using phrases like \"one-by-one\" or \"entirety,\" simply stating that non-incremental responses should be served in the order of the stream ID. Addresses the original point of .\nLGTM me thank you. Please merge when you're ready."} +{"_id":"q-en-http-extensions-b1c87e078995b7dc2edd32b12ada0be08ca35e8b778090cc8c650ad5796f749f","text":"In Bob's TSVART review, he notes in Tb that the mention the H2 DoS attack in Security seems useful and we should probably expand on it. The mentions of H2 have been whittled down as 7540bis progresses and the reference to CVE-2019-9513 in the security section now seems misplaced. It's cleaner to just remove it so that the section focuses on considerations for using this document. 
P.S I think this means that MT was right ;)"} +{"_id":"q-en-http-extensions-4aa6df3a301fcb48bd6ab0df698a3e6ab7096b87baf21462c92137d41f6c059a","text":"Addresses Te of the TSVART review We don't have much evidence to quantify effectiveness. So drop that part and keep the focus purely on the considerations."} +{"_id":"q-en-http-extensions-95b85b13a0d780f3a09336d5aee4303d722ceec749e5c7b781ebaa1551d58ceb","text":"I think this is an issue. and we need graceful way to fall back to fallback. I do not believe that HTTP11REQUIRED and 421 response code are the right solution. I think it would be good to make this part of WebSockets over HTTP3 spec. edit: remove double negation -> I think that HTTP11REQUIRED and 421 response code are not the answer.\nI agree the options we have today are not the right fit. This document could provide that and update RFC 8441.\nThe server could just close the stream with H3VERSIONFALLBACK. Is that not sufficient?\nPerhaps for WebSockets, but there's a more general question here. When a client uses Extended CONNECT with an unknown protocol, what is the correct failure mode? There's no guarantee that HTTP\/1.1 would support an arbitrary requested protocol, even if it might be more likely to work with WebSockets.\nI'm not sure we need a general solution, because it isn't clear that we have a general problem. The answer here will depend on why the client is using this protocol in the first place. For example with MASQUE, the client will know which proxy to connect to thanks to out-of-band configuration, so the proxy operator is responsible for ensuring its HTTP servers are in sync with its config servers. For WebSocket on the other hand, the issue is that WebSockets were originally designed for h1 and there exist servers that are gradually deploying h2 or h3 for GET but don't yet support WebSocket there. For those, H3VERSIONFALLBACK should be sufficient.\nThat's nice in theory. We should still say what happens when the proxy operator makes a mistake.\nWhy is this specific to extended CONNECT then? A server operator could make a mistake and end up with some GET resources not working on h3 too, right?\nThe example I would use here is a server that supports masque on H3 and H2 CONNECT. If for some reason a proxy operator gets desyncd with the config that it may or may not be responsible for, then things could go wrong. In a scenario where version fallback to HTTP\/1.1 is not desirable, articulating such an error is underspecified. Similarly for H2 and H3 websocket, telling client the resource is not available or the stream was reset with the fallback error code is misinformation.\na header describing why?\nYes, that should work. This should be in the spec.\nSent out URL to address this using the 501 response code recommendation."} +{"_id":"q-en-http-extensions-10ad63137f1a7cd16e29388d74792dcc3166abeaef1ee54bb75309461fb324ce","text":"As discussed in URL and URL, describe fairness and how that relates to priorities.\nAs part of the TSVART review, Bob said This seems like a fair point and we likely shouldn't say anything. We can remove this statement entirely but it is normative text so it'll be a change.\nI initially thought the same, but now I have a counter-argument. This section (Section 13. Fairness) discusses distributing bandwidth between multiple connections. 
and 13.2 cover considerations when intermediaries coalesce requests originating from multiple connections onto a single backend connection, or split requests being received on one connection to multiple backend connections. Section 13.3 discusses distributing bandwidth between multiple connections that might share the same path. I would argue that all of these belong to IETF, and that they should be covered in the document. We can probably improve the explanation to avoid confusion.\nOK that works for me too, could you tackle that?\n:+1: Will do alongside .\nFrom the TSVART review, Bob says: We should address this.\nI think there's confusion here. Section 13 discusses fairness of bandwidth allocated between end clients, not requests. I think clarifying this might help.\nMinor edits"} +{"_id":"q-en-http-extensions-ab064e34da914457c344075f887d61f8d6ebbea4985d3238906934bfb1f70efe","text":"Small editorial nit in relation to .\nThis looks ok but I still think we need an additional broader consideration of precedence to address the multiple comments about the term."} +{"_id":"q-en-http-extensions-93f03a275de6e79ad6354e8cbaf33b46654de747aa4f7a9335d2f75c824673f9","text":"Replace use of \"precedence\" that could be confusing with something better. With this PR, there are still two remaining occurrences of \"precedence,\" in , and . I think that those two are fine, as they are surrounded by other text explaining the context. I think it'd be clear to readers that we are using the term to describe difference of importance between multiple responses. Fixes Ea of .\nThank you, this looks good. I agree with your rationale about why we don't need to remove all instances."} +{"_id":"q-en-http-extensions-7bcf8f1c8a8566248c131884cc83e68bafcd4bd741d0b27bf37fd8b2c320be59","text":"This adds the ability to pad messages to the binary HTTP format. The design is pretty simple: add zero-valued bytes to the end.\nWhat if someone adds a non-zero byte in the padding block?"} +{"_id":"q-en-http-extensions-531f686f450d396a07bfeacb1acead0abfa94e44e62e11c38b984687074fecb1","text":"Remove cleartext references.\nFrom NAME WGLC review: I think this was added to cover the id-sha-* algorithms that we defined but have since removed. Let's consider removing this consideration unless it still applies (and if it does, maybe we need some more text)."} +{"_id":"q-en-http-extensions-e8d73e7d78db0c270e7269a6adbd601c8c9f4d0857c98a3ee59850f8cb4a4658","text":"Add syntactic compatibility.\nFrom NAME WGLC review: This wording is awkward, we can editorialize this to be clearer what the scope and intent of changes are."} +{"_id":"q-en-http-extensions-209b1c84fce008a9fdd72dbd589ce9de6253c9164380f2e30054266542b93e23","text":"Consistent use of \"checksum\".\nFrom NAME WGLC review: Personally, I think the use of both terms is ok. Mainly because digest is overloaded in the document. But it wouldn't hurt us to review the document for usages and perhaps add a note in notational conventions about the use of terms.\nis a generic checksum computation, should be the actual digest algorithm computation. We need to clarify this and probably introduce new terminology."} +{"_id":"q-en-http-extensions-ce0da76b742c58d9297d5b73bb6499ee3e960dfd62d2c211d710b0703b18a9c3","text":"Spotted another mention of accidental while I was at it, so changed that too.\nFrom NAME WGLC review: Corruption could also be non-accidental. 
Let's remove the accidental qualifier or add words about explicit manipulation (for good or bad purposes).\nAgree, but I think that accidental\/bad implementation has to be mentioned. See URL iirc there were some comments regarding the fact that Digest alone (eg. without other mechanisms) is not enough to protect against malicious agents... NAME do you remember anything on that? The current text imho is correct, since it adds:\nNAME NAME I'd close with no-action :)\nNAME Removing \"accidental\" from the sentence in question would fix the issue.\nI completely agree, I'll do that!\nLGTM"} +{"_id":"q-en-http-extensions-a1846e10bc30bc6a851bd3b84ae76c169a1ba17e6b589c8b32107bceb51608","text":"There's not really any reason to describe anything other than the algorithm used and the format of encoding.\nFrom NAME WGLC review This seems like cruft. The important things for the table are clear pointers to the algorithm definition and the encoding of values created by the algorithm.\nIf it's not broken, don't fix it :P I think we spent too much time wrt old algorithms. I'm in favor of anyone that wants to provide a PR that has consensus.\nIt is broken -- let's fix it. :)\nLGTM"} +{"_id":"q-en-http-extensions-c94127936c861cbd1481190e7fb85abc76300c7ac45809625131641199668496","text":"It was noted that the citation to Pat Meenan's blog post lacked scientific experimental data and a decent permalink. Since we already acknowledge Pat's original scheme proposal in the acknowledgements, let's just remove this one. To avoid anyone picking up on a lack of multiple citations to match the multiple experiments statement, let's drop that qualifier too.\nMakes sense. Thank you for the PR."} +{"_id":"q-en-http-extensions-551d55e500effb1a069be359b071dc3e15c81798d272860d8355f50adc0c8f09","text":"based on .\nThis PR would address both and\nalso would\nbased on some twitter feedback URL\nNAME wrote: might be needed\/wanted by the backend application (to independently evaluate the cert chain, for example, although that seems like it would be terribly inefficient) and that any intermediates as well as the root should also be somehow conveyed, which is an area for further discussion should this draft progress. One potential approach suggested by a few folks is to allow some configurability in what is sent along with maybe a prefix token to indicate what's being sent - something like Client-Cert: FULL \\ \\ \\Client-Cert: EE \\ as the strawman. Or perhaps a parameter or other construct of {{?RFC8941}} to indicate what's being sent. It's also been suggested that the end-entity certificate by itself might sometimes be too big (esp. e.g., with some post-quantum signature schemes). Hard to account for it both being too much data and not enough data at the same time. But potentially opening up configuration options to send only specific attribute(s) from the client certificate is a possibility for that. In the author's humble opinion the end-entity certificate by itself strikes a good balance for the vast majority of needs and avoids optionality. But, again, this is an area for further discussion should this draft progress.\nI would propose a separate header for the chain that would use the structure List value format.\nI'd also argue strongly for enabling the certificate chain (even if as a list of sha256 checksums of entities in the chain). 
End-entity certificates are effectively namespaced by the rest of the chain, so not including the full chain and\/or not tying the end-entity cert to the chain are some of the top foot-guns I've found with people using this.\nI like NAME suggestion: separating the EE certificate out is sensible. The assumption here is that the terminating proxy validates the signature from the EE cert when establishing the connection. That makes it different. That said, the origin server might need to assemble and validate the chain, so including all the data presented in TLS is likely necessary. (I don't think hashes are as generically useful; space savings can be achieved with header compression, assuming that you use VERY large tables.)\nFrom a header compression standpoint, I suspect we should note that multiple instances of the header will compress more efficiently than a single instance with a long list, assuming there are multiple possible intermediate cert paths."} +{"_id":"q-en-http-extensions-ef49cfc9bd371ced183a54f4446b86d38fc55c5354c9ca8356f95d2ec4473a4b","text":"\"Client-Cert HTTP Header Field: Conveying Client Certificate Information from TLS Terminating Reverse Proxies to Origin Server Applications\" is kind of ridiculous looking. Change it to just \"Client-Cert HTTP Header Field\" and move the rest to the abstract, if it doesn't already say as much.\nshorter is better"} +{"_id":"q-en-http-extensions-a177121e1756d1b35e9b7fc29db342c3ab2ddd59a91260def9532202319aa0eb","text":"Document what needs to happen for compression. The Client-Cert field will be largely constant for the same client, so it might be a good candidate for compression. However, with requests through the TTRP potentially coming from multiple clients, the connection to the origin server might get different values over time. Document what needs to happen here: either very large dynamic tables with certificates in them, or no compression for the field and larger requests. You also need to consider the effect on the protocol of very large header blocks. Both HTTP\/2 and HTTP\/3 have soft limits on the size of header blocks that this might exceed. Servers need to be provisioned to allow for large certificates. cc NAME\nThanks for the feedback Martin. I've read through RFC7541 and some of RFC7540 and I'm honestly still unsure what this draft can or should say about compression. As I understand it, the size of the dynamic table is established between the peers (dynamic table size update and SETTINGSHEADERTABLE_SIZE) of the connection. And the TTRP would decide whether or not to add client certs to the table. Undoubtedly there's a trade off between large dynamic table with certs and no compression for the field with larger requests. But I don't know what's \"better\" and imagine it will depend on the specifics of the deployment. Although I guess continually adding and evicting certs on a small table would be the worst of both. Are you looking to have that trade off described in something like an implementation\/deployment considerations? Or something more or different? And if so, are there some specifics you can suggest or propose? WRT limits and large headers, URL has \"A server that receives a request with a Client-Cert header value that it considers to be too large can respond with an HTTP 431 status code per Section 5 of [RFC6585].\" Is that wrong? Or insufficient? 
Or is this another implementation\/deployment considerations thing where the draft should mention to readers that they might need to adjust some settings around maximum header sizes? Or something else?\nRoughly, I think you could say something like\nI think the issue is somewhat different. H2\/H3 allow implementations to advertise a limit on the sum of the size of all headers received. If an intermediary and a backend advertise the same value, but the intermediary is adding huge headers, it's possible that a request that the client sent was within the limit, but the request sent by the intermediary to the server will be too large.\nThere's also some background in that might help explain the issue.\nThere are two things here really. Alan is concerned about header block size. Mark about compression efficiency. Both are worth mentioning."} +{"_id":"q-en-http-extensions-ea37947c35f6b0d493dc6f06abc54a8b9b75d96f63e356c488249d183d0da97c","text":"Remove \"which\" from a clause where it did not belong, as per feedback from Vijay Gurbani"} +{"_id":"q-en-http-extensions-6b5eb492a3364a8b121437a48103d90d8934211eff9b226d3d22bb98070d68bf","text":"Attempt to cover client-cert and caching interactions per mnot's guidance in issue\nThanks Mark, accepted all your suggested changes\nOrigin servers might select a response based upon the value of the request header. This might impact caches in these situations: When an intermediary that's generating also has a cache When an intermediary between the generator of and the origin has a cache (e.g., as part of a CDN hierarchy or a server-side reverse proxy) When the client on the other side of the TLS connection has a cache (e.g., the browser or other user agent) and the UA's use of client certs varies (e.g., different certs are used, or no cert is used) Some guidance about this is probably necessary. In cases (1) and (2), it's just a matter of adding to all responses for an access-controlled resource. (3) is more subtle, because the browser\/UA doesn't see the client cert in the request, since it's generated downstream. From its perspective, will always select a non-existent header field, and so it'll match whatever the certificate used actually is. The most straightforward thing to do for (3) is to advise that an intermediary generating MUST\/SHOULD transform on responses (when it occurs) to . There needs to be some detail work around how this should happen, to make sure that parsing corner cases are caught.\nThanks NAME and apologies that I somehow missed seeing this issue when you initially submitted it. I think I can put together a bit of text with guidance along the lines of your suggestions. Is it okay or preferred to use vs. ? , which isn't a standard I know but it does show up near the top of search results. And on trying to educate myself more I found that says 'A proxy MUST NOT generate a Vary field with a \"\" value.' I think RFC 7231 means proxy only in the forward proxy notion of proxy so it's okay for a reverse proxy \/ gateway. And draft-ietf-httpbis-semantics-19 has a more explicit definition of proxy as such. But it did give me pause.\nIgnore MDN, it's often wrong (will file a PR there when I get a chance). is the most semantically correct, but making it uncacheable works too.\nThanks Mark. 
I took a pass at this with PR\nLooks pretty good, a couple of nits"} +{"_id":"q-en-http-extensions-d4edf24b3c9e955fc0f3f714df19f739b6814774dd6755b70caad16d8a37c4c8","text":"painful words in an attempt to explain renegotiation's applicability with client-cert header to\nNAME wrote: possible with HTTP1.1 and maybe needs to be discussed explicitly here or somewhere in this document? Naively I'd say that the header will be sent with the data of the most recent client cert anytime after renegotiation or post-handshake auth. And only for requests that are fully covered by the cert but that in practice making the determination of where exactly in the application data the cert messages arrived is hard to impossible so it'll be a best effort kind of thing."} +{"_id":"q-en-http-extensions-05e4a7b48f1a7d548233481968f79d38a947e9c24efc5258652428be6d2a87b2","text":"fill out client-cert's IANA Considerations with HTTP field name registrations to\nNAME wrote: http-core. This should be pretty simple to add, but for the purposes of having a clean PR pulling out the TODOs, I'm moving it to an issue for now anyway.\nURL points to draft-ietf-httpbis-semantics for this"} +{"_id":"q-en-http-extensions-59bbc90156254dae5a1385756b276d16d78bb9c27af7b564d6465b9fb97598f5","text":"Adds text to clarify algorithms in several places: encode ECDSA signature as raw bytes, mark ECDSA and RSAPSS as non-deterministic note non-determinism of ECDSA and RSAPSS in examples, add security consideration of nondeterministic algorithms. Add ed25519,\nNAME and NAME : this PR updates the text around non-deterministic algorithms and how to match them, please take a look. It also is more explicit about encoding the ECDSA signature as a raw byte array (as was indicated on the list).\nIt's been a very long time since I added the RSA signature test vectors to the appendix of the HTTP Signatures spec. We should replace them with more modern examples. My suggestion would be to use ed25519 examples, or secp256r1 (P-256) examples. The former would need a , which would be a good thing.\nWe should add new examples, not remove ones that are there and are functional. All of the test vectors in the spec have been re-generated from scratch in the latest draft, so none of the older tests should still be in the draft at this point.\nAdditional tests for HMAC and ECDSA were added in\nKeeping this issue open as a call for adding support for Ed25519 and others.\nEd25519 support seems to have been implemented in the wild by libraries like URL and URL Tagging an implementor: NAME could you help define the parameters for Ed25519 support in the core spec?\nYou can do HTTP message signing with profile and key management using (a Rust programming language with Lua scripting) and its Content Management System, (aka Lighttouch). The crypto implementation is in its and , and database is in . has a refined basic example to see how it works. I'm working on a web site for URL with installation instructions, a getting started guide, organized documentation links, and example projects. If you need any help, I'd be happy to answer your questions.\nI've implemented what I think is a reasonable version of Ed25519 using pynacl and added it to the demo generation script, if someone else can confirm this code and its results that would be best. I had to do a lot of unpacking of the key values by hand because support wasn't natively there in the libraries I could find. URL NAME perhaps? 
Script output: Base String: Signature:\nI am just in the process of pulling out my implementation of Signing HTTP Messages to its independent repository . But I can check that the signatures work in JS and Java for ...\nAfter asking around, I've updated the code to run a SHA-512 hash on the input base string before feeding it to the signer, to have some things to compare to. This results in the following signature instead, with all other parameters being the same: This simple change is in URL\nAfter even further investigation, I'm still not sure whether the pre-hash is really the right approach here or not.\nWould having other implementations help decide this? (I have not gotten round to trying an implementation) What is it that is making you undecided? (Perhaps knowing that could help me give you an answer by implementing)\nOther implementations are always welcomed. What's getting me right now is that I can't see if it's necessary or expected with Ed25519, as it would be with other signature schemes like RSA or ECDSA.\nTo my knowledge, the pre-hash variant of Ed25519 is not very well supported relative to the \"pure\" variant.\nIn Java I found the following: In the Java Web Crypto API I found a proposal to from Feb 2020. I found an issue in Chrome's Blink-dev which linked to an issue . From that it seems that there is no built in implementation in Browsers. It seems people have been building their own implementations in JS. There are implementations such as aimed at URL But I can't tell which are well built, or not. So that means that on the JS side there is quite a lot of work to do. So that is something to keep in mind when developing ...\n\/cc NAME -- I know we've had this discussion w\/ a number of vendors, where did we end up -- what seemed to be the most \"used by default\" approach for EdDSA w\/ Ed25519?\nThis code from the RFC shows that there's an internal hash in the algorithm applied to the message: URL This is true whether PH(M) is a SHA function or an identity function, so I'm now really not seeing the value in having PH(M) be an additional hash.\nI got NAME first signature above to work in Java with this commit: URL That did not require me to set any hash.\nI've backed out the prehash changes from the generator script, and we'll be adding a definition that uses PureEdDSA with no PH function. Other functions can, of course, be defined on their own in the future.\n+1 for PureEdDSA with no PH function.\nNo PH seems like the right thing"} +{"_id":"q-en-http-extensions-6d94bd9166179849c288d3de74492f899703443ffd7c8ce8713f759ca3f4203c","text":"WIP.\nNAME NAME quick once over before I merge?\nNAME thanks, updated, squashed.. merging.\n:\n: The and rules are not used in the document. The rule is not used in the document, unless you accept . The rule is not used in the document, but I think it can be useful to clarify the definition of , i.e.:\nI agree that using \"field-name\" over there makes this a bit clearer.\nAddressed in: URL"} +{"_id":"q-en-http-extensions-8e6f4f0218774c8bcfae8818a1ab7437944c06ebeee8045b6a2937de39b7ac75","text":"~~\nThanks! This Lgtm but I'll let Steven take a look too\nThis looks to me as well, thanks!\nUAs should not decode any URL-encoded (or otherwise encoded) characters while storing a cookie, e.g. a cookie such as should not be interpreted as a cookie. I landed some WPTs for this in URL Chrome, Edge, Firefox, and Safari do the right thing. 
I think this could just be a clarifying note in the parsing algorithm section.\nI'll take this one\nLet's make sure we are consistent for names and attributes."} +{"_id":"q-en-http-extensions-3c11d41c7a2063bd32284daf2e44e5a3c7d20b17ab380fc939318c83c32d6577","text":"update canonicalization rules, remove weird headers from examples changed dictionaries and messages. updated crypto primative functions regenerate examples"} +{"_id":"q-en-http-extensions-965e659803a8ae4759abefff11b5b1b031e8f8cbb8cf126c7cd08d7e66c576f8","text":"pseudo-header is hyphenated per [HTTP3], and expand out a \"headers\" straggler.\nThanks, this is great!"} +{"_id":"q-en-http-extensions-da2b69dd90010d90879186545f2f91299222602dc5cd82c31f44f614e778b9cf","text":"Update query parameter generation in examples,\nI verified that B.2.3 is fixed, but did not implement the \"ttrp\" example.\nThe Example from uses the following request defined in B.2 The signing string is meant to be But that does not work because in the request we have and the date in the signing string is one minute later\nThanks, all the examples need to get re-generated and I'll work on that soon. The current generation script is at URL ... it's very manual right now but I'm thinking of reworking it to do actual HTTP message parsing under the hood like we have at URL instead, so we don't end up with inconsistencies like this in the results.\nI am also developing a test suite for the examples in the spec, which is why I am reporting the bugs I find here :-) That can help you later verify that your new setup is correct. I just found a last error in the Signing String. The Signing string given there is my test tells me that the diff should be: But the implementation having is correct I think.\nExamples have been regenerated by , please check the new example for consistency.\nJust saw this other comment, yes it looks like there's a bug in the query value generator that drops the question mark!\nBtw. My EU project just ended, so I am looking for extension projects. But perhaps someone here knows companies that need me to work on the implementation? (Ie,Java or JS or Scala)... I don't really know where one can ask this. A bit of a specialized thing :-)\nNAME please check the updated examples in -- I re-ran the script with some bugfixes in place which should fix both B.2.3 and the TTRP example in the appendix."} +{"_id":"q-en-http-extensions-815e4c45edce7199188ed6f7b8f95968e31fb6c43e4513ffaefad3275f58d152","text":"This spec uses 'body' and 'payload' a lot; I think most should be 'content'."} +{"_id":"q-en-http-extensions-e47648b0d92f0ff2b769bc58ca25a87bb78e45268e27020895772dd4b508ba73","text":"Sounds like a good idea to me.\nfrom the editor's copy >7. \"message\/bhttp\" Media Type The message\/http media type can be used to enclose a single HTTP request or response message, provided that it obeys the MIME restrictions for all \"message\" types regarding line length and encodings. is this a typo and it should be >The message\/bhttp media type ? Also defining this type at section 7 seems quite late. The first instance of is in examples in section 5, easily missed due to the similarity and prevalence of that is mentioned as a contrasting type. I think it would help a bit to talk about the bhttp type earlier. 
Probably in the introduction, just after you talk about the old type.\nLGTM thank you"} +{"_id":"q-en-http-extensions-a62d040ee66f4b57772f75c881ff5134179342c26113fca4d795f704af4cb085","text":"…ng authentication if Erik is cool with it.\nNAME\nETIMEDOUT NAME if you object please reopen."} +{"_id":"q-en-http-extensions-f1d02284af1c2afbb7277cd4021fd823f743c552c77041225a92a0c4f535c9f8","text":"What does it mean for a client to send, for example, a CONNECT request in the binary message. Is that permitted or prohibited?\nI don't think that it needs to be prohibited. It's just not going to do anything that makes any sense. Worth a note perhaps.\nI guess it could work if the thing you're delivering the binary message to actually actions the CONNECT, exchanging tunneled bytes would just look like content.\nYou would need somewhere for those bytes to go. A valid CONNECT request doesn't include a body. It's nonsensical basically."} +{"_id":"q-en-http-extensions-1078117cc8b5c02a5cc180abcaaae8c0c226c434d748f26daaa5bf0107ca94b0","text":"The HTTP\/2 rules are related to field value validation, so move them there. The rules about Connection fields are not necessary, but it won't hurt to be clearer.\nthe rules of Section 8.2.1 of [H2]. A recipient MUST treat a message that contains field values that would cause an HTTP\/2 message to be malformed (Section 8.1.1 of [H2]) as invalid; see Section 4. Those references are doing a lot of work -- but does it also include 8.2.2? I.e., are and friends allowed? It's ambiguous, because the text implies all things that make it malformed, but only points to one source.\nNo point duplicating that work, but I agree that it might be unclear as stated. Does this split make it clearer? I'll make a PR with those changes in context. (I think that the latter is consistent with what you and NAME are discussing on-list, though I need to spend a bit more time on understanding that.)\nI think it does the job, indeed!"} +{"_id":"q-en-http-extensions-2e3f4ef186631fef6b09caba367c01f3f58f8ce1529cd6b5428e58b21405d0bf","text":"The format is capable of capturing messages, but the design only really supports the expression of the semantics of valid messages. That makes it unsuitable for applications that need to know exactly what was put on the wire in a given protocol.\nThis document is going to be viewed by some as establishing the 'shape' of an HTTP message -- which is a good and perhaps necessary thing. However, some will be sad, because it doesn't suit some applications such as forensic preservation of messages, bot detection, debugging, faithful representation of HTTP\/1, etc., due to transformations and loss of certain information (especially around headers). So it'd be good to set expectations early on (perhaps in the Introduction).\nGood point. It's for semantics, not forensics and all that."} +{"_id":"q-en-http-extensions-e10893dd3c97a1fa38434bb591aa48d51be5b77d760df6f4364b9d939b758c1e","text":"The original framing might have implied that they can't be sent, where it's just that they are the same as anything else.\nIn s 6: This is just a bit weird -- some may read it to mean that these messages can't be conveyed in this format."} +{"_id":"q-en-http-extensions-1a0e70ef7dc782b75153fa4a72dffc1373274b60694f2850a425552737154449","text":"For the avoidance of doubt. This makes encoding easier if you don't care about wasting a few bytes.\nThe binary messages draft uses QUIC variable-length integers. 
Those allow encoding numbers multiple ways (for example 42 can be encoded minimally as 0x2A but also non-minimally as 0x402A). The draft should clarify whether using non-minimal encoding is allowed or not.\nThanks!"} +{"_id":"q-en-http-extensions-afe110ab965735243eb6c150ff7f1640c8cd145e380942dbb4081d4b7ba1ea70","text":"by using a citation rather than a redundant and wrong statement. by removing the minimum length (zero is totally fine).\nThe Known-Length Messages section states that whereas the Header and Trailer Field Lines section allows empty field values. Which is it?\nEditorial error on my part. Someone pointed out that fields can have empty values (yet another feature of the protocol that is weird and unnecessary). I fixed it one place, but not the other. Thanks for noticing.\nThe draft currently shows: This Length field isn't well defined. Is it the length of the Known-Length Field Section in bytes or in lines? If in bytes, does it include or exclude the length of the Length field itself? Either way, I don't understand why Length is at least 2. Either it's possible for there to not be any field lines, in which case maybe it should be zero. And if there needs to be at least one field line, then (we should say so, and) the minimum number should be 3, because a field line is two lengths and at least one byte of field name.\nTo pile on here, the spec also says in one place and in another It's a little ambiguous but f the value also can't be zero-length, then the minimum should be 4? (by David's second interpretation)\nNAME see\nYeah saw that after hitting the button. Didn't intend to derail\nThanks!"} +{"_id":"q-en-http-extensions-c48425b9fb17401ffe4acd71a83a74c074ae8bc5abec2340195097053e649f29","text":"Basically, it saves the effort of writing stuff that you know to be empty. It's not a critical feature, but it turns out to be a simple way to deal with not sending trailers or a body. In the extreme, you can also drop headers to produce the simplest and shortest HTTP message ever: 0x0140c8. CLoses .\nHey David, I've made some larger edits here to factor the truncation text better. I've included your normative language.\nThe draft mentions that messages can be truncated in some places, but not in others. I think it would help to explain why one would want to truncated a message.\nThis is really just a neat way to avoid having to write out all of the pieces of a message. So if you don't have trailers, it's OK not to say \"I have 0 trailers\". The decoder is required to deal with it. Before you say \"that's hard\", you don't need to change the parser to deal with truncation. Adding three zero-valued bytes to the end of any message you receive does the job just fine. If it isn't truncated, those will be treated as padding.\nThanks"} +{"_id":"q-en-http-extensions-78ed567c5828a891ac11ad853e76d3b51219f4afc0c3d9d922b81cb59a9fceca","text":"The diagram resulted in a few double-takes that might be helped by a little bit of a note pointing this out.\nI suspect that requests do not carry the Known-Length Informational Response field (otherwise how do you know how many of them there are?) but that should be spelled out explicitly. I'd suggest creating a different diagram for known-length request vs known-length responses to make this easier to reason about, and same for indeterminate length\nI think the optionality is supposed to be indicated by the three trailing dots . I admit to doing a double-take on this during my initial review, similar to David's question. 
It's optional for a server, sure, but it's never optional for a request just wrong. I agree that separate diagrams would help, at the minor cost of some page space."} +{"_id":"q-en-http-extensions-8ff8dba12d297bc559c893986598c99cd06c98738af1c4bd43944a796f60102b","text":"A combined \"field value\" is a term that HTTP defines, but we didn't link to that. Also, the structure of this paragraph was a little more terse than it needed to be.\nSecurity Considerations state: but the Header and Trailer Field Lines section states: The same field name can be repeated in multiple field lines; see Section 5.2 of HTTP for the semantics of repeated field names and rules for combining values. This leaves me confused, why would it be necessary to produce a combined value if repeated field names are allowed?"} +{"_id":"q-en-http-extensions-b737e5fcaeb4c054813075eb05ca91aa2a0cab5b85eed6588f3d8a4b71985a68","text":"Includes several document cleanups for signatures, to publish as -09\nAlso: If there is no query string should the corresponding signing text be or ie. should the query value be empty or just consist of one Should we return with this request and return on this request This is what I opted for in my implementation. See shows that http4s makes the distinction between blank queries and empty queries is simpler I think this is a corner case that is worth explicating very clearly, perhaps even with a test.\nThere are quite a few occurrences of the header in the draft. From an implementation point of view, Go \"swallows\" the header and it is not part of the standard Request structure. There's a good reason for the Go library to do it, and in fact the draft recommends to use NAME instead. Can we remove from examples, e.g. Sec. 2.1, so we don't train people to do the wrong thing?\nThere are only a couple instances of the \"host\" header in the current draft left, and one of them (section 2.1) is specifically about how to format headers. It could be removed from the other example signature input strings without affecting anything negatively. Same with the \"date\" header, which could be confused with the \"created\" parameter, see\nThis sentence is correct: But it's true for most reasonable uses of , and people will still be using this parameter. So why not add:\nNo, we explicitly did not want to conflate this with the jwa registry. We also didn't want to have a second tier choice like having \"alg=jose\" and then having the actual algorithm elsewhere, since a big problem with late editions of the Cavage draft were exactly that process. (See the mess around \"hs2019\"). The editors discussed this at length and decided the best compromise was to define a way that jwa algorithms could be used in an interoperable fashion, but not to allow runtime signaling with them. Better to have the choice of jwa be signaled within the application level. Additionally, it's preferable to not use the runtime parameter to signal the algorithm any way, which is discussed at length in the security considerations.\nAll good, so why aren't you using normative language?\nIt's not normative because this section defines the algorithms, not the parameter. We didn't want to make it seem like the parameter was the \"right\" way to signal algorithms here, and most certainly is not required. You'll notice that none of the other sections have \"MUST use the alg parameter value of 'foo'\" either. 
Older Cavage drafts relied on the parameter, albeit in a much less secure fashion, and implementations were susceptible to algorithm substitution in the same way that naive JOSE implementations tend to be -- the \"alg: none\" attack all over again, more or less. The value of the parameter already MUST be something from the registry, and the registry does not (and will not) contain JWA values. So you already aren't allowed to do it, normatively, and the note is our attempt at explaining the why. Ultimately, an implementation is allowed to use whatever algorithm they want and makes sense. What this whole section does is define a set of common usable algorithms as well as provide a pattern of applying crypto primitives that applications can follow for their own algorithms. If someone defines a new algorithm that's generally useful, like we recently did with the ed25519 here, it can be registered for use with the parameter. But if a system can base the algorithm selection off of the key material or another part of the protocol (like with GNAP), that's highly preferable and less vulnerable to substitution at runtime. I'd be happy to know how we can make this clearer in the text, but I'm not comfortable with that kind of normative construct in that section.\nNAME -- would you like to chime in? I recall from our discussions you having another, more specific reason for this decision, but I can't remember the details beyond us agreeing to do it the way we have it in the spec right now.\nNeither Java nor JS subtleCrypto come out of the box with PKCS1 PEM support. (for java ). It is not immediately obvious to work out that the PEM formats of the private key in in PKCS1 format (the spec could point that out). When building an implementation it helps to have the test suite clearly map to the data in the spec (e.g. ) so as to make error reporting clearer. That is even though using the following openssl command to convert the keys work, it makes it difficult to see if an error was introduced at some point. To support PKCS1 the developer has to depend on further less well tested libraries and add additional code (as I ) leading possibly to new bugs. Furthermore, I will need to go through the same process for JavaScript now, doubling the work. Since PKCS8 is as expressive as PKCS1 ( and see ), it would make life easier for devs if the keys were all in PKCS8 format. Unless there really is something that PKCS1 offers that PKCS8 does not? I will try doing this using the Web Crypto API next. But there we find that supports the PKCS format.\nWhile I am at it. I am having trouble with verifying the signature for 3 out of 4 examples from the spec that I tried out in . Currently the only one that works is the . I am able to produce and verify a signature for , and examples, but the signatures don't correspond to what is in the spec. That may very well be a mistake in my code, which I am working on as a to bobcats scala and scala-js crypto library. But I just wanted to check how confident you were in your examples. I am going to re-implement next in JS, so that will give me another reference point.\nI am working my way through this with JS. I was able to parse the and keys after translating them to pkcs8 format with openssl, but am having trouble importing in the console of Chrome, Chrome-beta, Chrome-Dev and Safari. I guess that is in pkcs8 format already. I don't have trouble parsing that key with BouncyCastle... I guess this is a little config problem, but it's difficult to work out what that could be... 
Perhaps someone has an idea?\nNAME Since this all has to do with the examples in the text and not the spec itself, I think we'd be fine with having the keys in multiple formats if it's useful for developers. However I'll note that in many code libraries you need to add the PEM headers of and for their compliant parsers to parse an EC key in PEM format, for example. I've been able to parse these keys in both Java and Python, and in fact all the examples are generated using a python script: URL Also, note that RSA-PSS and ECDSA are non-deterministic signatures that incorporate a source of randomness into the signature calculation. This might be why you're getting valid signatures that look different from the examples -- you shouldn't expect to replicate the exact signature values. RSA 1.5 is deterministic, and will always give this same signature for the same input (as will HMAC). This is just one of the reasons why it's not recommended in many environments. I think we can add some text in the examples sections that points this out, if that would be helpful.\nNAME You can also test keys and examples at URL for another implementation. I am pretty confident in the values in the examples, but if there are in fact bugs in these implementations (which is always possible!) I'd love to know and fix them.\nNAME you were right. I had been verifying the keys by string comparison, so I fixed that. I also worked out what the Java names for the various algorithms were and got nearly all the tests to run on JS and Java platforms. But there is one error left that is consistent across both, so I wrote it up here. URL\nNevertheless, I have not yet been able to import the private key PEM file in JS. So I looked for a way to convert it to JWK to see how that would fare. I looked online, but none of the services worked, I looked at npm but that did not work, finally I found in Java that did the conversion in 2 lines of code. With this I could get signing to work: So there is something problematic with that PEM, at least insofar as it makes it very difficult to get the example to work in JS which should be one of the main platforms. (I have not yet verified the signature in JS as I am having trouble getting verification in JS to work with a valid PSS key, even though I can create the public key with the PEM, and am able to do that Scala-JS)\nI think I found the problem to be with the private key. After going through this process: transforming the spec key to JWK as described above transforming the JWK to PEM PKCS1 using node's I get the following transforming the above using Java's Bouncy Castle to PEM I get this: And with that all the JS crypto API tests pass. See the .\nI think to understand why that final private key works with the Java Web Crypto API but not the original one from which it was derived (which worked with Bouncy-Castle) we'd need the input of someone like NAME . With that public key replaced (in commit ) and a few others commits to get Continuous Integration to work, we have all the tests pass for Scala compiled to JavaScript Those running in the browser are a bit interleaved due to asynchronous execution, something that needs still to be fixed.\nLooks good!"} +{"_id":"q-en-http-extensions-5eb1ff36cb5a6de0ef3696e688359627d9da1a87db51eb03381032ee35f92047","text":": These rules do not permit whitespace between a semicolon and a parameter, which was probably intended (judging from the examples and from similar rules elsewhere in HTTP). 
They should be rewritten as follows: and the import of the rule from RFC 7230 should be noted in the preceding text (“The \"Encryption\" header field uses the extended ABNF syntax . . .” and similarly for ).\nThanks for picking this up."} +{"_id":"q-en-http-extensions-03f89714d5dd2a159cf2e5d33cc5813931f85330c356eb5e084f7f71b98937dd","text":"no normative text in security considerations.\nno normative statements in security considerations\nSeparately, this could be read to imply that a TCP checksum is sufficient. I'm not enthusiastic about that, URL Originally posted by NAME in URL In reply, I said\nThis has been slightly modified by ... TLS can't probably be normative, but people already knows..."} +{"_id":"q-en-http-extensions-bb19bfaca0a0ec0c52fa3e97fae5a54d5dd500c8222c4331c74a527eb6a0448e","text":"simplifies IANA field registration section obsolete Digest\nThanks for this, it's almost there. I suggest we just have two sections that ask IANA to do things directly, rather than giving any prelude to those instructions."} +{"_id":"q-en-http-extensions-fdeaa90a7f737cd8dea9dc1f9fc9f5fbf0f42e11c987fdf7a99f06f272228cc9","text":"Includes some new text and several small cleanups.\ndid you forget to merge it?\nSorry, there was ice cream cake and I got distracted\nSome header field values are case-insensitive, in whole or in part. The canonicalization rules do not account for this, thus a case change to a covered header field value causes verification to fail.\nWhile I'd just put a warning on that, it would be great to have some examples\/use cases for that.\nIntermediaries are permitted to strip comments from the header field value, and consolidate related sequences of entries. The canonicalization rules do not account for these changes, and thus they cause signature verification to fail if the header is signed. At the very least, guidance on signing or not signing headers needs to be included.\nLet's add a paragraph with some caveats and header examples."} +{"_id":"q-en-http-extensions-4f5f52a57b10d104ffd42c0e8301a36991d2fc012e8af96adf4a94387fa9e386","text":"I missed this from NAME but it's a point well-taken. Not defining this was sloppy.\nThis was buried inside which was closed so I'm opening a new issue. The draft currently shows: This Length field isn't well defined. Is it the length of the Known-Length Field Section in bytes or in lines? It would really help if this draft defined every single field it declares.\nThe draft currently shows: This Length field isn't well defined. Is it the length of the Known-Length Field Section in bytes or in lines? If in bytes, does it include or exclude the length of the Length field itself? Either way, I don't understand why Length is at least 2. Either it's possible for there to not be any field lines, in which case maybe it should be zero. And if there needs to be at least one field line, then (we should say so, and) the minimum number should be 3, because a field line is two lengths and at least one byte of field name.\nTo pile on here, the spec also says in one place and in another It's a little ambiguous but f the value also can't be zero-length, then the minimum should be 4? (by David's second interpretation)\nNAME see\nYeah saw that after hitting the button. Didn't intend to derail\nThanks"} +{"_id":"q-en-http-extensions-73b7d7aa452e7388ac21ce6b06950026d13a546a1c1c7002a34826dcba5ea281","text":"S 3.6 says that extension pseudo header MAY be included but they have to appear before other field lines. 
An easy detail to miss buried in HTTP\/2 is the requirement does that also apply here?\nLGTM thanks"} +{"_id":"q-en-http-extensions-3ec9338f4142ae88fde25cc258ea41cde5b768750d59f864f3d4b1c9a7751f0a","text":"If it wasn't your thinking already, I'd like to suggest that the derived component be the singular instead. When canonicalizing individual query parameters, each component identifier references a single query parameter name\/value pair. The use of singular feels more appropriate to me: In the rare case where there are multiple query parameters with the same name, each one's canonical representation is still referring to a single name\/value pair, so the singular still feels more appropriate: When used in the list of covered components, we still include the parameter, so each component identifier is usually [1] referring to a single query parameter name\/value pair: [1]: However, in that rare case of multiple query parameters having the same name, one instance of would cover all instances of that repeated query parameter named . Perhaps the plural does make more sense for this scenario. But if there must be a singular\/plural mismatch somewhere, I'd rather see it in the more rare and exceptional case. So I would propose using the name instead of . In , I see that a recent addition implemented it as the singular . I wonder if drafters already had this change in mind or if it might have been an oversight that organically tended toward what feels more natural.\nThank you for bringing this up, I think this is a valid point. I don't think there's wide support for this derived component yet so now's a good time to change it for all the good reasons you mention.\nLooks good to me, will leave for a beat to get feedback from others."} +{"_id":"q-en-http-extensions-14c42483019c7cecada3776900d1c2effbb9f3cbec22794645daab7c442ee0df","text":"\"The record starts and ends on multiples of the record size\" sounds like a tautology."} +{"_id":"q-en-http-extensions-315b44706b400044bfaf9c0d7a0b335923a94711f8fac27ff833857e5d830f93","text":"NAME - that does not seem to work as expected. See the output at URL\nUnderstood. I personally inline everything always, so outages, bugs, and not-yet-published states simply do not affect me :-)\nPlease note that, per the rest of this cluster of documents, the reference citation strings \"[HTTP2]\" and \"[HTTP3]\" have been changed to \"[HTTP\/2]\" and \"[HTTP\/3]\", respectively. Please let us know any objections.\nThat does not seem to work as expected, see output at URL\nSee URL for how to do this in Markdown\nI struggled with this in my own draft, the instructions in the style guide didn't seem to work, guessing they will once the RFCs are actually published. I copied Mikes approach, but H3 dgram solved it with what looks like a simpler approach URL\nThey key part is that \"\/\" can not be in the reference name (because it's not legal in an XML ID), thus needs to go into \"title\". That may not have worked locally unless you have a recent version of kramdown-rfc2629.\nIf I do and ref RFC91113. Then make causes kramdown to blurt out and the HTML output is Versions:\nOh, that's because the XML reference file will not exist until that RFC is published. 
You need to inline all the information.\nyeah, that's what I meant by my unclear statement \"Mike's approach\" - see URL But that's like a million miles off what the style guide suggests, and is annoying while we are stuck in this temporary purgatory :D\nSo there are two issues here combined: citing something as RFC which is not published overwriting the reference name The latter thing is what the style guide describes; for the first issue only inlining everything helps. (I personally inline everything anyway, so it didn't occur to me that that's the problem you encountered :-)\nAH! Thanks for the info and pointers. I've sent out to fix.\nI didn't even know what David did was possible. Magic!"} +{"_id":"q-en-http-extensions-9d24104e7129ebec14bfbdeeb9ff3b2d8dd248f7e7d066c9b3254607322df964","text":"NAME I think I did the needful here by cargo-culting . Does this look good to you?\nThanks for the suggestions!\n(not tested, but if it ends generating the desired output...)"} +{"_id":"q-en-http-extensions-5141eb8cd085062019ec039aa407ecf4883bcb484aa4d953db730e41e5756d71","text":"Typi fix\nI had to read that three times to spot your change. Something about that text blinded me to that."} +{"_id":"q-en-http-extensions-436df9002389787e12ad27fedd59458eaaee2ae8f44fa50cca56fd46d360a7bb","text":"Right now, it's inconsistent in the document. HTTP core just uses an unquoted version so I'm inclined to just use that"} +{"_id":"q-en-http-extensions-5936ea3aa6f67a4988b3e215087421992eb57e2fc8a2da80c04d49bb6e4fa7b7","text":"try to say that for multiple post-handshake client certificate authentications, only use the last one (to\nTLS 1.3 post-handshake authentication enables the use of multiple client authentications. The draft probably needs to address this. Firstly, the proxy could be smart enough to know which certificate applies. After all, it needs to know how and when to ask for a certificate. If it is that smart, we don't have a problem. If not, then... For , this is easy. If there are multiple certificates and the reverse proxy doesn't know which to include, it can include all of them. It's easy enough to add extra values. For this is less easy. There would be no way to distinguish which intermediate related to which end-entity certificate if you just added them all. I don't think that's fatal though. Chain building for validation always has to deal with certificates that don't do anything, so maybe we can just dump them all in there and let the back-end server sort them out. In terms of actions, we can say something like:\nThe draft currently , which precludes multiple values\/instances. I am very reluctant to change that in support of something that rarely if ever happens in practice (it would mean HTTP\/1.1 with multiple TLS 1.3 post-handshake authentications or post-handshake authentication(s) after a client cert authenticated handshake). Could we instead just say the first\/last one is used? Then it's predictable\/defined should it ever happen. Or declare it out of scope. Existing related text from URL is copied here: >HTTP\/2 restricts TLS 1.2 renegotiation ( of ]) and prohibits TLS 1.3 post-handshake authentication ]. However, they are sometimes used to implement reactive client certificate authentication in HTTP\/1.1 ] where the server decides whether to request a client certificate based on the HTTP request. 
HTTP application data sent on such a connection after receipt and verification of the client certificate is also mutually-authenticated and thus suitable for the mechanisms described in this document.\nI could live with picking the last in the absence of information that might allow for another to be selected. But what harm is there in using a list?\nMakes it all more complicated b\/c everything then needs to treat it as a list and reason about it being a list. The value in typing something as what it is seems self-evident so I'm at a bit of a lose as to how else to answer that question.\nWFM."} +{"_id":"q-en-http-extensions-ae67e70b09566468cc8c09f1f155ffde74c723609f8642e114e58e7b8eaecb17","text":"add a note about cert retention on TLS session resumption (to\nHi, given that I had the question today relayed from a customer by our support team, I had a second look at the draft to see what was planned for resumed connections, and I'm seeing this: The problem is, on resumed connections, the client does not present the certificate again, it will use a ticket or a session ID to match a cached entry or any other mechanism I'm not familiar with (please bear with me, I'm not SSL-fluent). In our case when this happens, the related fields are empty (since the info is missing from the TLS stack), and we encourage customers to log something else as a complement to match new connections against a previously verified session (e.g. session ID). I think it's important to explicitly state that such a certificate info is not necessarily available in such a case, and that either the proxy+application have to find other ways to persist the information between connections, or that the cases where the certificate is matched need to be narrowed down to the strict minimum. For example, I remember a decade ago when working for a bank that the security team absolutely wanted to distinguish really authenticated connections and resumed ones and that in particular they refused to consider as 2FA something that resulted from a resumed connection because they wouldn't be able to prove that the authentication happened at this precise instant if they had to defend this in court. After the different versions of TLS evolved, I'm not sure whether there is a stable session identifier that's available across multiple versions, but I do think that if we document a common method to pass this client-cert info to the server, we also need to provide \"something\" for all the resumed connections that do not convey that info anymore, otherwise users might fall back to poor solutions due to the difficulty of understanding all the details of the various versions (and it's very likely that I've been one of those giving such wrong advices, not knowing how to do better). It's particularly important in load-balanced environments because you cannot count on the front LB to keep a copy of the client-cert, as the resumed connection might very well end up on another node, and you cannot count on the server either since different requests will often reach different servers. So the only thing that remains is that \"thing\" that the client uses to be recognized on resumption (sorry for the imprecise language, it seems for me that up to TLS1.2 it was a ticket but I'm not sure for 1.3).\nHi Willy, This is likely a limitation of your TLS implementation. A perfectly reasonable limitation, but not something that is consistent across all implementations. 
What happens with resumption is that any state from the initial connection can be carried over into a resumed session. Obviously, things that happened on the first connection after a ticket was issued can't be assumed to be available, but generally anything that happened in the original handshake could be used. What the specification assumes is that the details of the handshake are remembered and can be used again. There's a bunch of other stuff like which TLS version and whatnot, some of which the specification requires endpoints to remember (see for example , which insists that the KDF remain the same). When it comes to authentication state, however, TLS is not very consistent and so implementations are not. Some implementations (like NSS and Boring SSL) remember the client certificate. The way they do this is they put the whole certificate chain in the session ticket. That makes the tickets pretty large, which can badly affect the resumption handshake: whatever you save on sending certificates is eaten up with a huge ticket in the first message. Worse, this first message is bad DoS mojo as you have to accept a whole lot of state before you are really sure that the client is OK (there are ways around this, but they are all pretty bad). Others might save the certificates server side, but that adds a whole different type of fragility. Others just \"forget\" that the client offered a certificate. That last one is probably what you are seeing here. This is a whole new sort of problem to solve. What you might need in this case is a way to recover an old certificate from a log based on some identifier. The problem here is that the identifier isn't stable either. TLS doesn't have a single consistent value you can use across all versions and all types of resumption.\nHi Martin, thanks for the detailed explanation. The only viable way to convey the cert info is via the ticket anyway, because several consecutive connections may reach different LBs and there's no way to be faster than light to synchronize them on any external data. What you mention about BoringSSL is interesting given that it derives from OpenSSL. I'll suggest that my coworkers have a look there, in case we'd find that the behavior is configurable. Anyway, my point remains about the importance of mentioning this in the spec (not the details). Just putting a big warning about the fact that the presence of certificate information in resumed sessions is implementation-dependent, because that's an easy trap everyone falls into, and that's too bad when it serves to design an architecture and is discovered after deployment.\n(BoringSSL's behavior was one of the things we changed from OpenSSL's. OpenSSL only retains the leaf certificate across resumption. We found that having different information available across resumptions led to bugs.) That, alas, does lead to a huge ticket as NAME mentioned. There's a third solution which avoids this and I think demonstrates an issue with this draft's kind of TLS\/HTTP split. (We haven't implemented this in BoringSSL, as client certs aren't a huge priority for us, but I think it's clearly the right design.) At the end of the day, your server presumably uses client certificates as the proof for some sort of application-specific identity. That identity may be an email address, a username, or something else entirely. You can think of the VerifyCertificate process not as returning yes\/no, but as returning the verified identity or an error. 
If you can capture your application-specific notion of identity, that's all you need to retain after the handshake and in the ticket. It's also all you need to pass down to the application. This will be much more compact than the input certificate chain. But it's incompatible with this and headers, which use a much larger intermediate representation. Of course, per the \"don't make resumption behave differently\" rule, any application which does this should make the full chain equally unavailable on initial and resumption handshakes. (As an analogy, when clients verify server certificates, the identity is the SAN list, perhaps with some metadata like expiry. We don't really need the original certificates. Though browsers tend to provide these silly \"view certificate\" buttons, so we've got a bit of a hard time there.) Back to the original topic, I think we need to treat resumption behavior as a prerequisite to deploying a system around or . If you retain the EE certificate across resumption, you can use and enable resumption. If the full chain is only available on initial handshakes, you must either disable resumption or not implement . Trying to do something clever across TLS and HTTP session state will not work. I've seen far too many cases of bad server deployments breaking flakily because the system made assumptions about the relationship between HTTP requests, connections, and TLS sessions.\nI did have a moment of hesitation when writing \"... or the resumption of such a connection ...\" as I didn't know the details but figured there might be implementation-dependent limitations or variations around what\/how data is retained across resumption. I don't think this draft can do anything about it. Other than a mention? What else can be said? Please propose some text, if you can.\nIs there a reasonable way to say that, in order to use \/ , the TLS implementation needs to retain the client cert info across resumption or not offer resumption to clients that established mutually authenticated connections?\nFor me it's not possible to \"maintain\" it as it's not necessarily the same physical machine. And one essential property of tickets is that they're usable across a fleet of reverse-proxies who share the same keys. I tend to think that it's not this draft's business to try to solve the TLS-level problems nor architectural shortcomings, however the draft needs to warn unsuspecting adopters about them. Something around this maybe ? \"The ability to extract a client certificate and\/or an issuer chain from a resumed TLS connection is entirely specific to the implementation and sometimes also to the TLS version in use. Implementers are strongly encouraged to verify if their implementation matches their expectations or to make sure the application has some way of retaining such information once already learned. One possibility to work around implementation limits is to completely disable resumption when client-cert is needed\".\nThanks NAME I can work with that or similar. Probably\/maybe in the Deployment Considerations somewhere like a new subsection. I'd love for NAME to take a look and tell us why it's wrong though and how it could be better.\nI might add David's suggestion to recommend against providing a value that won't be consistent after resumption, so:\nI guess I'm fine with Martin's proposal.\nI'll borrow from both suggestions to piece something together.\nOpened since it would be good to also have discussion in Security Considerations. 
There are some very sharp edges here (beyond the ones mentioned already) due to variations in how implementations do resumption and client certs in combination in ways that could be surprising to people who are not experts in TLS implementations."} +{"_id":"q-en-http-extensions-3288566468a98a9febd98a5f6ec38bd13a542d6e26c62a97456e5467f0581c99","text":"Comment by NAME , Figure 1: , Figure 2: For consistency with the rest of Figure 1, you could add a comma after Content(..) and Chunk (..) : Missing word \"message\" : s\/intermediate\/indeterminate\nComment by NAME The document uses \"indeterminate length\" and \"indefinite length\" interchangeably. I would suggest either using only one, or adding \"indeterminate\" when the first \"indefinite\" appear, i.e. in :\nComment by NAME : Should these be \"informational\" rather than \"information\"?\nComment by NAME : Please reference RFC-ietf-httpbis-semantics, or , or the IANA registry: URL after \"status code\".\nComment by NAME : : I think HTTP\/2 should be a normative reference.\nComment by NAME : Might be worth clarifying that this integer value does not include the length bytes. My comment comes from the fact that \"field section\" is the name of the structure containing the length, so \"number of bytes in the field section\" could be interpreted to include the length bytes itself.\nComment by NAME : : At this point in the text I thought this was missing the optional Informational response field, but you say explain that several paragraph later. Maybe clarify here that this is for messages that are not responses containing informational status codes, or move those paragraphs closer together?\nI think that moving the paragraphs works best here."} +{"_id":"q-en-http-extensions-c3b20448b625389af7f5d544252a008abf67237cb2e5e130173aca59e1065404","text":"Just a little more explanation. And .\nComment by NAME : I see that the talks about \"final status code\": and that explains why you called it \"final\" in , but I think you need to define why \"final\" is used in the \"final response control data\" when it first appear (so in ), or you can remove \"final\" in 3.5 and explain that only in 3.5.1."} +{"_id":"q-en-http-extensions-8d2ab9f689f9e35534501b9055bfddc577e9af22f9448e7cace379a670f615e1","text":"Arguably a technical change, though as it is in a picture, so I want to highlight it. This is on top of , so just look at 5468983321b104de62b1aae8df8963ad1abd5fa9 if you want to see the \"technical\" change."} +{"_id":"q-en-http-extensions-d994a19c1efecab06d302f801ca0b462a25679778a78409e26a1b524023ab00e","text":"I had to read this several times to understand what parts of the sentence were replicated into the \"or\" part of the statement. Duplicating some words makes it a tiny bit more comprehensible, I think.\nLGTM"} +{"_id":"q-en-http-extensions-0748479081e4ff31db0f7a256425d3ec34f6d4960f40a559a7726d4d98bbfa95","text":"Moving normative language from Section 6 is the most useful, but there are a bunch of other stuff I noticed as I was reading through trying to implement James' other suggestion.\nlgtm"} +{"_id":"q-en-http-extensions-3f900c5df598860ef3cf58292dfdebeeecbb3f4db3c37d691d846b221969197a","text":"... and apparently some other stuff.\nShould we recognise e.g., binary content in cookie values? If so, how?\nDo you mind expanding on what you mean?\nThis is for the retrofit spec's cookie mapping](URL). Since many cookie's payloads are base64, it seems like it'd be beneficial to figure out a way to carry them as binary. 
I've been meaning to start a discussion with the cookie authors about this section of the spec generally to a) make sure it's a good idea and b) it's correct \/ doesn't cause problems. PTAL if you have time.\nMaybe the right thing to do here is to introduce a new parameter that indicates the type of the value.\n... except parameters aren't available on Cookie; just Set-Cookie. hmm.\nPerhaps the most we can do is to say that the SF-cookie headers can have a value in any type, but they get mapped to the string value in \"normal\" headers. That way you can automatically convert SF-cookie to cookie, but not the other way around. The only alternative I see is to carry the type information in a separate header, which is not great."} +{"_id":"q-en-http-extensions-720c2723e5eab206553cb1229530197ef9cc4fd70ff3f54699d0e306dcd2e03f","text":"Cookie attribute names are case insensitive, this PR adds a note saying as much.\nThe grammar implies that set-cookie attribute names are case insensitive (because ABNF strings are), and the parsing algorithms say that they're case-insensitive, but I suspect many people may not realise this, because they're so often seen in CamelCase. Could we add a small statement to the Set-Cookie syntax section pointing this out?\nIt's also worth having a variety of styles in the examples to illustrate this."} +{"_id":"q-en-http-extensions-12bd3d929735f3d12721184e0b3479ef1f9b3ba9eb2965cff81953783e0d0237","text":"I still believe this change should end up in the revision of SF.\nThat's\nSome have pushed back on using integers for dates, see eg URL As I suggested there, one accommodation would be to define a new structured type for them. That could be done in retrofit, or separately.\nI see no mention of fractional seconds ? I think we need to ponder that, if the goal is (eventual) convergence for all timestamps in HTTP ? Considering how much effort we spend on speeding up HTTP, I find the \"human readable\" argument utterly bogus. Only a very tiny fraction of these timestamps are ever read by humans, and most are in a context where software trivially can render the number in 8601 format if so desired. In terms of efficiency, I will concede that, in a HTTP context, it is almost always possible to perform the necessary calculations and comparisons on raw ISO-8601 timestamps, without resorting to the full calendrical conversions, but once all the necessary paranoia is included, I doubt it is an optimization. My preference is seconds since epoch, (and this is largely why has three decimals in the first place), because it gives us fast processing, good compression and millisecond resolution. PS: A Twitter poll with only 40 respondents, carried out on the first monday after new-years ? Really ?!\nOne argument against any \"seconds since epoch\" representation is that (at least under the ) it is incapable of representing an instant within a positive leap second (since \"each and every day shall be accounted for by exactly 86400 seconds_\", any such time would be interpreted as being in the first second of the following UTC day).\nFor HTTP I think subsecond precision is an overkill due to RTT \/ intermediaries.\nIMHO: I like structured format. There are a lot of diagnostics and other software that would not want to do additional transcoding of header elements but just represent them verbatim as they occur. Those are helped if the verbatim representation is already easier readable. I do not think TZ belongs into a time\/date representation. Aka: all date\/times in UTC please. 
TZ can be added somehow as \"location\" information, ideally in a different header field. If it is in the same header field, it would be confusing to show the time in UTC but ALSO to indicate the TZ. Maybe this can be resolved by coming up with something that would make it as obvious as possible that the date\/time is NOT adjusted for the TZ shown. Maybe not call it \"TZ\" but \"LOC\" so that the \"the date\/time is TZ adjusted\" recognition is not triggered. And of course waste the three characters on \"UTC\" in the representation. I do not believe the processing speed argument to be valid. The level of processing speed where just a (sub)second unstructured value would help goes IMHO into the space of CoAP, not HTTP. I'd first try to make HTTP better where it does not just duplicate CoAP efforts. All that binary stuff CoAP needs to do severely makes toolchains more complex IMHO. I do like the leap-second argument. I do think msec accuracy should be an option. RTTs within lan\/metro environments can easily be in the single digit msec range and you want to be able to diagnose e.g.: REST based broker\/bus-type transaction speeds, relative ordering of request\/replies.\nSeems reasonable to me. +1 Not sure this is an \"interface\" that HTTP exposes \/ should be relied upon.\nI suspect the experience of pretty much everybody writing and deploying HTTP intermediaries and servers at scale is different -- at those rates, cycles wasted on parsing do matter, especially when architectures have many layers of intermediaries to go through.\nSee linked PR for a proposal.\nRe leapseconds: as written (\"excluding leap-seconds\") means that timestamps will be illegal, effectively mandating POSIX leap-second (non-)handling. To make it able to handle leap-seconds requires that every HTTP handling entity have an up to date file, either at system level or application level, and that the upper limit on the \"internal data model range\" becomes indeterminable, since we cannot predict how many leap seconds will happen in the future, nor what sign they may have. That is a total no-go from a systems engineering and complexity point of view, in particular as the rest of the world converges on either papering over leap-seconds or abolishing them. If the textual format cannot handle leap-seconds (without requiring a boat-load of complexity), then why take the extra expense in CPU-load and code complexity? As stated earlier, we should go for the more efficient and less error-prone solution: POSIX time_t (with optional milliseconds) as a . (If we want to designate the timestamps with a sentinel, 'NAME is a particularly bad choice, since the HPACK Huffman code assigns it 13 bits.) PS: In the context of four-digit years: Century scale predictions, based on orbital considerations, expect the current leap-second regime to break down in approx 2000 years, because we will need leap seconds more often than every month.\nDo any current HTTP implementations take leap seconds into account? Regarding HPACK - it's one character. Let's not over-optimise here.\nI don't know, but leap seconds are explicitly supported by : And the difference between 23:59:60 and following-day 00:00:00 would certainly be relevant for e.g. 
Last-Modified.\nI have spent a lot of time over the years searching, but I have yet to see a 23:59:60 timestamp in the wild...\nNow there's a picture in my mind of PHK walking the wilds, searching for an elusive date stamp.\nI don't know what counts as \"in the wild\", but there is certainly documentation of and .\nDiscussion at IETF114 wasn't conclusive, but we now seem to be focusing on these options: Do nothing. Dates in SF are Integers (unless folks fall back to Strings), and recipients need to know that they're dates and how to handle them. Create a Date SF type along the lines of the PR. The textual representation is human-friendly, and it's identified as a date in a machine-readable way without special knowledge. Create a Date SF type but use an Integer; e.g., . The textual representation is not human-friendly, but it is identified as a date in a machine-readable way without special knowledge. I'll observe that if we believe that we'll eventually have a binary representation of structured fields widely in use, (2) is preferable in that it has the best properties -- it's both efficient and human-friendly. Of course, that's not at all certain. I'll also observe that in the discussions I've had, the people who want human-friendly representation are generally those who are working with tools and developers, \"closer\" to the application. The infrastructure folks and \"lower layer\" folks tend to minimise this aspect.\nPR updated to reflect (3) above.\nI personally support (2) or because it covers similar use cases than HTTP-date. In case of (3) I think we can just use an Integer SF, like we do Signatures and other specs. Side note: we sometimes hear that RFC3339 is ambiguous. I think it is, and after ~20 years it probably requires a RFC3339bis that clearly specifies a basic profile ( T, Z uppercase; Always ). I'd be happy to start this work.\nWhy do the number of decimals need to be specified? Surely we could just say the format is nnnnn[.ddddddd] or similar where an integer-only parser stops at whitespace or a '.' while those who care about sub-second parsing continue on until they are satisfied with the resolution or they run out of digits. The senders decides how many decimals to provide, perhaps with a recommendation of three or upto their clock resolution? So \"123456789\" and \"123456789.1\" and \"123456789.123456789\" are all equally valid.\nI like this direction; thanks."} +{"_id":"q-en-http-extensions-01751cded3135eb07f54d777838256d8961117f580361d5087cb1dfe0ec446b8","text":"There are a few cases where the word is used. From the context, it appears that the intended word is .\nThanks! I have no idea why my spellcheck didn't catch those, and I appreciate it."} +{"_id":"q-en-http-extensions-cb05217c5fe1c79a705bb75f6a503e3365d9c018e8a03ff549f0f1bce87398e9","text":"-\nLGTM modulo nit\nNAME 8972 Simple Two-Way Active Measurement Protocol Optional Extensions or URL line wrapping? Since 8792 is used in examples, dunno if that's normative but it's ok.\n:-) Typo fixed. In any case, we need to reference it (IMHO). I would also suggest to avoid the escaping when not needed."} +{"_id":"q-en-http-extensions-2f50ad0f20a66a60a04fb7ad82351c2b235acaa4c0a71a3910c25169cfc9ab55","text":"This adds a binary wrapper for fields that don't follow the List or Dictionary format for multiple values, such as Set-Cookie. 
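Returning to the date representation options above, here is a hedged sketch of option (3), an Integer-based Date type marked with a sentinel character. The "@" sentinel and the Python helper names are assumptions for illustration only, not the draft's normative syntax.

    from datetime import datetime, timezone

    def to_sf_date(dt: datetime) -> str:
        # option (3): seconds since the epoch, tagged as a Date by a sentinel
        return "@" + str(int(dt.timestamp()))

    def from_sf_date(value: str) -> datetime:
        return datetime.fromtimestamp(int(value.lstrip("@")), tz=timezone.utc)

    to_sf_date(datetime(2022, 8, 4, 1, 57, 13, tzinfo=timezone.utc))  # '@1659578233'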
The wrapper is selected using the parameter, and the component value is made by wrapping each individual field value in a Byte Sequence and putting them into a List, then doing a strict serialization on those values.\nIn general, is this even needed or are we over-engineering here? If the main use case is , do we think it is likely to be signed?\nSome fields, like , have an internal syntax that allows unquoted commas and messes things up when multiple lines are combined for the signature base, leading to the potential case where two semantically different inputs have the same signature base. This means that: And: Both produce the same signature input line: Even though the two-line and single-line versions are processed differently by the application. To counter this, we could have a distinct encoding flag that wraps the field values, similar to . This can be used to protect problematic fields like Set-Cookie so that we get something like this for the multiple line version: But you'd get this for the single-line version: The background of this has been discussed in and"} +{"_id":"q-en-http-extensions-289a7870e76303b5755279dab85e4bf99a0a4e2e18be102f3d95cc036bba9e5b","text":"Section 1.1 is currently a bit confusing, because it leaves out sections, and does not present them in order (likely the cause of a recent re-org).\nNAME do we need it?\nit should either make sense, or should be removed.\nNAME if it's not required, I think that the document is fine without.\nmnot asked us to add one a while back, which I think helped at that point. Editorial changes in the meantime have had an effect though, so I tried to add some colour to the structure in URL Preview at URL The intention is that people can launch of from here to focus on bits they might need, like the field definitions or example. Since security and IANA sections are standard, mentioning them seems superfluous. If the opinion is that the proposal doesn't help, let's just remove the section."} +{"_id":"q-en-http-extensions-28d5c9b98bb9b76493870c0a78c201acb0aa8c031d3a92a8579f5ca8b48783c7","text":"Contains the following changes: Skip parsing the attribute on empty value. Skip parsing the attribute when given Max-Age=- Add mention of \"base 10\" as guidance for implementers parsing integers from Max-Age. Regarding browser interop: Skip parsing aligns on Safari behavior and against Chrome and Firefox, which will default to session lifetime when any invalid Max-Age attribute is found (based on manual testing, it's difficult to test this with WPT at the moment, see )."} +{"_id":"q-en-http-extensions-a1f2c231cfdbf05d1a3253e3a6855d9f50b7b015abd937b7e183b9d5476659d4","text":"While already discouraged by the grammar, this warning further discourages servers from sending nameless cookies which have a history of causing compatibility problems. It also reorders the notes in the syntax section to group similar notes together.\nIdeally we'd define server-side parsing of cookie-string (or at least recommend behaviour here for things that don't match cookie-string!), given this does cause problems for some servers (e.g. URL). Until then we should at least make it clear that the \"the algorithm to compute the cookie-string from a cookie store and a request-uri\" can create things that don't match cookie-string (e.g., nameless cookies after ). There's a number of WPT expected results that don't match cookie-string.\nCan you clarify what a fix would look like? Bonus points for a PR. 
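A small sketch of why the byte-sequence wrapping helps, under the assumptions above (the flag's exact name and the helper below are illustrative): each individual field value is serialized as a Structured Fields Byte Sequence and the results are joined as a List, so the two-line and single-line Set-Cookie cases no longer collapse to the same signature base line.

    import base64

    def wrap_field_values(values):
        # wrap each individual field value in a Byte Sequence (":<base64>:")
        # and serialize the result as a List
        return ", ".join(
            ":" + base64.b64encode(v.encode()).decode() + ":" for v in values
        )

    wrap_field_values(["a=b", "c=d"])    # ':YT1i:, :Yz1k:'
    wrap_field_values(["a=b, c=d"])      # ':YT1iLCBjPWQ=:'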
(I mean, I can also attempt a PR, just not clear on what the recommended fix would be)\nYou attempting would probably be as good as me attempting!\nI'm skeptical, however willing to try! And now I think I actually understand the bug report (and didn't when I commented). NAME is saying that the ABNF in URL has a bug, since is defined as: But nameless and valueless cookies exist. cf. URL URL\nI think maybe this can be solved by making the following changes: This is pretty simple to grok: This is supposed to convey \"a nameless cookie, OR a valueless cookie (but not both)\" That should work for the following valid cookie-pairs: (nameless) (nameless2) (named cookie with empty , i.e., valueless) (name + value) If that's the route we go, then in URL cookie-string = cookie-pair *( \";\" SP cookie-pair ) That should work. However... unless we tweak , I think you can end up with the following invalid cookie: . Will need to formalize \"if no name, there must be a value\". Maybe we can just add an and then do something like:\nActually that won't fix the bug I described. Maybe this super elegant solution (but the above ABNF would have to be tweaked):\nFWIW nameless and valueless cookies are no longer RFC compliant after URL Up until then only Set-Cookie strings with empty names and empty values weren't accepted, but there were ways around this to get empty-name-empty-value cookies into user agents' cookie jar using the CookieStore API.\nThere are nameless cookies and there are valueless cookies (and, as you said, there are no nameless-and-valueless cookies). FWIW, the syntax in section 4.1.1 where is defined is a SHOULD, not a MUST, on the server's part. I think one issue is that reusing it to define makes that also implicitly a SHOULD, if I'm reading this correctly: Edit: To clarify, I'm thinking there's no accuracy issue with the spec as written, since if the server does things as recommended (and sends a cookie with name=value), then the user agent will respond as given by . But that is still totally confusing, at least to me, so I think we should still define this better.\n(love the ambiguities of english -- thanks for the explicit callout and pointer to NAME\nIn the nameless cookie case, it seems that an unhandled edge case arises because of the following, from the section: To illustrate the problem with some examples: if is empty and is , the algorithm above works - the resulting cookie line would be if is empty and is , though, the resulting cookie line would be , which would be interpreted as a cookie with name and value . One solution for this, which is currently specified by the , is to forbid cookies with an empty name and a value containing '=' from being set. Limiting from containing a '=' is not mentioned in the section or the section of RFC6265bis, though. It seems like using the following in the section would be a better approach: This would fix the issue in the example above (the resulting cookie lines would be and respectively, both of which would be interpreted correctly). If we want to allow to be produced in the first case, we could replace step 2 above with something like: I'll submit a PR for this change.\nThe empty-name-but-nonempty-value case is serialized as and not for historical\/compat reasons. There's some earlier discussion in . The case where the value contains an '=' is interesting though, and not tested by WPTs I believe. The Set-Cookie header can't produce a nameless cookie with value so this is only a concern for non-HTTP APIs that don't parse a set-cookie-string. 
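A minimal sketch of the round-trip problem described above (the serializer and parser here are deliberately naive stand-ins for the algorithms in the draft, not a faithful implementation):

    def serialize_cookie_pair(name: str, value: str) -> str:
        # the serialization step discussed above: drop "name=" when the name is empty
        return value if name == "" else f"{name}={value}"

    def parse_cookie_pair(pair: str):
        # a naive receiver: everything before the first "=" is taken as the name
        name, sep, value = pair.partition("=")
        return (name, value) if sep else ("", name)

    parse_cookie_pair(serialize_cookie_pair("", "foo"))      # ('', 'foo')      round-trips
    parse_cookie_pair(serialize_cookie_pair("", "foo=bar"))  # ('foo', 'bar')   ambiguous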
Could we instead just add to the Storage Model section the restriction that the cookie-name and cookie-value cannot contain '=' or ';' characters? (Side note, more generally, this is getting out of hand and we should try to factor out common logic in the Storage Model and set-cookie-string parsing sections.)\nhmm, looks like there is a WPT for this case, but the expected output for it would be interpreted incorrectly :( URL;l=22-26 Wouldn't be allowed in a Set-Cookie header? Also, '=' is a valid base64 character, so not allowing it in might cause a lot of headaches\nOh, my mistake, you're right that creates a nameless cookie with value . Chrome\/Edge\/Firefox agree on how to serialize these, but Safari ignores nameless cookies altogether. I would be a bit concerned about breaking stuff if this serialization changed. :-\/ Theoretically there's a use case for this behavior... since and are the same cookie (the nameless cookie) but with different values, and they overwrite each other, a site could be relying on getting the output or (but never both) as a way of (sort of) setting two mutually exclusive cookies named and .\nMaybe we just fix this with a note cautioning against using nameless cookies, and point to this issue as a warning."} +{"_id":"q-en-http-extensions-3da72cb05c811d03172d602f6140617028abc68f57db1a8fc05888d682066020","text":"Adds an optional parameter to signatures, to allow applications to signal specific usage between the signer and verifier. This PR does not add this to any examples yet, but we probably should if accepted.\nNAME NAME I've walked the language in the definition back a bit and added it to an example. I think this is sufficient to include as a new feature.\n[This came up at the IETF-113 session, credit to Jonathan Hoyland] A signature context mitigates oracle attacks, where a sender can be made to sign data that’s partly controlled by the attacker, and the signed string is then used for unexpected purposes, either within the same protocol or outside it. The context includes the higher level protocol that utilizes the signature mechanism, as well as other information such as sender and receiver identities. Signature context is used in TLS 1.3 (see RFC 8446, Sec. 4.4.3, “context string”) as well as the similar parameter in . The simplest way to add context to Message Signatures is as an additional parameter. We suggest that the context value should NOT be defined by this draft, but it should be mandatory to send, and mandatory to validate upon receipt. This prevents cross-protocol attacks but assumes that the higher level protocol is able to protect itself e.g. by ensuring that the receiver’s identity is included in the signed components. The context is fully defined by the higher level protocol, and no IANA registry is established for it. cc: NAME\nThanks for writing this up. I want to make sure I understand the request for addition: Define a signature parameter \"context\" as a string, whose content is outside the scope of this spec Define the use for this field aligned with the above Add a security considerations on the use of this in outside protocols I think this is a relatively light weight add, and I am ok with those pieces. Where I don't think I agree is the \"mandatory to send and interpret\" portion, if we're not going to define anything to put into the field. In this case is becomes just another value, right? I can see a lot of confusion in people trying to apply this without clear guidance. 
I would be fine with this field being recommended to send by the signer and mandatory to interpret by the verifier if it's present, and applications can make the field required and even have required values, like the example above.\nYes, you have it right. This is analogous to the JWT claim whose goal is to prevent cross-protocol (or really, cross-usage) attacks. So making this mandatory is important. I think mandating verification is nice in theory but I'm not sure it would be effective in practice. Instead, we could give guidance that makes it trivial to use, like in my example: include the protocol's name and possibly version. Just doing that would narrow the \"blast radius\" of cross-protocol attacks significantly. A \"nonce\" () is a well defined term. This is not a nonce. The on is weaker than I would like, but it's still a SHOULD for new types of JWTs. Here we have an opportunity to get it right from day one.\nI appreciate the position, but what if the application is literally just \"I am signing this message\"? What would we put in that mandatory field that isn't either a random value or just \"http\"?\nThen put the application\/service name there, instead of a protocol name.\nWhat's the application or service name, then? I don't think it's always as well defined as you're assuming it to be in practice.\nIt's guidance, not interoperable normative text. Whether people use \"myapp\" or \"myapp\/1\" or even \"myfile.c\" doesn't matter all that much. Basically anything is better than a missing context.\nIf it's not interoperable normative text, then it has no business being a MUST here.\nExistence and verification of the parameter are both normative. Its content is not.\nNAME Could you provide an example attack scenario that cannot be mitigated by signing additional semantically relevant message components, and is mitigated by the addition of a parameter? The JWT claim is necessary because often there is nothing in a JWT that ties it to a particular message. This is a feature of JWTs, as it allows things like reusing JWT-formatted access tokens across multiple requests. I'm not familiar enough with TLS internals to say whether the same concept applies to the TLS \"context string\" parameter. In contrast, reuse across messages is not a use case for HTTP Message Signatures. If a signature is reusable across two semantically different messages, that would suggest to me that the signature is not covering the right message components. So cross-protocol attacks where the signature is presented as a message signature for a different message are mitigated by signing the semantically relevant components. (e.g., , or parts thereof, etc.) Attacks where the signature is presented as some other type of signature (i.e., not an HTTP message signature) are partially mitigated by the fact that the signature base includes the signature parameters, which is likely to distinguish it from content that may be signed in other contexts. For example, the format of the signature base means that an HTTP message signature cannot be passed off as a signature for a JWS. If the overall format and the footer is not already enough for this, I don't see how adding an additional parameter would help. If an attacker can mold the signature input in another context to match the HTTP message signature format as it is, it seems unlikely to me that adding to the end would help.\nHi NAME I'll start with your last paragraph. I agree what we have now is only partial mitigation. 
This kind of attacks is never fully mitigated, because you need to make assumptions on what implementations of a different, unknown protocol would do. But in general IMHO a unique prefix is a much better way to distinguish between signatures, because something further down in the signature base may be ignored by verifiers. Concretely, having a mandatory as the first line of the signature base would be a much better mitigation. Now back to the : again, we are talking about two different protocols, each of which may have both spec and implementation errors. Of course if both protocols are defined and implemented perfectly, the right fields would be covered and validated, and would also define unique semantics. Life is not perfect - people are lazy and will sign too few fields - and a context protects some protocols\/implementations from others.\nI'm not sure I buy this. If a non-HTTPSig consumer of signed content can be tricked into accepting as a valid message then I don't see why we should assume it would be any harder to get it to accept Okay, can we walk through the failure modes? An API that exposes multiple operations on different paths could fail to require callers to sign the target URI's path. In order for to help, it would have to be set to a unique value per operation, e.g., \"ExampleApp.doExampleThing\". That seems like way more work than just signing the right things. An app could fail to require all of an operation's inputs to be signed. In this case, the only way helps is if it is unique per request, e.g., a hash of all the operation inputs. This is definitely way more work than just signing the right things. Two apps could fail to require the request audience (e.g., the target URI authority) to be signed, allowing signatures to be replayed across them. Here only helps if it is set to something that is app specific. That rules out the use of standardized values based on protocol (e.g., ) or intent (e.g., ). And if is being set to something app-specific, why not just sign the app-specific thing that is already in the request? Are there other failure modes that I'm not considering, where would prove more useful? So far this leaves me feeling that is more likely to encourage bad habits and give implementers a false sense of security than it is to address any actual security risks.\nI do not see how having a fixed signature prefix will solve any security issues with signature base generation. All the same problems with bad implementations occur internally in the signature base, the only difference is that you've got MAYBE a good first line -- and as NAME mentioned this is not stopping people from doing different bad things like having newlines in the content identifier. We can add a security or implementation considerations section on newlines in the signature base. Even then, I am failing to see how this prefix idea is related to the signature context concept. And as NAME said, apps failing to sign enough is an issue for the app. Having a signature context isn't going to magically get that necessary content signed, nor will it make a badly written verifier reject unsigned required content. Again, I'm fine with adding context as an optional parameter for applications that want to make use of it for signaling, but I don't think it solves things in the way that is being presented here.\nWorks for me. 
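For concreteness, a hedged sketch of how an optional application label could surface in the signature parameters line so that it is covered by the signature. The parameter name "tag" and the serialization layout are assumptions for illustration; only the general shape follows the discussion above.

    def signature_params_line(covered, created, keyid, tag=None):
        items = " ".join(f'"{c}"' for c in covered)
        params = f';created={created};keyid="{keyid}"'
        if tag is not None:
            # the optional application-supplied label discussed above
            params += f';tag="{tag}"'
        return f"({items}){params}"

    signature_params_line(["@method", "@authority"], 1618884473, "test-key",
                          tag="example-app")
    # '("@method" "@authority");created=1618884473;keyid="test-key";tag="example-app"'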
Thanks!"} +{"_id":"q-en-http-extensions-d562538330a2f9aeb7d8c3da647a9970dca1d7ecf20d57372c69f9cb1347df7e","text":"Editorial, since this is merely restating locally what was previously incorporated by reference.\nIn draft 00 it says: >The layout and semantics of the frame payload are identical to those of the HTTP\/2 frame defined in ]. The ORIGIN frame type is 0xc (decimal 12), as in HTTP\/2. I think this is the text as it was, before we ran HTTP\/3 and HTTP\/2 through the process. I think there's some room for editorial improvements to avoid the guesswork. Not least because HTTP\/2 Origin has an errata on the payload definition (URL). My suggestion is to just admit defeat and define the whole frame like we do in HTTP\/3 or ext priorities, something like Then we can say something like the semantics of the HTTP\/3 ORIGIN frame payload are the same as those of the HTTP\/2 ORIGIN frame, modulo the differences already described (control streams and whatnot)."} +{"_id":"q-en-http-extensions-c471b36a673cbddb58c96c22ec41f0954189757ebbe637844178e7baea4122e7","text":"Define a signature algorithm named that indicates the use of ECDSA using curve P-384 DSS and SHA-384. The algorithm description is probably just 3.3.4 with \"256\" replaced with \"384\", but should validate that assumption with actual crypto folks.\nWhat's the push for this specific curve to be included here? The reason I ask is that we shouldn't necessarily try to define ALL known algorithms in here, when it's extensible.\nI agree with not defining every possible algorithm; however we should include those with known use cases, and AWS intends to use -384. Since the spec isn't finalized and the addition is trivial, it seems reasonable to add it here rather than punting it to an extension spec."} +{"_id":"q-en-http-extensions-e49c729f475a9ba403e8d30d5f7f00ebca26a165e4b520d911b08ab7266747d1","text":"Hello, URL introduced parameter, \"to allow applications to signal specific usage between the signer and verifier\". However, word context is already used more than 40 times, mostly meaning the context of HTTP message. In my opinion, it adds ambiguity to the used vocabulary. Please consider renaming the parameter, e.g. to application, app, usage, use, purpose. Best regards, Robert\nThanks, I noticed a similar term conflict when we added the functionality -- was the proposed name from the folks that suggested the function so that's what we put in. I'm happy to have something else in there: I think is a good suggestion."} +{"_id":"q-en-http-extensions-aaf3072c275d77ae392ca86f2bab33551aa9ce52fbfdb6332f2ad590a24027f7","text":"update refs for HTTP Semantics, HTTP\/3, and QPACK to point to the now RFCs 9110\/9114\/9204 for\nThanks Mike. That was much easier than using the \"suggestions\" option. I merged and then fixed(!?) the YAML header with . I think this one is good to go now?\nYes, good to go!\nI-D.ietf-httpbis-semantics -> RFC 9110\nAlso HTTP\/3 ietf-quic-http -> RFC9114 QPACK ietf-quic-qpack -> RFC9204"} +{"_id":"q-en-http-extensions-7515efb6a0e12bd226d424d6be408f1554b4b26e5ab00aa688e03cbb616e53cc","text":"client-cert: Mention that origin server access control decisions can be conveyed with a 403 or by selecting response content\nDo you envision that clients would drop their client cert cache in response to this 403? (They currently do not.) If so, how would that interact with subresources on the page?\nI honestly don't envision any change in client behavior. I realize that doesn't address your concern. 
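Returning to the P-384 proposal above, a hedged sketch of signing with curve P-384 and SHA-384 using the Python cryptography package. Note that this API returns an ASN.1/DER-encoded ECDSA signature, so a spec that requires a fixed-length concatenation of r and s would need an extra conversion step; the function name is illustrative only.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def sign_ecdsa_p384_sha384(private_key, signature_base: bytes) -> bytes:
        # "3.3.4 with 256 replaced by 384": curve P-384, digest SHA-384
        return private_key.sign(signature_base, ec.ECDSA(hashes.SHA384()))

    key = ec.generate_private_key(ec.SECP384R1())
    sig = sign_ecdsa_p384_sha384(key, b'"@method": GET\n...')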
But I believe doing anything more is firmly beyond the scope of this draft. The sentence added by this PR is only aiming to fulfill Tommy's suggestion at the last virtual interim and in the issue URL\nYou might want to explain that David's concern is a genuine problem that this doesn't solve, something like:\nThanks Martin. I'll add that note.\nFrom URL, it sounds like the intent with the client certs draft is that the proxy does little to no certificate validation and punts all of it to the origin. In that case, we need to sort out how errors work... Client cert UX is bad in part due to an impedance mismatch. In a user-facing client (like a browser), presenting a client certificate with a user-facing identity, we typically need to prompt the user to select an identity, reprompt on bad selection, etc. However, a single user-facing \"session\" covers multiple HTTP requests, which may then create or reuse connections depending on many factors. That means we cannot surface auth to the user at the connection- or request-. TLS session resumption and further complicate this. The consequence is clients cache client certificate decisions. (Non user-facing clients tend to have credentials preconfigured, which effectively also remembers the decision.) With a cache, clients need to know when to invalidate it. Otherwise the user can never recover from selecting the wrong identity. As this is TLS-level auth, today this is done by reacting to TLS-level alerts. draft-ietf-httpbis-client-cert-field breaks this correspondence by introducing a way to move half of TLS-level auth to HTTP. That is incompatible with today's client cert deployment assumptions, so we need to define something here. This is further complicated because the draft is shifting connection-level auth to a per-request scheme. I can see two options, but neither is very satisfying. Proxy translates HTTP error to TLS alert The most compatible option would be to define an HTTP error that the proxy must translate to a TLS alert on the client. The problem is TLS alerts tear down the connection, so this would be very disruptive in HTTP\/2, where we may have other requests multiplexed. (The TLS-level mechanism does not have this problem because it happens in connection setup.) Modify clients to interpret HTTP error as TLS error The other option is to define an HTTP error and pass it through to the client. However, that means this draft is not only a proxy server protocol, but also changes HTTP clients. We would need to discuss the compatibility implications. Additionally, by introducing a brand new way to do client cert errors, we introduce some awkward quirks. Suppose the server sends a resource with subresources on the same origin. Unlike HTTP auth, client cert auth applies to the whole connection. There would be no way to fetch those subresources without re-triggering the auth prompt. Perhaps we include lots of warnings that the server need to keep all its error page resources inline or on a different origin? Perhaps we say the client should ignore the response and map the HTTP error to some network-level error page?\nThe use of post-handshake authentication is similarly fraught. I assume that you are looking to have a connection torn down if any certificate is not fit for purpose. Isn't that a bit of a blunt instrument? It makes the use of multiple certificates an interesting game: get one wrong and you lose. More so because the TLS-terminating proxy is the one that plays the game with the client the one that pays the price. 
My assumption was a far more narrowly scoped scenario: The proxy asks for a certificate based on some condition it arranges with the origin server (this is proprietary and not our problem). The client provides a certificate with a signature. The proxy validates the signature and passes certificate information to the origin server with any requests (again, deciding which requests to annotate thus is proprietary). The origin server takes this information and authorizes requests, or not. That's it. The scope of the draft starts and ends at step 3 here. The holistic problem is that clients tend to stick to their choice of client certificate in the face of these HTTP errors, only changing their mind if TLS fails (with a specific subset of possible alerts). That seems like a poor design choice; a legacy of how HTTP\/1.1 works and not suited to modern uses. For starters, the provision of the certificate was successful, it's just that it was not the one that the server wanted. Tearing the connection down is too strong a response when you consider that HTTP\/2 and HTTP\/3 have multiple concurrent requests, some of which might be affected by a TLS alert that brings the whole session down. I really do think that client changes are the right choice here. Clients can treat 401 (Unauthorized) as a signal that the credentials offered (zero or more certificates) are not adequate. At that point, it's probably safe to treat the cached choice of certificate as not useful for the next certificate request that comes along. Post-handshake authentication allows them to add to the pile of certificates, not jeopardizing credentials that are working. For the proxy, if it decides that it needs to ask for more credentials from a client that doesn't do post-handshake auth'n, then a TLS alert is an option. It's probably the only realistic option as things stand today. I'd prefer GOAWAY with a code that indicates that a new connection with different credentials is appropriate. For NAME benefit, I don't see any of this as affecting the client-cert-field work as long as it makes its scope clear. That work can probably just note that a proxy probably has to tear down the connection to the client after a 401 response so that the client won't fixate on the wrong certificate. Pick an appropriate alert. I don't think we need to go into defining new signaling arrangements. I don't think that it is worth the effort of doing more to improve client certificate-based authentication, unless there is some general trend toward enabling post-handshake auth, at which point we can talk about choosing a better solution.\nWhile acknowledging that and associated issues, the scope of the draft is only about conveying the client cert info and not any new error signaling or attempts at addressing the larger problems.\nIt might be nice to still have some text to mention that servers can send back 403 status codes, etc, if they don't like the client certs. Don't add any new mechanisms, but explain that errors can happen, and there are ways to express them today.\nIf the server sends back a 403 mechanism, would you expect clients to act on this? I'm not aware of any client today that uses HTTP error codes to inform client certificate selection.\nIf a client without a client cert would see a 403, then I think it makes sense to mention that sending a 403 is what would reasonably happen here.
I don't know if clients would do something automatically, but presenting the 403 error at least indicates that authentication was the problem.\nPR basically adds this sentence, \"Access control decisions based on the client certificate (or lack thereof) can be conveyed by selecting response content as appropriate or with an HTTP 403 response, if the certificate is deemed unacceptable for the given context.\"\nThis is sufficient. A similar experience exists today when a client presents a certificate which is valid, but which does not grant access to the requested resource. While it's not ideal in certain ways -- the client might possess other certificates that would grant access if given an appropriate selector -- the server might also prefer not to share that list. This draft isn't attempting to create new solutions from scratch -- it's attempting to outline what is currently done in assorted custom ways to help coalesce these into a single format. Those custom mechanisms don't provide TLS alert translation either, so far as I'm aware."} +{"_id":"q-en-http-extensions-a7827ffdc16d3a24ae001810836895da1af48fde1329e1677a40e0618f33c8ce","text":"attempt NAME \"simple alternative\" in to say the chain is in the same order as it appears in TLS rather than copying the language from TLS\nThere's a bunch of text in the draft about putting the certificate chain in the right order. This tries to crib from TLS and ends up being somewhat tricky to get right as TLS has a bunch of carefully worded and - frankly - ambiguous rules. As a simple alternative, how about we just state that the certificates need to appear in the same order as the client sent them in TLS.\nMakes sense. Will look at adjusting the text accordingly.\nI think this is a bit more fundamental than just the order. It's really a question of whether this is a certificate path, or a pile of input certificates to run verification over. That is, either: The origin runs path building, taking the unordered\/unverified pile of input certificates from the header, in TLS order. The proxy runs path building and passes the result in the header. The origin assumes a path has been built and is merely applying ACLs based on some aspect of the resulting path. A mismatch here could result in security issues. For instance, suppose the origin wants to only allow chains issued by intermediate certificate I1 to access . If the header means (1) but the origin expects (2), this ACL is trivially bypassable: Client has a certificate chain EE -> I2 -> CA. Client connects to proxy and sends {EE, I2, CA, I1}. (Or perhaps {EE, I2, I1, CA}. Or whatever else. Different verifiers are differently permissive about path-building. Perhaps it omits I2 altogether and the path-builder picks it up some other way.) Even if the proxy runs certificate verification, a path-building verifier will build the EE -> I2 -> CA path and just consider I1 as a random unused certificate. , per the TLS order rule, contains {EE, I2, CA, I1} Origin server checks this and concludes \"aha, this was issued by I1 and can access the resource\". Conversely, the semantics are (2), the proxy needs to have enough information to validate the certificate, up to a valid trust anchor for the application. If you all intended to punt that to the origin, that's a different protocol. If the spec says TLS order, it's implicitly saying the interpretation is (1). But as that leaves a security-critical check to the origin, it should be explicit about this. 
Also, as it leaves all certificate validity to the origin, it'd need to further define an error-handling story to properly tell the client the certificate was invalid. But I know of no client that uses HTTP-level errors to clear client certificate decisions, only TLS alerts. I think (2) may be a more robust\/compatible.\nIt is definitely 1. You do 2 and you don't need this client-cert-chain field at all. That doesn't mean that a proxy can't be configured to do the work and pass the info along. But one of the two needs to be in the hook for enforcement. A consequence of 1 is that alerts at the TLS layer won't be used to signal errors, aside from those that the proxy detects (or has to, like if the signature is invalid). That is ok, mostly. The main protocol issue here that might be a problem is validation of signatureschemecert. I can't see how failing to validate that would be a significant problem though.\nIn on the PR that introduced the chain I was aiming for 2. But eventually relented to MT. Some additional considerations about the security implications are likely warranted though.\nI see. If it's (1), that should probably be clearer in the draft. That also means the draft needs to define an error-signaling scheme. Moreover, if the proxy doesn't then translate that error into TLS alerts, we'd need changes in clients for servers relying on this to work right. I'll file a bug.\nI ran into real reservations as I looked at making this change. I still don't like pushing the trust chain validation to the origin. For many reasons. The proxy also has no means of signaling to the client which trust anchors the origin is okay with. The current text is somewhat ambiguous about what goes in Client-Cert-Chain but I think that's actually better than saying it has to be same as in the TLS handshake. I dunno, I think I might raise the question of WTF actually goes in Client-Cert-Chain at the next interim or real meeting.\nAt the last virtual interim (in May\/June yikes!) Martin kindly suggested that I was making this too complicated and to just make the change (paraphrasing). I've tried to do that with changes in PR .\nThis works well enough on the critical points. Some points about readability though."} +{"_id":"q-en-http-extensions-ee20379e93123d1e57abb0bed404974538c1d11340a728f27714df86e12d9bdd","text":"briefly mentions security advantages See\nMoved little changes into sec-: sensitive information can be in both and ; mention intermediaries."} +{"_id":"q-en-http-extensions-4ea8453c110ee2001f97dbc1df81f5023e05a464603191c33674434162cc2172","text":"Adds a citation in 1.4 for DIGEST and forward links to the security considerations sections for covered component selection and message body coverage.\nMinor issue noted while reviewing. -13 does give a good overview of message content considerations, and mentions Content-Digest along with a citation to the digest headers draft. However, the earliest mention of Content-Digest is in section 1.4, and there are several examples through the text that use it. Simple fixup is to cite digest in section 1.4 too, and forward ref to 7.2.8 so anyone interested in this knows where to jump to."} +{"_id":"q-en-http-extensions-61aa6494db2104b23efb03c923cd3f94ab28419933cefcc647e450fa27fb6b72","text":"to\nCurrently, the references to HTTP semantics and to all HTTP versions are informative. While there are only two references to RFC9110, I find myself skeptical that someone would be able to implement it without knowing HTTP. 
We might consider making 9110 an normative reference, even if we think the individual pointers into it don't rise to that level.\nYou could normatively depend on the intermediary definitions in URL\nAgreed about making HTTP semantics a normative reference.\nUsing and alining with 9110's intermediary definitions is likely worthwhile too. However, that's a larger effort that should maybe be considered separately?\nI'm not too bothered about just changing labels between informative or normative. Thats easy to do and can be changed easily enough at later date. I think this draft actually does a good job of talking about the specialist case of a TTRP and we should keep that without too much meddling. But if the prose would benefit from shaking out a stronger sense of if implementations have normative requirements (directly or via references), I think this is a good ticket to track that against.\nThanks for the thoughts, Lucas. I'm inclined to not do much meddling."} +{"_id":"q-en-http-extensions-28d3150d37f91f7defdeec8ab584a1f7657ec76a6fab0a4d871853258afb59e0","text":"and use \"describes\" rather than \"standardize\" to better align w\/ the Informational intended status\nAs Yoda might say, do or do not. We can lose the aspiration here and be assertive about the document consensus progress.\nI have a tendency towards that kind of demure writing. But you and Yoda are right. Will change.\nLGTM, too.LGTM"} +{"_id":"q-en-http-extensions-2644b4898a1310d89c875171524becf926c61fe296ca6342721cb48901db350e","text":"It might help to refer to the example in section 2.2 and\/or 2.3.\nMakes sense, can add a couple forward refs."} +{"_id":"q-en-http-extensions-7c1549092d94fb5e2c9ba1c978170672784b0cc9ff3f98cca47512560e9fcba6","text":"Intro para 3 has a chunk of text about fields not defined in this doc This could be condensed to say that the header fields in this document provide a standardized replacement for custom or proprietary fields that were known to be used prior to this document.\nIt is perhaps a bit long winded and admittedly not award winning literature. But I think it's useful context for the reader and prefer to keep it.\nRight but in 20 years if the RFC says \"a common practice today is using non-standard fields\" it could read odd. Its clearer if this spec focuses on what it provides not what people came up with as custom solutions beforehand.\nThanks, looks good!"} +{"_id":"q-en-http-extensions-9a6906af4b775c087b686ad1d259992919f9675ea3b49384470f4e046e89cdd2","text":"reword the header compression section with more consistent terms for the roles (to\nLGTM modulo nit\nThis section talks about header compression and says Elsewhere in the document, you've used TTRP and Origin to identify the roles in play. I think it would be clearer to use them here. For instance, I might rewrite this as something like"} +{"_id":"q-en-http-extensions-5f0a00b83ffcb7337e68e513a9748c087331c6721d0c53daf47622d55aee0e0f","text":"Attempt to reword the 'Header Block Size' section to use more HTTP version neutral terms (for )\nIs this a hangover from terminology changes elsewhere? URL says\nVery possible that it's a hangover from terminology changes elsewhere. Very possible. But I'm not quite sure how to update it. HTTP\/3 talks about the \"size of the message header\" (from your reference) while HTTP\/2 talks about block size but now uses field block size URL rather than header block size URL How about the following though? Assuming I'm even adjusting the terminology you're suggesting needs adjusting. 
Message header seems more neutral and appropriate.\nThanks for the suggestion and PR. I think we're best off holding for NAME review since he had to do a similar renaming exercise for HTTP\/3 and is an expert now\nMakes sense. I, for one, would welcome such expertise.\nURL Mike okayed it\nYes, this looks reasonable."} +{"_id":"q-en-http-extensions-4dd2c52c1e08ab8cfed1ea5f31e1721dcc9f2e8bfcb0db1daa4c4c2092817592","text":"h\/t NAME\nThe security considerations pay some good attention to the concerns of the actors that might deal with the headers defined in client-certs. It crossed my mind that you could mention header protection mechanisms, like Signatures, as one way to harden things.\nYeah, Signatures could be mentioned in there as a possible additional measure."} +{"_id":"q-en-http-extensions-360cf986a79cbe020bed43ea9916d27fac187ce9d906d851bc5977770d2c32bd","text":"try to align better with the style guide on structured fields usage for\nSee URL, which would affect sections 2.2 and 2.3\nWill update the structured fields parts to align with the style guide.\nLGTM"} +{"_id":"q-en-http-extensions-04f865450747ebb0b4f6f3ba644a4ecfec2d5ff04f2dd680604b63a17a5091d1","text":"The HTTP message in Sec. 2.1.4 of draft -14 is not formatted correctly (both HTML and plain text).\nNAME"} +{"_id":"q-en-http-extensions-60b41271c62052750fa26895209d4313f5435db531dfa2ad2852175d7dad61c3","text":"in the manner discussed in London: Admitting it exists, but we don't care.\nSection 2 of includes these nuggets: to this specification MAY define flags. See . and but future updates to this specification (through IETF consensus) might use them to change its semantics. The first four flags (0x1, 0x2, 0x4, and 0x8) are reserved for backwards-incompatible changes; therefore, when any of them are set, the ORIGIN frame containing them MUST be ignored by clients conforming to this specification, unless the flag's semantics are understood. The remaining flags are reserved for backwards-compatible changes and do not affect processing by clients conformant to this specification. HTTP\/3 frames don't have a Flags field in the generic frame layout. So, while I'm not sure we'll ever see a use of these flags in H2 ORIGIN, it might bite us later if it happened. I see a few options for dealing with this: Option 1) Defer. Future us will have a job to do trying to make H2 and H3 compatible if a flags related change is proposed. Perhaps its worth a little note to make sure people are aware of this divergence consideration. Option 2) Add a Flags field () to the HTTP\/3 frame and add in all the same considerations about backards-(in)compatibility Option 3) Encode the flags in the frame type by picking a large enough value. A bit yuck. I'm leaning towards option 1."} +{"_id":"q-en-http-extensions-92b3b960d9670b66b709b4321e403f7d21a2259b4129b589f730bef5780123e0","text":"correcting: The reference to RFC8740 is obsoleted by RFC9113 There are two separate references to RFC5246, once as \"RFC5246\" and once as \"TLS1.2\"."} +{"_id":"q-en-http-extensions-3f97230079b93c3cdf937ac6b9a9b87e5bcee3c58fa0172d0b1678dfc3654899","text":"The draft currently has this example: I'd suggest always omitting trailing dots since it's clear that these names will be FQDNs, i.e.:"} +{"_id":"q-en-http-extensions-2c66b6b6b00c43851f2717379482f915ff94112beb2321f0b63499e28b310989","text":"NAME Adds the BCP14 boilerplate in a Notation Conventions section. 
As part of that, I noticed we don't reference any explanation of the frame layout language used -- it's the same used in QUIC and HTTP\/3 -- so I added an RFC 9000 ref to pick that up in the same section."} +{"_id":"q-en-http-extensions-35e4c8fd1522e55e32643763984548f2c498e6f98edda98ae0c4387e26a935d5","text":"This draft (-00) is using the term \"alias\" in a slightly different way than used in DNS: A CNAME record has two names, the owner name on the left-hand side or \"alias\", and the target name on the right-hand side or \"canonical name\". RFC 8499 (DNS Terminology): In other words, to use the example in the draft: Which might correspond to a DNS response that looked something like: and are DNS aliases (because there are CNAME records with those owner names) whereas is a DNS \"canonical name\" without an alias. But is included in the example in this draft. This might be confusing for the DNS name at the end of the CNAME chain (which is not an alias from a DNS perspective), so I might suggest changing: to something like:"} +{"_id":"q-en-http-extensions-8de1d5ee9173c593ed9b2bb4b3a2a9708452b37e6a9aa008e534b7998466840f","text":"Agreed, but I don't see a way around having a \".\" be allowed there. If a system wants to further escape dots within labels if they know about them, that's OK...\nThe current text says names MUST be represented using a percent-encoded value instead. However, domain names can also contain and , so I think the escaping rule has to be a bit more generic than just escaping commas, as indicated in the current text.\nWire format DNS names can contain arbitrary octets. I'd suggest allowing characters in the set and %-encoding any other character. This is similar to the filtering that the glibc stub resolver performs on CNAME alias chains.\nReference RFC 1035 section 3.1 for the fact that these can be arbitrary Percent encode everything that it not alphanumeric or dash Mention in the security considerations that if you don't do this, you could inject newlines, etc.\nActually, sf-string already prohibits things like newlines:\nThe unreserved characters are: is a valid interior label character in a wire-format DNS name, which seems like it would allow spoofing a label boundary. Not sure how much of a problem this is."} +{"_id":"q-en-http-extensions-e3d179bbe71d44d708900a7181c2a68cd00dab9530b4967ba56cacaa8592baf8","text":"Comment by NAME I would suggest being explicit about the meaning of \"active\" and especially \"deprecated\": these are terms everywhere in RFCs (I am aware), but sometimes they are open for interpretation, and we keep discussing their meaning. It would be good to avoid that situation from this doc.\nComment by NAME The new registries do not have any guidance for the designated experts as to what registrations should be accepted or denied (conformance with template, point squatting, availability of reference specification, etc). See of RFC8126: The required documentation and review criteria, giving clear guidance to the designated expert, should be provided when defining the registry. It is particularly important to lay out what should be considered when performing an evaluation and reasons for rejecting a request."} +{"_id":"q-en-http-extensions-800376d905c41a0aa48a93f6dcdaade45b78df4780f3c0fcbc2530dbc1cbf09a","text":"Clarify method case requirements. Add justification for use of empty query string.\nComment by NAME : >If the query string is absent from the request message, the value is the leading ? 
character alone: When would it make sense to sign this component although empty? Might be good to add some clarification for the reader.\nURL I would drop the \"and conventionally standardized method names are uppercase US-ASCII\". Method names are case-sensitive, that's what is relevant. The second part somewhat weakens that statement.\nThis is a note to implementors and doesn't change the normative requirements or behaviors -- I think we should leave it in. Note that the text in question appears in section 9.1 as referenced:\nGood catch, and I should have objected to that. Anyway; I don't think it makes sense to note it here. It doesn't help at all - the method name is what appears in the request message. The note doesn't add anything useful, but might cause confusion. Take this ticket as proof.\nHaving seen people try to normalize things by lowercasing because \"it made sense\", I disagree that the note doesn't add anything useful. I'd like to leave it in.\nJust to clarify: my concern is that implementers will take that as a suggestion to uppercase."} +{"_id":"q-en-http-extensions-c2dcb80df3747d788cbc3642372d4ae2064f0a85607b97d126dcbf9a1f2e3053","text":"URL This does not work. It's impossible to split multiple values (be it a list-based field or not) without knowledge of the field value's grammar. For example, what is the canonicalized value for: ?\nThe value would be Since it was sent as two instances, you combine the instances with a comma and space. The language to split the field values was added late based on several conversations about list type fields and how they can be combined, but we can remove this. If someone needs additional armoring they can use the \"sf\" flag on a list formatted field.\nOk, but what the spec currently says would result in: URL that sentence really needs to go.\nNo, it would not result in that, because as you point out those are not separate field values that can be separated.\nSo what does: mean then? When is it \"necessary\"? How does the code know that for any given field name?\nI think what is meant is that you take a list of the values of a given field (one member of the list for each header line) and then concatenate them by separating them with \", \". In scala this is or just to make it explicit\nThanks NAME - I get that, but that doesn't address the question I asked...\nOk, I see the problem is: I would not know how to do that, whereas I do know how to seperate the fields in the list.\nWe need to be very clear with the terminology. What do you mean by \"seperate the fields in the list\"? Do you have an example?\nIf you are talking to me, I meant what the code above showed: the list is the list of header values (for a given header name). And I meant seperating the values of that list. I don't know how to write that up well in the spec though...\nOk, AFAIU: just drop \"If necessary, separate individual values found in a field instance.\" from the spec.\nURL I don't quite get the reasoning for the second \"MUST\". Why does it matter? What would break if the client uses a single comma, which is also allowed by HTTP? IMHO the exact delimiter sequence here does not matter at all, as long as it is allowed per HTTP. Note that HTTP allows combining field values, it does not allow splitting them afterwards.\nThis isn't combination at the HTTP level by the client or intermediary, this is combination by either the signer or verifier when making the signature base. It's a MUST here because there needs to be only one way to do it at that step. 
If the lines are already combined then leave them combined.\nhm. ok. So one confusing thing here is that HTTP 5.2 is already only about combining field values. So the \"Specifically\" is a distraction. Also, 5.2 does not mandate using COMMA SP. The section doing that is 5.3.\nAlso...: I think I misread that as a recommendation to parse individual field values into a list, before recombining everything. Maybe \"parsing out individual field values\" needs to be rephrased.\nYou would only parse out individual values if you: 1: Know the overall syntax\/grammar of the field 2: Know that it's a list separated by commas 3: Can't send everything as a single field line 4: Know that an intermediary is going to mess with things sent as multiple lines in a way that you don't want The recommendation is really only for that one corner case, not for every value.\nI agree with that, but the spec really needs to be clearer here.\n(but I'll note that github's way of displaying diffs in long lines makes it extrremely hard to find out what actually changed)"} +{"_id":"q-en-http-extensions-143a2d00aa88fbc64125c5b4e67df7df8d0d561ff6b0f24a9662f3ab9f7ad13c","text":"And how users of digests are the ones responsible for deciding what to do if something is ignore.\nGreat NAME , thank you!\nMurray asked: on the email thread, I responded with To which Murray confirmed it would be a reasonable resolution. Lets get this fixed as discussed"} +{"_id":"q-en-http-extensions-9ee4924a093fa7544d20dbef1a902f0ff994101d63c8eae756be388bdc23b806","text":"NAME please check the new test vectors to make sure they pass in your system as well!\nIt looks good to me (If the tests are correct) URL\n+1\nThe first example signature from fails signature verification. By changing the validity date from 1618884475 to 1618884473 the validation succeeds. I guessed that date as it is the one used in all the other signatures. So I guess that is a typo. (It would have been good to have a different created date, just to make testing less mechanical, but then the signature needs to be changed). There is then perhaps a logical mistake in the next example. The story goes that the proxy receiving the above request alters the message and signs it before sending it on. The original signature is kept and added as part of the signature. The proxy alters the header and adds a header along with a new Signature. But by changing the the proxy makes it impossible for the recipient to verify the initial signature, since that includes a component which was changed. So the spec has to either remove the component from the client request, or change the text at the end of the section which says Note: the proxy's signature does verify for me with the timestamp of the client as it appears in the spec.\nVerification of claim above: The test on line 173 of had to be disabled for my tests to pass on the JVM (JS tests coming soon). All the other tests on requests from the spec (well on one I had to change the time for it to pass. Further commits should get the JS to work.\nThe proxy should not alter the original signature, and that signature does need to include the component. The whole point of the story is that the proxy verifies the first signature, and the service behind the proxy verifies the proxy's signature. If we can make that story more clear, I'd be glad to!\nIt is indeed an option to have the final recipient rely on the signature of the proxy as a verification of the originally signed message. 
But the text currently says \"The proxy's signature and the client's original signature can be verified independently for the same message, based on the needs of the application.\" The \"independently\" reads to me as if the application (that is the server receiving the signature) should be able to verify both signatures. It seems to me that it can only verify the proxy's signature though. But perhaps I was just misreading... It seems there is a dependence of the final recipient on the signature of the proxy. I think there is still the problem of the time being wrong...\nYes, the signature needs to be recalculated and replaced due to the typo in the timestamp (I will need to check the generation script). I can see now how that line can be misread. Though it is definitely possible for the origin server to validate the client's signature if it has additional information, like that discussed in URL and URL -- but this could be made more clear."} +{"_id":"q-en-http-extensions-696d968c3657d1bafd71684912a66c5075d8ae3fd65502362872384af118b57c","text":"Changes the reference to HTMLURL to use one of the newly-registered permanent anchors. Previously two different section anchors had been used, but I'm not sure how to put both of them into the references section. Is there a syntax for multiple target URLs?\nI would know how to do that in RFCXML. Maybe it's good enough to tune in AUTH48.\nComment by NAME Thanks to Tommy for detailed shepherd write up, that tells me you have discussed this already. Seems obvious that the HTMLURL reference needs to be normative, so we might need to reference a snapshot of this instead. See the following text in the IESG statement: URL I'd like to get this fixed before starting the Last Call.\nA permanent anchor has been requested of the WHATWG: URL\nThis has been implemented by the WHATWG in URL , so we just need to move our references to use the new permanent anchors.\nNote that the question of stable anchors is somewhat orthogonal to the issue of stable specs. A stable anchor doesn't help if the thing being linked to changes.\nNAME while true, as the target document is a \"living standard\" there is no way to guarantee a specific version. Nor is that reason to believe that the document is \"unstable\" in the way an I-D is. At the very least, implementors will be able to find the information they need at the referenced anchors.\nYes, I agree with that. I just wanted to mention that this might not address the AD feedback."} +{"_id":"q-en-http-extensions-9588f06b610453bbad46d303b0a612f292f55dac76be4f946337cf578adb96a0","text":"This is a non-normative suggestion for applications to define what to do when a signature is invalid.\nThis is not necessarily a spec issue, but I thought it should be recorded and maybe discussed here. Is there a recommendation how a server should respond when a request is signed but the signature is invalid. It's likely a 400, unless we want to define a more specific code? Maybe a pointer to the HTTP problem format would be useful?\nGood question. If one was using signatures for authentication would 401 be correct? Clearly an additional error message would be useful.\nThis really depends on how the signature is applied, and I don't think that the signatures spec should really have a strong opinion on the matter.\nI agree it depends on signature, but a bit of confrontation is beneficial. Even some non-normative guidance is ok to me. 
WDYT about: 400 in a generic case (malformed signature) 401\/403 if signature is used for authnz This is the case of Content-Encoding and Content-Type. Eg.\nIs this the problem format document: URL ?\nAlmost: URL\nI like the approach in PR to make a suggestion on the error codes and leave it up to the application to determine what makes sense in its context."} +{"_id":"q-en-http-extensions-a233928ff40b5516b96b9d54e4f3ef3fcd5401d3a67b6b131cf7fb447fbe074e","text":"It's likely because the field is ignored when parsing fails, but 8941 s 2 allows a field to specify other error handling behaivour.\nHow so? 8941, Section 4.2 says:\nSection 2 has a bit more nuance:\nYou refer to: ? My reading is that this applies to the preceding text about what happens when parsing does not fail, but additional constraints are not met. If this is not the case, we should consider this to be an inconsistency in requirements and treat it as spec bug. EDIT: opened URL\nAs pointed out on list, we need to warn people against just using Date as an extension in a field that was defined using 8941.\nThe warning helps, but it IMHO does not address the larger extensibility problem we have.\nNAME can you articulate that (here or in another issue, if it's substantially different \/ larger)?\nI'll try. The main selling point for structured fields IMHO is \"a unified parser\". You know that a field value is SF-shaped, drop it into the parser, and get a more or less self describing data structure. Great. Now once we add extensions to SF, this get's harder. Starting with the the current proposals: A recipient of a field does not only have to know which top-level field type to parse for, but also whether to switch on support for SF-date or not. Or we would need to say that even when a spec defines a field using 8941, it still's ok to parse it with an sf-date-enabled parser. In that case, it would be up to the recipient to either ignore sf-date-typed extension parameters (if the field definition already allowed future extension), to convert them to an integer, or to reject the parsing result altogether. Retrofit parsing flags: I hope we have consensus that preprocessing values before passing them into the parser will be fragile. So a retrofit-supporting SF parser will either need modes, or the wire syntax for SF could be relaxed so that retrofit would need no new mode. The common concern here is that SF in theory is simple to extend, the process is extremely hard both process-wise and deployment-wise. Ideas: we could simply avoid doing extensions; sf-date is only a \"lipstick\" flag so that tools can switch on a special display mode - maybe we don't really need it? we could try to define what future extensions mean for specs that are written for the current version of the spec My concern with the current state of things is that a parser implementation will need a set of flags, and each new flag will add complexity not only with respect to the parsing code, but also to API and documentation (how do I get the user to select the right mode, or will she always just select all extensions - and why would that be bad...?), and to test coverage (we need a parametrized test suite, right?).\nI think that this is a reasonable outcome. Using sf-date before a recipient is prepared to process it can result in the field being dropped. Now, this isn't entirely satisfactory, because an extension within a field will be dropped by a date-unaware SF parser, but that risk can be documented. 
Extensions to structured fields defined against RFC 8941 probably don't get to use dates then, at least until sf-date-aware parsers are widely enough deployed that the risk is negligible. Documenting this as a caveat is worthwhile; it's only the current set of structured fields that are affected in this way. I don't think that a flag is necessary for sf-date. (I haven't thought much about your retrofit points just yet, I just wanted to put this there as a potential way forward for sf-date.)\nJust to clarify: any of these three? Which means that updated parsers will accept sf-date, and thus not be compliant to 8941 anymore. (I'm personally ok with that, but we really should be clear about that)\nYes, that had been my assumption -- a recipient is always going to need to check whether or not the types it recieves are the ones it expected; e.g., if a field is defined to only allow Strings and it receives an Integer, it needs to deal with that. Same for Dates, in this case. I don't think that non-compliance has any significant impact, as discussed above. Happy to clarify it, but I seriously doubt whether it will have any real-world impact to do so. In the case where a field that specifies (or allows through extension) a Date, the appropriate spec needs to refer to the updated RFC so that an updated parser is used. The extension case is a little less obvious, so we're talking through the text to highlight that now. Regarding retrofit -- I don't think preprocessing is seriously considered by anyone at this point; the question is whether we make it a flag in sf-bis, or 'monkey patch' the spec from afar, just for the purpose of retrofit. So far we seem to be leaning towards the former, but there are risks (as discussed). Note that if we have flags, they're in the parser, but they needn't be exposed to people doing end-user parsing of fields. Or, as discussed elsewhere, we could decide to drop those compatibility fixes. Let's keep discussion of that on issue . I'm not unsympathetic here. I was much more excited about a Date that looked developer-friendly in HTTP\/1; now I fear it's not going to be adopted (the HTTPAPI is very lukewarm about it).\nOk, here's an implementation question. In the branch for sf-date, URL currently handles dates as extensions of integers. So somebody expecting an integer would actually see an integer, unless when aware of the new type URL (extending NumberItem). Would you consider that a problematic approach? (I got there just because of code re-use from integers, AFAIR it was not a conscious decision (yet)).\nIt'd be problematic if the implementation didn't have any way of distinguishing Dates from Integers, because that would be impossible to round-trip -- the serialiser wouldn't know which form to pick.\nSee linked PR.\nWell, if the caller knows about the new types, it will be able to check. The serializer does know about the new types, so round-tripping would happen correctly even if the user of the API wasn't aware of the extension.\nOk, the PR makes things clearer, in that: you can't use sf-date, even inside an extension parameter, unless you update the spec defining the field, and code calling an SF parser will now have to specify which version of SF to use (that's ok with just the three top-level types and one additional flag, but it will get messy if more extensions are being defined)\nI would like to avoid the second if possible. That is, you can process with a newer SF version, but you can't produce with a newer SF version. 
That means that rejection by a receiver isn't guaranteed, but it offers a path toward upgrade at some point (we could decide, collectively, that it is OK to use sf-date in an extension for an older field definition. We might do that based on adoption rates for the new SF version, and memorialize it with an update to the field, but we wouldn't necessarily need to pick a new field name in order to avoid the compatibility risk (we'd just be managing the probability of it happening).\nIt would be interesting to see more feedback from people defining APIs for SF support...\nHow does the PR imply that?\nWell, if a spec refers to an SF-shaped field based on RFC 8941, parsing it with an 8941bis parser (allowing sf-date) would be incorrect. RFC 8941 requires draconian error handling, after all. When we defined RFC 8941, we consciously specced draconian error handling + no extension points (until the spec is updated), so that's the consequence. I fear that we are in a very fundamental disagreement about how extending the structured field syntax is supposed to happen given the constraints we put into 8941.\nPerhaps it would be best to take this to the list. I don't see anyone else who yet shares your point of view, so it would be good to circulate it with a wider audience.\n(with two nits)"} +{"_id":"q-en-http-extensions-30942cc8add8b78cef7426e56ad788689ffde293ba36e89427064c4be461a5b0","text":"Replaces\nThanks!\nIMHO we need to be clear what the requirements of SF support are. The way the spec currently is written (referencing RFC 8941) means that SF-shaped field values using the new sf-date format are not supported. I don't think this is what we want, so unless we want to wait for sfbis to be ready, we'll need to add some consideration of how compat with SF extensions\/revisions is supposed to work.\nThe intent of the current text is not to limit the use of the flag to only the currently defined formats. New formats are supported if the signer and verifier support the formats. The agreement of what format to use for a particular field is out of scope for this specification, and needs to be agreed upon by the signer and verifier outside of the signature process. This is true for all the current types and will remain true for any additional types defined in the future.\nOn a high level, this sounds right. As an implementer of a signature library, calling an SF library, what is your expectation what it'll do with sf-date once the spec is published? What if the sender want to sign an integer param in a dictionary, but the field instance contains another param using sf-date?\nI would expect a library of that nature to publish its mapping between field names and SF types, and ideally allow that mapping to be edited by the caller of the library. Otherwise, how would a user of the library be able to provide a type mapping for a custom field that the library doesn't know about, let alone a new sf-type?\nUnderstood. Different layers of libs. What I meant is a generic SF library, that just parses\/serializes SF fields.\nThen you need to know what you're parsing it in to. SF fields aren't entirely discoverable from the syntax alone, you need to know the type explicitly. I ran into this when implementing my own signature libraries. So the signature library needs to have that mapping, and if the underlying SF library doesn't support some new type, then you can't map to it. Once the dependencies are updated, you can provide that feature up the chain. 
It all seems pretty standard and simple to me, honestly.\nHow so? I still don't see the path to interop for any update of the structured field syntax, unless this spec at least provides some hints.\nYou need to know the top-level type (Dictionary, List, or Item) for each field. The consequence of that (AIUI) is that a recipient wishing to validate a signature that uses will need to understand the top-level type of each 'd field in the signature. It'll also need to support the permissible types in those fields -- e.g., if a field allows Date, the recipient needs to be able to handle that. As a result, it won't be possible to reliably verify a signature if your processor doesn't have knowledge of the top-level type, or if it doesn't have support for an extension type in use in one of those fields. However, there is a chance of verifying it still, if the SF-produced form is sent on the wire and not changed at all in transit, and the processor just ignores processing (and in some ways, this is worse than reliably failing). I think this all can be reasoned out by reading the specs appropriately. Documenting it as a tripping hazard would help users avoid interoperability issues, but this specification is already long and dense, and if we also make it a user guide, it will risk becoming unusable. So, speaking personally -- if the above summary is correct, I could see adding a sentence or two to section 2.1.1 at the most; I could also see leaving it to a separate profile\/user guide.\n[ chair hat on ] This issue is the last thing holding the Signature spec going to IETF LC. NAME NAME please respond to above, particularly: whether such text is necessary whether such text will address the issue when such text, if necessary, would need to be incorporated (before\/after LC). Thanks,
The precondition to know the type is already in the text as the first sentence about the flag in section 2.1.1: Additionally, the lack of knowledge of the type of the field is already called out explicitly as a required error condition in section 2.5: I would propose we simply add a note to 2.1.1 of the form:\nWell, it's hard to say what remains to be done until an issue is actually closed. We're all volunteers here, so it's good to make things as easy as possible to reviewers. So, trying to rephrase: let's get all open issues closed (one way or the other), and let the people who raised the issues give an opportunity to provide feedback before sending this back to the IESG. (I, for once, am not convinced that I agree with the proposed solutions, but due to the type of diffs (long lines) it's a bit tricky to understand what actually changed) works for the sending side; but what is a recipient supposed to do when the sf flag is present and the field value does not parse due to sender and receiver disagreeing about the SF version? Also, that only affects the toplevel type of the structured field. It does not address the case where an extension parameter is added is an sf-date (or any future new type).\nNAME until you propose text, it will be hard to say what kind of text you think would close this issue.\nExcept for this issue and two others recently raised by you (which will be considered as IETF LC issues), all of the issues and PRs you mention were generated by the AD review before IETF LC; this is a normal part of the process that does not require going back to WGLC unless the AD asks for WG review, which . NAME made a reasonable proposal. If you disagree with it, make your own.\nIf the proposal was: Then this isn't sufficient as it only applies to the toplevel type (see URL). NAME - I believe you are in the best position to make a proposal; but that said; I can try something later today.\nSo, we have in 2.1.1: This implies that the \"sf\" parameter can only be used on fields that conform to RFC 8941, no extensions allowed. I would suggest clarifying that, at the end of 2.1.1, with: \"Note that the normative requirement to use the SF serialization algorithm precludes any use of future extensions of the structured field syntax. Support for future versions of SF will require an update to this specification.\" I believe this matches what NAME suggested.\nExcept that your proposed text is incorrect. The implication you are drawing is not true, nor what others have taken from that text. The use of the parameter does not preclude extensions to RFC 8941, as has been discussed at length here and on the mailing list. Extensions to sf types would not require changes to the signatures spec. I propose closing this issue without change to the signatures spec.\nSorry. Mark asked me to propose text, and that's what I did. The text has a normative requirement to use an RFC8941 serializer, so the proposed text is correct. That we disagree here is proof that a clarification is needed. If you feel that that p.o.v. is too restrictive, we'll have to somehow relax the requirement on the serializer (for instance, by stating that the serializer needs to implement 8941 or a successor RFC - that is an RFC that obsoletes 8941). FWIW, I agree that this would be good. But it is not what the spec says.\nNo, we reference the current document and if that document is updated or obsoleted by something else after publication then you as an implementer go and follow that. That's how updating RFC's already work. 
All the specs and systems that implemented RFC2616 didn't suddenly break and become invalid when 7230 or later 9110 came out. Here, It's up to the updaters of 8941 to make sure there's A serialization section. Your view of specification dependency is not at all in line with reality.\nI see where you're coming from; I'd appreciate if you would do the effort to understand where I'm coming from. The difference here is that any spec that extends 8941 with new types by definition is not backwards compatible. This is because 8941 has no extension points, has drastic error handling and thus requires a wholesale spec update for new things. This is very different from most other RFCs that we're using here. If there is agreement about what we want, why don't we just say it?\nTo illustrate further: RFC8941-vs-8941bis is like XML1.0-vs-1.1 or HTTP1.1-vs-2.0, not like RFC2616-vs-RFC9112.\nThat's a very specific definition of 'not backwards compatible' and one that I suspect many would be surprised by. Just because there are no explicitly nominated extension points doesn't mean you can't evolve the specification in a backwards-compatible fashion; the key requirement is that valid inputs per the 'old' version are also valid inputs per the 'new' revision. That is the case for sf-bis. It is true that an invalid input for RFC 8941 will now become valid; specifically, a type value beginning with 'NAME used to generate an error, and now will not (provided that the following characters are appropriate). The reasonable question to ask at this point is whether it's realistic \/ probable that an application will be depending upon that error being thrown to operate. Can you suggest when that might be the case? Because I find it doubtful in the extreme that this will cause real-world problems.\nI agree that this wouldn't cause real-world problems. I just also wish that our specs actually say what we think should happen, instead of requiring people to figure out. So in this case, I think we can solve this by saying \"RFC 8941 or a revision obsoleting it\".\nSince NAME acknowledges that this doesnt affect interoperability and this is merely a \"wish\", NAME why don't you go ahead and incorporate your proposal so we can ship this spec?\nNAME - please don't claim what I said something I did not say. My position remains: the spec should be clear about what it expects implementers to do. Right now it is not. If the agreement is \"use RFC 8941 or a future revision\", why don't we say that?\nProposed change: URL"} +{"_id":"q-en-http-extensions-844794130c1d8b9c3bfde65e18c211c96b54e66a3d03431015a94cb61d1e10c7","text":"... or it's a clarification that aligns the text in the two different sections. Sounds like we should discuss on list.\nI still dont understand why we have some of the \"strict\" stuff in 1.1 and some in 4.2. I would move the paragraph from 4.2 to 1.1 to improve the chance people actually read it.\nDisagree; implementers are more likely to look at the parsing section, not what they perceive as boilerplate at the top. 1.1 covers the philosophy and general approach; 4.2 talks about the implementation details.\n\"This is intentionally strict […]\" sounded philosophical to me, but whatever :-)\nURL It appears that some read the last sentence: as saying that the generic draconian handling of parsing errors can be relaxed.\nIt's saying that a generic parser will fail, but the field-specific handling of that failure might do somthing else, including try to parse the field using an alternative method. 
SF cannot constrain what a field does with the output of parsing.\nThat sounds like \"we require draconian error handling, but if there's an error, you are free to do something else\". Of course we can't force anybody not to do that, but we should be clear that this is not what the spec says."} +{"_id":"q-en-http-extensions-87587a63f33305c13b3102dba8f261dc87ad9cdacd7839e9309a501695a7310e","text":"AD review feedback (thread at URL) and some minor editorial updates\nthanks Mike, nits have been addressed\nLooks good; two nits."} +{"_id":"q-en-http-extensions-effca5700e6e6751667f4580f60882e11ca09150cdd5ed82100fdccf8b772ca8","text":"Closes URL\nI suspect a title like \"HTTP Resumable Uploads\" or \"Resumable Uploads for HTTP\" would be more clear -- not referring to HTTP will pull in readers looking for other things, and calling it a protocol reinforces that it's separate, when we're trying to make it an extension to an existing protocol.\nGood idea to include HTTP in the title! I updated the PR.\nGreat suggestions, Lucas! Thank you very much!\nLGTM, thanks for picking it the pen.\nThe title of the resumable uploads draft starts with \"tus\", which is a little weird as the inclusion is entirely unexplained. If the intent is to acknowledge something, might I suggest the Acknowledg[e]ments section.\nAgreed. I think this helped during the \"pitch period\" as a differentiator for this proposal vs others. However, now its an adopted document the lone use of the term is odd. I'll prepare a PR that gives a nod back to the tus v1 spec that inspired this.\n+1\nI opened a PR for this: URL NAME Let me know if you had something different in mind."} +{"_id":"q-en-http-extensions-c3e90eea67c8465e9867710ff7a6d837f54b652e2edf941f0678eb5732d3d94e","text":"some content-coding are \"not deterministic\" ;) not only encryption-related ones.\nNAME great! I applied them :) Feel free to merge!\nmerged, thanks\nJames Manger wrote:\nyeah I think RFC 8188 was the target of the section here. NAME can you recall? What can we add to make this section a bit more clear?\nThat's correct. And the suggestion wrt different compression levels is correct to! This is worth mentioning and maybe highlight that it applies to a specific kind of content codings."} +{"_id":"q-en-http-extensions-d403c8739bbdccfd767a53d1bfaa908d3569aa8ec6aa463d8ed3c6be43ce44b0","text":"Murray suggested some things that we could change about the way new IANA registry was being handled. This change: tries to make it clearer that this document uses a new registry that we will need IANA to create moves rote IANA instructions to the IANA considerations better highlights interesting hash algorithm considerations for implmenters clarifies the registration procedure and expands on the expectations for the Designated Expert(s)"} +{"_id":"q-en-http-extensions-f780894b5d637a0b25fc2810e4bdc37977f7fdce80a964fac730ebab698ed6e9","text":"Add reference for obs-fold, add note about transforms,\nThe spec several times refers to the term obs-fold, but doesn't say where it's defined (-> URL). Furthermore, in 1.3 it claims that addition of obs-fold could be permitted under certain circumstances. That IMHO is incorrect.\nAdding the reference makes sense. The signatures spec doesn't change how the messages are created, nor does it make more allowed than it is now, it just tells people what to do with the messages that they have even if they have in them.\n: This is misleading. No HTTP permits addition of leading\/trailing whitespace. 
Leading\/trailing whitespace in field values by definition is invalid.\nWhile technically true, many (including implementers) won't understand the nuance Here. Given the impact, it's better to be over-inclusive, I think.\nIf invalid changes are included, then it might be best to either say that some of these changes might be invalid or point specifically to those that are.\nYes, my proposal would be to split the list into two; the second just listing the transformatios that might happen in reality despite bing forbidden.\nNAME I agree that we need to prepare developers for the messy reality of HTTP in addition to what's strictly allowed. What else would appear in this second list?\nThe only other thing that I can see right now is addition of obs-fold.\nSpeaking of which: there are other transformations like parsing and re-serializing field values, which are certainly allowed. Shouldn't these be mentioned here as well?\nNAME Re-serialization of a field value by an intermediary would likely break things in an expected way. It was my impression that this was not allowed by intermediaries in general. A structured field does help this, and the flag protects against that kind of transformation, as mentioned in the section.\nI don't think we have a rule anywhere that would keep an intermediary from parsing and re-serializing fields. And even if that was the case, the same would be true for the original point (addition of leading\/trailing WS), so I'm trying to understand where we draw the line."} +{"_id":"q-en-http-extensions-7f2d91c0eaa4c9dc0b785a867b923dcddeadcd7c4c15630edf6aa574c83ff535","text":"Since next-hop-aliases is a String parameter, the syntax allows an empty string . In cases where there is no chain, is it advisable to send the empty string to indicate this, rather than omitting the parameter? If so, it might help to add a sentence somewhere that highlights this usage type.\nThat's a good question! I don't think there needs to be a normative requirement here, but I think it's reasonable to have signify that there were no aliases.\nThis mention is clear. LGTM"} +{"_id":"q-en-http-extensions-2ea6fc42b27b262aefbb79cc70cb0db5f218dd7adf9e6bd3dcceea8b68bccb41","text":"A signature over another signature does not provide the transitive protection that it seems it should. The examples and advice have been updated to accomodate this.\nNAME NAME can you please verify the signatures on the updated examples in this PR?"} +{"_id":"q-en-http-extensions-d01979a98ed8f007de88fdebf2f9547df07dca5b50ec4758293d98114f654e40","text":"[ ] to obsolete the IANA Digest Algorithms Registry or define another registry [ ] it can stay as historical ~identify whether and how it should stay~ [x] if it should stay, the note below should be fixed.\nI think it's easiest to just add a request to update the note in the IANA considerations of digest spec. E.g. current: something like\ncc: NAME guidance welcome\nNote: this has been fixed. the IANA Digest Algorithm Registry Note not to mention \"message body\" since it's not even mentioned in RFC3230.\nNAME NAME what is expected to happen to URL after Digest's deprecation? Can it be marked as \"historical\" to avoid issues with current Digest implementers?\nYes, IANA can be instructed to do that in IANA Considerations."} +{"_id":"q-en-http-extensions-25145a0168bc690d74eb04f7bcf7d49963e164dcda527eff167acaeb2da556c1","text":"Add requirement for unprompted auth to use TLS 1.2 with extended master secret or later. 
Fixes httpwg\/http-extensions\nPlease mark this PR as fixing the corresponding issue\nFrom Ilari Liusvaara on the :"} +{"_id":"q-en-http-extensions-66b5000bc5757210d9b81da5abc2f608b93e216f3290fc2ca1fcc1882612fb0d","text":"Fixes httpwg\/http-extensions.\nPlease mark this PR as fixing the corresponding issue\nFrom NAME on the :\nThe conservative thing to do here would be to insist that the key be unique to the protocol. We have domain separation in our outputs, so we don't need to worry about unprompted auth messages being used in cross-protocol attacks, but a different protocol (that doesn't use context strings) could possibly be induced to generate unprompted auth messages. We also need to worry about key compromise. Imagine a protocol that leaks the private \/ secret key, that shares a key with unprompted auth. This doesn't even need to be a bug, it could be a commit-before-reveal style protocol. (Obviously that's a pathological example, but it shows we can't assume composability with all protocols) If we want to do a formal analysis (which I do) I think we need to be conservative here.\nSGTM"} +{"_id":"q-en-http-extensions-b597baebfc347f9e85e43860e7dffa701540527635993e133a0061393fbecb75","text":"The user ID parameter \"u\" effectively identifies the keying material that servers might use to verify the authenticator sent in the header. As noted on the list, this is more or less a key ID, and I think leaning into that framing could be helpful. For example, beyond avoiding the confusion raised on the list, it might also be more ergonomic for different types of authenticators. One could imagine using a for unprompted authentication where the server only learns that someone with a valid private key produced the authenticator but not necessarily which one."} +{"_id":"q-en-http-extensions-49293356f8db24491e55b4f7a0b294d63028b07c70079cebfd918869124b46ad","text":"OK with me.\ndoesn't return . Without that, the number itself isn't necessarily enough to determine what the type is (necessary for parsing Date).\nSo what we need to prevent here is that: parses as date.\nI don't think this is necessary; the result of Parsing a... already has a type, and that's already checked in Date parsing. Nevertheless, see PR...\nThe PR isn't correct; it fails to account for the different return signature everywhere it's called. Fixing that is much more invasive, so I'm going to push back on this more strongly; it isn't necessary. What Parsing an Integer or Decimal returns is typed; it's either an integer or a decimal. This can be seen in other places as well -- e.g., in Serializing a Bare Item (\"If input_item is an Integer, return...\"). No change is necessary here.\nNAME NAME can I close this with no action, or should we take it to the list, or...?\nSo your assertion is that a value inherently carries a type? That's not obvious to me.\nYes. defines those types; the serialisation algorithms take them as parameters, and the parsing algorithms return them. That's reflected in the introduction text for each individual algorithm."} +{"_id":"q-en-http-extensions-2311c497506334e8e64bf10f4fe142992687578c8aa46bcc5469c8319fe1c3f8","text":"Based on\nRight now the document contains the (user ID) parameter. Thinking about it some more, this is more about describing the key than the user, see httpwg\/http-extensions. 
In addition to that change, we should add text warning about the tracking risks or this and how to avoid these issues.\nIt seems to me that there's room for both user-ID type uses for the \"u\" (in the employee remote access model, e.g.) as well as the more indirect key-indicator type uses (in the Internet Freedom model of distributed communities). I'm especially interested in the latter - use cases where users can prove they are in possession of a valid secret without requiring a hard user identifier (one that attaches them to a permanent\/billable\/irrevocable external identifier). The draft does not currently provide an Authentication Scheme that directly offers this latter capability, however (though it's possible to overload on the \"u\"\/\"k\" parameter if the server and client agree on the scheme for doing that). I'm unable to steelman the case for \"u\" (or \"k\") being a \"super-cookie\". The imagined use cases in my head have this scheme tied to a single service provider and employed on the first hop. I don't understand the language being used in the list commentary on this topic (e.g., \"Each site (*) could get a different key pair and key identifier.\"). My assumption has been sites issue credentials as they see fit without reference to any other site. The commentary implies to me that there is some overriding entity distributing credentials to many sites (a la Privacy Pass). I did not think Unprompted Authentication addresses that space. Further, if indeed we need to address that space, it seems to me the \"super-cookie\" burden of proof is then with THAT scheme, not Unprompted Authentication. The \"document it\" point might be the salient one: \"This is probably another case where documenting a little more detail about the usage context could help.\". Here's a start: Sites offering Unprompted Authentication are able to link requests from individuals clients via the Authentication Schemes provided. However, requests as not linkable across other sites if the credentials used are private to individual sites."} +{"_id":"q-en-http-extensions-0e6be6a264e3205386102baabe43b9ae2ad187328f110b94996774a6d56ac969","text":"Based on\nThe draft could benefit from some example use cases up front in the introduction.\nConcise Use Case Description: Unprompted authentication, as defined here, serves use cases in which the provider of a service or capability wants only \"those who know\" to access it while all others are given no indication the service or capability exists. The conceptual model is that of a \"speakeasy\". For example, a company might offer remote employee access directly via its website, or access to limited special capabilities for specific employees. Members of less well-defined communities might acquire metered access to geography- or capability-specific resources by an entity whose user base is larger than the available resources can support. Unprompted authentication is also useful for cases where a service provider wants to distribute provisioning information to its resources without exposing the provisioning location to non-users. Edits\/Additions welcome\nVersion 2: Unprompted authentication, as defined here, serves use cases in which the provider of a service or capability wants only \"those who know\" to access the service or capability while all others are given no indication the service or capability exists. \"Knowing\" is via an externally-defined mechanism by which user identifiers (required here) are distributed. The conceptual model is that of a \"speakeasy\". 
For example, a company might offer remote employee access to company services directly via its website, or offer access to limited special capabilities for specific employees, while making discovering (probing for) such capabilities difficult. Members of less well-defined communities might acquire access to geography- or capability-specific resources by an entity whose user base is larger than the available resources can support (via that entity metering the availability of user identifiers temporally or geographically). Unprompted authentication is also useful for cases where a service provider wants to distribute user-provisioning information for its resources without exposing the provisioning location to non-users.\nVersion 3: Unprompted authentication, as defined here, serves use cases in which a site wants to offer a service or capability only to \"those who know\" while all others are given no indication the service or capability exists. The conceptual model is that of a \"speakeasy\". \"Knowing\" is via an externally-defined mechanism by which user identifiers are distributed. For example, a company might offer remote employee access to company services directly via its website using their employee ID, or offer access to limited special capabilities for specific employees, while making discovering (probing for) such capabilities difficult. Members of less well-defined communities might more ephemeral identifiers to acquire access to geography- or capability-specific resources, as issued by an entity whose user base is larger than the available resources can support (by that entity metering the availability of user identifiers temporally or geographically). Unprompted authentication is also useful for cases where a service provider wants to distribute user-provisioning information for its resources without exposing the provisioning location to non-users.\n\"security by obscurity\", yes?"} +{"_id":"q-en-http-extensions-d511d378e704c3af575040f5fc544b6bbce012905c6e6bea9f6af7bbe797467c","text":"Ilari suggests on the :\nto this. It's been implemented for exported authenticators and isn't too challenging."} +{"_id":"q-en-http-extensions-fa2deb9b47b188ce81d57641f2bf96995076b2872d9ad292a4c15dfddab9e74a","text":"s\/ise-case\/use case e.g. -> e.g., nit pass"} +{"_id":"q-en-http-extensions-93a5024d7b89e6a5a58b7db14563d194cdbd64ca53d308d3f5ec5b9d99e662eb","text":"I think the alias draft should probably have a (non-normative) reference to , in particular noting that: can be populated from the output of . can be used to emulate or substitute for .\nI think the flag to only gives you the \"canonical name\" which is the final CNAME target at the end of the CNAME chain. If you want the full set of interior names across a chain of multiple CNAME records I don't believe you can retrieve that with .\nYeah, I'm hesitant to add references to system APIs like , and particularly for the reason given above. NAME what do you think?\nI think AI_CANONNAME is important historical context here. You can't understand where the draft is coming from without it. I also think that this extension will often be implemented by wrapping getaddrinfo, and some guidance about whether that is compliant would be appropriate. I think Robert's comment raises a good example of why this is not trivial, and hence is worth mentioning.\nI'm not sure is really required to understand the context of the draft. We don't use it at all on iOS\/macOS implementations for current CNAME cloaking detection. 
I do see that it may be good to warn that it is not sufficient to just wrap up .\nThanks LGTM"} +{"_id":"q-en-http-extensions-2d407eed0a382af933f30558d6c5d8cf5895d0c00087a7d97996f3b68529d45f","text":"Follow up to the IANA review of the digest spec. We asked them to deprectae the old registry, so lets add a note to it to clarify what the new expectations of that registry are."} +{"_id":"q-en-http-extensions-e2b1e9328aef5292133eb8c3efdf855361d9aeea494f2dfd1f1b2d3309960b06","text":"Reframe use cases that had previously signed signature values to avoid problems with this approach.\nThis looks good to me! Summary of the changes: Explicitly flagging that the application \/ implementer must specify how request-response binding is to be achieved (if desired). Removed all references to request-response binding in the main body. Removed all signatures of signatures from the examples Added guidance that applications SHOULD sign all signed components of a request when generating a signed response. Added guidance that generating signatures of signatures is NOT RECOMMENDED when generating responses. Added guidance that generating signatures of signatures is NOT RECOMMENDED when acting as a proxy between a request \/ response."} +{"_id":"q-en-http-extensions-dcaa8dfcad01d7fdb02c0681e4ce4d53b85cb5b9fad646b4b44cca1bc84cd9cb","text":"Addresses encodings of signature component values for query parameters and field values.\n: This seems to be inconsistent with Section 2.2.8 (\"Query Parameters\"), which could create non-ASCII component values.\nFWIW, a query parameter (after application of URL) could contain CR, LF and other control characters.\nSo we likely need to serialize information extracted from query parameters using some kind of encoding; maybe an SF Byte Sequence.\nThis is tricky -- originally the query parameters were kept in percent-encoded form but from WG discussion we changed that to just use the unencoded value instead since several percent-encoded forms could be used. However it does look like the unencoded form could end up with problematic values here, so maybe we need to go back to that. I'd rather us use an encoding native to the value rather than inventing something to put on top of it, even trying to re-use the flag we've already got defined for headers.\nAnd we did that because to extract query parameter values, we need a query parser, and that is defined the way it is. I believe that decision was good, we now just need to figure out out to transform the extracted values so that they can be used in the signature base. (And, FWIW, at the end we also should have an example for a case like that in the spec).\nIn 2.2.8, it might be good to mention that the referenced algorithm only supports query parameters using (percent-escaped) UTF-8 encoding (this came as a surprise to me because I assumed it works the same as the algorithm used for forms decoding).\nin 2.1.3: What if the field value contains non-ASCII characters (from obs-text)? Of course that could be considered an error, but I believe the intention is to be able to represent anything here. So the text should refer to the field value as byte sequence, in which case the encoding step would not be needed.\nThis text was added in response to a comment about needing to know which encoding to put into the byte sequence input, and the commenter suggested ASCII as the base for it. 
If field values have a well-defined byte sequence that isn't ASCII characters only, then we should refer to whatever that is from the HTTP docs.\nWell, field values in HTTP can be anything. It's obsolete, it's deprecated, but that's what it is. So we need to deal with it one way or another. I could live with UTF-8, and a note that if the octet sequence does not decode as UTF-8, you can't sign it.\nNAME - yes, you can merge and close issues as you wish, you are the editor. But by all means do not submit a new draft until the changes have been reviewed by those who opened the issue. (cc chairs NAME NAME )\nAs far as I can tell, the changes require percent-encoding only for non-ASCII values. That is, the character U+0080 would get the same encoding as a verbatim \"%80\" in a field value. I believe this is a problem.\nThis issue has been addressed. There is no percent encoding needed in this section since the value is taken directly from the bytes of the field in a binary-wrapped representation, and the bytes are encoded using the base64 serialization of a Byte Sequence from Structured Fields:\nI'm talking about the simple case, not the one where Byte Sequence encoding is used (yes, the issue title doesn't match anymore).\nOk, opened new issue URL"} +{"_id":"q-en-http-extensions-f13a15da75bf9a3cd0092b61f51241981e52ee87fa1d8494ce1a44c4ffab1802","text":"Import obs-fold, reference US-ASCII, enforce ASCII-ness of Signature Base"} +{"_id":"q-en-http-extensions-94059bffa0809e125bc6f9552803015232dc7147483d051ce22cb8056ba3c65b","text":"There are currently some uses of \"serialise\" in place of the far more common \"serialize\", including one case that breaks the pattern of header names (\"Serializing a List\", \"Serializing a Dictionary\", \"Serializing an Item\", …, \"Serialising a Date\"). This PR also includes an analogous update of \"summarises\" to \"summarizes\" to increase the general pattern of American spellings.\nSpeaking Danglish myself, I am disqualified from having an opinion on this subject matter :-)\nDespite my Australianised sensibilities, I see that 8941 was normalised to American spelling by the RFC Editor, and changing things now would introduce more diffs. Thanks for the PR."} +{"_id":"q-en-http-extensions-fc6612705e2791606981f382554983fb67dcb2c09053c4ea834edc66751b6357","text":"This moves the values that were in the Encryption header field to the start of the payload. This provides a meagre efficiency gain as well as removing the need to have two correlated header fields. This should go most of the way to address WGLC comments.\nLooks good to me in general. That said; this removes the parsing weirdness for \"Encryption\" (good), but not for \"Crypto-Key\", right?\nNAME happy to correct any parsing weirdness, but need to know precisely what it is first."} +{"_id":"q-en-http-extensions-fed8e167dd46236e4bc7690aad28a84a038d956c1fce57b7f816dd5089aa527a","text":"Sorry for noticing this so late. I don't think the security considerations of in retrofit work out. (Also CC NAME who is much better at web security than me.) If we introduce two ways to spell some field, we need to worry about whether this can allow various issues when sender and receiver don't agree on what's going on. The draft mostly leaves this to the future, but it does include some about this: This seems to put the onus on the sender, not the receiver. If you don't know the receiver accepts the alternate representation, you can't send it. 
It includes no corresponding receiver obligation, I guess on the assumption that if you see SF-Cookie, you can assume the sender meant it? I don't think that works. In a browser, the sender is a combination of browser logic (which has the final say) and JS (with a complex trust relationship). The browser logic specifies some which JS cannot set. Other HTTP systems have similar splits: an HTTP serving frontend may get fields from a CGI script or act as an intermediary, but it cannot allow the downstream code to specify fields like . These rules typically look like a blocklist with the default behavior to pass fields through. All this is to say, browsers won't let JS set the field directly, but setting is perfectly fine, up to CORS rules. A receiver may see that text and incorrectly assume that processing is fine, breaking some cookie invariants. Or vs , which will break some sites' CSRF processing. Beyond just blocklists, a retrofit-unaware HTTP cache that processes caching fields but passes through other fields will likely make a mess of the mapped caching fields. If we prefixed the names with , that would at least meet the browsers' rules, though I worry it'll still break other intermediaries. It's also a little unsatisfying of a fix. We could say that a receiver cannot process these either unless it's received some explicit negotiation that the sender knows about this. That's vague and unspecified, but probably would work, depending on what the intent is here. This is a bit handwaivy because I'm not sure what the use cases are here. But if we require explicit sender and receiver negotiation, it's unclear what we're getting out the parallel field names in the first place. It seems to me that's where all this risk comes from. Giving alternate names interferes with pass-through logic across the pipeline. Maybe we can get a more robust story, inspired a bit by (I'm just assuming this is one of the main use cases here) is to say: The text representation of is what it is. If this part of the system uses the text form, use the existing syntax The structured field representation of is this mapped thing. If this part of the system uses structured fields, use that syntax At boundaries where forms change, convert. We need to be a bit careful about fallibility here, but perhaps: Conversion from text to other forms is allowed to fail, since you can just fallback to the text form. Conversion from other forms to text probably needs to be stricter? The difference between mapped and normal fields is the structured field and text representations don't directly align, so they need to be treated a bit special Like I said, this is handwaivy and definitely incomplete. But it feels to me like something in this might be more sound. It makes it more obvious we have to be sure what representation we're using at each layer and there's no risk of one side thinking something is a generic header and another thinking it's an alternative representation of a special header.\nThis seems like a plausible risk to me. I'm not as familiar with non-browser use cases, but I can certainly imagine confusion that would result from clients being able to create mismatches between and . I can imagine ways to deal with this in browsers (in addition to NAME suggestions, we could add as a , just as we've done for and ). 
It's unclear how helpful that would be for non-browser usecases.\nMain nuisance with prefix is that, on the receiver side, you don't know a priori if the client is an updated browser, which knows about the prefix, or a preexisting browser which does not. I suppose you could do UA sniffing, but let's not design a system that requires UA sniffing to work if we can help it! :-)\nThat's fair. would be safe without a change to Fetch. Still, given my expectations about the current deployment of mapped fields, I think we could reasonably get ahead of them in the browser world. Not at all sure about the rest of the ecosystem...\nI like the direction that if you've already negotiated support, you could just replace the legacy headers with their spiffy (SFfy?) new equivalents using the same names, and if you haven't negotiated support then you have no business sending these variants and expecting them to work. Of course, that means this draft probably can't get away with \"We don't define a way to negotiate this, but you do that.\"\nI think we'd avoid needing \"OUGHT TO\" in that model. That model is more explicit that these are two incompatible representations of these fields. The negotiation requirements become more straightforward, I think: E.g. draft-nottingham-binary-structured-headers meets this requirement because you set a flag on the message, and gates usage on a setting. A hypothetical SF-based HTTP API would also meet this requirement because it's unambiguous that takes some in-memory structured field representation.\nHey David, Yes, I've been somewhat concerned about the potential issues in this area for a while. Minting new field names for the alternative forms was the initial approach, but I was half expecting someone to raise this issue :) One thing: No; the assumption is that a recipient sees SF-Cookie, they'll ignore it unless they've positively opted into it meaning something. Perhaps, as you point out, that should be more explicit -- e.g., say that these fields MUST NOT be processed unless the sender does [magic]. Making sure that evidence of [magic] is propagated properly may be tricky, though. I could also see removing the new field names. That should work for many contexts, although I suspect it might make handling the new forms difficult in some. Prefixing with doesn't seem very attractive to me.\nAh interesting. So I guess the middle option, \"require explicit negotiation on the receiver\" was actually the intended interpretation. That's not how I read the draft. It says \"implementations are prohibited from generating such fields unless they have negotiated support for them with their peer\" without saying anything about the recipient's responsibilities. So it sounds like, at minimum, we should update the draft. Getting that right in the presence of a chain of intermediaries is quite subtle however. Suppose we go A -> B -> C and both B and C support mapped fields, but A does not. B must be careful not to forward along a mapped field from A. E.g. it might decline to advertise mapped field support to C because A doesn't support it. Or it might reject all mapped field names from A, on grounds that it must have been invalid. Moreover, \"intermedaries\" has to be intepreted very loosely. Every processing step in the entire HTTP pipeline needs to carefully manage these alternate names. This seems a disaster waiting to happen. I think I'm now convinced this is the only plausible option. Regarding \"many contexts\" and \"some\", do you have use cases in mind here? 
I saw draft-nottingham-binary-structured-headers, but I gather there are others? Something to keep in mind is that this draft intentionally punts figuring out the negotiation design to the future anyway. That means this is also the more flexible option. If we define , etc., now, we have permanently committed ourselves to this security problem. If we decline to define now, we can still choose. We can always define the parallel field names later, should something that needs them come up. (Of course, I'd raise the security problem again, but it'll be easier to reason about with a concrete use case! )\nI guess this raises another question: how do we handle mapped field values that cannot be represented in the underlying text representation? Given how many intermediaries and intermediary-like constructs are involved in HTTP pipelines, and given how every HTTP system today uses the current syntax, I suspect mapped values that cannot be represented in the original syntax have to be treated as an error. Which means I think we need to go through the exercise of writing SF-to-text converters for each mapped field and then narrowing the allowed inputs until that converter is infallible.\nIt sounds like removing the field names is the best path forward. Yes, for those, the application using retrofit needs a fallback, just as it does for compatible fields. If that isn't obvious from the document, it should be clarified -- perhaps in a new 'Using Retrofit Fields' section.\nI could very well be incorrect, but it sounds we want to have alternatives per each hop. Does that mean that these mappings would be best represented by ther per-hop pseudo headers that we have in H2 and H3? Not that I feel that doing so would be the best option, though.\nOoh yeah, per-hop is a good way to think about this! For pseudoheader vs. something else, I suspect it'll depend on use? draft-nottingham-binary-structured-headers already has a flag, so it probably could just use the original names, though pseudoheader names would work too I suppose. The existing names has the minor benefit of being able to reuse the existing HPACK static table. Some sort of SF-based HTTP API like probably (?) doesn't want pseudoheaders in the API and can just use the original names. But I haven't put much thought into that. If one wanted to send them in their textual form in existing frames, we probably need pseudoheaders. Though I don't know if we really get much value out of defining that, over just using the existing syntax. shrug We can also punt this problem to documents that want to use this, if we're not sure yet. :-)\nIf you want to use an invalid field name, there are a lot more options than putting a \":\" in the first byte. You need to negotiate that usage, but then you can do anything at all, including using that to signal lots of stuff. (If you want to avoid the extra byte, you can pack data into the high bit of each byte...)\nI feel like punting is the right solution, unless we can come up with a convention that addresses the identified issues in all cases.\nNot acting is a decision of a sort. But I'm not sure what that would concretely mean for retrofit? No prefix? 
Or just that we shouldn't worry yet about the effect this has on smuggling through WAFs and whatnot?\nI think it means: No new header names; applications that use mapped fields need to define their own way to distinguish them A new section about these considerations"} +{"_id":"q-en-http-extensions-617fd857523f131e9c9f223418e62322e996d2dbf11d3aa2167312e65c089843","text":"message signatures are just one part of an overall system, and this adds language to make that expectation more explicit in the introductory sections."} +{"_id":"q-en-http-extensions-2b12effa53ab462432981e576519121a5184d5bf15092600d70f9114f7b1d558","text":"Point out that many frameworks smush together form parameters and query parameters into one bucket, and that can cause unexpected shadowing. Some examples changed to avoid ambiguity.\nThis is great."} +{"_id":"q-en-http-extensions-98332ad680e612826215095a1d59b42c4575c071477d1fa536867c31ad4b2c77","text":"Proposed fix in URL - note that this is an incompatible change.\nFWFW, if the argument is that not escaping the escape character is actually ok for this purpose...: in that case the prose should actually explain this.\nI actually do think it's ok to not escape the escape character. If you've got non-ASCII header values you're already kinda out in the weeds in terms of processing the header bytes. Ultimately the signature base really only needs to have bytes, but its input is constrained to ASCII because (1) all derived components are already constrained this way (2) component names and arguments are already constrained and (3) header component values (here) are SUPPOSED to be constrained to ASCII, but might not be in some weird corners as you've pointed out. All that the newly-introduced percent encoding is doing is putting bumpers around the weird corner that isn't supposed to happen but might in some cases. I'd honestly be happier with putting advice to use the binary sequence wrapping ( flag) on problematic non-ASCII header values instead of doing percent-encoding and requiring universal escape of the escape character when the encoding isn't going to hit the VAST majority of use cases. That would reverse the previously-applied fix but I'm fine that.\nThat works for me. If we combine that with the \"MUST produce an error for non-ASCII\" in the signature base generation (and people actually do that check), we should be safe.\nOK, I'll work on that, unless you want to take a swing at it.. Since we've already got the cut-out for fields that don't behave nicely as lists (specifically Set-Cookie), we've got a good space to put this advice. I think an error on generation makes perfect sense here too. I'll back out the percent-encoding for non-ASCII headers at the same time (but note it'll stay for query parameters, where I do think it makes sense to use)."} +{"_id":"q-en-http-extensions-b41737bfe969c4aa89f9ff0ec9628f9dc1544c1cc322322b6727e731d7546ad1","text":"Encodes query parameter names to match values, to avoid non-ascii or otherwise problematic values.\nIn 2.2.8: What if the query parameter contains non-ASCII characters? I think this needs to say that the name parameter needs to contain the raw (percent-encoded) name.\nGood catch, I believe you're correct."} +{"_id":"q-en-http-extensions-1ddafe3e29495ffd750e372c67d1159e8adf374b09a8023bc3258ca133d9a6b6","text":"LGTM\nOn the list, Rob Sayre asked: I agree it seems we can improve the text here a little. 
Maybe NAME or NAME can elaborate as to why a digest trailer is not suitable in this case?\nThe reason is that in order to produce a MICE hash, you need to process the content in reverse order. Therefore, there is no reason not to put the hash in the header. The advantage you get in return is that you can process the response in natural order.\nThanks martin. We should just say it that way, because it is clearer to the reader. I'll follow up with a PR."} +{"_id":"q-en-http-extensions-0cf4b6bbbf14d98f14648f7867b3b1bd2579ab60bfc72026145f1fcfdb0addd0","text":"The \"\" set includes \".\", but we actually need to escape \".\" to avoid attacks.\nHm, my impression is that this attack is about making a period within a label look like a label separator. By the time that you have a string for a name, it would already need this kind of escaping done. Put another way, the \".\" characters we have in the string are indeed label separators already.\nThe draft doesn't specify how to construct the DNS name string (and in general there is not a great specification for this). The most common convention (noted in RFC 1035 Section 5.1) is to precede \".\" inside a label with a backslash. This draft would then percent-escape the backslash, but not the \".\", so there would indeed be a \".\" character that is not a label separator in the final header field value. Whatever escaping convention you choose, this is sufficiently non-obvious that I think a very clear specification and some examples are probably in order. (This whole rigmarole is normally not a problem because URIs are restricted to contain only \"hostnames\", which cannot have weird characters in the labels.)\nNAME can you look at URL\nI realize that this is now resolved, but isn't it also possible that we could say \"this field cannot be used to express names that include a literal period (\".\") in labels\" ?\nNAME yeah that’s an option, although it could theoretically incentivize using dots in labels as a way to hide the CNAME from being reported\nSure. But you can add another attribute called \"this-site-is-abusing-this-mechanism\" for use when that happens."} +{"_id":"q-en-http-extensions-32717014d71a13fb5390986df058abcc4ea941d037d99f26e4b2f3f844e60a7d","text":"NAME please take a look!\nWithout going through a proxy, clients implement CNAME cloaking mitigations on the basis of DNS information they see during resolution, which is totally reasonable. If resolution is subverted (or whatever) and the CNAME information is modified, these client checks might not work. When going through a proxy, the proxy is the one doing DNS resolution, so it seems like any sort of funny business that one might do to DNS answers for the proxy might affect the client's ability to implement cloaking mitigations correctly. The security considerations might benefit from a bit of expansion to cover this additional aspect of the threat model."} +{"_id":"q-en-http-extensions-142061c24780c9fa4066335df59ce69ed17fce8336d351ae9094430250408fb7","text":"We've been circling an answer to this for a while. The entire gamut of possible solutions is broad, and we haven't had much luck in reaching clarity here. One the one end of the scale, it should be perfectly OK to send a request for an origin over an authenticated TLS connection. But we still have the (entirely legitimate) concern there is that some servers might get themselves confused by this. The other end of the scale is the bells and whistles JSON stuff. 
NAME seemed OK with this based on his implementation experience, but it is a little complicated. We found too many corner cases for me to be happy that it implementations wouldn't end up busted. And now that we insist on authentication for the server, many of the features didn't make sense. We've also considered an HTTP\/2 setting. That's appealing, but it does limit the applicability a little. This change aims more to the conservative end of that scale. It keeps the resource, but simplifies it, reducing it to a flat list of origins. The client only needs to acquire this from the authenticated server. This doesn't defend against Alt-Svc attacks mounted by attackers with the ability to both send header fields and run an authenticated server, but we're in a very strange place if this is the sort of capabilities we ascribe to our attackers in our threat models. I've explicitly added here. I believe that's reflective of consensus, but that part is easy to revert. The text about client certificates is now clearer."} +{"_id":"q-en-http-extensions-cfc70d86a43b2eda34e3a76fdec8ab1d1a89a11048e9c94b1b35f4a5df5aedf3","text":"This PR tracks changes under discussion with the RFC editor. (Note that I haven't renamed the file to the RFC number, because that hasn't been done in other drafts in this repo.) When AUTH48 completes, I will add the commit moving this draft to the archive and merge it."} +{"_id":"q-en-http-extensions-106bf4e2f305f2bb1fe511b6cdd3ede81e730a15bdc9a9e22dbba2b3eb65506d","text":"think its a case of s\/an\/and\nThe abstract says is it really , is it some other phrasing a little more technically correct?\nLGTM thanks"} +{"_id":"q-en-http-extensions-635ad063c4535719d2f6d64943caa24c1ee39c07ec267c9de7c5c6b354128b3d","text":"ballot position feedback\n[x] §6.1 Recommend adding cautionary language about the capabilities of an adversary like those stated in Peter’s SECDIR review [x] Appendix A, typo[x] Section 6.5, e.g."} +{"_id":"q-en-http-extensions-89d6b78f5b7b3c7cb791588c0027b10216e396997cefc93766067352dec857be","text":"SGTM. Feel free to merge if you are sure we won't need this. Kind regards, R\nJohn Scudder raised during IESG review: algorithms that use a reserved token value that cannot be expressed in Structured Fields\". This is a well-formed sentence but I have no idea what it means. I made a desultory attempt to suss it out by searching the document for \"token\" and this was the only occurrence. If people who will actually be making use of the registry can be expected to make sense of it, then feel free to disregard my comment, of course. This reminded me at some point, I'd sketched out a design for the \"Hash Algorithms for HTTP Digest Fields\" registry that worked differently with what we ended up with now. In that design, it would have been possible to register an algorithm in the table under some friendly name and then separately register some Structured-Fields identifier that it related to. This would have made it possible to register an algorithm but reserve it from being used, in order to (for example) try and maintain some parity with old digest. Needless to say, the original sketch design had flaws and we didn't go that way. Instead, the registry requires entries to be identified using a Structured Fields key, which makes the idea of a reserved status due to a key that can't be used rather strange. 
TL;DR lets remove the \"reserved\" status because it isn't used by anything already and doesn't have much logic behind how it would be used in future.\nWe need to evaluate and possibly pare down \/ adjust the flags in the frame, based upon real-world use cases. See Kazuho's evaluation: URL\nI agree the need for evaluation. OTOH, I do not think we should pare down \/ adjust flags, since real-world use cases could change as time evolves. The way it is defined now represents the cache state as-is and I believe that it would be most resilient in the long term.\nOK."} +{"_id":"q-en-http-extensions-6cf7e7260d26d2baedbfcc7d22eb11ecc119f5d2fce6f8b11e3c544a56d0028b","text":"Roman Danyliw made some comments during IESG review related to receiver validation behaviour. RFC 3230 was quite relaxed about stating receiver requirements and we've followed suit by carrying over that normative langauge almost verbatim for Content-Digest and Repr-Digest. The intention has always been that applications are responsible for deciding what integrity checks are important and what validation failure means to them. Roman suggests that we make this intention more concrete by adding explicit statements. This seems like a good idea and I'll prepare a PR to this effect. Its a restatement of intent but it is new normative language, so I'll cross post to the HTTP WG list for any additional input."} +{"_id":"q-en-http-extensions-9390f935b4ae2d45e6a7a777ad52c326a5c8e722de41f9a92f1c29dcca411be4","text":"Apologies in advance.\nNo apology needed. I stuttered over this same sentence yesterday when fixing an unrelated nit. I almost changed it then but didn't want to mix concerns. Given that somebody else has raised a comment and provided an alternative I agree with, leads me to believe its worth fixing. Thanks.\nMuch cleaner, +1"} +{"_id":"q-en-http-extensions-54528fd281d5975715a33d0a9fb00a14928468be3dcb193a93be245ce8f91926","text":"Update example domain names to avoid \"example-URL\" and too-long lines\nSome examples use ; IDNITS dings you for not using . Should probably use something like .\nThe examples in 2.1 have lines that are too long. Because it's caused by , which is a String, I think your options are: Shorten the strings Use RFC 8792 line wrapping"} +{"_id":"q-en-http-extensions-dcaaa9e18bf6d3a62f6e13ac8952572f2145776773af81750fcbad8be782debb","text":"From Ilari Liusvaara on the : From NAME on the :\nMaybe also consider for identification. That's not perfect, because of the \"none\" thing, but it might be a better fit. The TLS schemes are generally OK provided that you only use the recommended ones (of which there aren't many).\nI don't love JWA since , but using TLS SignatureScheme with a requirement to only use the recommended ones sounds good to me.\nNAME Doesn't it define the algorithm and then define the subkey types and in ?\nNAME that could be made to work though it's quite clunky since it requires a separate string parameter for \"alg\". Overall I think the is a much better fit for signatures here. We already tightly integrate with TLS but not JWA. All that said, the closest option for hashes is the and that's been orphaned by RFC 8447. At this point I guess we should bite the bullet and define a new registry. 
Perhaps we include as current values: hmac-sha256 hmac-sha384 hmac-sha512 signature-rsapkcs1sha256 signature-rsapkcs1sha384 signature-rsapkcs1sha512 signature-ecdsasecp256r1sha256 signature-ecdsasecp384r1sha384 signature-ecdsasecp521r1sha512 signature-rsapssrsaesha256 signature-rsapssrsaesha384 signature-rsapssrsaesha512 signature-rsapsspsssha256 signature-rsapsspsssha384 signature-rsapsspsssha512 signature-ed25519 signature-ed448 Of course this requires to paint the bikeshed one of two colors: HMAC and Signatures in the same registry, using a single new auth scheme Separate registries, using separate auth schemes Thoughts? At this point I suspect this issue will need WG discussion at IETF 116.\nI had to implement rsapsspss_shaX algorithms to produce test vectors for DCs, and unless there's a really compelling reason to include them, I'd really rather not. They're just such a pain to use, and the support for them is very poor (thus me needing to implement them).\nFWIW, adding an entry in the registry doesn't mean it'll be mandatory to implement so there's not much of a cost. But I don't feel strongly about this one so I'm happy to not include it if it's the WG's preference.\nJust saying here's a registry that you might could co-opt: URL Defined by URL\nThis is an existing registry of hash algorithms with string identifiers: URL Defined by URL\nDiscussed at IETF 116, sense of the room appeared to be that removing the HMAC feature entirely and switching to the TLS SignatureScheme registry for signatures was the simplest solution here.\nOK wrote up to address this\nIs there a technical reason why the header was not chosen for this draft? That can be sent without a corresponding challenge (in an unprompted way), as we note in the . It seems like that would work in this context, too.\nIs this rather about using the Authorization header?\nYep!\nNAME can you elaborate on \"as we note in the \"? I skimmed through that draft and it seems to clearly require a TokenChallenge before sending a Token.\nI thought we had text that described it, but I can't find it. In any case, I think one can send the Authorization header unsolicited, so I think this draft ought to do that instead of rolling a new auth header.\nFair enough, let's discuss this at 116.\nFWIW, I agree that unprompted \"Authorization\" is permissible in this context, but I think \"Unprompted-Authorization\" could be useful for clarity when reading logs, etc.\nDiscussed at IETF 116, sense of the room appeared to be that switching to Authorization made sense.\nOK wrote up to address this"} +{"_id":"q-en-http-extensions-7edbb96d955e023828969ef62fea78dfc51921a8d9ecd1d1cf736a6a8da841ca","text":"NAME that should already be covered by the last paragraph in this section. In particular the format is the Port is quite explicit there. Are you saying we should mention that strings are in ASCII?\nNo, sorry, I'm talking about the host name in particular. There was a long back and forth with Paul about the format of this field (is it the thing that appears in the HTTP request authority header, the thing that appears in the TLS SNI, etc), and we ended up being explicit. Just trying to save you some trouble down the road.\nOh, thanks. I agree we should be explicit. Could you point me to the specific text in Privacy Pass for this please?\nFound the suggested Privacy Pass text and added to this PR.\nI would recommend making the TLS exporter labels specific to the type of algorithm in use. 
For example, rather than use \"EXPORTER-HTTP-Unprompted-Authentication-Signature\" for signature-based authenticators, I might use \"EXPORTER-HTTP-Unprompted-Authentication-Signature-Ed25519\" for authenticators using Ed25519.\nThis would seem to have the drawback of making it complicated for intermediates to forward the authentication nonce. The alternative being for intermediary to implement the validation itself, which has its own drawbacks. Thinking why would one want to do something like this, I came up with vague ideas about possible cross-key tracking vectors. However, making labels algorithm-specific would not help if both keys happen to have the same type. One idea would be to mix the key id as context input, which would make the nonces be pretty much per-key in a way that is feasible for the first intermediary to calculate with no explicit algorithm support nor configuration.\nAn alternative proposal would be to just use a single context string for exporting the nonce but then somehow bind additional information into the signature computation at a higher layer. This would address the issue without complicating the intermediary story, I think.\nAny case that involves intermediaries forwarding tokens will necessarily involve some amount of trust in the intermediary. The intermediary can provide the exporter output when it forwards the token, which would allow the signature to be verified, but the recipient is going to have to trust that this isn't copied from some other request. I definitely think that the context string is the right place to include origin, key identifier, algorithm choice, and maybe even some context from the request (URL). That latter depends on how heavily people intend to lean on the intermediation case.\nDiscussed at IETF 116, sense of the room appeared to be that adding the key identifier and algorithm to the context made sense. We can skip URL and instead add realm, see\nOK wrote up to address this\nsignatures are computed in a way that binds them to the origin which accepts the authenticator. While it's true that the nonce is a per-session concept, and the session is bound to a specific origin (by virtue of the TLS SNI), it seems possible for bugs to be introduced in settings where a TLS-terminating server supports more than one origin. Maybe the server accidentally sends an authenticator computed in a session for one origin to another, or something. I would follow in WebAuthn's footsteps here and sign something that's specific to the origin.\nHTTPS sessions are not bound to a specific origin. There is no requirement for client to only use one origin on connection, and even if there was, TLS SNI is not fine-grained enough, as e.g. https:\/\/foo.example and https:\/\/foo.example:8443 are different origins, but have the same TLS SNI. Extending the idea of throwing some stuff into the context from httpwg\/http-extensions, one could throw the origin (host\/:authority) in as well (the contexts can be up to 65535 bytes, so there is plenty of space). This would bind the signatures to origin, would still work if host gets remapped somewhere, and could mitigate some tracking vectors. And it would still retain the no explicit configuration \/ no algorithm support property.\nOh, right, duh. We have connection coalescing! 
Throwing in the origin as you suggest seems like it work, and as in httpwg\/http-extensions perhaps doing it above nonce generation is best.\nDiscussed at IETF 116, it seemed like the proposal most likely to succeed was to add the to the context\nRegarding the question on , asking if origin and also URL should be part of the context. says, quote \"unless specifically allowed by the authentication scheme, a single protection space cannot extend outside the scope of its server.\" So as NAME points out, we should include the origin. discusses the danger of using the same context for different URLs belonging to the same origin, pointing out that it is an problem that already exists and explaining the mitigations (such as not forwarding the Authorization header to the application running on top of HTTP). Assuming that we would switch to using the Authorization header, I think we can rely on that. To paraphrase, there is no need to include URL in the context. In addition, we can1 include \"realm\" in the Context, so that there can be multiple \"protection spaces\" inside one origin. In the Unprompted Authentication scheme, there is no way for a server to signal the realm to be used. Therefore, as NAME pointed out, there has to be a default value that would be defined by the specification. Clients and servers can agree out-of-band to use a different value. 1: During the meeting, I asked as a pure question how Realm is to be used. In the follow-on discussion at least assumed that \"realm\" has to be part of the Unprompted Authentication as it is to use the Authorization header. However, reading RFC 9110 I realize that not all authentication schemes are required to support realms (i.e., having multiple protection spaces within an origin). Hence the \"can.\"\nOK wrote up to address this\nThis is a good change, but to avoid future nit picking I might clarify what are the types of each field in the context. For example, is the port an integer in network byte order? What is the format of the host string? We just went through a whole kerfuffle in Privacy Pass on this point, and my advice would be to address it early."} +{"_id":"q-en-http-extensions-906f476dfba4b19294bd42ad4b75e04f9c88430254657f5fb7c9f1f056a734e0","text":"In the current draft, the header is composed as: The server computes the nonce from the TLS key material exporter and verifies the signature. This doesn't actually guarantee freshness, because its possible for parties to construct signatures which are valid for any message\/nonce. This means that a party which does not have access to the TLS Export material can still inject a valid Auth header. I don't think that's a major problem, but it is a bit unintuitive and its hard to predict how drafts like this will eventually get deployed. A simple fix would be in to add an additional parameter, say which is or similar and have servers check its been correctly computed. This ensures that only parties that can access TLS key material can inject valid headers. Behind the scenes, this attack \/ fix works because hash functions have to be collision resistant for all inputs, whereas signatures don't have to satisfy any properties if the associated public key was generated maliciously. has a longer discussion.\nThanks! This sounds like an easy fix. 
Instead of a hash, we could also generate more bytes from the key exporter and use some for the nonce and some for the parameter\nWrote up to fix this"} +{"_id":"q-en-http-extensions-29b88e362b123986cb3ccce7284189af0f8c89e84561a86065c8db65df4f533f","text":"From Ilari Liusvaara on the :\nI think that I might disagree with Ilari here for this application. Key separation is probably a better model to employ here, though as soon as someone even hints that they might want to share client certificate keys and these keys then this sort of protection probably makes sense.\nBriefly mentioned this issue at IETF 116, but did not have time for questions so we asked folks who care to comment on the issue.\nThinking about this some more, prepending a fixed string to the nonce before signing it sounds like it would be pretty cheap and would remove a class of issues - I'm inclined to do that. I'll write up a PR.\nOK wrote up to address this"} +{"_id":"q-en-http-extensions-9c2cf02fd65948ae1effd5dc0bef9c79c3ab74163fcd657627dce890225a89be","text":"During the SEC AD review, a DISCUSS was raised about the current status values. This change modifies the values to better align with Signatures. The guidance for registration and\/or usage of an algorithm based on the status is not affected."} +{"_id":"q-en-http-extensions-863642a0a13c13b1d330bd8330241e3dc70998687a8ebfb2de5d646acd91a26c","text":"Looks good except for minor details and potentially a bug (encoding vs decoding). I'd also like to see the spec enforce a consistent form of percent encoding (lower vs upper)\nI am sceptical wrt \"use the URI percent encoder\". The reason being: percent encoding rules depend on what part of the URI needs encoding (and many APIs do not get that) calling the API for a single character multiple times usually requires converting a char to a string, and that might not be good for perf if we really care about that\nI've removed the reliance on URI for percent encoding; was clean as NAME said it would be. Regarding case -- NAME RFC4648 Section 8 is explicitly case-insensitive. Given that and the discussion in the group yesterday, I've left this case-insensitive. Say if you still disagree.\nNAME , I was suggesting an inline definition. The fact that uppercase is base16 would just be an observation (and not a particularly useful one). Lowercase tends to be the default mode in several places.\nNAME it wasn't at all clear that's what you were suggesting. I'll make an attempt.\nLooking at it, I think I prefer referring to 4648. This this addresses the original issue, I'll merge; if someone wants to propose an alternate way to specify this, please feel free to open a PR.\nAs currently defined, the two serialized DisplayStrings %\"foo\\\"bla%22bar\" and %\"foo%22bla\\\"bar\" are semantically identical. We should either use %-escapes or \\-escapes, mixing them this way is asking for smuggling-attacks.\nI think probably -escapes; will work up a PR.\nI don't believe that relying on a different RFC for the hex encoding makes this much better (so I sort of disagree with Martin's proposal). Just define the escaping exactly inline, and then we can choose upper\/lower."} +{"_id":"q-en-http-extensions-8cd02bb99d657e6f01559410f4356b5af1cf359d0f5725db45f9a467c9da09b8","text":"Closes URL I am not super sure about the wording here, but this is a start. 
The sentences seem quite long and maybe the rules could also be expressed in fewer words.\nI wonder if it is also possible to expose this final upload size in the response to Offset Retrieving Procedures. I don't think we can set in the HEAD responses because that would indicate that the server already has this amount of bytes, which is not correct.\nIf the client acknowledged its final upload size at any point ( + ), it should still know it in the future, so I don't think we need a way for the server to echo it in the offset retrieving response. If the server disagrees with the client, then someone's data representation has changed, and we shouldn't continue the upload anyway.\nLooks good. Wondering if these could be reduced to SHOULDs though. It wouldn't really affect interop if the server didn't implement this.\nFair point, let's leave such functionality out of the draft. While it is true that clients usually won't really notice if the server does or does not support this, I think we can be stricter here and require this additional safety catch here. It will help to create a more standard landscape of resumable upload servers. Unless there is something against requiring this, we can keep it a MUST IMO.\nI just updated this PR to use the Upload-Complete header as agreed on in\nThe arguments for SHOULD are: This check requires additional code and complexity. MUSTs are situations that a minimal implementation has to handle anyway and we are just strict about how to handle them. For people using chunked uploads, this check is only useful if the last chunk is interrupted which is sort of an edge case. Maybe we can generalize the check if we adopt ranged PATCH. I don't have a strong opinion on this, so either way is OK with me.\nWhen the server receives a series of appends: If the offset+length is inconsistent, the server should reject the upload.\nTo clarify; Is what you are saying here that the provided during the upload creation procedure should act as the total size of the file, similar to Upload-Length in tus1x?\nI am not sure if we want this. I ran into this exact problem because your NIOResumableUpload implementation uses this logic (see URL). We currently have the header to indicate the completeness of the upload. Based on this information, the server can determine if an upload is complete or not without relying on . Adding to the upload logic would mean that we have two indicators for upload completeness, which could potentially disagree with each other. Personally, I think we should not use to determine the upload's completeness and only determine it based on whether the client set in a successful request. For each Upload Creation Procedure or Upload Appending Procedure, the should be set to the number of bytes that the client wants to send in this individual procedure. For example, if you want to transmit 100 bytes in the first POST request, set (or not if you use ). If you want to transmit 200 bytes in the following PATCH request, set . If you want to finish the upload then, set and the upload is considered complete with its 300 bytes of content.\nThe issue in NIOResumableUpload was a bug, not how this is supposed to work. Offset + Content-Length = final upload size when the upload is complete, and if the client has told the server about the final size, it shouldn't be allowed to change its mind.\nEchoing what Guoye said, should not determine the upload's completeness, but if an request has a , then the server can expect and enforce the final upload size based on that . 
This prevents a client from appending more bytes to their upload than the server anticipated. Ex. Client starts an upload , Server expects a final upload size of 100 bytes Upload is interrupted, offset-retrieving, server sends Client now tries to append an extra ~10GB Client sends , , Server can reject this request\nI see, thanks for the explanation. That makes sense. Basically, in conjunction with let the server know the final upload size, which should not be changed\nHow does that work if the last append has in conjunction with . The Content-Length needs to match the number of bytes in the append message.\nFinal size = upload-offset + content-length if the upload is complete\nSo the first ever request with Upload-Incomplete: ?0 and content-length sets the expected final size. Every other request must have a content-length less than that value?\nYeah, once the final length is determined, it cannot be changed by subsequent requests.\nOK I think that makes sense but it's not obvious from the spec. We should try and make it more accessible with some new or different text.\nI started on the wording in"} +{"_id":"q-en-http-extensions-276bea40807bf985afa2b57276d5a4215d913f6025465393eaadeecf9f119a4f","text":"Based on discussions with NAME NAME and others, there are potential key confusion attacks when the server is checking the signature using the wrong public key. I think the simplest solution is to add the public key to the TLS key exporter. That should solve the issue without increasing the amount of bytes sent on the wire\nIncluding the public key in the exporter context makes it significantly harder to implement the server side of this. Including the hash of the public key in the signature appears to work also. I have a sketch model in Tamarin that doesn't find any attacks when the hash of the public key is in the signature, which gives me some confidence that it's doable, although NAME wasn't convinced. We should discuss this in the room at 117.\nI agree with NAME The implementation complexity that comes from putting the public key in the exporter context is real, at least for some edge networks. I think we should pursue alternatives that don't weaken the security properties, even if they make the registration step a bit more explicit.\nIf this design makes implementation hard then we should totally change it. Can I ask you to elaborate on what causes this to be hard on the server side?\nThe basic problem in our particular case is that the component that terminates TLS and computes exported values is upstream of the component that has the public key. So when a request comes in with the header to the TLS terminator, the public key needs to get relayed back for computation. If, instead, the public key was signed, then the TLS terminator could simply forward the computed exporter without any sort of round trip. (I punched this in on my phone, but can draw pictures on a laptop if the above is not clear.) We don't necessarily need to cater to one implementation, but I would be surprised if this were not a common arrangement in practice.\nThanks, that makes sense. To explore the problem space: if we were to include the public key in both the context and a header parameter, would that work for your implementation? (That way the TLS terminator can call the exporter without a round trip.)\nYep, that would address the problem too.\nSIGMA seems like it might be worth considering as an alternative. 
That is, sign the nonce, but then include a MAC that covers all of the relevant parameters.\nSomething like SIGMA should also work, but I'd like to minimize the number of primitives required to implement this. Currently, modifying an HTTPS stack to support this draft only requires a signature implementation and a TLS key exporter API. Using a MAC would require a MAC primitive and opens the question \"which MAC should we use?\"\nThat's an easy question: you are using HKDF to get the values, so HMAC is a logical choice. The only reason I suggested that is that I'm increasingly nervous about these bespoke constructions. We understand what sign and mac provides.\nI think we need to first identify what property it is we actually want, and then see what is the simplest way to get there. NAME what is the property we want?\nNAME had a long chat about this today. We didn't come to any solid conclusions - the authentication property here is really subtle and the attacker model not yet clear. This is going to need some really careful work and detailed formal analysis to get right. In terms of this discussion, my personal take would be: I don't think a mac & sign approach is going to give you anything strictly better or different than an exporter + signature approach. In both cases its just a question of what you put in the exporter \/ mac. Having the public key conveyed in the header and bound explicitly in the exporter (rather than via the keyid) seems like a conservative and reasonable approach.\nThanks for thinking through this! I've written up to add the public key to both the context and the header"} +{"_id":"q-en-http-extensions-03f13a2b45b5ed34ae3cbbf95b62adc6ab21b47abaaf0420af3f5de9d2932ba7","text":"Allow the original name of the next hop to be the start of the list, and add an example.\nNAME NAME please review!\nLGTM\nI understand that the current draft is specifically targeting forward proxies, but as currently described, the is missing some information when used by reverse proxies, and it's not self-contained anymore (i.e. the requested\/configured next hop is not part of the field), making debugging much harder. I find the fact that is not included here a bit confusing. While operators or clients connecting directly to might know that it's a forward proxy and could infer that was the requested hostname, the clients that are a few hops away from it, won't know this, and might be surprised how the and other aliases got in there. Furthermore, in a similar scenario with a client connecting to the reverse proxy and requesting , which is configured to use as the backend, that response is missing the most critical piece of information, and instead of helping, it adds more confusion. As such, I'd suggest that the requested\/configured hostname should be included as the first entry in , especially when points to the resolved IP address. cc NAME NAME\nThis is a good point, especially for reverse proxies. The name of the next hop could be in the next-hop parameter itself, but then there is a choice between the address and the name there. For forward proxies, the name is known and the address is interesting. For reverse proxies, i imagine the name of the next hop is more useful than the address, but I can see why both could be wanted. In such case, perhaps we can allow the aliases list to include the next hop name if it differs from the authority requested by the client.\nYes, that's unfortunate. I'm not sure how we've missed this in RFC 9209. 
Perhaps we could allow both and use for the configured next hop name, and for the resolved IP address... which actually matches what I'm suggesting we do here by including the requested\/configured name as the first entry in . But that's only true on the first hop, right? The authority could be changed along the way by intermediaries, and then the original client wouldn't know what started the resolution which resulted in the received . How would the original client know if that happened in a long chain of proxies? The authority requested by the client connecting to is not included in the header, and therefore it's not known to the original client. It's impossible to tell if that entry was generated by a proxy that was configured to connect to , which also happens to be the authority it received and therefore it's not included in , or if it was configured to connect to . Basically, I'm trying to avoid adding ambiguity and guesswork to the debugging process that uses this header.\nUpon discussing in person: Good to add to an example The text should make it clear that the original name is allowed to be at the start of the list (but doesn't strictly need to be included)"} +{"_id":"q-en-http-extensions-0fb781538ef768a4307420044df61d4e31359bf4c3540dd580d5fd3cc7790f78","text":"Within TLS 1.3, the only options for 0-RTT are \"accept\" or \"reject\". \"accept but delay until after the handshake\" is not visible within the TLS 1.3 protocol. The current draft language doesn't align nicely with this, and should be adjusted. Credit: NAME"} +{"_id":"q-en-http-extensions-9b21f526036d6620773741792c76bcf39769a0c2d699bb3dfe18cd9d601f9531","text":"Closes URL\nNAME brought up that during the WG meeting that reusing Upload-Complete on requests and responses can be confusing. They don’t mean exactly the same thing: on the server means it has received the full upload, while on the client means the body it will send is a complete upload.\nWe discussed this internally and prepared a response: Upload-Complete is used on both requests and responses because it is symmetrical. Upon a typical successful transaction, an request results in an response, and an request results in an response. A server implementation can simply echo the field. However, they are also meanings that we can take advantage of. The presence on the request means it’s a resumable request, and the presence on the response is also a useful signal. Since we are upgrading regular uploads, there are cases where 4xx\/5xx responses are intentional and we can't just eat it by attempting to resume. We would be much more confident that the response came from the real server rather than a middlebox which shouldn’t randomly add . The server can also use to terminate a request early if it does not care about the body, similar to the early response mechanism in HTTP\/3.\nI am having second thoughts about this as well. While debugging a server implementation, the naming of the headers also confused me for a moment (even though I wrote parts of the draft). The reason is that in a response (or how it is currently called) refers to the current state of the upload, while in a request refers to the potential state in the future, if the request completed successfully. I don't think that we have a general issue with the concept of and have to change much. But maybe a simple name change can help here already to alleviate the confusion. We could use for the response header to indicate that it represents the current state. 
And for request headers to indicate that process the request successfully will complete the upload. We loose the symmetry of the names in request and response, but maybe this makes their relation even clearer.\nI think for both client and server works because it represents their own perspectives of the upload. For the client, sending means \"the upload is complete on my end, I've included all the data to complete it.\" The client isn't making assumptions about whether the server has received all the data; it's only expressing to the server that from its perspective, the upload data supplied is now whole. For the server, responding acknowledges the client's view of the complete upload and says, \"Yes, the upload is complete on my end, too.\" Same goes for .\nAt IETF 117, we agreed on using for requests and responses: URL I will open a PR for this and then we can close this issue."} +{"_id":"q-en-http-extensions-40e335c0103754ffca7e25660d021964f9859dd92e868ed2a73c216449023f61","text":"Can I suggest we use 'converted' instead of 'normalized' ?\nWFM, will update.\nIt's annoying to test for otherwise. Still allow both cases in parse.\nPostel's law (\"Be liberal in what you accept, and conservative in what you send.\") isn't as popular these days as it once was. Not sure we need to accept upper case % anymore.\nI am sure that we don’t. (Though I have thoughts on the use of the Robustness Principle to guide this sort of decision. Generally, it is better to look at what the case needs. Here, the values are created by code, so permissiveness is not a valuable feature.)\nFine. See URL"} +{"_id":"q-en-http-extensions-cfd9979bc733651a321dcb0f8d585fd3676956fe8ff5ea09a8ce02b544fe9573","text":"I tried to make this work, but NAME is right: this is entirely a transport decision and given that our best option was to violate a MUST in RFC 7230, that's too hard. This just says that HTTP\/1.1 is no good because we can't include the scheme (because RFC 7230 says we can't). I almost made this h2-specific, but then remembered that there might be another protocol that has favourable properties.\nIt's official - MT says \"HTTP\/1.1 is no good.\""} +{"_id":"q-en-http-extensions-987c7d33c87718b1aaac458ac663ce4231cb6ce464fe37b9d3ced4d241f449c8","text":"This switches the custom wildcard matching to use with regexp support disabled. URLPattern is being used by the service worker routing API's and is what the web side is standardizing on for anything that pattern-matches URLs. This splits the match pattern into explicit path and search (query param) components and forces the origin to be inherited from the dictionary request URL. Also included here is a new field for to allow for matching on request destinations. The biggest use case for that will be for allowing site-wide HTML dictionaries to do something like and for the dictionary to not be sent for every request across the site (images, etc). See issues , ,"} +{"_id":"q-en-http-extensions-0c87129c0e9dcf95efe4926b0d452bb5d704d74e1e5318c18d24fc7e4a124277","text":"Stupid language question: \"all possible\" or \"any possible\" ? All we say about the UTF-8 decoding is: Let unicodesequence be the result of decoding bytearray as a UTF-8 string {{UTF8}}. Fail parsing if decoding fails. But UTF-8 has enough \"payload bits\" to transfer up to 0x1FFFFF where Unicode is limited to 0x10FFFF. Should we include that in the caution somehow or is that (sufficiently) implicit in \"if decoding fails\" ? 
(This is obviously approaching bikeshedding, I'm fine with the current text.)\nChanged to 'any'. I think it's implicit in 'if decoding fails'; is there a situation where it wouldn't?\nI propose we ban from serialized , in order to avoid a large class of security issues (and to simplify implementations) in C\/POSIX environments where is the string terminator. I can see three alternatives here. In the serialization of we add: if byte is %0x00, fail serializing. In the parsing of `Display String we add: if octet is zero, fail parsing. If either end uses a strict UTF-8 encoder or decoder, which do not allow \"overlong byte sequences\" (see below), this prevents from being transported with DisplayString. Java pioneered a handling of by specifying that it must be UTF-8 serialized as - a so-called \"over-long UTF-8 byte sequence\". We have presently not specified if the UTF-8 encoding of DisplayString is \"strict\" where such over-long byte sequences are illegal or \"non-strict\" where either all or a few over-long byte sequences are accepted. Over-long byte sequences in general are frowned upon, because they can be used to \"obfuscate\" UTF-8 strings, for instance by encoding a period as 0xc0 0xae to try to escape directory traversal checks. We could change the spec to say that the UTF-8 encoding\/decoding must be loose (enough) to handle the Java-way, but I failed to identify a suitable normative reference which didnt bring in a lot of other UniCode\/UTF-8 baggage we may not want. We can optimistically serialize any bytes we encounter in the encoded UTF-8 bytearray using the \"java-trick\" and leave it to the UTF-8 decoder to either reject or accept as it sees fit. In the serialization of we add: If byte is %x00 append \"%c0%80\" to encodedstring. In the parsing of we add: if octet is zero, fail parsing. I'm a big fan of clear text and simple solutions, and I cannot imagine why anybody would ever try to send through a for non-nefarious purposes, so I am 100% on board with simply banning bytes in the encoded UTF-8 byte-array. But it is a (tiny) loss of generality, and if we want to avoid that, we either make life difficult for implementers in C\/POSIX based environments by rejecting this ticket, or we throw our lot with Java's handling of to a greater or lesser degree. Optimistically serializing , as specified in the third alternative is a way to do that without tying ourselves to Java's mast, but it introduces some wiggle-room which, depending on ones point of view, is either desirable (allowing the receiver to reject by using a strict UTF-8 decoder) or undesirable (making it anyone's guess if can be transported in a DisplayString or not.) But Java's handling is a bit of a hack, and allowing (some) over-long UTF-8 byte sequences introduces a class of security issues similar to the one this ticket tries to prevent, so all in all, I think we should just ban in serialized with the first alternative above.\nIIUC, Display Strings aren't meant for sending byte arrays, therefore I do not think we need to define a special way for conveying NUL characters. Based on that, IMO the question boils down to if we should ban encoders from sending NUL. I do not have a strong opinion on if that should be forbidden, considering the fact that if the receiving application is suspectible to NUL characters decoders have to be careful in the handling of NUL characters regardless of the encoding requirement. 
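To make the decoder-side options discussed here concrete, the following non-normative sketch percent-decodes with lowercase hex only (per the earlier case discussion), then runs a strict UTF-8 decode, which already rejects overlong forms such as 0xC0 0x80 and anything above U+10FFFF, and finally applies an optional, consumer-chosen refusal of U+0000. The function name and the reject_nul policy flag are assumptions of this example.

```python
def decode_display_string(encoded, reject_nul=True):
    data = encoded.encode("ascii")           # the wire form is ASCII-only
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if b == 0x25:                        # '%'
            hexpair = data[i + 1:i + 3].decode("ascii")
            if len(hexpair) != 2 or hexpair.strip("0123456789abcdef"):
                raise ValueError("malformed or uppercase percent escape")
            out.append(int(hexpair, 16))
            i += 3
        else:
            out.append(b)
            i += 1
    text = out.decode("utf-8")               # strict: rejects overlong and > U+10FFFF
    if reject_nul and "\x00" in text:
        raise ValueError("NUL rejected by this consumer")
    return text
```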
Note that decoders are already expected to treat NUL characters specially if necessary, see Security Considerations. Maybe we can just add a reference from the section describing the decoding logic to the Security Considerations, and call it a day.\nOne possible good point in favor of banning it is that we know that many configs will just act based on regex pattern search and might find some searched string after a %00 and consider they found what they were looking for, while the implementation will stop before when seeing the zero. I think we should simply say that \"0x00\" must never be serialized, hence \"%00\" must never appear in a DisplayString. This implicitly allows end users to write pretty simple protection rules (even regex-based) consisting in blocking the 3-char sequence \"%00\" in such fields. Encouraging the blocking on the receiver side can be efficient enough to discourage implementations from even trying to purposely send it.\nThere is a cost to banning it, which is that the Unicode string to DisplayString encoding function is no longer complete. While C strings can't represent embedded NULs, all other languages' string types can. Those languages would then have a harder time writing a DisplayString API, because now encoding can fail. So we're trading off a nuisance for C vs. a nuisance for everyone else. It is unclear to me what the second option buys us. The second option just means there are two ways to spell U+0000. A C decoder that uses a naive percent-decoded in-memory representation would still need to account for somehow. The third option would avoid that, but ultimately it just means the on-the-wire representation of the same U+0000 codepoint becomes more verbose. This, however, suggests a fourth option: keep the abstract data format and wire representation as-is. (All Unicode strings are allowed and is how you say U+0000.) Instead, non-normatively suggest that decoders that use a NUL-terminated in-memory representation MAY account for U+0000 by representing it as in memory.\nI'm naively suspecting this could be used to attack some elements: fill such a field with as many %00 as you can and it gets doubled in its final representation (e.g. when passed to a next hop) :-\/\nBy \"this\", do you mean the in-memory representation? I don't think that's a concern. If passing to a next hop is still over HTTP, the representation you send will continue to be because (under that proposal), that is the one and only representation of U+0000. It would continue to be incorrect to encode it as . If you're worried about a large in-memory representation, I will note that it costs the attacker three bytes ('%' '0' '0') of bandwidth to get you to represent something in two bytes ('\\xc0' '\\x80'). The attacker would get a better ratio be sending you a bunch of 'a's. But ultimately all this is just a question of what in-memory representation your program uses for U+0000. It just needs to unambiguously represents the possible state space. Indeed I might suggest, rather than doing these hacks, don't use NUL-terminated strings! Then this is moot. I certainly would not go with in anything I write.\nI see no reason to single out U+0000 from other control characters. Either allow U+0000, or disallow all control characters. The overlong encoding hack will bring more issues than it will solve.\nThe only reason to single out U+0000 is that it is the string termination character in C and POSIX environments, and that is still a major cause of CVEs. 
I would not object to banning all the control characters, but that takes us into UniCode land which is full of dragons... That is a third way to get the indeterminacy of the \"java-hack\", but it does have the slight benefit that the implementor of the sf-bis parser may know if that over-long UTF8 sequence will work or not. However, if it does not, then nothing is gained with respect to predictability.\nI'd prefer to stick with a caution here, rather than a prohibition. That means that endpoints (yes, whatever that means) can make decisions about what they tolerate, but we aren't constraining use. That is, the field can carry anything, but a resource might decide not to accept some values.\nHow about simply adding: \"UniCode category 'Cc' (Control characters) are not permitted, unless the field definition unwisely explicitly permits them.\"\nThat's a field-specific parsing rule, which doesn't seem great from an API perspective.\nNot if it goes in the \"Defining New Structured Fields\" section ? I think it is perfectly fair game to insist that fields which accept Control Characters are forced to specify that, and ban them by default. My hope is that adding such text will mean that no field will ever be defined to allow them.\nRight, but the software handling that ban isn't a SF processor, it's the field-specific code. I think the most we could do would be something like how we handle field-specific constraint failures in 'defining new...', eg: But that creates a situation whereby the default is that there's nothing in the field's spec about control characters, the SF generic implementation passes them through, and the field-specific code is expected to know that it should reject them. Not great. So a better approach might be:\nThis is where we realize we have waded into the \"no way to be a little bit pregnant with UniCode\" sump, isn't it ? I still think that %00 is a valid security concern in C\/POSIX environments, but I really do not want to open the UniCode can of worms. Maybe we should just drop this ticket and add a stern security warning that there is no such thing as \"safe unicode strings\" ?\nI'd be good with a general unicode 'there be dragons' security considerations warning."} +{"_id":"q-en-http-extensions-943f21d466eb99af06cbf90e6aaba3170d10ff97a941a229ff6efa18f1e06013","text":"Implementing Date in Python as a object has an issue -- Python dates are limited to a specific range. Given that it's good to expose SF as idiomatic data structures wherever possible, I poked around a bit for limits on date\/time objects in libraries, and found (all expressed as limits on years, as that's the most significant bit): Python - Rust - Go - PHP - JavaScript - Ruby - Java - Should we specify a minimum range of dates that an implementation should support (likely 1-9999) so that developers can be confident of them (as we've done for other data types)?\nHm. Do these differences affect other parts of HTTP as well, such as processing the \"Date\" field (or fields specific to caching...?).\nOne would imagine that people put HTTP dates into these structures from time to time, yes.\n(it's just that HTTP dates rarely go outside the range 1970 - \"forseeable future\")\nRestricting to a year-based range would IMHO make sense if we actually would be using calendar dates (which we do not).\nTo be clear, I'm not proposing we restrict it -- just require implementations to support at least a particular range of values, just as we do with many other types. 
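A small illustration of why the supported range matters in practice: a Python implementation can keep a parsed Date as the raw epoch-seconds integer and only convert it to a datetime when the platform can represent it (Python's datetime covers years 1 through 9999). The helper name is an assumption for this sketch.

```python
from datetime import datetime, timezone

def sf_date_to_datetime(epoch_seconds):
    """Best-effort conversion; fall back to the raw integer when out of range."""
    try:
        return datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)
    except (OverflowError, OSError, ValueError):
        return epoch_seconds  # outside this platform's datetime range; keep the int
```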
Don't get stuck on years, that's just a rough sizing to get an idea of what we're looking at. I think we'd express it in integer values.\nI've commented in the PR that I dont like this, but let me add a bit of context: Handling of time, in real life and in computers, is incredibly messy and one of the most unreadable scientific books of all time is \"Calendrical Calculations\" which is our best shot at that subject matter. I really do not want us to wade into this tarpit, because everybody who has ever done so, has just added more tar in the process. My really strong preference is if Date was defined as \"Seconds since Epoch\" according to POSIX\" with the full range of signed 15 decimal digits we inherit from Integer, so that we in no way adds any further complexity.\nThat's effectively what we have, even with this addition. This text is just requiring implementations to support at least a certain range, so that it can be relied upon for interoperability."} +{"_id":"q-en-http-extensions-9711b11ff2ce16258a0cd56128b8d3a55753046f7df5d83b1fc0a027736628ff","text":"OK with me.\nDisplay Strings are a superset of Strings, which are a superset of Tokens. This makes APIs more complex, and makes designers have to think about which structure to use. A couple of thoughts: 1) Should we define best practice, that e.g., you should specify Token or String when you specify String -- similar to HTTP? 2) Should we define a new TokenOrString type to encourage this pattern (both in header definitions and APIs)?\nNot sure it would be always a best practice, but there are certainly cases where it is. Thus describing the pattern and making it easier to use it sounds right to me.\nI'm probably missing something, but why would we want header designers to use TokenOrString? Given, as you say, string is a superset, if you needed to use a string, why allow tokens too when you can just define the header to simply take string?\nFor brevity. It happens in legacy header fields all the time (dunno whether that's a good reason, though).\nThis is because legacy fields didn't actually believe token and string are different types, right? Just that strings were optionally unquoted when you don't need quotes. But guidance on the types is for new fields. And, for better or worse, structured fields went with a data model where quoted and unquoted are different types, rather than alternate encodings of the same types. Now that we've made them different, if you're making a new field that needs to represent string values, use strings. It's true this means designers must pick between two similar types. Perhaps making them different was a mistake. (Though making them the same would mean multiple encodings, which is differently bad.) But we've already done that. It's too late to fully merge them because we'll break everyone who was told structures fields was ready to use and treated them differently. Without fully merging them, generic structured fields APIs are stuck using different types for them. So merging them back needs to be a per-field thing, which means fighting with your SF APIs. Instead, the best simplification we can manage is to discourage the TokenOrString half-measure in new fields and say you should just use one or the other.\nTo the first half of the bug, yes definitely define best practice! 
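As a sketch of how a generic implementation can keep the two types distinct in its API, one option is a tiny wrapper for Tokens while plain str maps to an sf-string; the wrapper name is invented here and token-character validation is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Token:
    value: str  # assumed to satisfy the sf-token grammar; not validated here

def serialize_bare_item(item):
    if isinstance(item, Token):
        return item.value                              # unquoted on the wire
    if isinstance(item, str):
        escaped = item.replace("\\", "\\\\").replace('"', '\\"')
        return f'"{escaped}"'                          # sf-string with quoting
    raise TypeError("only Token and str handled in this sketch")
```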
Just if the best practice we pick does not align with the data model we picked, we have done something seriously wrong.\nWe already have best practice about Display Strings in the spec (in that we steer people away from them if they don't need them). We have a separation between Tokens and Strings because they were visible on the wire in pre-existing headers, and we wanted to be able to retrofit \/ 'pave the cowpaths'. In some cases, they are interchangeable; in others, they are not. I suspect that if we documented best practice for new fields, we'd say \"probably use Strings most of the time\" because of the underlying reason I raised this issue for -- APIs have very natural mappings for Strings, but for Tokens you probably need some sort of wrapper object or equivalent strategy, so that you can distinguish them from Strings. I'll put together a PR nudging people that way.\nSounds good! So I guess the decision tree is then: Do you need display strings? If yes, use that and only that. Do you have a legacy reason to use tokens? If use, do whatever your legacy reasons need. Otherwise, just use ASCII strings."} +{"_id":"q-en-http-extensions-bc995313e45956556b55f4bbd8a7a38ebe36a67d1818cfbce3f32efd9e141b73","text":"See URL Lots of boring editorial tweaks to ensure better consistency both within the document and other HTTP specs.\nOne related question: The style guide also mentions that \"examples should be in HTTP\/1.1 format unless they are specific to another version of the protocol\", but the draft uses a notation that resembles HTTP\/2. Should we change the example messages to HTTP\/1.1?\nThank you very for this tedious work! I only spotted a few minor typos."} +{"_id":"q-en-http-extensions-879decaf8eb72b78094e54360590f60f20e9f7e8166a0edbec912cd217c8917f","text":"Since the last draft version was published, we replaced the header with the header. This requires an increase in the interop version as existing clients will need changes to be compatible with the next draft release."} +{"_id":"q-en-http-extensions-46318709a35d00f8482ac97e1b1b8174cf48f7662ca619e88087791a48e0095c","text":"This nags at me a bit. I think you want to be crystal clear are what parts of Stuctured Fields you are and what you aren't. As far as I see, you want to use the aspect in order to reuse the serialization\/deserialization, legal characters, legal range, etc., for the auth parameter values. But you don't need the aspect, which would allow you to parameterise the parameters. Does that sound accurate?\nWe added this sentence based on a URL from NAME Martin, do you have thoughts here? I think the intent was just to point out that we're in the new structured fields world even though Authorization hasn't been retrofitted yet\nIn , NAME pointed out that we can't simply use byte sequences from structured fields because authentication parameter values need to be which doesn't allow non-quoted colons. So I switched to double-quotes in dcc520544cf98cf634c9ca0a24119256fd18d58e, but that makes the usage of structured fields somewhat messy. Thoughts?\nCopying my comment from the change\nGood call. I now added both colons and double-quotes via 690ee072bb450ab631db5b32a23b457cec9bb6be\nFrom Ilari Liusvaara on the : I renamed this issue to track whether we want to keep structured fields or not. 
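For reference, a short sketch of the encoding trade-off in this thread: a Structured Fields byte sequence is wrapped in colons (and may carry "=" padding), while unpadded base64url sticks to characters that are unproblematic in authentication parameter values. The function names are illustrative only.

```python
import base64

def sf_byte_sequence(data):
    # Structured Fields byte sequence form: ":" + base64 + ":"
    return ":" + base64.b64encode(data).decode("ascii") + ":"

def b64url_unpadded(data):
    return base64.urlsafe_b64encode(data).decode("ascii").rstrip("=")

def b64url_unpadded_decode(text):
    # Restore the padding that was stripped before decoding.
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))
```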
Personally, I don't have a strong opinion either way.\nI could live without structured fields here, presumably there are servers that already have to handle Authorization fields using the more-traditional encodings\nI'll note however that if we go with we'll need to either double-quote it or explicitly ban padding, because is not allowed as part of\nWhile base64 proper is often padded, it is more common for base64url to be unpadded (indeed, I think that the string \"base64url\" specifically refers to the unpadded version if you follow it to its origin, which I think is URL).\ndates back at least as far as and that spec explicitly : None of which is a problem, we just need to be explicit. We know how to do that.\nRight, the 7515 thing was what we ended up using in another context, where padding was explicitly not included. Clearly, we shouldn't use it here either[^1]. [^1]: I hold the view that it shouldn't be used in any situation, because if knowing the length is important, you would have a length prefix or delimiter, rendering padding pointless for anything other than protection against traffic analysis. And you can't arbitrarily pad with base64 padding, so that's no good.\nI'd ideally want to align with draft-ietf-privacypass-auth-scheme. We reviewed this a lot with NAME and went back and forth and ended up with: So from a syntax standpoint, this is \"base64url and you don't need quotes, but do include quotes if you have padding\". If you define that you don't use padding, then you don't need quotes. So I think things can be consistent here.\nBased on the conversation in the room at IETF 117, I'm inclined to make these no longer structured headers, and to use base64url without padding, without quotes, and without colons."} +{"_id":"q-en-http-extensions-e12f89671bc80af8214060519508317fabfb4fb9e5bf2eb7d3907580f4cd3b7f","text":"Based on the discussion in URL, this PR would allow servers to repeatedly generate 104 responses to inform the client about the upload progress. The formulation allows the client to free associated resources (e.g. file buffers) because the server guarantees that the upload offset will never be less than the last reported value. Closes URL\nIf this is not a mandatory part of the protocol, I wonder how the client would know that the server supports it. It's possible that the client has already forgot about the initial bytes beyond the first ACK when the ACK is received.\nI don't think the client must know in advance whether the server will send these additional informational responses. This mechanism is an additional enhancement, but not much should be lost if the server does not implement it. Yes, that can happen, but this can also occur in the current draft version. If the client forgot bytes that come before the reported by the server, it cannot continue the upload. The only save way for the client to regularly forget data is by chunking the upload into multiple requests using and thereby getting a confirmation from the server about the new upload offset. The addition of 104 for progress information does not change much about this approach. If you want to be sure about the offset, you should chunk the upload. But the 104s provide a slight improvement, where the client can already forget data before the request is completed. In a sense, the client should never forget data before the request is completed or a 104 is received if the client wants to recover from upload failures. 
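A client-side sketch of that buffering rule, assuming a hypothetical in-memory send buffer: bytes below the last offset the server has confirmed (whether via an interim response or an offset-retrieving request) can be released, while everything else must be retained for resumption.

```python
class ResumableSendBuffer:
    def __init__(self):
        self.base_offset = 0   # first byte the client still holds in memory
        self.chunks = []       # pending (offset, bytes) segments, in order

    def on_server_offset(self, upload_offset):
        """The server has durably stored everything below upload_offset."""
        if upload_offset < self.base_offset:
            raise ValueError("server-reported offset went backwards")
        kept = []
        for off, data in self.chunks:
            end = off + len(data)
            if end <= upload_offset:
                continue                              # fully acknowledged; free it
            if off < upload_offset:
                data = data[upload_offset - off:]     # drop the acknowledged prefix
                off = upload_offset
            kept.append((off, data))
        self.chunks = kept
        self.base_offset = upload_offset
```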
This PR should just be an enhancement for the client, but not pose any additional issues for it. Does that make sense?\nBased on the feedback from and IETF 118, I adjust the text in this PR to clarify what header fields can be present in informational responses: Upload creation: Server may send one or more 104 responses The first 104 response must contain the Location header, subsequent responses must not. The Upload-Offset header is optional Upload appending: Server may send one or more 104 responses The 104 response must not contain the Location header. The Upload-Offset header is optional In theory, this allows a 104 to be generated with no Location or Upload-Offset header, but I don't see a reason against disallowing this. The text now also ensures that the client knows the upload resource URL before it receives a 104 with Upload-Offset. This avoids implementation issues where the client might free upload data before it knows the upload resource URL to resume the upload in cases of failure. Please let me know if these changes improve the draft's readability.\nIf we are adding strict incrementing requirement to 104 responses, should we also do so for plain offset retrieving?\nI would prefer so for consistency.\nIn a message to the mailing list (URL), Austin Wright (NAME proposed to use informational responses to send information about the upload progress from the server to the client. During the Upload Creation Procedure and Upload Appending Procedure, the server could regularly send an informational response with the latest offset in the Upload-Offset header. The value should reflect the amount of data that has been received and safely stored by the server. This way, the client knows what data has been successfully transmitted and is not needed for resumption anymore. Any resources attached for this data (e.g. memory buffers) can be released then. In contrast, If the client only monitors which,data it has sent, it cannot be sure if the server properly stored it yet. There is a short example of this how communication could look like (note: it uses server-generated upload URLs (URL), but it could also be done using client-generated tokens, of course):\nI think doing this is fine, and I agree that it has benefits as pointed out above. General guideline of HTTP is that servers are allowed to send any number of informational responses. There's no reason to not use that. Regarding the status code, I think we can reuse 104.\n+1 Kazuho\nI think we need some communication between client and server to enable this. The protocol is designed to avoid depending on 1xx status codes. Does the client need to explicitly opt in to this? Can the server opt out? What if the server doesn't support 1xx? If we want to keep this optional without adding a new header field, a possible design is: If the server wants to send periodic updates, it can start by doing so in the initial 104. The client can enable additional buffering behavior based on this information (but it might have already dropped some bytes by the initial 104). This is a server driven solution and the client won't be able to opt out.\nOr maybe URL\nI'm not fussed on the specific response code value but the 102 seems deprecated, for whatever that means, and the linked wording seems at odds with the system design we want\nIt's not deprecated, but wasn't carried over to WebDAV-bis due to lack of implementations. It's in the registry, so it's a real HTTP status code. 
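Whatever code point is finally chosen, the wire shape of such an interim response is plain HTTP/1.1 framing: a status line and headers, no body, emitted before the final response. The sketch below assumes the 104 code and the header layout described above (Location only on the first interim response, Upload-Offset optional); the reason phrase is illustrative.

```python
def interim_104(upload_offset=None, location=None):
    # Build the bytes for one interim response; the reason phrase is illustrative.
    lines = ["HTTP/1.1 104 Upload Resumption Supported"]
    if location is not None:
        lines.append(f"Location: {location}")
    if upload_offset is not None:
        lines.append(f"Upload-Offset: {upload_offset}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

# e.g. the first interim response carries Location, later ones the offset:
#   conn.sendall(interim_104(location="/uploads/abc123"))
#   conn.sendall(interim_104(upload_offset=1048576))
#   ... final response follows once the request body has been consumed.
```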
That said, I now agree it doesn't fit well, as 102 is for situations where the client is actually waiting for a response (as opposed to still sending).\nI guess that's accurate to the historical record, after a very cursory delve into RFC 2518 and 4918. But I wonder how clear that interpretation is to the broader HTTP world. MDN has a perspective you might want to follow up on. Yeah this is where I'm hitting a hard edge. A registered status code, where the behaviour, the important stuff, is described in an obsoleted RFC doesn't really seem great.\nFWIW, I would push back hard on that line of argument. The code is registered, so it's a valid status code. And yes, a spec that wants to use might want to either include an updated definition, or make sure the definition is moved to a separate new document.\nI also think that 102 is OK to use here (but maybe a new status code is still warranted, see below). The 102 isn't depreciated as a bad idea or because of another better solution, it was merely omitted from recent updates to the WebDAV specification for lack of implementations. I imagine another specification could take over ownership of the definition\/registration. I think the only reason I didn't specifically write 102 in my email is, I've to indicate \"upload complete, the server is now performing additional processing\" and it may be useful to distinguish this signal from \"upload has been partially received and committed to storage\". The former is probably closer to the intended usage, so it may still be good to mint a new status code for the latter usage. Alternatively, both cases might be distinguished by the HTTP fields that are returned, and maybe servers want to indicate both at the same time (\"50% committed to disk, 20% processed\").\nI share the view that 102 does not fit very well here. It would be awkward to send an URI to which the missing part of the request should be sent, while indicating that \"the server has accepted the complete request\" (). If we are to choose from an existing status code, 100-continue might be a better choice, though I would prefer using a new status code considering that we did so in introducing 103. All 1xx codes are interim and therefore kind of implies continuation. We can burn a new status code for a particular case of contiuation, which Resumable Upload is.\nI opened a PR for this in URL\nI think this would be a good addition to the spec as long as it's not mandatory for the server to send this data. As previously stated in this thread It would help the client to discard in-memory data that is no longer required to resume. Would it make sense to also include the location header in the additional 104 responses? This would make the solution more robust in case one (or several) of the 104 responses get lost and the extra header field would not add any significant amount of extra bytes in each request.\nYes, this would be optional. I am unsure how big the benefit would actually be. This would only help if a 104 is lost without the connection being destroyed, so the HTTP response is still open and a subsequent 104 could be received. Maybe this is possible with HTTP\/2 or HTTP\/3, but I don't know that. Other than that, repeating Location would be redundant.\n1xx response are allowed to be dropped by intermediaries. But I'm not aware of any that would drop some in a request. Typically they just swallow all or pass all.\nNAME Is that so? 
I could be wrong but states that a proxy MUST forward 1xx responses unless the proxy itself requested the generation of the 1xx response.\nI suspect what NAME meant to say is that some proxies do drop 1xx responses, whether they're allowed to or not. Also, some HTTP libraries consume all the 1xx responses without exposing them to the caller, even if the library itself doesn't do anything with that particular 1xx code; such a library is indistinguishable from a poorly-behaved intermediary.\nMy concern isn't so much proxies dropping one or more response as much as e.g. switching network and losing one or more of the 104 response (think spotty Wi-Fi moving over to 4G). Another input here would be consistency, especially if we add the header to the first 104. I agree that the benefit might not be huge but it's not zero either. It does however seem like the consensus in Prague was that this is not needed and I can get behind that.\nOne or more of the 104 responses could be dropped, along with the final response, due to the connection being interrupted, but selective 104s can't be dropped due to network issues. Clients need to deal with failed connections no matter what, so I don't see how this is an issue.\nNAME Right, then this is a non-issue that can be skipped. I just wanted to bring it up so that we do not miss aspects. :)\nThanks for the feedback! I adjusted URL to clarify what header fields can be present in 104 responses."} +{"_id":"q-en-http-extensions-bfd855e855278a1e5ac7b27182ff093805e75a63d2ce46e1809288789f801637","text":"From the list: At the end of Section 2.1 in URL, there is this: Can that last line be expounded on and clarified? Will a \"\\\" character in a label be 'eventually' encoded as \"\\\" or \"%5C%5C\" or some combination of them?\nI think we can add an example for this"} +{"_id":"q-en-http-extensions-a3bba8ef49ca608a4798e0faf21e4e395bb740dbf450c628bdaa0a74d9e03e12","text":"CC NAME comment syntax: URL \"Handling Ballot Positions\": URL Why is the value defined as a String that can be comma-separated as opposed to a List (of Strings)? Up to you, but feel free to reference RFC 3493 section 6.1 for getaddrinfo and discussion of the AI_CANONNAME flag. \"might not available\" -> \"might not be available\"\nThe nit was fixed with\nFor the list-of-strings, parameters themselves can only contain bare items, not inner lists. URL"} +{"_id":"q-en-http-extensions-03b9c726867c84abbdbfe788bf42841a86c518715b87d114aa5cec4ed9fdfdd8","text":"IANA early review highlighted that we were attempting to register to the old table. Lets fix that and do a little more tidyup.\nThank you!"} +{"_id":"q-en-http-extensions-205eb5f9c8d422d15aa1662e47c186db53e15baea1bf0adc65b9eb4fb9661722","text":"This removes the hashes negotiation and changes the Available-Dictionary request header to explicitly use sha-256 as discussed in URL If sha-256 becomes insufficient, the hash negotiation can then be added (or different mechanisms can be used). This is a rebased version of the (sorry, it got messy so it was cleaner to resubmit a clean one)."} +{"_id":"q-en-http-extensions-340e14eb411c28603c22593c9e68ba5d58398b7c35d7d76562ed826a678a4b54","text":"Discussion at IETF 118 was that the didn't necessarily need to be short to reduce variations (was originally planning on 7 days) but chaning it to last-fetched instead of last-written is important. 
This updates the default to 14 days as a balance (can always be overridden explicitly)."} +{"_id":"q-en-http-extensions-8a4b13896ba1511105fb1d8fe5e4949d965aaa1ce798b3f2f0f34a84d9413a58","text":"Adds a new \"Content-Dictionary\" with the hash of the dictionary used when encoding the HTTP response. For\nNAME this adds the dictionary hash to the HTTP response when used. Right now it is a . Let me know if you think it should be a instead.\nMostly for developer ergonomics to make it easier to deploy. In the case of static resources using previous versions as dictionaries (v1 to v2 of URL), one method for enabling that is to compress the new version using the old version on the cli as part of the push to production and store the dictionary-compressed artifact as a file on the server next to the full file. Then at serving time, compare the value of the request header with the file names on the server for the given URL to see if a delta-compressed artifact is available. Structured fields encode the binary data as base-64 encoded strings which aren't filesystem-friendly so there would need to be some steps at serving time to decode the header, convert it to ascii and then check. It's not insurmountable but it's a bit more processing and a straight string match could be handled directly with templating in server config. Lowercase hex was picked since that's what all of the cli tools for generating hashes emit.\nIs a reasonable name for the response header? I'd actually prefer but it felt like that might be over-stepping since there could be some other generic need for associating a dictionary with a HTTP-response but maybe I'm being overly-cautious.\nI don't think that authoring convenience outweighs the advantages of having a unified grammar here. It's not like base64 is some esoteric function. If you are talking about sha256sum, here's a toy: URL\nI agree with NAME - don't. To make the sf value filesystem friendly, you just need to strip a few characters and remap \"\/\".\nKeep in mind that hashes are not hex strings. A SHA-256 hash is 32 bytes of binary data. The hex thing is just what some tools happen to output. But anything consuming this programmatically to, e.g., compare the hash against something would be best served by the underlying bytes. Using base64 is also slightly more compact on the wire. And if binary structured fields gets off the ground, it can be even more compact.\nMy main concern is trading off a unified grammer of using sf-binary in the headers vs consistency with tooling and how the dictionaries are represented once it gets out of the protocol. In the dynamic case, this could be something like the web server checking to see if the requested dictionary is available (on disk) in response to a request. I don't know how each web server will choose to build that but having files named with the hash is an easy common starting point. Otherwise each chooses what the ascii representation of the 32-byte hash should be for their case. On the static case, I'm not so much worried about not having a tool that can create base64-encoded strings of sha256 hashes (it's trivial as you pointed out). I'm more concerned about the transformations that need to happen at serving time to decode the hash, convert it into an ascii representation and then check if that variant of the encoded file is available. It's certainly all doable, but it requires base64decode and binhex (or some equivalents) to be available in the layer where the checks are being done - or to rely on application code. 
Specifically, I was thinking that it would be easy enough in VCL or an existing Nginx or Apache config to do something like: If request header is present and validates against a [0-9a-f]+ regex and 'br-d' is available in the request header try returning + '.' + request['Accept-Encoding'] + '.br-d' (and add relevant response headers) otherwise return Doing similar processing with base64-encoded hashes has a step to convert it into an unspecified format that is platform-specific and may not be doable without either running application code or updating the server with base64decode + binhex support. It likely doesn't matter for the case of applications and servers that are dictionary-aware and can operate on the raw hashes (and enjoy the ~20 byte request header savings) but it is a tradeoff that pushes complexity further up the stack so I just want to make sure that is the tradeoff that we want to make.\nHaving a finally consistent grammar for HTTP is the precursor to get consistent tooling. It may be more convenient to do something else in the short term, but in the long term we'd just be adding to the mess of inconsistency in HTTP. In the long term, I would hope that these config language DSLs would be able to process structured fields natively. (Although, it may be on this WG to demonstrate this being possible, since the data model for structured fields is slightly interesting...) I also don't think we should be encouraging, much less designing for, folks trying to process HTTP headers with regexps. There's a well-defined grammar and inventing other parsers is a great way to not quite get things right. We already have a pretty hard time with HTTP extensibility due to folks parsing it wrong. The path-unfriendliness is a little unfortunate. Possibly we should have picked the URL-safe version of base64, but too late for that. But since byte strings and text strings aren't the same type, I imagine any sf-binary-aware config language DSL would also have hex and base64 functions available. Indeed taking attacker-supplied input and mapping it to a path component, without running it through some very restrictive encoding, seems a pretty bad idea! At the end of the day, SHA-256 hashes are byte strings, and sf-binary is the way HTTP decided to say byte string. We could say that, although SHA-256 hashes are byte strings, dictionary names are ASCII strings computed by hex(sha256). Then these headers should be sf-string (with quotes). But that would be less efficient on the wire, and our on the web is to put user needs (bandwidth, in this case) first, so I think this is the wrong tradeoff. That said, I see also uses this format. It wouldn't make sense for and to use different formats, so perhaps switching away from the less consistent and less efficient format should probably be handled separately from this PR?\nI switched and both to sf-binary. The plan is to update the ID and then send out a summary of changes for discussion on the listserv and since this PR is about the dictionary in the headers anyway it should be fine to update both of them at the same time. Happy to split it out if it helps with history tracking.\nI think this change adds significant complexity, and I'm not sure regarding its benefits. 
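For what the "rely on application code" branch looks like, here is a small sketch that takes an sf-binary header value (":<base64>:"), checks that it decodes to a 32-byte SHA-256 digest, and returns the lowercase-hex name used to look up a pre-compressed file on disk. The header handling and file-naming convention are assumptions of this example, not requirements of the draft, and real Structured Fields parsing is stricter than this regex.

```python
import base64
import binascii
import re

def dictionary_hash_to_filename(available_dictionary):
    """Map ':<base64>:' to a lowercase-hex filename component, or None."""
    m = re.fullmatch(r":([A-Za-z0-9+/=]+):", available_dictionary.strip())
    if not m:
        return None
    try:
        digest = base64.b64decode(m.group(1), validate=True)
    except binascii.Error:
        return None
    if len(digest) != 32:            # SHA-256 output length
        return None
    return digest.hex()              # e.g. used as "<path>.<hex>.br-d" on disk
```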
I very recently played around with prototyping a compression dictionaries deployment and was very happy with the simplicity of it: You generate diff files at build time with some naming convention Then at serving time, you configure your routing logic (similarly to how NAME ) to those diffs That can be done today in almost any deployed routing layer. Once this change lands in implementations, a similar deployment would require me to add custom logic to transform that header value to something that is file system friendly. It doesn't matter if that transformation is to base64 decode the binary data and then to hex encode it, or to 'just.. strip a few characters and remap \"\/\"'. It is an operation that is not necessarily supported in many layers that developers currently operate in. To take just one example - if I were to implement this as a , I'm not even sure it's feasible to do that. (although it might be, with a creative ). Can you elaborate on the advantages of moving to base64, beyond theoretical purity?\nI don't understand how a client determines which dictionary was used by the server. It seems like the response depends on the request, which is fragile (and doesn't allow clients to advertise multiple dictionaries, which seems likely with Use-As-Dictionary as it is). cc NAME Moved from private ID\nFor identifying the dictionary, it's worth noting that the spec requires that the client advertise a SINGLE dictionary that it supports but yes, the response depends on the dictionary identified in the request. This also helps with minimizing the variants that caches would need to store since the response needs to be varied on the request's available dictionary (and multiple would explode the permutations). It wouldn't hurt to echo the dictionary hash in the response which would allow for other mechanisms of advertising dictionaries but that also carries the cost of complicating the vary logic (would need to vary based on whatever request header was used to negotiate the dictionary that ended up being used).\nDictionary hash used is now in the response headers.\nIf this field carries binary data, why aren't you using structured fields?"} +{"_id":"q-en-http-extensions-f59b545ed088bbaba6a57b08b33999531adc1f25adb16b411c5576e7c3a0ed3d","text":"As brought up in , resumable uploads may interfere with intermediaries scanning the request content. This PR adds a paragraph in the security considerations. Closes URL\nSome security measures that involve looking at incoming HTTP requests might be circumvented by resumable uploads. For example, a filter which looks for certain HTTP headers and corresponding text in the request payload might not be triggered when the header occurs in one request and the matching body fragment in a subsequent request. I don't believe there's anything we need to change about the protocol here, but it's probably mentioning this in Security Considerations. Perhaps guidance would be to ensure any security measures are either resumable-upload-aware or are explicitly called to inspect the final request between receipt of the last chunk and execution of the original request.\nThank you, that is a good point. I have opened to add a paragraph in this regard. 
Please let me know if that expresses the problem correctly."} +{"_id":"q-en-http-extensions-6363897cb1f066a55bee6614d451905f5a13d2d90bdd05e1420c7ec5af911db7","text":"This adds support for a server-provided sf-string \"id\" of up to 1024 characters that is echoed back in a \"Dictionary-ID\" request header when the dictionary is advertised as being available. This allows for a server to use something other than the hash to retrieve the dictionary (i.e. a cache key). This should be more flexible than the client sending the original request URL and will allow for servers to store arbitrary context information with the dictionary. It doesn't open up any additional privacy or security concerns because the dictionary hash was already being treated as if it was user-identifiable information. For\nNAME could you please take a look and see if this meets your needs and looks reasonable?\nYes, looks great! Thanks!"} +{"_id":"q-en-http-extensions-9a9aee4565103bf5f9e0d5d180090472d86bd5f6731fdead2dc3897607eb139f","text":"This changes the URLPattern match to use a single match pattern to be consistent with other uses of URLPattern.\nNAME NAME this is an attempt at a change to have it use a single string for pattern construction. I don't feel great about the same-origin check part of the validation but I don't see why it wouldn't work. It still allows for people to waste bytes by not using path-relative patterns so I may have to re-open the issue around that but should provide the consistency you were asking for. This would be instead of PR .\na couple nitpicks while passing by"} +{"_id":"q-en-http-extensions-5f74af57330c92721d3b5ef16c2141ea7ad27c878bbb4fd971e0920d48c79500","text":"After looking harder, I found that RFC 9110 that a proxy \"is usually identified by an \"http\" or \"https\" URI\". I'm not sure that's 100% accurate, but it conveniently resolves the wording problem, and who am I to argue with RFC 9110?\nAt the moment, section 1.2 (Problems) start with the following sentence: I totally get what it tries to say, but it is confusing if not incorrect. states that a CONNECT request is handled by a \"proxy\" and the target specified by the \"host:port\" tuple is the \"origin.\" Therefore, HTTP CONNECT proxies are not identified by an origin. Rather, they are identified by the user's proxy settings. I think we'd better update the problem statement, and might also consider pointing out that the definition of \"origin\" will be different between CONNECT and connect-tcp (or use different terms to identify the CONNECT target and the entity that creates a tunnel in order to avoid confusion, maybe?).\nPS. Practical example showcasing the side effect of the change to the definition is that the way to authenticate becomes different; when a client tries to establish a CONNECT tunnel with authentication, it uses \"proxy-authorization\" header, whereas when a client tries to establish a connect-tcp tunnel, it uses \"authorization.\" I think it's worth noting.\nI'm realizing that this spec has the opportunity to fix the origin issues present in regular CONNECT. While we're there it'd be great to clarify how connect-tcp interacts with Alt-Svc because it's unclear for CONNECT. 
I sent some more about why it's unclear for CONNECT.\nThanks for the discussion!"} +{"_id":"q-en-http-extensions-64d50013f5fee090957e73576736b60264f868fe96f3fbdb9d76a2cae300a45a","text":"Related to and HTTP request proxies use \"Proxy-Authenticate\" in order to distinguish proxy authentication from destination authentication (\"WWW-Authenticate\") in the mixed bag of headers that is presented to the proxy. Classic HTTP CONNECT proxies don't have that problem, but they still use \"Proxy-Authenticate\", probably just for consistency. The MASQUE RFCs don't say anything about which authentication headers to use. \"connect-tcp\" and MASQUE should definitely follow the same rules, whatever they are. NAME noted that RFC 9110 says \"the Proxy-Authenticate header field applies only to the next outbound client on the response chain\". This implies that a general-purpose gateway is not expected to forward Proxy-Authenticate headers. However, there is actually no such thing as a general-purpose gateway for Extended CONNECT, so this seems somewhat moot. Personally, I think that our guidance should probably also line up with what we want to say for which is (in my view, despite some authors' protestations) a modernized HTTP request proxy.\nAn interesting related question is whether MASQUEish resources can be authenticated with other methods, like cookies and bearer tokens. If so, I think they are essentially \"content-like\" and should probably use WWW-Authenticate. If not, they are essentially distinct, and could conveniently be marked as such with \"Proxy-Authenticate\". Or perhaps they are both, depending on the context...\nI somewhat think it's both — it's content and the proxy.\nNAME Thank you for opening the issue. If we are to consider extended CONNECT gateways as proxies, then the question would be what the URI of the connect-tcp request means. RFC 9110 says that target URI specifies the origin. It is only because all the existing extended CONNECT family deals with origins that the TLS handshakes to those servers implementing the extended CONNECT family can be validated using certificates issued for that origin. To conclude, to me in seems that the use of WWW-Authenticate is mandated by the using URI as the target. PS. Note also that RFC 8441 specifically explains how proxies can be used between the client and the extended CONNECT target, see .\nI don't think anything is really mandated. Within the current language of RFC 9110, I think we are free to choose either interpretation. However, I'm happy to say that \"WWW-Authenticate\" is the winner here, and \"Proxy-Authenticate\" does not traverse \"connect-tcp\" (or \"connect-udp\") gateways.\nLooks good! :+1:"} +{"_id":"q-en-http-extensions-2fb60e0dfd47a66db11659cffcefe89e7a19c89644980c6964549d6582f0b1b0","text":"This is kinda silly, but it seems like adding support for coalescing is something we could fix later. with the nuclear option.\nthe pull request is fine by me. I do wonder if we shouldn't make the json a tad more complicated to allow for a reasonable mixed definition in the future. But an unreasonable one is obviously possible :)\nCan we coalesce http and https? 
The implication from everything we've said is \"yes\", but Erik remains reluctant."} +{"_id":"q-en-http-extensions-d676d6fde5d8e76f3f91732a08d4f434fd85e3ac6d2a9fbfdf403c4cd4d20a5b","text":"This removes the dictionary-specific ttl and switches to using the dictionary resource freshness as the mechanism for expiring dictionaries.\nNAME could you please take a look and see if this looks ok? NAME could you look at the language around the freshness of the resource relative to cache-control and stale-while-revalidate and make sure I expressed it properly (and it is something we're ok with leveraging)?\nWhy does the dictionary lifetime need to be independent of the cache lifetime? Having an object that is cached with a separate lifetime seems unnecessary. I would have said that clients can cache a dictionary for as long as they want (as always), but no longer than the lifetime stated in the cache-control field. Now, you might say that a server might want a resource to be usable in its own right (i.e., as an ordinary resource) for one duration, with a separate lifetime for its use as a dictionary. I see two reasons that this isn't much of a compelling argument: A server can forget about a dictionary and not provide delta compression at any time, so there is no real need for the dictionary lifetime to be shorter than the cache lifetime. Client caches won't want to retain something for longer than its validity, so I don't see much chance that a resource will be kept past its cache lifetime. Resources like this are likely exclusively used as a dictionary. There isn't much value for those resources having a separate lifetime in that case.\nThe main reason for a separate lifetime for the dictionaries is to allow for sites to reduce the possible variations that are in the wild and reduce the number of variations of that come in and constrain the number of previous releases that they need to compress artifacts for. Otherwise, a dictionary \"miss\" will result in a full version of the resource getting added to caches for every possible version of where delta-compressed artifacts aren't available. Caches can probably manage that OK on their own, but by reducing the dictionaries to a known window it both provides a lever of control on the variations and also allows for clients to \"hit\" on a cached version of the resource when they have an old dictionary.\nThis has been rattling around in the back of my head for the last month or so and I wanted to get my thoughts down for discussion. I think it might be possible to use the existing validity of max-age and stale-while-revalidate to achieve the same result but I'm a bit worried about the complexity that would bring to users. There are 2 main scenarios that express the tension: Versioned immutable resources that change the URL when the content changes. Well-known URLs that are cacheable but update the content in-place (libraries for analytics, ads, embeds, etc). For the purposes of this example, lets assume that the resources are updated twice a day and that 80% of repeat traffic re-visits within 10 days of their previous visit. Assuming a serving architecture with a server-side cache (CDN, load balancer, etc): In both cases, the response is cacheable and keyed based on the dictionary that the client sends as available (). The main concern is how to tune the requests so that the cache behaves like it does today, minimizing traffic to the origin. 
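To make the "keyed based on the dictionary" point concrete, here is a rough sketch of how a cache variant key ends up including the advertised dictionary once the response varies on it; the key format is made up, only the header names follow the draft.

```typescript
// Illustrative only: a response carrying `Vary: Accept-Encoding, Available-Dictionary`
// makes every distinct Available-Dictionary value a distinct cached variant.
function variantKey(url: string, requestHeaders: Map<string, string>): string {
  const encoding = requestHeaders.get("accept-encoding") ?? "identity";
  const dictionary = requestHeaders.get("available-dictionary") ?? "none";
  return `${url}|${encoding}|${dictionary}`;
}
```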
The \"current\" version of the resource will be the only one actively fetched by the application independent of when the user last visited. The current best-practice is to cache them for a long time (1 year) and mark them as immutable. Without dictionaries that means that the cache will effectively only see requests for the current version of the resource and all previous versions can age out of the cache (even if they have a long time-to-live, just by virtue of never being accessed). There will be one request to the origin (or a small number) while the cache gets populated but all future requests will always be served from the edge cache. With dictionaries, the cache key depends on when the client last visited the page since the dictionary will be the version of the resource they have in their cache. That means 80% of the requests will be covered by 21 cache keys (one without a dictionary and 20 delta-compressed variants) but that there will be 700+ variants in the wild if the 1-year resource expiration is reused for the dictionary. The cache could manage access frequency to store just the 21 \"hot\" variants for the last 10 days but the 20% of visits that are using older dictionaries will result in cache misses that require going back to the origin. If a developer shortens the resource cache lifetime to a shorter time, like 5 days, knowing their release schedule then the problem largely goes away. That would require tying the dictionary use to the expiration of the resource and not allow for it to be used even if it is still in cache (but expired). That would mean we are now imposing a short cache lifetime on an immutable resource in order to optimize the hit rate of the middle caches. It feels like explicitly managing the dictionary lifetime at the same place where the dictionary details are specified might be cleaner. For well-known URLs that are not immutable and change the content in-place, we want to be able to use the previous version(s) of a resource as a dictionary for the \"current\" version. To be able to do that we need the dictionary to be viable beyond the expiration time of the resource. Since we need to hard-clip the use of the dictionaries to prevent cache key issues we'd need to have the dictionaries use something like the time window to allow for a revalidation request of an expired resource while still using the dictionary but allowing for a hard-clip of the dictionary validity at the end of the stale-while-revalidate time. It's theoretically possible but it feels like we're tying the dictionary TTL into the cache heuristics in ways that might not be intended and might also be easy to get wrong.\n\"well-known\" is a term of art you might want to avoid here. I get your intent. It looks like there is a fairly good suggestion for . It might take a bit of time for a cache to be able to work out the length of time from the last access that represents a good time horizon for use. But that seems manageable. The resources that churn is an interesting one. It seems like you are assuming that the cache lifetime is short, such that a replacement version is requested AFTER the cache lifetime expires. Your option does seem like a possible way of managing this, but I'd be unsure whether stale resources are good for use. You'd have to specify that behavior (reasonable, but it would need to be specified). It might be that a longer cache lifetime also works well enough. 
Clients do tend to re-check earlier than expiration in many cases, which would ensure that the resource is fresh in use, while making the overlap work well.\nNAME NAME does using the resource max-age + stale-while-revalidate as and expiration for the client-side dictionary work ok for you (with a recommendation that immutable resources specify expirations based on their typical release cycle)?\nI think that having an explicit life time for the dictionary is a little bit more flexible than relying on cache-control headers, but I think that by itself should outlive most dictionary uses I can think of, especially given that the CDN is unlikely to keep those resources in its own cache beyond that period as well.\nNAME I'm not sure I understand (or we're not talking about the same part of the issue). The main concern for caches is how long clients consider a dictionary valid for which will dictate how long of a window a given resource will see request headers from. e.g. for a dictionary with a max-age of 1 year, clients will advertise that dictionary as being available for up to a year. If there are 1,000 updates to the resource used as a dictionary (not impossible for a js app that has 3 releases per day) then there will be 1,000 variants of the dictionary in the wild at any given time (with more frequent requests for more recent dictionaries but still some infrequent long-tail requests). To prevent cache misses, it would be beneficial to cap the dictionary validity to a smaller window where most return traffic falls in and have older clients download the full resource from cache rather than missing and going back to the origin. It also caps how many delta-compressed versions need to be created. The discussion point is if that \"client valid\" time for a dictionary should be its own separate value or if sites should adjust their max-age to be more reasonable instead if they want to limit the variants (capping the validity to the max-age but requiring a shorter max-age rather than blindly setting everything to 1-year).\nWorst-case, a separate ttl could be added later if using the resource expiration ends up being a problem blocking adoption.\nIt seems like there are two different mental models of how caching and cache-misses could work. The difference between them may be contributing to the confusion. The first is that the CDN is keeping \"resource A compressed against dictionary X\" as a cache item, and if a client makes a request for \"resource A compressed against dictionary Z\", the CDN needs to go back to origin to retrieve that combination. The second is that the CDN is keeping \"resource A\" and \"resource Z\" as cache items, and if a client makes a request for \"resource A compressed against dictionary Z\", the CDN either has Z locally or not. If not, it will compress resource A in a different way the client supports. (Of course, this doesn't prevent the CDN from having some amount of second-layer caching to avoid recompressing frequent combinations.) Personally, I tend to think the second tends to have better properties -- you don't have an explosion of cache entries or origin requests for the long tail, you simply fall back to a non-dictionary compression scheme. CDNs might choose to count usage as a dictionary as a cache hit for Z, making it stay in cache as long as it continues to be used by clients, but might not choose to fetch Z again if it ages out of the cache. 
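A rough sketch of that second mental model, in case it helps: the edge keeps plain resources and dictionaries as ordinary cache entries and only produces a delta when it still happens to hold the advertised dictionary, falling back to normal compression otherwise. All names are illustrative.

```typescript
interface EdgeCache {
  get(url: string): Uint8Array | undefined;
  getByHash(hash: string): Uint8Array | undefined;
  touch(hash: string): void; // refresh recency so frequently used dictionaries stay resident
}

// Assumed helper standing in for whatever delta encoder the edge uses.
declare function deltaCompress(target: Uint8Array, dictionary: Uint8Array): Uint8Array;

function serve(cache: EdgeCache, url: string, availableDictionary?: string) {
  const resource = cache.get(url);
  if (!resource) return { action: "forward-to-origin" as const };

  const dictionary = availableDictionary ? cache.getByHash(availableDictionary) : undefined;
  if (dictionary && availableDictionary) {
    cache.touch(availableDictionary);
    return { action: "dictionary-compressed" as const, body: deltaCompress(resource, dictionary) };
  }
  // Dictionary aged out or never seen: plain gzip/brotli, still no origin trip.
  return { action: "plain" as const, body: resource };
}
```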
(The interesting quirk from a CDN perspective is that a single resource might have multiple old versions still getting \"hits.\") The two then probably imply different ways of thinking about lifetimes. In the first case, you could want a separate lifetime on the dictionary to feed into how long the CDN needs to keep around the f(A,Z) item. In the second, the lifetimes of the individual objects likely suffice. (For that matter, if the client is identifying the version of Z it has, then it doesn't really matter whether that object is fresh or not, only whether the server still possesses the indicated version.)\nThe main use case for this is for something like a web app at build time where they pre-compress the resources at build time for some number of previous releases. At serving time, the CDN doesn't have any special dictionary logic and just honors the headers to vary on and the origin checks for pre-compressed assets using the requested dictionary (in my apps that I was doing this with, I just added the hash to the end of the file name and did a quick file check). On a miss, the origin sends the full resource which ends up in a separate cache key on the CDN. This allows for origins to support dictionary-compressing the content without their CDN or load balancer having to have implemented it.\nRight, that's a reasonable workaround for CDNs that don't know about this spec. But wouldn't we rather the CDNs could implement the spec and calculate the delta at the edge? Wouldn't that be the desired end state? Perhaps I'm wrong, or perhaps I've wandered into an architectural question bigger than this issue.\nI'm not 100% sure it's just a workaround and origin-compressing against previous releases at build time may be preferable for some cases. The most extreme case that comes to mind is something like a huge WASM app (like Photoshop) where the files are upwards of 50MB (well, huge relative to the usual web page resources). Presumably the CDN could still do the work to do the compression (maybe as an offline job for future requests) so it doesn't require the origin to do it necessarily but it's a lot more reasonable in the extreme cases. That said, I think we can still get buy without a separate TTL and adding one later if it ends up being useful can be done as an enhancement without breaking the existing case.\nI like red patches. Thanks."} +{"_id":"q-en-http-extensions-ab16705c9023c28bfe0b63be5d2d0a799111781e031f689322b8ee4fe2a63bcd","text":"I guess that's the best we can do, considering that Unicode is a minefield. :-\/\n... in a swamp.\nIs it worth dealing with the fact that 3629 doesn't actually define how to encode a sequence of characters, but we're calling it with one? We'd need to add something to this text, like \"for each character (or code point?) in inputsequence, append the result of applying UTF-8 encoding to character to bytearray\".\ncurrently does not consider what happens when UTF8 serialization fails, although this could happen with surrogate points. I believe it should explicitly say that. Alternatively, the introduction of display strings could say that only unicode scalars (URL) are allowed (but even then I'd add the extra precaution in the serializer). Finally, we may want to add an example because all of this is far from obvious.\nNAME and NAME pointed out that URL doesn't define the and operations that this draft uses. RFC 3629 only defines a character at a time. RFC 3629 also doesn't say to fail encoding or decoding in any particular situations. 
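For what it's worth, here is a very rough sketch of the serializer behavior being asked for: iterate the input by code point, fail on anything left in the surrogate range (exactly the case RFC 3629 never assigns a failure mode to), then percent-encode the UTF-8 bytes. The output syntax is only an approximation of the Display String form under discussion.

```typescript
function serializeDisplayString(input: string): string {
  for (const ch of input) {
    const cp = ch.codePointAt(0)!;
    // Iterating by code point means any value still in the surrogate range is
    // an unpaired surrogate, which has no UTF-8 encoding.
    if (cp >= 0xd800 && cp <= 0xdfff) {
      throw new Error("serialization failed: unpaired surrogate");
    }
  }
  const bytes = new TextEncoder().encode(input);
  let out = '%"';
  for (const b of bytes) {
    if (b === 0x25 || b === 0x22 || b < 0x20 || b > 0x7e) {
      out += "%" + b.toString(16).padStart(2, "0"); // %-encode '%', '"', and bytes outside printable ASCII
    } else {
      out += String.fromCharCode(b);
    }
  }
  return out + '"';
}
```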
It says \"The definition of UTF-8 prohibits encoding character numbers between U+D800 and U+DFFF\", but its definition of encoding doesn't actually prohibit those characters. It also says \"MUST protect against decoding invalid sequences\", but the definition of decoding doesn't fail in that case. URL is more explicit on these questions and might therefore be a better reference.\nNote that the recommended entry points for other specs are URL which are clear about how to handle BOMs as well.\nSeems reasonable - will try to work up a PR. I'm likely to keep the general reference to 3629 for UTF-8, and use encoding for the specifics.\nI would very very very much prefer not to have a normative reference on a \"living\" document here. Let's just add the missing needed on top of RFC 3629 to this spec. (and yes, I can make a concrete proposal)\nWe were clear how to handle BOMs, but what you suggest is a change on top of that.\nSee minimal PR.\n(Posted on the PR by accident:) Is it worth dealing with the fact that 3629 doesn't actually define how to encode a sequence of characters, but we're calling it with one? We'd need to add something to 4.1.11, like \"for each character (or code point?) in inputsequence, append the result of applying UTF-8 encoding to character to bytearray\"."} +{"_id":"q-en-http-extensions-69967a83c32d06af537207fec69214a3bd4921bb8158ac548ff8aad3fbca96a2","text":"The draft says \"Implementation of \"100 (Continue)\" support is OPTIONAL for clients and REQUIRED for proxies.\" What does that mean? Is the client allowed to fall over if it receives 100? (This is not a theoretical concern, many HTTP libraries stop listening for header blocks after the first one.)\nI've posted to reduce ambiguity here. RFC 9110 I think that makes it pretty clear that clients are supposed to tolerate unexpected 100s."} +{"_id":"q-en-http-extensions-60e9807c8ea2a14e663b458aba8c17054a4beacf25ceaf9b9edd1b6ac7fccd74","text":"This treats empty dictionary strings as the same as there being no dictionary and reqires all ID's have a non-zero length for the Dictionary-ID header to be sent.\nNAME could you PTAL and see if this handles the case you were concerned about?\nDictionary-ID was introduced to the Compression Dictionary Transport specification by . However, it is unclear whether empty string dictionary IDs are allowed. I don't think there is any use case for an empty string dictionary ID. So how about explicitly disallowing empty string dictionary IDs in the spec? NAME\nI'll make a zero-length string the default and have that mean \"no ID\". The dictionary ID will only be sent if it has a length > 0.\nThanks. lgtm."} +{"_id":"q-en-http-extensions-3c0432bd7c9eebf36cb4229c7687aa29a40237f7c07a817d094f1b662ea46a39","text":"This changes the field to be a list of destination strings to allow for matching multiple destinations (i.e. document and frame). This allows for matching an empty destination explicitly using an empty string. e.g.\nNAME could you PTAL and see if this looks OK for the destination matching?\nThe spec of compression dictionary transport says: But an empty \"\" is used for some features such as , , , , , Cache API, Download, prefetch and prerender. (See this ). So currently there is no way to set up a dictionary for such destinations. For example, I want to use a dictionary for fetch() API, but I don't want to use the dictionary for HTML documents. NAME How about using a string \"empty\" for such empty destination? We are already using \"empty\" for header of empty destination request. 
URL Also if we don't need to use an empty string \"\" for , I think should be a instead of a . We don't need to use double quotes.\nHaving special values feels a bit strange, particularly since it's only defined in the internal processing part of the w3c spec. If we use a list of strings we can use for an explicit empty destination and use an empty list for matching everything (and as the default). Does that seem reasonable?\nOK, sounds reasonable. Thanks.\noption was introduced in the Compression Dictionary Transport spec by . But if my understanding is correct, currently it is not allowed to setting multiple types of destinations. So if I want to use a dictionary for HTML files including top frame, and , I need to download the dictionary three times. NAME How about allowing setting multiple types of destinations? Example:\nSounds good (though document\/frame is probably the only practical use case). I'll switch it to an inner-list of tokens.\nlgtm Sorry for the late reply."} +{"_id":"q-en-http-extensions-1898ef26316acc47a7de461cc43fea3c52eaa300085ba0c0ad6cec30e4034b94","text":"Switched from using the ABNF field types to the human names (i.e. sf-dictionary to Dictionary) for all of the Structured Field header values. Switched dictionary \"type\" to a Token (from String).\nNAME could you PTAL at the change in referencing the header field types? Specifically, I wasn't sure if I should capitalize the types or not (I did since that's how the structured fields RFC referred to them).\nYep, this looks good. It's common to add something to you Notational Conventions section (or whatever) that says roughly \"This specification uses the following terms from [ref to SF]: String, Byte Sequence, Token, ...\""} +{"_id":"q-en-http-extensions-305c734b5c379f83eb4bf872ca7ce4179664bd7b691c3c10c36d404dfe373df2","text":"Changes: s\/tcpport\/targetport\/ (aligning with connect-udp) Remove support for multiple IPs and\nNAME The discussion at IETF 119 leaned toward removing the multiple-IP mode entirely. Are you suggesting that we should keep it?\nThis PR contains two changes; I'm not making a suggestion with regard to multiple-IP mode, but I think aligning the variable names in the template makes sense. Did both get dropped, or is that being discarded because it's in the same PR?\nOK, I've updated this PR to also drop the multiple-IP support entirely.\nThe target_host variable is only described vaguely. It should be defined more rigorously. For example, it doesn't specify whether IPv6 scope IDs are allowed or not. I'd recommend reusing the text from RFC 9298.\nOK, I've opened to make the definition more rigorous.\nFixed by\nThe draft currently defines the \"targethost\" and \"tcpport\" variables. We should replace \"tcpport\" with \"targetport\" for consistency with connect-udp. This consistency isn't just cosmetic, it simplifies reusing the same URI template for both protocols.\nThe distinction is deliberate, precisely because it facilitates using a single URI template for both protocols. As discussed in , the client can use a single template for various purposes by inspecting the variables that it contains: \"targetport\" -> connect-udp \"tcpport\" -> connect-tcp \"ipproto\" -> connect-ip \"dns\" -> DoH Using the same name (\"targetport\") for TCP and UDP would add complexity for client implementors. 
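A small sketch of the variable-sniffing idea described in that mapping, using the same associations; whether relying on it is legitimate is exactly what the rest of this thread debates, and the helper name is made up.

```typescript
function inferMechanisms(uriTemplate: string): string[] {
  const vars = [...uriTemplate.matchAll(/\{([^}]+)\}/g)].map((m) => m[1]);
  const mechanisms: string[] = [];
  if (vars.includes("target_port")) mechanisms.push("connect-udp");
  if (vars.includes("tcp_port")) mechanisms.push("connect-tcp");
  if (vars.includes("ipproto")) mechanisms.push("connect-ip");
  if (vars.includes("dns")) mechanisms.push("doh");
  return mechanisms;
}

// e.g. inferMechanisms("https://proxy.example/{target_host}/{tcp_port}/") -> ["connect-tcp"]
```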
Instead of being able to determine a priori whether a proxy supports TCP and UDP based on the template, the client would have to probe (and periodically re-probe) for this information and maintain dynamic state about which protocols are supported.\nWhat you're saying is that a client can infer support for certain protocols based on the variables present in the URI template. That's not correct, or more precisely it wasn't specified for connect-udp, and it definitely doesn't work for connect-ip (in connect-ip the variables are optional).\nOK, I see that it doesn't work for connect-ip, so connect-ip can only be supported by probing or with an out-of-band signal (or with an ecosystem restriction that connect-ip servers must support \"target\" and \"ipproto\"). It seems fine for connect-udp and connect-tcp because the variables are mandatory, so it would provide value to UDP+TCP clients (which seem likely to be common).\nIn that case we should state somewhere that passing around a URI template with these variables implies support for the corresponding mechanism. I'm not sure we'd get consensus on that though.\nThanks for the updates based on the discussion at 119"}
{"_id":"q-en-http-extensions-4ae30595db055d030099a820b7c5f47a5184f539bc36b759840821d9f88259cf","text":"This PR adds descriptions to all message exchange examples in the draft. It should make their purpose clearer and also help distinguish the two separate examples in section 4, as pointed out in .\nThank you all!\nI think there are two different creation examples being shown in URL but the presentation of them isn't great and could be a source of confusion. If so, let's break them apart by having a short sentence before each example explaining it.\nYes, these are two separate examples, which are confusing. Most of the examples are in need of a proper description. I can open a PR to address this. Thank you!"}
{"_id":"q-en-http-extensions-d2fff8e0a87888f823d1a2050a27475e19398ec2dde9c0392b97b84b8b37a2bf","text":"This PR adds explanations helping readers to understand when empty requests for upload creation or appending might be useful and why servers should support them. Closes URL\nThe field definitions allow for a client to send 0-length content. Sometimes this is useful, other times it might not be. For example, a client could perform upload creation by making a request with content-length: 0 and upload-complete: ?0. This would allow it to test creation without having to commit to an upload in the same message. A client could attempt to append zero bytes. Is this useful or should we prohibit it? This is a cheap way to keep an upload \"active\". However, that might be a vector for abuse by allowing a client to cause server-side state commitment. Depending on what we decide, we might need some text in the security considerations.\nJust to add to the two bullet points: This can also be used for clients\/servers that do not support 1xx responses by sending a 0 byte upload-complete: ?0 request to get the unique resource URL. This is for instance how does it. For cases where multiple upload-complete: ?0 requests are made, a final 0 byte upload-complete: ?1 request can be used to notify the server that the upload is complete. This can be useful in cases where the client wants to provide all the data before finalizing the upload to reduce the risk of losing the final response. 

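A hedged sketch of that zero-byte creation flow for a client that cannot observe 1xx responses; the method, header names, and use of Location reflect one reading of the draft at this stage and may not match the final protocol.

```typescript
async function createThenUpload(endpoint: string, data: Uint8Array): Promise<void> {
  // Step 1: a creation request with no content, just to learn the upload resource URL.
  const create = await fetch(endpoint, {
    method: "POST",
    headers: { "Upload-Complete": "?0" }, // zero-length body, upload not complete yet
  });
  const uploadUrl = create.headers.get("Location");
  if (!uploadUrl) throw new Error("server did not create an upload resource");

  // Step 2: append the actual content and mark the upload as complete.
  await fetch(uploadUrl, {
    method: "PATCH",
    headers: { "Upload-Complete": "?1", "Upload-Offset": "0" },
    body: data,
  });
}
```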
I don't really see a use case for sending multiple 0 byte upload-complete: ?0 but there might be some?"} +{"_id":"q-en-http-extensions-ffdaea7e240cf127a0e92977f0b1c56d280f205116b9c6e6bb82336da84ffede","text":"A server might allow GET requests to be sent to an upload resource and serve content, in which case a 200 status code is more appropriate for responses to HEAD requests. This PR lifts the requirement that a successful response must be a 200.\nThe current draft requires the server to respond with a 204 No Content status code for HEAD requests: [...] If the server considers the upload resource to be active, it MUST respond with a 204 (No Content) status code. This restriction is too strict and does not allow the server to respond with a 200 OK. Since the HEAD requests for resumable uploads do not include any specific additional header field, the server cannot tell apart whether a HEAD request is used for resumable uploads or used for starting a download, for example. For example, a server might allow to download the uploaded file through a GET request to the upload URL. HEAD requests to this URL should then return the same status code (e.g. 200) and header fields as the GET request would do. But the draft requires a server to always respond with 204, leading to a conflict. Can we also allow 200 for HEAD responses?\nI think this would be a good change to the spec. Maybe use similar language as with PATCH, where we allow the entire 2xx range but have a recommended status code? I don't think this would be an issue for any client. Is this a typo? Should be \"... 200 for HEAD...\"? >Can we also allow 204 for HEAD responses?\nThat's correct. Thank you for pointing out"} +{"_id":"q-en-http-extensions-7cc26962959672bf007c9df8f858e6c1b612e38c2a20e68809f39e9c03b68fb1","text":"allows another status code to be returned for offset retrieval requests. This might break interoperability between existing implementation, so this PR bumps the interop version to make this change more visible."} +{"_id":"q-en-http-extensions-8d65c80c88f10539257438eed0c7c2c7649782bec58c1da27e39e8769926c0fa","text":"This PR removes the requirement of checking the request's redirect chain during the computation of same-site-ness. This is being done because RFC6265bis is blocked by this work but we have yet to find a way to implement it in a web compatible way. In the interest of moving RFC6265bis forward the requirement is being removed.\nThe work to re-add the requirement back into RFC6265tris is being track by issue\nYeah, we did implement this - but we had to back it out (and re-spin stable, IIRC?) because it broke too many sites. :(\nCorrect, Chrome had to disable the change. Firefox had a similar experience.\nWe noticed a difference between Firefox & Chrome’s behavior for same-site cookies. Specifically, if a web page is requesting a resource whose final redirect target is same-site with the web page, . Firefox follows what was originally decided in URL and specified in recent i.e., Firefox looks at the whole redirect chain. For context: Firefox shipped SameSite=Lax by default this January and had to backpedal due to this (and some other webcompat issues). Firefox does not ship SameSite=Lax as the default setting for new cookies as of now. While we’re still working on our own metrics, we’ve noticed that Chrome is seeing about 1% of page loads potentially affected, which seems prohibitively high. 
At the same time, we would prefer not to weaken our implementation of explicit SameSite cookies, given that the redirect check was specifically added to prevent CSRF abuse scenarios. With these things in mind, we’ve been exploring the idea that we might treat cookies differently depending on whether the cookie was set to lax by default or is an explicit samesite cookie. NAME NAME it would be nice to hear some thoughts about this idea and also NAME how other browsers may have implemented . CC: NAME NAME NAME NAME NAME NAME\nYeah, I agree this is unfortunate. We did , but it broke a few sites and was backed out (or rather, put behind an off-by-default feature flag). NAME has been collecting better metrics so we can hopefully make progress on this. Can you give some more detail what you're thinking here? (also cc NAME\n\n\nWith these things in mind, we’ve been exploring the idea that we might treat cookies differently depending on whether the cookie was set to lax by default or is an explicit samesite cookie. Firefox is already enforcing that all redirects are same-site for cookies that have an explicit or strict attribute. We would like to keep that restriction of course. From my understanding the idea would be that only cookies that don't define their own SameSite attribute and are then \"lax-by-default\" we would ignore the redirects.\nAs a side note, Chrome does send with these redirected sub-resource requests, so internally the networking code does seem to know the correct same-site value of the request. Obviously we'd like to do the more secure behavior all the time, but we are committed to doing the specified secure behavior for explicit SameSite cookies. Firefox has enforced this for explicit SameSite cookies for maybe around a year and has not suffered a noticeable amount of web incompatibility. We are in the (slow!) process of gathering telemetry to measure the impact of this on laxByDefault vs explictly SameSite cookies. Our hope is that we can convince Chrome folks to honor the spec for explicit SameSite cookies so that users and sites will be secure as promised by the spec, and that Firefox will be less likely to run into Web Compatibility problems in the future because of this. And that if we must weaken the behavior of laxByDefault (as it appears we must based on site breakage) that we explicitly change the spec to say \"lax-ish by default\" cookies follow Chrome's current behavior of ignoring redirects and comparing only the originator and the final request URL.\nHello and sorry for my late response. Our (Chrome's) metrics continue to show around 1% of page loads containing 1 or more cookies that would be blocked by this enforcement. While I'm unable to share numbers (yet, I'll add them to URL once they're approved) I can say that the vast majority of values triggering this are unspecified (lax by default). I wonder then if many of these cookies are likely those being accessed after a POST request from some login or payment flow redirect chain. I don't have any data to back this hunch up but URL hints this may be the case. If it is then perhaps we can tie the redirect chain enforcement to the 2 min Lax+POST mitigation. I.e.: Allow unspecified cookies to be accessible after cross-site redirects for up to 2 minutes. While I worry this will further ossify that \"temporary\" mitigation it feels better than a blanket except for unspecified cookies. Given our metrics I don't think your proposal is unreasonable, but the more we can tighten the restriction the better. Any thoughts? 
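Purely to illustrate the shape of that proposal (limited to the subresource case and ignoring the method and top-level distinctions): explicitly marked cookies would keep the redirect-chain restriction, while unspecified, lax-by-default cookies younger than the Lax+POST window would still be attached. This is a sketch of the idea, not spec text.

```typescript
interface CookieInfo {
  sameSite: "strict" | "lax" | "none" | "unspecified";
  creationTimeMs: number;
}

const LAX_ALLOWING_UNSAFE_WINDOW_MS = 2 * 60 * 1000;

// Decision for a subresource request whose final URL is same-site but whose
// redirect chain makes it cross-site under the stricter check.
function attachToRedirectedSubresource(cookie: CookieInfo, nowMs: number): boolean {
  if (cookie.sameSite === "none") return true; // still subject to Secure etc.
  if (cookie.sameSite === "unspecified") {
    return nowMs - cookie.creationTimeMs < LAX_ALLOWING_UNSAFE_WINDOW_MS;
  }
  return false; // explicit Lax/Strict keep the redirect-chain restriction
}
```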
Does Firefox have any data that could support or refute my idea? (Tangentially: I'd still like to see the 2 min exception removed, but I get a headache even thinking about it.)\ntagging NAME NAME for visibility to the above comment. I think tying this to the 2 minute exception is fairly reasonable - 1% of page loads is sadly much too high to ship what the spec describes in any reasonable time frame. Perhaps some pages would still break, but it's worth a shot. And I agree we should change the spec to match (whatever becomes) reality here.\nWhat about the idea of differentiating between explicit and -defaulted ? Not ideal, but then at least there won't be a security regression for those who opt into . Then for -defaulted we could do something along the lines you propose, in name of webcompat.\nRight, we already differentiate between explicitly Lax () and unspecified () when it comes to the 2 min Lax+POST mitigation. Only the unspecified cookies qualify for that mitigation, explicitly Lax do not. (Perhaps it would have been better to name it Unspecified+POST, oh well) I'm suggesting to continue that pattern for the redirect chain exception, so I think we're in agreement.\nNAME Yes, that's aligned with our thinking right now. Do you have numbers that can help us gain more clarity for these exact cases (a redirect where the final URL is same-site, but the overall chain is to be considered cross-site, also called \"boomerang redirect\") and split up into explicit-lax and default-lax? We're still in the process of collecting those within Firefox, but are obviously interested in your insights so far.\nNAME I think URL has what you want. Those numbers are from roughly a week ago. tl;dr Between 80-90% of cookies are unspecified\/default lax. Cookies that are being newly written are on the higher end of the range, cookies being read are lower.\nHi! John from Apple WebKit here. If we standardize different behavior for default vs explicit SameSite=lax … We increase the complexity of SameSite cookies significantly. Is that desirable? Will developers be able to know the difference without this effectively being displayed as an additional cookie attribute in developer tools? Are we arriving at the desired end state or piling on more legacy and technical debt? It looks to me like this should instead be a fourth SameSite value, beside none, lax, and strict. Then we’ll have to decide if the new, fourth value is the default or the old explicit lax. I generally don’t like fixing legacy problems by increased spec complexity since it doesn’t help the web\/Internet progress. I will bring this up with the CFNetwork folks at Apple to see what they have to say.\nHi John, You bring up some good points. I'm curious if the CFNetwork folks had anything to add.\nI think what's under discussion is making the fourth value explicit in the specification. It apparently is already a thing in some implementations. I'm not sure we have a good reason to expose it in syntax though. But yeah, a clear name in the specification would help with tooling and such.\n+1\n\"Default\" is an explicit value for the in the algorithms and storage model defined in Section 5. It's not a part of the syntax. Unfortunately it looks like the algorithm only sets the if there's an explicit SameSite attribute in step 17 (\"Default\" is used for an otherwise unknown value). Presumably the should be initialized to \"Default\" somewhere, but that's an assumption and not explicit. 
The value of same-site-flag is technically undefined if there isn't a SameSite attribute and that's clearly not the intent.\nHi, Zhenchao Li from Apple working on CFNetwork. This change sounds quite complex to me. Although, if it means minimizing website breakage and tightening cookie restriction, I'm in favor of explicitly adding the behavior to the RFC. It does sound like we need to move from \"SameSite=lax by default\" to \"SameSite=[fourth value] by default\". Also, \"default\" still needs to be a separate state persisted in \"Storage Model\" so as not to be conflated any time. (I believe this is intended by latest draft, but as NAME pointed this might need some changes to the Storage Model section) NAME Could you help clarify more about \"2 min Lax+POST mitigation\"? I see it mentioned above but it would be great to discuss how the mitigation should behave exactly.\nURL is the spec language around the behavior. As a quick refresher: the goal is to allow unsafe http methods (commonly POST) to attach cookies that do not specify a attribute but are being treat as \"lax by default\". This behavior is intended for newly created cookies and so has a lifetime limit of up to 2 minutes. Let me know if you have any more specific questions.\nWe (NAME NAME NAME and I) discussed this at TPAC and came to the following suggestion: We think that browsers have to apply a compatibility measure here, and it could be including cookies on cross-site redirects if they’re younger than, for example, 2 minutes. We’ll collect metrics to get a better understanding of what that number should be. We will update \"Lax-Allowing-Unsafe\" in the spec to document that behavior, which would make Chromium compliant to that and Firefox could (AFAIU) ship a compliant version as well. We’ll continue to recommend \"Lax-Allowing-Unsafe\" as the default behavior in the spec. We’ll note that this is a compatibility mechanism that browsers may support or not support at their own discretion. It's not a behavior sites should opt into.\nI believe the \"same-site redirect chain consideration\" change should be removed from RFC6265bis, with the intention of adding it back to RFC6265Tris. RFC6265bis contains a number of improvements over RFC6265 that are being stalled due to UAs being unable to implement \"same-site redirect chain consideration\" in a web compatible way. My efforts examining metrics collected by Chrome haven't yet offered much insight into how this issue could be solved and I have no idea how much longer it could take. Because no UAs (to my knowledge) have shipped \"same-site redirect chain consideration\" I don't believe reverting the language will have any practical effect and shouldn't weaken any active protections. But publishing RFC6265bis as a new spec will have a number of benefits, including security\/privacy benefits for the web's users.\nThis seems like a reasonable, pragmatic step to me. The current limbo state isn't helping anyone :) cc NAME\nYes, +1 to documenting reality and no longer blocking the IETF process for 6265bis on this particular item (which I do agree we should treat as a security bug to be eventually solved).\nNot an editor, but LGTM (given that it reflects reality).LGTM, given the context."} +{"_id":"q-en-http-extensions-8b4a3170ce7600c71f58987eb77d7d0981a3ad44d758ab80029347268a9031dd","text":"Technologies such as HTTP\/2 will split the header into multiple fields. 
This PR modifies the spec to no longer expressly forbid this and instructs servers to be aware and capable of handling multiple incoming headers.\nIn the original 6265 as well as in the latest 6265bis draft, this paragraph exists: This is directly contradicted by RFC 7540 and then : I personally suspect that trying to split the header up into multiples is a recipe for disaster and interop problems even if done only over HTTP\/2, but maybe I'm just a pessimist. Would it be sensible to maybe address this contradiction (better) in the bis document?\nHTTP\/3, repeats the HTTP\/2 statement as well: Both these HTTP RFCs also clarify that: But still, that is for the receiving end and does not really restrict the sender when using HTTP\/2 or HTTP\/3.\nI agree that the inconsistency is bad. That said, why do you believe that using multiple field instances can cause an interop problem?\nI'm fairly sure Chromium does this, so if there were interop problems, we'd probably have noticed by now. :-) The phrasing is a little odd and probably could be improved. One resolution is to say this is an HPACK-specific encoding thing and not actually sending multiple fields. The full text from 9113 says: I.e. this field splitting starts and ends its life in HTTP\/2.\nA concern: Some servers have a maximum request header field size limit of around 8K. curl makes sure to never send a header longer than that as it risks getting a 400 back and basically blocking that particular request without it being very obvious to the user. This is a limit RFC6265(bis) does not mention but probably should because it is one of the more important ones. By splitting up the header into multiple ones, the total size of the sent cookies should probably still remain smaller than 8K since there is a risk that there is a h2\/3=>h1 conversion done that will pass those cookies on in a single header and thus be affected by this limit. It would if HTTP\/3 didn't have the same wording so it also applies there.\nBecause one RFC says we MUST NOT send them and another says we MAY. I bet there is a server side or two out there that then will not appreciate getting multiple such header fields, perhaps because the h2\/h3 special case is not taken care of. But this is just be being worried, I don't know for sure this is the case.\nAgreed that such a limit should apply to the total here. I think that would follow from treating this as a funny HPACK encoding of a single field, rather than semantically sending multiple fields. That's not precisely how it's described today, but it results in the exact same behavior on the wire. I.e. we can still get the compression benefits of compressing individual cookies separately, and resolve the slightly awkward phrasing.\nSafari started splitting Cookies in h2 and h3 a few years ago, it actually improved website compatibility where the h2 load balancer had a per-field size limit but the ultimate h1 origin did not.\nUnderstood and agreed. So yes, that \"MUST NOT\" doesn't make any sense, and it's inconsistent with the base HTTP spec. Would be interesting to find out how that got there.\nI can't recall the discussions about it on the http-state list back in the day (man, it's been over a decade already!) 
and I could not find any relevant threads when I casually searched the archives.\nChanged from SHOULD NOT to MUST NOT between draft 09 and 10: https:\/\/author-URL\nI think the wording is intended to be internal to HPACK\/QPACK -- that is, there can only ever be one Cookie field as far as HTTP is concerned, but Cookie is special-cased to be split when encoded. To deal with the special case, any occurrence of multiple values gets coalesced. I don't see this as a contradiction.\nAre there any remaining concerns before I close this issue? I agree with Mike's interpretation that this isn't a contradiction. The spec mandates that UAs generate a single header field which the server expects on the other end. I don't believe that another layer transparently muxing\/demuxing this field is an issue.\nI agree that Mike's interpretation is the sensible one to draw from this. I disagree that this is what the specs actually say in the written words.\nI can see the potential for confusion here. This seems to stem from 6265bis's tendency to re-implement other specs requirements, but in a somewhat inferior way. The cookie spec probably shouldn't care how the HTTP request is specifically formed so long as the invariant of a single cookie-string from UA -> Server is maintained by the time the server's cookie processing code receives it. URL would probably help here. In the meantime, I'd prefer if we didn't special case actions such as 9113's cookie header field splitting, so maybe some rephrasing could avoid the issue? Here's a rough draft of what I mean. My intention here is to make it clear that the UA (more specifically the cookie code) should create a single cookie-string to attach as the header's value. This would lead to a single header in the default case, but should still allow for header splitting.\n9113 and 9114 have to special-case Cookies for one simple reason: they're a semicolon-delimited list rather than a comma-delimited list. 9110 ; the text in question applies the same logic to Cookies despite the different delimiter and points out that for compression reasons it's particularly useful to split Cookie fields. (It might have been worth noting that other list fields might see similar gains from being split and recombined across the compression boundary, too.) I think NAME proposed text on generation is fine -- there should always be exactly one Cookie header generated from UA->server, though a lower layer might mux it before compression. After decompression, demuxing is mandatory when being passed into a context that doesn't know to expect being split. However, there's that exception in both RFCs: \"URL being passed into a non-HTTP\/2 context, such as an HTTP\/1.1 connection, or a generic HTTP server application.\" This text clearly supposes that anything version-specific which comes after RFC 7540 will know that Cookies may have been split; perhaps this document should prescribe tolerance of receiving that on the server side. It's then an implementation concern whether that tolerance takes the form of recombining them before processing.\nWhich document specifically?\nSorry, let me clarify antecedents: >This text [found in RFCs 7540, 9113, and 9114] clearly supposes that anything version-specific which comes after RFC 7540 will know that Cookies may have been split; perhaps [6265bis] should prescribe tolerance of receiving that on the server side. 
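The server-side tolerance could be as simple as the sketch below: if several Cookie field lines arrive (for example because they were split for HPACK/QPACK compression and never recombined), join them with "; " before handing a single cookie-string to the parsing code. Framework details are omitted.

```typescript
function cookieStringFromFieldLines(cookieFieldLines: string[]): string {
  return cookieFieldLines
    .map((line) => line.trim())
    .filter((line) => line.length > 0)
    .join("; "); // the recombination delimiter RFC 9113 / 9114 describe
}

// cookieStringFromFieldLines(["a=1; b=2", "c=3"]) === "a=1; b=2; c=3"
```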
That is, perhaps the Cookie spec should require that servers be able to handle Cookie fragments having been split into multiple Cookie headers, since this document has the opportunity to be aware of HTTP\/2+ behavior.\nThanks for clarifying, that makes sense.\nLGTM % tiny nits."} +{"_id":"q-en-http-extensions-2c175c748b717b8cd8890db5efef1b635cd20296cfab22ec738be7428f6c48c9","text":"Adds some additional notes and warnings to developers on how to more carefully use SameSite cookies.\nWe are reporting on the behavior of cookies on cross-site windows opened through that goes against our assumptions (implied by the current specification) on when and how cookies should be isolated. Breaking these assumptions can lead to security issues, allowing the bypass of cookies as a defense mechanism against CSRF and XS-Leak attacks. The following table shows when cookies are attached to requests associated with s and pages opened through (where :x: stands for not attached\/not accessible and :whitecheckmark: for attached\/accessible): The behavior of cookies regarding page reloads for pages opened through from a cross-site position is evident from the : Currently, the specification mandates blocking cookies on user-initiated page reloads but not on reloads performed through JavaScript. We think that cookies should also be blocked in this scenario, as this interaction between JavaScript navigations and cookies allows attackers to bypass the restrictions imposed by (see 2.1). Additionally, we verified that cookies, even when not sent in the top-level request that loads a page, are still attached to requests that load subresources for that page. This behavior is not explicitly defined in the specification. However, its counterpart (setting a cookie) is defined as follows: Although these requests are not cross-site, they are performed implicitly by a page navigated from a cross-site context, effectively enabling an attacker to perform authenticated cross-site requests (see 2.1), or cookie tossing attacks. Following this argument, when considering that user-initiated page reloads are deemed cross-site and therefore do not attach cookies, it seems reasonable to assume both JavaScript navigations and subresource loads should also not attach cookies, as they are also transparent to the user. It is possible to abuse the previously described behavior to bypass CSRF protections An attacker can forge same-site requests starting from a cross-site position by abusing gadgets that result in secondary same-site requests. An example of one of these gadgets is a JavaScript-based redirect that performs a redirection using attacker-controlled data. Here is a possible attack flow: Victim visits URL, setting the cookie cookie for domain . Victim then visits URL, which executes The request to URL is cross-site, so it does not attach cookies URL performs a navigation to the value of the URL parameter through This triggers a request to URL, which attaches cookies, allowing an attacker to make authenticated requests from a cross-site position A similar scenario was discussed in this . Besides JavaScript redirections, other requests, such as subresource loads, could also be exploited to subvert the security assumptions granted by cookies. cookies effectively defend against certain classes of XS-Leaks attacks. However, -based attacks are still possible since cookies are attached to top-level cross-site requests. 
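A minimal sketch of the kind of JavaScript redirect gadget described in that flow; the parameter name is made up. Because the navigation is initiated by the already same-site page, the follow-up request carries SameSite cookies even though the page itself was reached cross-site.

```typescript
// Vulnerable page script: navigates to an attacker-influenced URL with no validation.
const next = new URLSearchParams(window.location.search).get("next");
if (next) {
  // Launders a cross-site opener's choice of URL into a same-site, cookied request.
  window.location.href = next;
}
```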
Since cookies are not attached to top-level cross-site requests, they have been deemed an effective mitigation against -based XS-Leaks. As discussed in 2.1, gadgets that allow attackers to forge same-site requests from a cross-site position can be leveraged for XS-Leaks. An example would be a page that performs a stateful subresource load, which modifies the page DOM. If an attacker opens this page through , cookies will not be attached, but following subresource loads in the new page will attach these cookies, leading to state-dependent changes to the DOM, which could be measured by the attacker (through frame counting for example). Best regards\nThanks for your well researched issue! I’ll try and address some of the points you brought up. First and foremost, SameSite is meant as only one layer in a defense in depth strategy, its design and implementation tries to balance our security and privacy goals with the realities of the web as it exists today. This ultimately means that it must be used with smart site design that leverages the other security tools and features available. differs from the original site (let’s call it a.example) navigating itself by allowing a.example to continue to hold and manipulate a reference to the new window. But, every navigation a.example makes on that reference will have an initiator of a.example (On Chrome at least, I haven’t checked other browsers, but I believe it’s true for all). This means that, as far as SameSite is concerned, an a.example window navigating itself directly or opening a new window and navigating that instead are all the same type of top-level navigation. This appears to hold true for all the examples and notes in your post. This was a purposeful decision. We wanted to allow the site to be able to operate specific “landing pages” in order to allow third-party entities (maybe a sign in flow or payment processor) to navigate the user back and then the landing page could refresh to allow access to cookies in order to complete the process. Since the reload must be explicitly initiated by the page it would require the site developer to “opt-in” to the behavior and hopefully design the landing page with that behavior in mind. Conversely, we disallowed user initiated reloads because that doesn’t show an intention on the site’s behalf that it wants or needs cookies following that cross-site navigation. Users have been taught over the years to try reloading if a page looks broken, something that could occur if cookies aren’t present. Continuing to block cookies helps prevent users from inadvertently weakening their own security. That’s expected and intended. Samesite operates on a per request basis (or on a per JS call basis) so all requests, sub-resource or otherwise, are subject to the same rules. See . The spec assumes that since the page loading the same-site resources is in control of what it loads, it shouldn’t load anything that makes harmful changes. You’re correct that if a site exposes an API like you’ve given in 2.1 then it’s possible to give an attacker control + access to SameSite cookies. But to me it appears that the gadget is the footgun and I’m not sure how\/if SameSite should solve this. Allowing any cross-site entity to make requests as yourself is a dangerous tool! I mentioned above that pages are in control of what they load and as long as they only load safe resources that they’re expecting they should be fine, but if you have any examples to the contrary I’m very interested in hearing about them. 
Going back to how s are basically the same as any other top-level navigation, this is the intended behavior. Lax cookies shouldn’t be used for anything that can allow changes to anything important. Agreed, it’s definitely possible to leak state this way. I’m not familiar with the reasoning with why the single-origin policy allows references to read , but I’m sure the spec writers had a good reason. Hope that helped clear things up, let me know if you have any other questions or concerns.\nHi NAME thanks (as usual) for the kind and thorough response! I did a bit of additional investigation and found some discussions that we missed before opening the issue: URL and URL As far as I understood, the current behavior concerning page reloads has been defined only recently (2021). Before that, different browser vendors had . Even nowadays, browsers do not seem to fully agree on the intended behavior, or at least Firefox is inconsistent with Chrome's behavior since it does not attach SameSite=strict cookies after a call to . Some confusion on this matter is justifiable. :) I now see the rationale behind the design that enables specific pages to opt-in to SameSite=strict cookies upon a self-initiated reload. I always thought SameSite=lax cookies ensured compatibility with SSOs and payment flows, as lax cookies are attached to top-level navigations. But this \"strict + reload hack\" (so to say) puts the webpage in control instead of attaching cookies to all cross-site top-level requests. The downside of this behavior, as discussed, is that JS-based open redirectors become a gadget for bypassing SameSite=strict cookies. I'm not sure if the benefits outweigh the risks, but probably, the spec would benefit from a more explicit discussion of this trade-off. Concerning subresource loading, I would argue that in general this is not a valid assumption. In principle, here we have an untrusted context (the cross-site page opened via ) that automatically loads authenticated resources. This is somehow the dual of SRI, CSP, mixed content, and other security mechanisms that prevent a trusted context from loading untrusted resources that an attacker could abuse. We have not measured the prevalence of security issues caused by this behavior in the wild (i.e., xs-leaks introduced by an authenticated subresource being loaded), but I would expect a non-negligible number of affected instances. The current behavior surprised me, and it is possible that developers who care about protecting from xs-leaks are also unaware of the potential risks. For instance, the states that: citing only other reasons (e.g., the 2 minutes exception rule) as major limitations of SameSite cookies. To conclude, we just wanted to discuss this issue after encountering a behavior that diverges from our mental model of SameSite cookies and that we could not entirely justify it after reading the spec. As for JS-initiated reloads, if the behavior is here to stay, I think the spec could mention these limitations more explicitly to prevent developers from building their applications based on wrong assumptions. Thanks again for the discussion! Marco\nHi NAME As you found in those discussions, reload behavior wasn’t well aligned previously and I’m not sure how purposeful each UA’s behavior was at the time. I know Firefox has a number of SameSite related behaviors behind default disabled configs, this could be as well. I don’t have easy access to Firefox at the moment but I believe the nightly builds have these flags enabled. 
What happens if you try testing on a nightly build? SameSite=Lax is useful for flows that use top-level navigation, but if a flow is happening in an iframe then SameSite=Lax cookies would be blocked. The reload technique enables these iframe based flows to function. The issue here is that a page may still load sub-resources that require SameSite=Strict auth cookies even if the initial request didn’t include those auth cookies? Plainly, any site that allows unauthenticated access to a page which loads authenticated sub-resources is incorrectly using SameSite. Sites should return a 401 error, or some other denial, if the initial request does not include the appropriate cookies. Sadly though, I agree with you that there are probably sites that do not. Just to confirm, the behavior you’re talking about are: URL navigations are not as restrictive as iframe navigations. Non-user controlled reloads allow cookies Sub-resource requests send SameSite cookies Redirect gadgets allow access to SameSite cookies Did I miss any? What do you mean by “these limitations”? Are you referring to the redirect gadget?\nAny remaining comments\/concerns before I close?\nHi NAME Sorry, I missed your earlier reply in October, and I'm just now seeing it. Thanks for the ping. I tried Firefox 123.0.1 (stable) and Firefox 125.0a1 (nightly), and they behave the same way. SameSite cookies are not attached to requests initiated via , but they are attached to requests caused by . Chrome sends the cookie in both cases. Indeed. While open redirectors and script gadgets are known to carry their problems, I am concerned about issues enabled by authenticated subresource requests, as they are less known to web developers. I don't think so. All of the above, actually. I understand that it's challenging to change the behavior at this point. Still, I think it would be beneficial to explicitly document both redirection gadgets and authenticated subresource requests under Sec. 8.8 of the spec. This would help web developers to understand the risks better and adopt mitigations. Cheers, Marco\nStill LGTM."} +{"_id":"q-en-http-extensions-3d84dacfec9effa9cf4a4d4d0c2a3b9dad6fdadf336bfad4e0ce7d0f13cf98ce","text":"Rephrases the requirements for the attribute to be slightly broader in order to support . Currently the requirements specifically ask for a secure protocol, with this change they'll ask for a secure connection (as defined by the user agent). This allows UAs to support cookie access on http:\/\/localhost\nNo WPT that I can see. According to , Firefox fully supports localhost, Chrome supports but not cookie prefixes, and Safari doesn't have support but does have an open bug.\nGot it, thanks. If you're taking care of Chromium, and WebKit has an open bug, landing this seems reasonable. Thanks!\nWe are reporting inconsistent browser behavior regarding cookies and and prefixed cookies on . delegates the decision of what is a \"secure protocol\" to the user agent when describing how to handle cookies with the attribute. The affects both cookies with the attribute and cookies with name prefixes ( and ). rfc6265bis-12 states the following (): According to our tests: Firefox allows cookies to be set\/sent on , as well as and prefixed cookies. Chrome also allows cookies to be set\/sent with the attribute. However, and cookies are not allowed. Safari does not allow cookies to be set\/sent, and consequently, and cookies are also discarded since they require that the cookie's be set to true via the cookie attribute. 
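A condensed sketch of the "potentially trustworthy origin" style check being suggested; this simplifies the Secure Contexts algorithm (for example it ignores the full 127.0.0.0/8 range, file: URLs and user-configured exceptions):

```ts
// Rough approximation: an origin is treated as potentially trustworthy if it
// uses a secure scheme or resolves to loopback / localhost.
function isPotentiallyTrustworthy(originUrl: string): boolean {
  const url = new URL(originUrl);
  if (url.protocol === "https:" || url.protocol === "wss:") return true;
  const host = url.hostname;
  if (host === "localhost" || host === "localhost." ||
      host.endsWith(".localhost") || host.endsWith(".localhost.")) return true;
  if (host === "127.0.0.1" || host === "[::1]") return true; // loopback literals (IPv4 check simplified)
  return false;
}

// With such a check, Secure / __Host- / __Secure- cookies set over
// http://localhost:8080 would be accepted, matching the Firefox behavior above.
console.log(isPotentiallyTrustworthy("http://localhost:8080")); // true
console.log(isPotentiallyTrustworthy("http://site.example"));   // false
```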
The table below summarises browser behavior when setting cookies on , localhost subdomains, and . We noticed these inconsistencies after interacting with developers of Web frameworks while discussing cookie integrity issues. Allowing cookies with the attribute and cookie name prefixes to be used on , would enable adopters of Web frameworks to test their applications locally, removing the burden of setting up HTTPS locally to use these security features. As shown in the table above, Firefox fully supports this behavior, while Chrome's support is still partial. Following , Firefox moved from a \"secure protocol\" check when handling cookies, to using the definition of specification, where is considered \"Potentially Trustworthy\". The states that: >An environment is a secure context if the following algorithm returns true: Chrome made the same decision following , and support for and prefixed cookies on has been deemed \"in scope\" (). Safari is aware of this inconsistency, but there is still no decision regarding changing their behavior (, , ). As the cookie specification does not clearly define the behavior on localhost, we could consider reusing the notion of from the Secure Context specification, instead of suggesting a user-agent-dependent \"secure protocol\" check on the scheme of the request-URI, in point 13 of . Best regards\nI think it's reasonable to modify the phrasing to include potentially trustworthy origins.\nThis is a good change, thanks. I don't think we have WPT in place for this (and I'm not entirely sure that we can?), but have you looked into the support for this change across browser engines? I'm generally in favor of landing it as long as it's reflecting (near) reality."} +{"_id":"q-en-http-extensions-a12afbd4aa9b7709c9ce85857d120d0a9ebe8d6a53a18c8c2c6f1267a3b94637","text":"There were a few cases of the section linking to itself, which I find distracting, so lets remove them.\nGood catch, thank you!"} +{"_id":"q-en-http-extensions-da7391446eb350ea9e8f52b214fbe8897371232b6767be25fdf0bfb1cb37c80e","text":"Please correct me if I'm wrong, but IIUC, the crux of this PR is that to allow any client that can present a valid pair of Authorization and Signature-Auth-Context header fields to obtain a protected resource. Assuming that if we would be allowing such behavior, I wonder why we need to require clients to export keys from TLS session, rather than just coming up with some arbitrary authentication context and present it to the server? Or if it is the case that there are situations in which we have to require clients to use keys exported from TLS session, doesn't it mean that there are cases where servers must ignore Signature-Authentication-Context header field or drop it?\nNAME no this is only for trusted intermediaries. If a server receives a Signature-Auth-Context header from an untrusted source then it drops it. That's somewhat mentioned on line 470, but I'll add normative text."} +{"_id":"q-en-http-extensions-d9ecd049936990a7bd1615963e42bd480c441b5c9fe0658c25f9c91901c65e8d","text":"This change makes it explicit that the content encoding is (same as ) and is (same as ). See the discussion in for why larger ZStandard values were not selected. 
This carries some nuance for what encoding to use when compressing for maximum benefit as the resource size crosses beyond 8 MB but that is more of an editorial discussion about the individual compression algorithms and probably better laid out in blog posts or explainers than in the spec.\nNAME could you PTAL and make sure the language matches what you're working on for the update?\nThanks. I switched the content-encodings definitions and IANA registrations to a list with references back to the algorithm RFCs. For the IANA registrations I removed the window size from the text and linked it to this document (where the window sizes are defined) instead of to the compression algorithms directly. Let me know if a more verbose description of the need to specify the window sizes would be useful or if just being explicit about what they are is enough.\nContext: URL aims to clarify the window size limit for the Content Encoding token as it wasn't clear in RFC8878. Since the Compression Dictionary Transport draft hasn't been published yet, we should also add some window size considerations to it for and to maintain interoperability. I think in Chrome we ended up using . NAME\nNAME could you comment on how this impacts Brotli dictionary decode? Is the dictionary independent from the compression window (so we can adopt the same defaults as or do we need to specify something relative to the size of the dictionary? NAME NAME for ZStandard, to allow for delta-encoding large resources, is reasonable or should it be ? We have use cases for delta-encoding wasm applications where the dictionary can easily be upwards of 50 MB.\nFor Zstandard, the states that the entirety of the dictionary is accessible as long as the input is <= Window Size. Therefore, to give an extreme case, it's possible to have a 50 MB dictionary, and it's entirely accessible even if the Window Size is only 1 KB, but only as long as the data to de\/compress is <= 1 KB. As soon as we reach 1 KB + 1, reaching the dictionary is no longer valid. For delta compression, this has consequences. Presuming the reference is ~50 MB, and the updated content is also ~50 MB, one would need a window size of ~50 MB to ensure access into the dictionary during the entire reconstruction process. But if the updated content is just a bit bigger, say ~51 MB, a ~50 MB window size would allow delta operation only during the first ~50 MB, the last ~1 MB would have to be compressed without dictionary. If follows that the size of the dictionary is irrelevant for the Window size. What matters is for how long one wants to access the dictionary while rebuilding the content. And as a general approximation, one wants this dictionary during the entire reconstruction process, so the window size is effectively the size of the content to rebuild. Zstandard has a \"windowless\" mode which states exactly that : the size of the \"window\" is the size of the content. For this to work, one has to set the size of the content at the beginning of the compression process . Setting the content size is either an automatic operation when data is provided entirely upfront (), or, when data is provided in small increments (), it must be stated explicitly at the beginning (). will automatically trigger the \"windowless\" mode when the declared content size is smaller than the set Window size (effectively reducing the memory requirement for the decoder side). I'm not sure if there is another way to trigger it. 
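As a small illustration of the point above (dictionary references stay valid only while the data produced so far fits in the window, so full delta reconstruction effectively needs a window that covers the payload), a toy helper with made-up sizes:

```ts
const MB = 1024 * 1024;

// Bytes of decompressed output during which external-dictionary references
// remain valid, given the chosen window size.
function dictionaryReachableBytes(payloadSize: number, windowSize: number): number {
  return Math.min(payloadSize, windowSize);
}

console.log(dictionaryReachableBytes(51 * MB, 50 * MB) / MB); // 50 - last 1 MB loses the dictionary
console.log(dictionaryReachableBytes(50 * MB, 50 * MB) / MB); // 50 - dictionary reachable throughout
```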
Note that the decoder will be informed of the Window Size requirement by reading the frame header, and may decide to not proceed if this value is deemed too large. For example, has a default 128 MB limit, and will refuse to decompress payload requiring more that that, unless explicitly authorized to employ more memory (). Just for reference, offers a \"large delta compression\" mode with the command . It has made us somewhat familiar with some of these issues. In situations where the amount of data to compress and compare is very large, we now prefer to both the dictionary content and the destination file. It has proven useful to reduce memory pressure.\nThanks. That makes things a bit more complicated for the case where we want to require a minimum window size that the server can trust that all clients advertising content-encoding will be able to decode. For the dictionary case it might be cleanest to use the same 128MB limit for window sizes as the default zstd limit (knowing that the client won't allocate that much unless it actually gets a stream that uses it).\nHello. Shared dictionary is placed in the address space between backward reference window and static dictionary. In other words it does not interfere with normal compression window, but using it makes \"static dictionary\" references more expensive. We decided to order it that way, because shared dictionary works almost the same way as extra window. Sorry for the late response. Best regards, Eugene.\nExtra note. As you can see brotli allows backward distances behind window. IIRC, regular (as opposed to large-window) brotli allows distances up to ~56MiB, i.e. larger shared dictionaries would not be useful (== referenced)\nThanks. For the dictionaries to be usable for a broad set of apps (including large wasm bundles), I'm considering spec'ing: Brotli: Regular 16MB window, allowing for delta-compression of resources up to ~50MB. ZStandard: 128MB window, allowing for delta compression of resources up to around that size, depending on where the duplicated bytes are in the updated resource. The main difference with regular content-encoding is on the ZStandard side where the window is 8MB. For platforms that are resource constrained and are not willing to support a 128MB window they would be limited to Brotli (and then decide how large of a dictionary they are willing to keep and use). Another option on the ZStandard side would be to allow for a maximum window that is a multiple of the dictionary size (i.e. 8MB or 2 * dictionary size whichever is larger with a 128MB max). That way the client can have an upper limit that it knows in advance before advertising as being available. That adds complexity to the encoding side though. We have already seen cases where people want to delta-compress resources > 50MB (wasm apps) where they had to use ZStandard instead of Brotli because of the Brotli limits so I don't think a small limit like 8MB for ZStandard would work with this use case.\nThat massive resources exist is not sufficient justification to build a system that will not be available to some users. I would prefer a far smaller default. If only to encourage the use of smaller resources overall. Defining a separate codepoint for massive windows is possible, say . The scalable option also seems reasonable. 
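A sketch of how the scalable option could look from the client side, where the client derives the window it would have to support from the dictionary it already holds and simply declines to advertise the encoding when that exceeds its budget. The 2x multiplier and the 8 MB / 128 MB bounds are just the numbers floated above, not settled values:

```ts
const MB = 1024 * 1024;

// Window the client would have to be prepared to allocate for a given
// dictionary under the "8 MB or 2 x dictionary size, capped at 128 MB" variant.
function requiredZstdWindow(dictionarySize: number): number {
  return Math.min(128 * MB, Math.max(8 * MB, 2 * dictionarySize));
}

// Client-side decision: only advertise dictionary-zstd support when the
// implied window fits the memory this client is willing to spend.
function shouldAdvertise(dictionarySize: number, memoryBudget: number): boolean {
  return requiredZstdWindow(dictionarySize) <= memoryBudget;
}

console.log(shouldAdvertise(1 * MB, 8 * MB));   // true  (window = 8 MB)
console.log(shouldAdvertise(50 * MB, 64 * MB)); // false (window = 100 MB)
```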
Clients can then choose to offer dictionaries when they exceed available resources.\nI'm ok using the same 8MB window ( I believe) that we use for for and note that the effectiveness of delta compression for responses > 8MB using may be limited. The scalable option concerns me that people are likely to get it wrong and it won't be obvious and the decompression will start to fail as the dictionary and resource size changes. NAME does that sound about right?\nThe alternative floated where 's window size is derived from the size of the dictionary is much more attractive to me. (That is, cap the window size log at , where the spec picks an somewhere in the range .) In such a scheme, where the window size is deterministically derived from the size of the dictionary--a resource the client must already have access to--the client is free to decline to advertise support for for dictionaries which would require the use of a window larger than the client is willing to support. I.e., in resource-constrained environments, Chrome could avoid having to allocate more than 8 MB windows for by not advertising support for on requests where the resolved available dictionary is larger than . Put another way, given that the client will have the tools to make these choices for themselves, I would rather give the client discretion here rather than just bake a flat restriction into the spec.\nNAME would you be supportive of updating the zstd command-line tooling to add an option that would set the window to the larger of 8MB or the dictionary size (i.e. match the content-encoding behavior)? My main concern is that it is easy to document and get people to use a fixed command-line for a 8MB window but if it involves a more complicated algorithm then it is likely to break in subtle ways that they may not notice until the resources are bigger than the dictionary (and are bigger than 8MB).\nNAME yeah, that would be pretty straightforward. Happy to add that.\nIf we end up that bundles the dictionary hash with the compressed stream we can make sure the tooling for that file format handles the window size stuff for zstandard automatically (my main concern is around unexpected breakage from different server and client window requirements that only trigger when resources grow). The language will be a bit complex but will probably have to look something like\nIf I may chime in for some tactical consideration : In an update scenario, it's common for a new resource to be slightly larger than the old one it replaces. The expectation is that we generally tend to add stuff over time. A formulation which caps the window size to the size of the dictionary would introduce problems for all these update cases, which are expected to be the majority of update scenarios. On the other hand, we don't want this increase in size to be completely uncapped, in order to keep control of memory size. And if the new resource is really much larger than the old one, the benefits of delta compression are dubious anyway. So the suggestion is to authorize a window size which is slightly larger than the external dictionary size. As to how much larger exactly, a suggestion here is by +25% (x1.25). I would expect a large majority of update scenarios to fit into this picture, with the new resource being slightly larger than the old one by a few %. For the less common cases where the new ressource is much larger than the old one, Zstandard could still employ the dictionary but only for the beginning of decompression. 
The compression process will have to select a window size which remains under the specified limit (x1.25 external dictionary size). As a technical insight, note that byte-accurate window size is only a thing when the window is the entire decompressed content. Otherwise, the window size is encoded in a condensed format, which can only represent increments by 1\/8th of a power of 2. This is not a problem to determine this \"largest window size under the specified limit\", but due to limited accuracy, you can already guess what would be the problem if we were to keep the formulation \"not more than the larger of 8 MB and the size of the dictionary\". Imagine that the external dictionary size is bytes, and the new resource to delta-compress is , aka 1-byte larger than the dictionary. With the initial formulation, since the window size is limited to the size of the dictionary, the compressor cannot use as a window size, because it's 1 byte larger than the limit. So, it will have to find the next best window size under the limit. Due to granularity limitations, this next best window size is 8 MB. So delta compression would proceed fine for the first 8 MB, but for the last 1 MB, data would have to be compressed without the benefit of the dictionary. With the newly proposed limitation to \"x1.25 the dictionary size\", delta compression would remain active in the above scenario during the entire decompression process. And it would remain fully active if the new resource to decompress is 10 MB, 11 MB or 11.24 MB. But if the new resource is 11.25 MB or larger, then the compressor will automatically limit the window size to 11 MB (since it's the largest representable window size under the limit, which, in this particular instance, is ), achieving the wanted memory size control.\nSince neither the server nor the client may know what the resource size will be when compression starts (say, in a streaming HTML case), won't the 1\/8th power of 2 granularity apply even for cases where the resource is the same size as (or sometimes smaller than) the dictionary? For resources slightly larger than the old one, the dictionary would still be used for the majority of the resource but the last bit would fall back to regular window-based compression. I agree that having the dictionary available for the whole resource would provide better compression but I'm not sure where to draw the line. Something like a 25% fudge factor can be quite a bit when you start talking about 50 MB+ resources that might be close to where the client wants to draw the line anyway. They may have allowed the dictionary at 50 MB but not allow its use at all if it needed 62 MB available for a window. A similar case could be made for the non-update case where you have a custom-built dictionary but the resource being compressed exceeds 8 MB. Only the first 8 MB of the resource would be able to use the dictionary. Hopefully at that point the compression window would have most of the stuff that was in the dictionary but the dictionary becoming unavailable is something we should expect for some cases with ZStandard, even if we use a fudge factor. Is there a way to convey the 1\/8th power of two granularity on the window size or is it something that the ZStandard libraries will take care of automatically for users (i.e. setting the window size to 11 MB when the caller sets it to 11.24 MB)? 
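To make the granularity question concrete, a small sketch that enumerates the window sizes a Zstandard frame header can express (windowBase * (1 + mantissa/8), with windowBase a power of 2) and picks the largest one that stays under a given cap. The 50 MB dictionary case lines up with the 48 MB and 60 MB figures that come up in this discussion:

```ts
const MB = 1024 * 1024;

// Largest window size encodable in a Zstandard Window_Descriptor that does not
// exceed `limit`. Representable sizes are windowBase * (1 + mantissa/8) with
// windowBase = 2^(10 + exponent) and mantissa in 0..7 (per the frame format).
function largestRepresentableWindow(limit: number): number {
  let best = 0;
  for (let exponent = 0; exponent <= 31; exponent++) {
    const base = Math.pow(2, 10 + exponent);
    for (let mantissa = 0; mantissa <= 7; mantissa++) {
      const size = base + (base / 8) * mantissa;
      if (size <= limit && size > best) best = size;
    }
  }
  return best;
}

// With a 50 MB dictionary: capping the window at the dictionary size forces a
// 48 MB window, while the proposed 1.25x tolerance allows a 60 MB window.
const dictSize = 50 * MB;
console.log(largestRepresentableWindow(dictSize) / MB);        // 48
console.log(largestRepresentableWindow(1.25 * dictSize) / MB); // 60
```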
I'm wondering how much of this implementors are going to have to deal with directly.\nIt depends if the compression side (server) is informed upfront of the size of the payload to compress. If not, then indeed you are right, the compressor will have to set a Window Size, and the granularity rule applies. So for example, let's assume the payload to delta-compress is 50 MB, and the reference dictionary is also 50 MB. Let's then assume it implies a window size limit of 50 MB (dictionary size) and the compressor is not informed of the size of the payload. In which case, the compressor will have to set a Window Size, and it will have to settle for the largest representable window size under the limit, which happens to be 48 MB. Therefore, the dictionary won't be used during the compression of the last 2 MB of the payload. With the proposed +25% tolerance rule, the limit would become 62.5 MB, and the largest representable window size under the limit is 60 MB. Now the dictionary remains active during the entire compression of the 50 MB payload. When the data to decompress ends up being smaller than the window size, it's obviously not a problem for the decompression process: the final decompressed size is still properly reported at the end. It's just that the intermediate buffer allocation will be larger than it could have been. Note that uses for the allocation of buffers, so the address space is just reserved, but the physical memory may never be used if it's not needed (i.e. if there is no write action on the later segment part). +25% is more like a suggestion, this can certainly be discussed if some other limit feels better. I liked that it's a (binary) clean factor, and it's larger than , which covers the scenario above, and gives a little bit of breathing room if the newer payload is a bit larger than the older reference one. I see this scenario as being very distinct from delta-compression. For delta-compression, we expect large segments to be identical between the reference and the payload to compress, so we want this ability to access the reference during the entire compression of the payload. The price to pay is that the reference is very specific, and is not designed to offer any serious benefit to any other payload. Custom dictionaries are different: they are supposed to carry the most relevant \"commonalities\" detected across a set of samples. Presuming a large enough sample set, the common components are not expected to be very large. In such a scenario, the custom dictionary is expected to be (relatively) lightweight (compared to delta-compression), likely in the <= 100 KB range. Consequently, by the time the payload to compress reaches 1 MB, it should already contain all references it needs to compress the rest of the file. I would not expect a large impact of the custom dictionary beyond that point, let alone beyond 8 MB. The plan would be for the library to do this adaptation automatically. The user would not have to know anything about it, though it's better if the user understands that the library selects the closest window size which is <= to the limit, so that they are not surprised if, upon inspection, the final window size is not byte-identical to the set limit. A good API documentation will likely help to reach this objective If by implementors you mean \"server side programmers\", they would have to know what's the implicit window size limit when a client requests a format (essentially the rule you are currently defining). 
Then they would set this limit as a compression parameter, and the library will take care of the adaptation into window size.\nYes. (Keeping in mind of course that most traffic in question will presumably be under the 8 MB floor and so none of this complexity will apply.) Right now the zstd library has limited controls about setting detailed window constraints. We'll need to add a new option to give users that fine-grained control. We can do that with an eye towards user simplicity. Frankly, I'm also very open to also adding a flag (and equivalent compression parameter in the library) that would just look at the size of the dictionary (which zstd necessarily has access to) and would configure the window accordingly. In that case, users wouldn't have to do any math at all.\nOK, I chaged the structure and language so that they are no longer streams of bytes compressed with a given algorithm but are now specifically and . The references for those named formats have the window restrictions (which for Zstandard includes the 1.25x dictionary requirement). Currently the byte stream itself is still a native Brotli\/Zstandard stream, just with specific settings but this also makes it cleaner if we decide to . PR with the update is here: URL\nlgtm other than the comment"} +{"_id":"q-en-http-extensions-150071266372cb523a6e67c8b1fa7aec8d9ccffeb6bbcc00548be36d30b0f41c","text":"Changed it to a proper normative reference. For\nMerging this. Can always clean it up more if it is still a concern.\nFor example: Usually, we add something like URLPattern to the references, then that.\nSorry, didn't mean to close the issue. Feel free to close this out if the current language looks OK (also happy to update it if it needs some more tweaking).\nGoing to close this out now. Thanks for taking the time on this."} +{"_id":"q-en-http-extensions-5e364b4ad9594fb2d1bac6563765c67d38e1fb3b756db485a146baf9a82454da","text":"The CONNECT-TCP draft makes informative references to other drafts which use \"extended CONNECT\" (CONNECT-UDP, CONNECT-IP), but does not have a normative reference to the draft which defines this use of the CONNECT method. It references the \"extended CONNECT\" method without supplying a reference for this term. This draft spells out the required values for each pseudo-header, but omits the requirement to set SETTINGSENABLECONNECT_PROTOCOL. A reference to RFCs 8441\/9220 is probably needed, and a reminder that the method in this document can only be used if the server sends this setting first.\nThat'll do."} +{"_id":"q-en-http-extensions-8612c5d068ee1c778a5f15d67b15174a6bf1223878898500c74bc414893d37cb","text":"This changes the maximum ZStandard compression window to be the larger of 8 MB and the size of the dictionary. This allows for clients to limit the maximum window size to as low as 8 MB if they like (by not advertising support) while also allowing for clients to use larger dictionaries with ZStandard than they would be able to if the global limit was 8 MB.\nNAME and NAME could you also take a look and see if the language looks ok?\nContext: URL aims to clarify the window size limit for the Content Encoding token as it wasn't clear in RFC8878. Since the Compression Dictionary Transport draft hasn't been published yet, we should also add some window size considerations to it for and to maintain interoperability. I think in Chrome we ended up using . NAME\nNAME could you comment on how this impacts Brotli dictionary decode? 
Is the dictionary independent from the compression window (so we can adopt the same defaults as or do we need to specify something relative to the size of the dictionary? NAME NAME for ZStandard, to allow for delta-encoding large resources, is reasonable or should it be ? We have use cases for delta-encoding wasm applications where the dictionary can easily be upwards of 50 MB.\nFor Zstandard, the states that the entirety of the dictionary is accessible as long as the input is <= Window Size. Therefore, to give an extreme case, it's possible to have a 50 MB dictionary, and it's entirely accessible even if the Window Size is only 1 KB, but only as long as the data to de\/compress is <= 1 KB. As soon as we reach 1 KB + 1, reaching the dictionary is no longer valid. For delta compression, this has consequences. Presuming the reference is ~50 MB, and the updated content is also ~50 MB, one would need a window size of ~50 MB to ensure access into the dictionary during the entire reconstruction process. But if the updated content is just a bit bigger, say ~51 MB, a ~50 MB window size would allow delta operation only during the first ~50 MB, the last ~1 MB would have to be compressed without dictionary. If follows that the size of the dictionary is irrelevant for the Window size. What matters is for how long one wants to access the dictionary while rebuilding the content. And as a general approximation, one wants this dictionary during the entire reconstruction process, so the window size is effectively the size of the content to rebuild. Zstandard has a \"windowless\" mode which states exactly that : the size of the \"window\" is the size of the content. For this to work, one has to set the size of the content at the beginning of the compression process . Setting the content size is either an automatic operation when data is provided entirely upfront (), or, when data is provided in small increments (), it must be stated explicitly at the beginning (). will automatically trigger the \"windowless\" mode when the declared content size is smaller than the set Window size (effectively reducing the memory requirement for the decoder side). I'm not sure if there is another way to trigger it. Note that the decoder will be informed of the Window Size requirement by reading the frame header, and may decide to not proceed if this value is deemed too large. For example, has a default 128 MB limit, and will refuse to decompress payload requiring more that that, unless explicitly authorized to employ more memory (). Just for reference, offers a \"large delta compression\" mode with the command . It has made us somewhat familiar with some of these issues. In situations where the amount of data to compress and compare is very large, we now prefer to both the dictionary content and the destination file. It has proven useful to reduce memory pressure.\nThanks. That makes things a bit more complicated for the case where we want to require a minimum window size that the server can trust that all clients advertising content-encoding will be able to decode. For the dictionary case it might be cleanest to use the same 128MB limit for window sizes as the default zstd limit (knowing that the client won't allocate that much unless it actually gets a stream that uses it).\nHello. Shared dictionary is placed in the address space between backward reference window and static dictionary. In other words it does not interfere with normal compression window, but using it makes \"static dictionary\" references more expensive. 
We decided to order it that way, because shared dictionary works almost the same way as extra window. Sorry for the late response. Best regards, Eugene.\nExtra note. As you can see brotli allows backward distances behind window. IIRC, regular (as opposed to large-window) brotli allows distances up to ~56MiB, i.e. larger shared dictionaries would not be useful (== referenced)\nThanks. For the dictionaries to be usable for a broad set of apps (including large wasm bundles), I'm considering spec'ing: Brotli: Regular 16MB window, allowing for delta-compression of resources up to ~50MB. ZStandard: 128MB window, allowing for delta compression of resources up to around that size, depending on where the duplicated bytes are in the updated resource. The main difference with regular content-encoding is on the ZStandard side where the window is 8MB. For platforms that are resource constrained and are not willing to support a 128MB window they would be limited to Brotli (and then decide how large of a dictionary they are willing to keep and use). Another option on the ZStandard side would be to allow for a maximum window that is a multiple of the dictionary size (i.e. 8MB or 2 * dictionary size whichever is larger with a 128MB max). That way the client can have an upper limit that it knows in advance before advertising as being available. That adds complexity to the encoding side though. We have already seen cases where people want to delta-compress resources > 50MB (wasm apps) where they had to use ZStandard instead of Brotli because of the Brotli limits so I don't think a small limit like 8MB for ZStandard would work with this use case.\nThat massive resources exist is not sufficient justification to build a system that will not be available to some users. I would prefer a far smaller default. If only to encourage the use of smaller resources overall. Defining a separate codepoint for massive windows is possible, say . The scalable option also seems reasonable. Clients can then choose to offer dictionaries when they exceed available resources.\nI'm ok using the same 8MB window ( I believe) that we use for for and note that the effectiveness of delta compression for responses > 8MB using may be limited. The scalable option concerns me that people are likely to get it wrong and it won't be obvious and the decompression will start to fail as the dictionary and resource size changes. NAME does that sound about right?\nThe alternative floated where 's window size is derived from the size of the dictionary is much more attractive to me. (That is, cap the window size log at , where the spec picks an somewhere in the range .) In such a scheme, where the window size is deterministically derived from the size of the dictionary--a resource the client must already have access to--the client is free to decline to advertise support for for dictionaries which would require the use of a window larger than the client is willing to support. I.e., in resource-constrained environments, Chrome could avoid having to allocate more than 8 MB windows for by not advertising support for on requests where the resolved available dictionary is larger than . Put another way, given that the client will have the tools to make these choices for themselves, I would rather give the client discretion here rather than just bake a flat restriction into the spec.\nNAME would you be supportive of updating the zstd command-line tooling to add an option that would set the window to the larger of 8MB or the dictionary size (i.e. 
match the content-encoding behavior)? My main concern is that it is easy to document and get people to use a fixed command-line for a 8MB window but if it involves a more complicated algorithm then it is likely to break in subtle ways that they may not notice until the resources are bigger than the dictionary (and are bigger than 8MB).\nNAME yeah, that would be pretty straightforward. Happy to add that.\nIf we end up that bundles the dictionary hash with the compressed stream we can make sure the tooling for that file format handles the window size stuff for zstandard automatically (my main concern is around unexpected breakage from different server and client window requirements that only trigger when resources grow). The language will be a bit complex but will probably have to look something like\nIf I may chime in for some tactical consideration : In an update scenario, it's common for a new resource to be slightly larger than the old one it replaces. The expectation is that we generally tend to add stuff over time. A formulation which caps the window size to the size of the dictionary would introduce problems for all these update cases, which are expected to be the majority of update scenarios. On the other hand, we don't want this increase in size to be completely uncapped, in order to keep control of memory size. And if the new resource is really much larger than the old one, the benefits of delta compression are dubious anyway. So the suggestion is to authorize a window size which is slightly larger than the external dictionary size. As to how much larger exactly, a suggestion here is by +25% (x1.25). I would expect a large majority of update scenarios to fit into this picture, with the new resource being slightly larger than the old one by a few %. For the less common cases where the new ressource is much larger than the old one, Zstandard could still employ the dictionary but only for the beginning of decompression. The compression process will have to select a window size which remains under the specified limit (x1.25 external dictionary size). As a technical insight, note that byte-accurate window size is only a thing when the window is the entire decompressed content. Otherwise, the window size is encoded in a condensed format, which can only represent increments by 1\/8th of a power of 2. This is not a problem to determine this \"largest window size under the specified limit\", but due to limited accuracy, you can already guess what would be the problem is we were to keep the formulation \"not more than the larger of 8 MB and the size of the dictionary\". Imagine that the external dictionary size is bytes, and the new resource to delta-compress is , aka 1-byte larger than the dictionary. With the initial formulation, since the window size is limited to the size of the dictionary, the compressor cannot use as a window size, because it's 1 byte larger than the limit. So, it will have to find the next best window size under the limit. Due to granularity limitations, this next best window size is 8 MB. So delta compression would proceed fine for the first 8 MB, but for the last 1 MB, data would have to be compressed without the benefit of the dictionary. With the newly proposed limitation to \"x1.25 the dictionary size\", delta compression would remain active in above scenario during the entire decompression process. And it would remain fully active if the new resource to decompress is 10 MB, 11 MB or 11.24 MB. 
But if the new resource is 11.25 MB or larger, then the compressor will automatically limit the window size to 11 MB (since it's the largest representable window size under the limit, which, in this particular instance, is ), achieving the wanted memory size control.\nSince neither the server not the client may know what the resource size will be when compression starts (say, in a streaming HTML case), won't the 1\/8th power of 2 granularity apply even for cases where the resource is the same size (or sometimes smaller) than the dictionary? For resources slightly larger than the old one, the dictionary would still be used for the majority of the resource but the last bit would fall back to regular window-based compression. I agree that having the dictionary available for the whole resource would provide better compression but I'm not sure where to draw the line. Something like a 25% fudge factor can be quite a bit when you start talking about 50 MB+ resources that might be close to where the client wants to draw the line anyway. They may have allowed the dictionary at 50 MB but not allow it's use at all if it needed 62 MB available for a window. A similar case could be made for the non-update case where you have a custom-built dictionary but the resource being compressed exceeds 8 MB. Only the first 8 MB of the resource would be able to use the dictionary. Hopefully at that point the compression window would have most of the stuff that was in the dictionary but the dictionary becoming unavailable is something we should expect for some cases with ZStandard, even if we use a fudge factor. Is there a way to convey the 1\/8th power of two granularity on the window size or is it something that the ZStandard libraries will take care of automatically for users (i.e. setting the window size to 11 MB when the caller sets it to 11.24 MB)? I'm wondering how much of this implementors are going to have to deal with directly.\nIt depends if the compression side (server) is informed upfront of the size of the payload to compress. If not, then indeed you are right, the compressor will have to set a Window Size, and the granularity rule applies. So for example, let's assume the payload to delta-compress is 50 MB, and the reference dictionary is also 50 MB. Let's then assume it implies a window size limit of 50 MB (dictionary size) and the compressor is not informed of the size of the payload. In which case, the compressor will have to set a Window Size, and it will have to settle for the largest representable window size under the limit, which happens to be 48 MB. Therefore, the dictionary won't be used during the compression of the last 2 MB of the payload. With the proposed +25% tolerance rule, the limit would become 62.5 MB, and the largest representable window size under the limit is 60 MB. Now the dictionary remains active during the entire compression of the 50 MB payload. When the data to decompress ends up being smaller than the window size, it's obviously not a problem for the decompression process: the final decompressed size is still properly reported at the end. It's just that the intermediate buffer allocation will be larger than it could have been. Note that uses for the allocation of buffers, so the address space is just reserved, but the physical memory may never be used if it's not needed (i.e. if there is no write action on the later segment part). +25% is more like a suggestion, this can certainly be discussed if some other limit feels better. 
I liked that it's a (binary) clean factor, and it's larger than , which covers the scenario above, and gives a little bit of breathing room if the newer payload is a bit larger than the older reference one. I see this scenario as being very distinct from delta-compression. For delta-compression, we expect large segments to be identical between the reference and the payload to compress, so we want this ability to access the reference during the entire compression of the payload. The price to pay is that the reference is very specific, and is not designed to offer any serious benefit to any other payload. Custom dictionaries are different: they are supposed to carry the most relevant \"commonalities\" detected across a set of samples. Presuming a large enough sample set, the common components are not expected to be very large. In such a scenario, the custom dictionary is expected to be (relatively) lightweight (compared to delta-compression), likely in the <= 100 KB range. Consequently, by the time the payload to compress reaches 1 MB, it should already contain all references it needs to compress the rest of the file. I would not expect a large impact of the custom dictionary beyond that point, let alone beyond 8 MB. The plan would be for the library to do this adaptation automatically. The user would not have to know anything about it, though it's better if the user understands that the library selects the closest window size which is <= to the limit, so that they are not surprised if, upon inspection, the final window size is not byte-identical to the set limit. A good API documentation will likely help to reach this objective If by implementors you mean \"server side programmers\", they would have to know what's the implicit window size limit when a client requests a format (essentially the rule you are currently defining). Then they would set this limit as a compression parameter, and the library will take care of the adaptation into window size.\nYes. (Keeping in mind of course that most traffic in question will presumably be under the 8 MB floor and so none of this complexity will apply.) Right now the zstd library has limited controls about setting detailed window constraints. We'll need to add a new option to give users that fine-grained control. We can do that with an eye towards user simplicity. Frankly, I'm also very open to also adding a flag (and equivalent compression parameter in the library) that would just look at the size of the dictionary (which zstd necessarily has access to) and would configure the window accordingly. In that case, users wouldn't have to do any math at all.\nOK, I chaged the structure and language so that they are no longer streams of bytes compressed with a given algorithm but are now specifically and . The references for those named formats have the window restrictions (which for Zstandard includes the 1.25x dictionary requirement). Currently the byte stream itself is still a native Brotli\/Zstandard stream, just with specific settings but this also makes it cleaner if we decide to . PR with the update is here: URL\nlgtm with nits, thank you!LGTM!"} +{"_id":"q-en-http-extensions-ee87c757f50f99a3d90ddfda372984d1fbeddeaea1b6d91b55a333c3a56d8933","text":"This eliminates the response header and moves the hash of the dictionary into the payload of the response. Specifically, it adds 36-byte header in front of the compressed stream which is a 4 or 8-byte signature followed by the 32-byte SHA-256 hash digest. 
The signature is different for the two different compressions: Dictionary-Compressed Brotli: (0xFF followed by ) Dictionary-Compressed Zstandard: (32-byte skippable Zstandard frame) This also changes the content-encoding names for the different schemes to make it clear that they are a new format and not a raw brotli or Zstandard stream ( and ). From the cli, the stream is equivalent to: or\nFWIW, it doesn't look like it would be particularly difficult to use a skippable frame instead of a custom header in the Zstandard case if that's the way we want to go. It's 4-bytes bigger (for the frame size) so it will essentially have an 8-byte \"magic number\" (since the size will always be the same). It won't help with the creation of the files but if there's benefit to have the existing tooling be able to decode them (without verifying the hash because it will just skip it) we can go that route. On the client side, it feels like it will be cleaner on the implementation to use the same flow for verifying the dictionary hash and magic number for both encodings without having to deal with different sized magic numbers (and always buffer and remove the first 36 bytes) but it's not a significant difference.\nStill LGTM\nNAME could you please take a look and make sure I got the byte sequence for the skippable frame correct? I tested with the zstd cli and it was able to decode the file with the header without issue when created like this:\nThanks. Updated and added an example with the zstd cli.\nThis is a discussion we've had several times when defining content codings, but it seems like there is never really a single answer. Should the content coding be self-describing, or can it rely on metadata in fields? The compression dictionary work uses header fields to identify which compression dictionary is in use. Originally, the client would indicate a dictionary and the server would indicate use of that dictionary by choosing the content coding. This made interpreting the body of the response quite challenging in that you needed to have a request in order to make sense of it. More recently, the specification has changed to having the client list available dictionaries, with the server echoing the one it chooses. Both use header fields. There is a third option, which is to embed the dictionary identification (which is a hash of the dictionary) ahead of the compressed content. This has some real advantages: Requests could use delta encoding. There is - of course - a real question about availability of the dictionary, but we have that problem with encrypted content codings and keys. That is a problem we solve for responses with a field. It might be solved using the same technique, combined with the client-initiated content coding advice. Or applications can use their own systems. A field costs a lot more bits. This would not benefit from header compression, so it might be a net loss in some cases, but in many cases it would be a distinct win because 32 bytes of hash in the body of a request is far less than a field name and field value containing a base64 encoding of those same 32 bytes. It also comes with disadvantages: Header compression could remove a lot of the cost in bytes. 
It might be ever-so-slightly more complex for encoding and decoding to have to split the first 32 bytes off the body of a message.\nOn the encoding side, the main downside is that the cli tooling for brotli and Zstandard don't currently do the embedding so tooling would have to be added to prepend the hash to the files in a way that isn't standard for either (yet anyway) and for manually decompressing the files. Zstandard has identifiers for the dictionaries when using non-raw dictionaries but both assume that raw dictionaries will be negotiated out of band. Technically it would be a pretty trivial modification for clients and servers that are doing the work, I'm just a bit concerned about the developer experience changes (and whatever needs to be done to get both brotli and Zstandard to understand what amounts to new file formats).\nI opened issues with brotli and ZStandard to see if they would consider adding it to the format of their respective file formats. If it's an optional metadata tag that is backwards compatible with existing encoders and decoders I could see it providing quite a bit of value, even in the non-HTTP case of dictionary compression.\nThere was some discussion in the ZStandard repo about possibly reserving one of the skippable frame magic numbers for embedding a dictionary ID but there's some level of risk for collision with people who may be using those frames for watermarking or other application-specific use cases. As best as I can tell, the brotli stream format doesn't have a similar frame capability for metadata or application-specific data. We could create a container format that held the dictionary ID and stream (header basically, not unlike zip vs deflate) but that feels like a fairly large effort and the tooling would have to catch up to make it easy for developers to work with. At this point I'm hesitant to recommend adding anything to the payload itself that the existing brotli and zstd tooling can't process. Being able to create, fetch and test the raw files is quite useful for the developer workflow and for debugging deployments. Would it make sense to allow for future encoding formats to include a dictionary ID in the file itself and make the header optional in those cases (and make the embedded ID authoritative)? I'm not sure if that would make sense in this draft since this one is limited to the 2 existing encodings and it would be addressed in a new draft when the new encodings are created or if it makes sense to allow for it here without requiring that and explicitly require embedded ID's.\nI don't think that you want to change the zstd or brotli format, only the content coding. That is, something like this: This does partly work against the idea that you might have a bunch of files that contain compressed versions of content. You can't just say . And you can't just decompress without some processing. What you get in exchange is a message payload with the advantages I described.\nI do think we need to define a file format for it if we go down this path to ease adoption and it should probably have a magic signature at the beginning. Maybe something like gzip is to deflate but with a simple 3-byte signature followed by the hash followed by the stream data. Assuming we create a cli tool to do all of the work for compressing\/decompressing them, I'll ping several of the current origin trial participants to see how it will fit into their workflow. 
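As a rough sketch of the kind of cli/CI step being discussed here - prepending a signature and the dictionary's SHA-256 to the stream the existing brotli/zstd tools already produce - with placeholder signature bytes and file names (none of these values are settled in this thread):

```ts
import { createHash } from "node:crypto";
import { readFileSync, writeFileSync } from "node:fs";

// Hypothetical 3-byte signature; the real magic bytes would come from the spec.
const SIGNATURE = Buffer.from([0xff, 0x44, 0x43]);

// dictionary.bin is the raw dictionary; payload.br is an already
// dictionary-compressed stream produced by the existing tooling.
const dictionaryHash = createHash("sha256").update(readFileSync("dictionary.bin")).digest();
const compressedStream = readFileSync("payload.br");

// Proposed layout: signature || SHA-256(dictionary) || compressed stream.
writeFileSync("payload.dcb", Buffer.concat([SIGNATURE, dictionaryHash, compressedStream]));
```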
Something like: Dictionary-compressed file\/stream format with a 3-byte signature indicating hash and compression type, followed by dictionary hash, followed by raw compressed stream. Something like for Dictionary-Compressed Brotli with Sha-256 hash and for Dictionary-Compressed Zstandard with Sha-256 hash. Create a cli script (cross-platform, available in npm, bun, apt, etc) that creates the files and (optionally) calls out to brotli and zstd to do the compression. We could include any special ZStandard compression window handling in the format spec as well as the cli tool. I'm assuming something like this would be better off as it's own draft that this references or do you think it makes sense to define it here? I agree there are significant benefits to having the hash paired directly with the resource. I just want to be careful to make sure whatever we do fits in well with the developer workflow as well.\nAdding the header would add some work to typical CI workflow. At least in my case, the diff file stream is created using the cli using the options. Adding the header would require creating a new file with the header and add the generated stream. If this can be done in a published script it would simplify the logic (FYI - URL currently does yet not support brotli bindings, see URL) but not a requirement IMO. Do I understand correctly that with this idea implemented the header would not be necessary anymore? That would simplify some CDN configurations which currently needs to add it. Related to this - what in this case would be header value? current spec is it should be which I understand as brotli dictionary stream. If we add the header, how would a client be able to determine which one it is? Should we add another content encoding for this?\nYes, this would eliminate the need for the header. The would be and the definition of would be changed to reference the stream format of a brotli stream with the header prefix. There would be no content-encoding support for a bare brotli dictionary-compressed stream. At that point, maybe renaming them to and for the content encodings would also make things clearer.\nI have a question. If we can use the new and in the header, why do we need to have the \"3-byte signature indicating hash and compression type\" in the response body?\nIt's technically not required but it makes it safer and easier to operate on the files outside of the http case. For example, here's some discussion on the brotli issue tracker from 2016 asking for the same: URL\nI don't have a problem with doing that in this document, if that is the way people want to go. I personally don't think that a new format is needed because this is a content-encoding, not a media type. But if a media type (and tooling) helps with the deployment of the content-encoding, then maybe that is the right thing to do. Either way, I don't think that you should make that decision (it's a significant one) without having a broader discussion than just this issue.\nNAME - could you help me understand this one? (mainly, why wouldn't we be able to apply delta encoding in the absence of an in-body hash?) This will definitely add complexity (e.g. the need to define a new file format that would wrap dictionaries, along with required tooling). It's not currently clear to me what the advantage of this would be. Do we predict the use of this HTTP mechanism outside of an HTTP flow? 
If so, concrete examples would be helpful.\nNAME is the complexity you are concerned about limited to tooling and spec process or do you also see it as being more complex after we are at a good state with tooling? Assuming the brotli and Zstandard libs and cli tools have been updated to add a flag for \"embedded dictionary hash\" format streams, does one start to look better? For me, the hash being embedded in the stream\/file removes fragility from the system. It blocks the decode of the file with the wrong dictionary and enables rebuilding of the metadata that maps dictionaries to compressed files if, for some reason, that metadata got lost (file names truncated, etc). It also feels like it simplifies the serving path a bit, bringing us back to \"serve this file\" based on the request header vs \"serve this file and add this http response header\". On the size side of things, I expect it will likely be a wash. Delta-compressed responses will be a few bytes smaller because the header (name and value) is larger than the file header (10's of bytes, not huge by any means). In the dynamic resource case where multiple responses will re-use the same dictionary, the header can be compressed away with HPACK\/QPACK, making the header case a bit smaller. I don't think it's a big change in complexity\/fragility one way or the other but it does feel like there are fewer moving pieces once we get to a place where the tooling is taken care of to have the file contents themselves specify the dictionary they were compressed with. The need for tooling changes would delay adoption a little bit so it's not a free decision but I want to make sure we don't sacrifice future use cases and simplicity for an easier launch.\nI'm not sure that I see the tooling process as critical, relative to the robustness. And performance will be a wash (though I tend to view the first interaction as more important than repeated interactions). For tooling, if this content is produced programmatically, then there should be no issue. Integrated tooling can do anything. If content is produced and stored in files, then seems fine. That's a one-way operation generally. I don't see the definition of new media types to be necessary as part of that. I don't see these files being used outside of HTTP, ever. Maybe you could teach the command line decompression tools to recognize the format and complain in a useful way, but that's about the extent of the work I'd undertake there. You could do as NAME suggests as well, which would be even more useful, but given the usage context, that's of pretty narrow applicability.\nI'm mostly concerned about official tooling and the latency of getting them to where developers need them to be. Compression dictionaries already require jumping through some , due to latency between brotli releases and adoption by the different package managers. At the same time, if y'all feel strongly about this, this extra initial complexity won't be a deal breaker.\nI think there are enough robustness benefits that it is worth some short-term pain that hopefully we will all forget about in a few years. On the header side of things, how do you all feel with respect to a bare 32-byte hash vs a 35-byte header with a 3-byte signature followed by the hash (or a 3-byte signature followed by a 1-byte header size followed by the hash to allow for changes as well as 4-byte alignment)? 
It's possible I'm mentally stuck in the old days of sniffing content but since the hash can literally be any value, including accidentally looking like something else, I like the explicit nature of a at the beginning of the stream. It essentially becomes (or whatever the signature is) so it doesn't add significantly to the complexity but it does add 3 bytes, potentially unnecessarily.\n+1 to adding a 3 byte magic signature if that's the route we're taking.\nI'm ambivalent on the added signature, so I'll defer to others. I can see how that might help a command-line tool more easily distinguish between this and a genuine brotli-\/zstd- compressed file and do the right thing with it. On the other hand, it's 3 more bytes and - if the formats themselves have a magic sequence - the same tools could equally skip 32 bytes and check for their magic there.\nI think I am having the same response as you all in the opposite direction, where to me it feels preferable to make it HTTP's problem so that my layer doesn't have to deal with additional complexity. :smiley: But if I overcome that bias and accept that it would be nice to avoid an additional HTTP header, here's what I think: If we used a Zstd skippable frame, that would change the stream overhead to 8 bytes (4 byte magic + 4 byte length) + 32 byte hash. But it would mean that existing binaries would be able to decode the frames and avoid ecosystem fragmentation. That's a lot more attractive to me than a new format along the lines of which (1) existing tools won't understand and (2) isn't a format I'd feel comfortable auto-detecting by default in the CLI because isn't all that unlikely a beginning of a string in the way that is (non-ASCII, invalid UTF-8). And I've thought about it more and I'm actually not concerned about colliding a skippable frame type with someone else's existing use case. It would be more of a problem if we were trying to spec a universal solution, but if we scope this to just this content-encoding, then we're free to reserve whatever code points we want and attach whatever semantics we want.\nThanks. I guess the main question I have is if there would be benefits to Zstandard itself for the files to be self-contained with the identification of the dictionary that they were encoded with or the possibility of mismatching dictionaries on compression and decompression (or being able to find the matching dictionary given just the compressed file) are issues that are HTTP-specific. I'm fine with specifying that the encoded files carry the dictionary hash (before the compressed stream data) and having different ways for Zstandard and Brotli to do that actual embedding. That said, the tooling gets more complicated on the encode and decode side to generate the frame and insert it in the correct place in the compressed file and at decompression time, that makes the client more format-aware, having to parse more of the stream to extract the dictionary and then re-send the full stream through the decoder (at least until the decoder library becomes aware of embedded dictionary hash).\nNAME Is this really what you want in this case? The decoder needs to know where to find the dictionary, so wouldn't you want this to be a breaking change to the format such that a decoder that has a dictionary is fine and a decoder that doesn't knows to go get one. ... And - importantly - an older decoder will abort. 
(I confess that I don't know what the zstd frame extension model is and didn't check.)\nI don't think we're contemplating a model where Zstd can ingest a frame and figure out on its own where to find the dictionary it needs and then load it and use it. I expect that the enclosing application will parse this header, get the hash, and then find the dictionary and provide it to Zstd. The advantage to using the skippable frame is that then you can provide the whole input to Zstd (including existing versions) and it will work rather than have to pull the header off.\nOne thing I realized now - by assuming that the hash length is 32 bytes, we're assuming the hash would remain SHA-256 forever. That might be how things play out, but it that we'd need to change hashes at some point. If we were to do that, having a fixed length hash as part of the format would make things more complex.\nHaving a fixed-length hash (or fixed hash) as part of a content coding is perfectly fine. If there is a need to update hashes, it is easy to define a new content coding.\nNAME the doesn't use skippable frames and uses the same custom header for Brotli and Zstandard (with different magic numbers). I can switch to using a skippable frame instead (which effectively just becomes an 8-byte magic number since the frame length is always the same) but I'm wondering if it makes sense and is worth adding 4 bytes. It won't help in creating the files so it's just for decode time and the main benefit you get is that you can use the existing zstd cli and libraries to decode the stream without skipping the header but those also won't verify the dictionary hash, they will just skip over it. That might not be a problem but part of the decode process will be to fail the request if the hashes don't match.\nIn talking to the Brotli team, it looks like Brotli already and validates it during decode to select the dictionary to use. It uses a \"256-bit Highwayhash checksum\" so we can't use the hash to lookup a dictionary indexed by SHA-256 but we can use it to guarantee the decompression doesn't use a different dictionary (and the existing libraries and cli tools already use it). NAME when you were concerned about the , was it for both lookup and validation or just validation? I'm just wondering if we can use the existing brotli streams as they already exist or if we should still add a header to support locating the dictionary by SHA-256 hash.\nThere are a few places in HTTP where the interpretation of a response depends on something in the request. Everyone one of those turns out to be awful for generic processing of responses. That's my main reasoning, so I'd say both: lookup first, then validation. That is, my expectation is not that the client has a singular candidate dictionary, or that it commits to one in particular. So in the original design, when you had the client pick one, that didn't seem like a great idea to me. For validation, Highwayhash (NIH much?) doesn't appear to be pre-image resistant, so I'd be concerned if we were relying on the pre-image resistance properties of SHA-2. Can we be confident that this is not now subject to potential ployglot attacks if we relied on that hash alone? That is, could two clients think that they have the same resource, but do not?\nI wouldn't be comfortable switching hashes given the wider and proven use of SHA-256 and agree on the lookup case to allow content-encoding with different and multiple dictionary negotiations. 
Looks like a separate header is still the cleanest so the main remaining question is if we use the same style of header for both or a skippable frame for Zstandard. I also need to update the brotli citations to point to the shared dictionary draft format instead of the original brotli format. On Sun, May 12, 2024 at 7:58 PM Martin Thomson NAME wrote: >\nSorry for the late comment. It seems to me that conveying this information in the content eases the integration with Signatures and Digest. It is not clear to me if there are still possible cases where the response does not contain all the information required for processing.\nThanks\nLooks like you just need to update the PR summary to reflect this change.LGTMOverall, I like this. I want to flag for others that it doesn't use the skippable frame concept."} +{"_id":"q-en-http-extensions-c6d1dd36992a8a6d3f8ddbe6a929fb295626e6aaa9e3e7e9cffd1c71a50aa667","text":"Adds language to the to not send the dictionary-aware encodings when a dictionary is not available (or when one is available but not being used). The dictionary-based encodings don't work without a dictionary (they contain a header that has the dictionary hash) and should not be advertised when they are not really available.\nThanks"} +{"_id":"q-en-http-extensions-24b6e122f3bc133e174f3ef9795c2da3a48ae47aee25454ed6ed1d4f342c56e7","text":"This simplifies the CORS processing language in the doc, leaving all of the client-side readability processing to fetch and clearly spelling out the server responsibilities.\nThe new language is much better, but I think it still misses the point. We don't need to be shy about why these requirements are in the spec and under what circumstances they are necessary. I suggest starting with an explanation:\nThanks. I pulled in your explanation and tweaked the language to make it clear that precautions should be considered, one of which may be to not use dictionary-based content encoding. I like the new version a lot better as it allows for cacheable, public data to still use dictionary compression even if it is fetched in a cross-origin context and isn't client-readable. I'll have to be careful with the fetch language as, currently, the client will fail a cross-origin non-readable response in case the server hasn't applied relevant protections. It may be reasonable to relax that and allow for the client to process it and put the burden on the server but it might also let a weak configuration go unnoticed (though since attacker-controlled dictionary content isn't particularly practical in this model, maybe it's not really a concern).\nFor the existing draft, there is a lot of unnecessary confusion regarding features of fetch, like CORS, that don't make any sense from a security perspective. That's not what CORS is capable of covering, nor how it is implemented in practice, so reusing it doesn't make any sense. cc NAME Moved from private ID\nCORS covers privacy from a browser perspective as far as the readability of responses relative to the origin of the containing document which is exactly the context that it is needed for here. The concern that it takes care of is to make sure that responses that shouldn't be readable from the document context of the client can't be exposed to oracle timing attacks (because there won't be any client-opaque responses). 
HTTP itself doesn't really have the same document framing context and need for protecting read access of individual responses on a shared connection by clients running in different document contexts.\nFrom NAME I still don't see how CORS applies to the draft. If the origin is choosing both the available dictionaries and whether or not to apply them to a given response, why is the context of a request relevant to the nature or credibility of the compressed response? Neither one is controlled by the context. The origin server isn't going to attack itself. The client might choose not to advertise an available dictionary based on the context, but I don't see how ignoring a server's valid response makes a difference. I assume the content of the response after decompressions would still be restricted by CORS, but that should already be defined by fetch; I wouldn't expect this document to specify it.\nThe concern raised by the security team is that a client could abuse dictionary compression and a timing attack to expose the contents of a CORS-opaque response. The contents themselves would already be prevented from being exposed directly but there's a theoretical risk that an origin serving private data through dictionary compression could expose that data unexpectedly in a cross-origin timing attack. I don't know enough about the practicality of timing-based oracle attacks using dictionary compression and the level of control that an attacker would need to have on either the payload of the response or the payload of the dictionary but in an abundance of caution we are recommending that servers avoid the problem entirely and don't use dictionaries in cross-origin opaque responses (even if it would be blocked from being processed in the client).\nThe shared-brotli draft has a fairly good write-up on the CRIME attack concerns which is what the CORS mitigations prevent (by making sure there is no mix of public and private data and only allowing for public data): URL Most of the CORS protections are on the client side and not something that needs to be exposed to HTTP or for origins to be concerned about. If a server never mixes public and private data or doesn't talk to a client that needs to partition that kind of data (i.e. not serving to a browser) then it can also ignore the one remaining case called out in the spec. If there's a chance that public\/private data will me mixed AND the server is serving content to a browser (or other client that implements CORS) then there is a risk of a timing attack being able to reveal the private content if the crossorigin not-allowed case uses dictionary compression.\nClosing this as the need for CORS for the client protection should be well documented at this point (and mostly applies to the WHATWG side of things in fetch processing but there is one case here where people should at least be aware of a possible attack vector if not handled).\nI have no problem with this being a requirement for the fetch implementation (or for script-enabled browser implementations in general). However, I still don't understand how the requirements in 9.3.2.* will pass muster for the IESG and RFC Ed reviews without a normative reference to the Fetch spec. A link to a web page isn't enough. Personally, I think these details should be moved to the fetch spec (where the right people will read it). 
The bits needed for the IETF are covered in 9.3.1 and the first two paragraphs of 9.3.2: and then everything else in 9.3 (and below) can be a replaced by a non-normative reference to fetch. Otherwise, this should be a normative reference and the several links to fetch replaced with an appropriate [FETCH] entry in 11.1.\nTo be clear, I don't object to moving forward with this as is, but I don't believe will be allowed as is, unless the chairs have some extra special hand-waving skills I haven't seen yet.\nI almost take that as a challenge...\nThere is processing model stuff that is browser-specific that will be in fetch but the parts in 9.3.2.1 and 9.3.2.2 are the things that are required from any HTTP server that might be used to serve sensitive information to a browser. I stripped out as much as I could but there is a timing attack that can be performed in fetching cross-origin content that would allow a client to \"read\" the response even if it doesn't have access to it on the client side and the only way to prevent it entirely is to not serve a response in the first place. By making it only a requirement when the CORS headers are present, it allows servers that never talk to browsers to not have to consider it but it really does need to be considered carefully for any server that might serve responses to a browser (or anything else that implements cross-origin readability restrictions). Making the protection non-normative would make the privacy protections optional and risk servers not being aware or necessarily implementing it (and it is the same side-channel timing attack that killed SDCH). If we're comfortable making that change then I can make the fetch spec parts authoritative about the CORS restrictions on both clients and servers but it would probably have to be a normative reference.\nOh, I see, the server description contains things a server MUST do (or not do) based on the values received in those fields when they indicate a cross-origin request. I was reading them as things a client would verify after receipt of the Response, since we usually don't rely on a bad client sending \"I'm good\" information in request header fields. [BTW, this is why the main HTTP specs always target requirements to a role instead of making them neutral requirements on a message.] In that case, these are definitely normative requirements, similar to conditional requests. HTTP servers don't normally parse CORS header fields, since pre-flights and such are handled by back-end applications (not httpd).\nI'll make another pass over the CORS language tomorrow to make it clearer and make sure it only covers the parts that servers need to handle for fetch-compliant clients. The client bits can and should be in fetch."} +{"_id":"q-en-http-extensions-f50d111c7558ba1a88c026794c7d6beeabcb591b9790fda9cf66273afbd440fd","text":"Updates some out of date portions of the IANA considerations section. See for more info\nNAME Ping\nNAME Ping\nSome aspects of the IANA considerations section are outdated and should be updated: The “Cookie” and “Set-Cookie” registrations from RFC 6265 have been moved from the “Permanent Message Header Field Names” registry to the “HTTP Field Name Registry” Update Sections 9.1 (Cookie) and 9.2 (Set-Cookie)The word \"Registry\" should be omitted from registry names. Update Section 9.3 (Cookie Attribute Registry)\nLGTM. 
Apologies I missed this."} +{"_id":"q-en-http-extensions-8539f913952132c787cffb293d54589e4ca6a075e399a5f81e6abc7735c0c87b","text":"HTTPSEM has been published as RFC9110, update all reference to reflect this.\nNAME Can you PTAL? I spot checked that existing section #s still matched up, but I would appreciate another look.\nThe sections should be stable, see . I'd also change \"HTTPSEM\" to just \"HTTP2\".\nSorry, not sure I understand.\nOops. I meant to say: I'd also change \"HTTPSEM\" to just \"HTTP\". See URL\nThanks, the latest commit should fix that.\nStill LGTM. Thanks!"} +{"_id":"q-en-http-extensions-638228f70eb965fc2e6c366b334ef0408134521e53eb4d6ed9e6200023cac747","text":"Adds suggestions for how servers could fulfill the requirement Adds suggestion that servers be mindful of the number and size of the cookies they set to avoid hitting their own header length limits. Also mention that UA should consider splitting longer fields.\nIf that's the case, I'd recommend punting on getting this in 6265bis - which has had a goal of documenting reality.\nIt is not unusual for servers to reject HTTP requests that contain individual headers longer than a certain limit. If a client sends a header longer than around 8-9K bytes, the risk is quite large that it will get a 40x response back which effectively blocks the client from continuing what it is set out to do. Because of the rather complicated results of such a \"too long\" cookie header, I think it could be worth mentioning this in the spec. For newcomers in cookie land, this is perhaps not completely obvious. The spec currently mentions size-limits per cookie, but a server can easily send a few large cookies and have the combined size of them outgrow 8K. I would perhaps suggest wording that says a client SHOULD keep its lines shorter than NNN bytes and servers SHOULD accept cookie lines up to at least NNN bytes. (in curl we have the limit set to 8190 bytes) HTTP related specs rarely speak of this kind of size limits, but I know my implementation could have avoided some issues if this real-world limit had been mentioned.\nSeems like a worthwhile warning to me.\nRFC 6265 had a requirement that a client MUST generate a single cookie header. HTTP\/2 and HTTP\/3 allow \"cookie crumbling\" that permits multiple Cookie headers. 6265bis says in URL What is tolerance really? Why are we stating MUST requirement that are immediately invalidated by two of the versions of HTTP in wide usage? Section 5 is huge, what requirement is the text in section 4.2 forward referencing to? I think there's some room for improvement in this area. An HTTP\/2 or HTTP\/3 is being instructed as a MUST requirement in the relevant RFCs to combine multiple cookie headers into one e.g. URL but if the server MUST be tolerant to multiple Cookies, maybe we can relax the HTTP\/2 or HTTP\/3 requirements?\nThis was the output of the discussion in URL\nThanks Mike, I missed that discussion first time around. My question comes about because of some HTTP\/2 code I looked at that simply implements HTTP\/2 and so combines the multiple fields back into one. But as NAME and NAME alluded to their comments on , the splitting could be done for some reason thst isn't purely about H2 or H3 compression optimization. It seems there are actually use cases where keeping the split is useful, and not just in a private deployment. Of course relaxing things more might have unintended consequences. I find the current text too terse to be useful. 
This is a cookie spec, it seems like the right place to ba adding commentary about how to handle cookies across various version and conversions. Especially some anecdotal examples about why some clients actually do it.\nMentioning why a server should be tolerant, and what shape that could take, seems reasonable to me. Especially given how nicely it dovetails with . What forward reference are you referring to? ?\nIn that mention, I think it's worth including that re-combining the Cookie lines before passing to the next layer is a form of tolerance from the client's perspective.\nSome more text sounds good and I'm happy to review. In Section 4.2.1:\nStill LGTM, thanks!"} +{"_id":"q-en-http-extensions-0b2d20262784f7326e509579061209b4ffee10d4013b5936525afdf665fa9860","text":"In 4.2, the parameter is \"a byte sequence that contains the public key\", and the encoding of the public key is taken from Section 3.1. When reading this, I was initially unclear whether was only the key or the entire structure defined in Section 3.1 or another structure, particularly given the use of the word \"contains.\" I might clarify that the parameter is the public key, and perhaps point from both 3.1 and 4.2 to the same place for how to get the bytes which are being base64url-encoded here. (Make a 3.1.1?) (Marking editorial, as I don't think I'm asking for a technical change here, just clarification of how this already works.)\nWhat you're describing is indeed what we intended. I'm write up an editorial PR to clarify this."} +{"_id":"q-en-http-extensions-2d5faaadda0b56b67b3efa97082a0fdd55edb91b835b27e93ea00de1b661a85a","text":"The five authentication parameters are offset in various ways throughout the document. Examples: using the Parameter (Section 3.3) using the v Parameter (Section 3.2) The REQUIRED \"p\" (proof) parameter (Section 4.3) Additionally, the word \"directive\" is used instead of parameter in Section 4.4. It might be advisable to align the document on always using one of these, with a possible difference where the parameter is being defined.\nAgreed. I'll write a PR that standardizes on parameter and not directive \"p\" when defining a new parameter when referencing the parameter"} +{"_id":"q-en-http-extensions-66d4f77c7d4ca08153d290df3b71ef0afefccc92d6b2b5428fdfd0534cd6b1bd","text":"In Section 8: Given that the exporter can be produced at any point in the lifetime of a TLS connection and that TLS connections can remain open for a very long time, it's not clear that this guarantees freshness in the same way that a nonce does. It might be clearer to state that it guarantees the authenticator is no older than the beginning of the current TLS connection. (I presume that a server can use GOAWAY to get rid of connections it considers unacceptably old.)"} +{"_id":"q-en-http-extensions-f9ab3b4e6c2c16b0b1c7b0b466c2745f51e8c0dae17e2bd39b9fea78bdd15946","text":"Summarizes all the major changes from 6265bis, replacing the existing draft-by-draft change log.\nWhile there are still a couple open issues that may\/may not change this PR, I'd still appreciate an initial look. Thanks!\nNAME This is ready for review, I don't believe the pending PR needs to be added to this changelog\nLooks generally reasonable to me. Maybe drop the IANA changes summary (too detailed), and try to lead with the important changes (e.g., drop the errata down to last)?\nIt'd be nice if there were an appendix summarising the changes. We do this for most updates.\nSorry for being a bit dense but what is URL in comparison to that? 
:)\nWhat's needed is a summary of changes from 6265, not for draft-by-draft. And that needs to stay in the spec upon publication.\nNAME I see, thanks!\nIs there a rule of thumb for which changes should be included? \"Major\" changes? Any normative changes? Something in between? Also, do you know of any existing RFCs that are a good example?\nMajor changes definitely, and any significant normative changes. See eg URL\nCatching up on things after being out sick for a ~week. This change LGTM. I think the level of detail is appropriate, and I didn't see anything in the line-by-line changelog that jumped out at me as being big enough to consider missing."} +{"_id":"q-en-http-extensions-0dcd65206e104dac591bed46cf74a0207b8e6fe77b8baf4ccad26777e970c42f","text":"defines a framework for problem types, which can be used as a standardized response format when errors occurs. With resumable uploads, we have a few cases where such problem types could be useful, but usage remains entirely optional. The text in this PR has been inspired by similar sections from .\nThank you very much for quick feedback, Lucas!"} +{"_id":"q-en-http-extensions-4f0a3510ed724bdd2a8560d5b11820afa34234b8d26c7aa885e346b5be51c43d","text":"minimalist attempt at a fix for\nThanks for the review NAME I've incorporated all your suggestions.\nThanks NAME and NAME I dare say it's okay merge? (thinking about the I-D submission cutoff in a few hours but admittedly don't know what else could\/should go into -15 or if publishing that today is even on the table)\nI don't believe this example given in particularity well reflects the situation that precipitated the behavior described in : It's only an explanatory example so arguably doesn't even really matter. But RFCs are forever and they are often cited so it'd be nice to have this bit of the historical narrative better reflect the realities at the time (I say this mostly from perspective of having experienced some of the difficulties caused by the introduction of SameSite cookies and their default enforcement mode and, to this day, continuing to have challenging conversations on the topic). As best I can recall, login type flows were the main driver for the \"Lax-allowing-unsafe\" default mode but the need manifested itself more in the later part of the flow where the authenticating site (IDP) caused the browser to make a POST request callback to the site requesting authentication (RP). At the start of the flow, prior to sending the user to the IDP, the RP would typically set a cookie with or referencing some transaction specific state and use that info during processing of the callback for validation of the token provided by the IDP. Cookies with Lax as the default enforcement mode would not be sent with the callback POST request and the whole login flow would fail. The \"Lax-allowing-unsafe\" default mode (aka \"Lax + POST mitigation\") allowed those flows to not break for a while and gave RPs (and their vendors, frameworks, etc.) a little grace period to update to explicitly designate those cookies as SameSite=None. Could we augment or replace the example described in to account for all that? The cookie age stuff in , I think, makes more sense in the context of the above too. I'm (obviously) having trouble articulating some of this so including some \"helpful links\" here too, which may or may not describe some of this better. 
Chromium called it the \"Lax + POST mitigation\" where the cookie would have a CSRF token: URL This explainer by some Googlers describes some of this in Top-Level Cross-Site POST Requests: URL and links to this payment flow which is very similar to the login flow: URL The question in this issue on kinda related stuff came back to my attention very recently, which is what led me to looking at these parts of RFC6265bis again. Which maybe explains some of why I'm putting in this annoying issue near the very end of WGLC. Or explains the timing anyway. URL\nTo clarify, You'd like the example to be one which is closer to a real world case? Namely FedCM (guessing from your use of \"RP\", \"IDP\", and the issue on the repo). I'd like to avoid getting too specific and in the weeds but there's probably a way to generalize the FedCM case. Contextualizing the cookie age also seems like a positive to me, its relevance isn't super clear otherwise.\nYes. Apologies if that wasn't clear from my rambling above. And to try and be more clear, I'd like the example to be one which is closer to the real world cases that gave rise to the Lax-allowing-unsafe mode. No, namely SAML with the authentication response delivered with the POST binding. Or OpenID Connect using the where the '' is needed to validate the returned ID token. Agreed about not getting too specific and in the weeds but think the SAML\/OIDC cases could be generalized too. Although I tried to do that in the issue description here and didn't seem to be too successful in conveying the concept. In my mind it makes a lot more sense in the context of a short term cookie used for transaction specific state and validation. And doesn't make a lot of sense in the context of \"cookie[s] with login information\". I guess my hope is that it would just be better contextualized by way of using an example that's more reflective of the historic occurrence. But more contextualizing, if needed\/appropriate, would certainly be okay by me too.\nNAME could you propose a diff to the paragraph: Or is it sufficient to just tweak the first sentence of that paragraph? For someone who is not super familiar with SAML authentication flows ( , it's me), what exists in the draft text today feels like a fuzzier higher-level description than: It's entirely possible that's the only pattern that motivated Lax-allowing-unsafe, but maybe not - the web is messy. Maybe a compromise here is to tweak the text to cover your example generally (as well as other examples sitauations to ~POST w\/ a cookie expecting some login information), and go super precise in the commit message. Just a thought.\nIt'd be a bit more involved than just the first sentence but I'm happy to take a stab at reworking that paragraph and maybe a few other very minor related things, if the authors\/WG are amenable to the idea? Not to be expected :) The OpenID Connect flow is probably more salient but familiarity there isn't expected either. The important distinction (to me anyway) is where the cookie in question would be sent (or not sent), the associated failure mode, and the effectiveness of the mitigating default mode. With what exists in the draft text today, the cookie is holding\/referencing login state that's sent to the site that will authenticate the user (maybe based on an existing session identified by that cookie). If the cookie isn't sent (due to default of Lax and a x-site POST style redirect), the failure mode will usually be the user needing to enter their login credentials again. 
Which is annoying but recoverable. Also the Lax-allowing-unsafe applicability to \"cookies which were created recently [~2 mins]\" doesn't help in many many cases for cookie that's used for login state due to such cookies being relatively long lived (not created recently) and\/or the user not having even visiting the site recently. With the pattern I'm (trying) to describe, the cookie is holding transactional state that's sent back to the site requesting authentication after user authentication. If the cookie isn't sent (due to default of Lax and a x-site POST style redirect back to the originating site), the failure mode will usually be an unrecoverable error for the whole login flow. The Lax-allowing-unsafe applicability to \"cookies which were created recently [~2 mins]\" is quite helpful in this case because that transactional state specific cookie will typically be set right at the beginning of the whole login flow and only needs to live as long as it takes for the user to complete the login journey. Oh yes, of course, it's very messy. I'm sure it wasn't the only pattern that motivated Lax-allowing-unsafe but I'm almost as sure that it was the primary pattern that motivated it. Let me see if I can tweak the text to cover things generally (without screwing it up) and the commit message can reference this issue to link back for more info?\nHi NAME Have you had any progress?\nI'm sorry NAME probably my fault in the way I left the prior but I wasn't sure if I was supposed to be working on a PR or was waiting for an okay from you\/WG\/etc to do so. And in the intervening time my attention for it was sort of overtaken by events. Apologies again and I'll try and do something soon.\nURL is a minimalist attempt at some adjusted text for this\nLGTM, thanks! I'll wait for NAME to take another look before mergingCatching up on things after being out sick: this still LGTM. :)"} +{"_id":"q-en-http-extensions-0327b51b50dad0f8da9c69a67b515516d5dead2ca1c36cdb75cccd757afc7041","text":"This is a follow-up to , which defined a problem type for when the server rejects modification for an completed upload. However, the draft didn't specify that such modifications are not allowed. This PR adds this requirement. The section for offset retrieval already includes a definition for when an upload is complete, so this is not repeated in this PR's changes:"} +{"_id":"q-en-http-extensions-a682dff75d58d9a556384f9d5b32728a2483fc5d0da8659d4f7e363e11df2f72","text":"The draft says there are 4 upgrade tokens, but a fifth one has since been added: connect-ip. It already has text to avoid this issue so it should be a short section."} +{"_id":"q-en-http-extensions-6f1ff8614f306a41dab0cf4f4066d2ec39dbd43fe47b1b1188e71995dc7051ce","text":"This includes changing the title, adjusting the language to be more generic, and adding a section with guidance for HTTP CONNECT.\nMost of the considerations in optimistic-upgrade, which originally emerged from consideration of \"connect-tcp\", also apply to HTTP CONNECT. We should consider expanding the scope of the draft to cover that topic as well. cc NAME"} +{"_id":"q-en-http-extensions-f41ef4369d7e6b3dbcbb14ea11cd52be392a55423e37d1ceb9f74b4a75719b83","text":"URL made the use of media type mandatory for appending to an upload. This warrants a new interop version."} +{"_id":"q-en-http-extensions-23d630211d0062cc0a2b371e7d999c83aefd23a2b4d3ea32443299fa7e268bae","text":"As discussed in URL, this requires the upload offset to not decrease. 
The server is not allowed to \"forget\" data appended to an upload. The benefit is that it allows a client to reduce the amount of data buffered while the upload is ongoing as it receives acknowledgements from the server that data has been stored permanently and will not have to be retransmitted. Closes URL\nURL added the requirement that the offset reported in 104 informational responses must not decrease over time. For example, it is not allowed that one 104 response includes and the next includes . For now, this requirement is not mentioned for responses to offset retrieval responses. As NAME mentioned in URL, we should add this rule there as well. Maybe it also makes sense to move this into a more central section of the document, so that we don't have to repeat this rule trice (upload creation, upload appending, and offset retrieval).\nThinking about this a bit more, progress reporting and offset retrieving are different enough that adding this requirement to offset retrieving might not actually make sense. In a single append, it's impossible for progress to go back and it's fine to be extremely strict. However, offset retrieving can happen much later. A server could have already forgotten about the previous uploaded body, but still want the upload restarted from 0. In the case that the client cannot supply the bytes, it can always decide to not resume, or perform an explicit cancel.\nI am not sure if such a case would actually occur where the server has forgotten parts of the uploaded data but has not forgotten the existence of the upload resource itself, allowing the client to still resume the upload. Do you have an example application? I would have imagined that the upload should be considered as unresumable and failed then. The client could create a new upload if its still has the file data and the application logic allows that. That was my motivation behind enforcing a strict requirement that the upload offset never decreases. The Upload-Offset values should be a guarantee by the server that this amount of data has been saved and won't be lost again. If we don't include such a requirement, it could encourage servers to handle the received data without too much care because they are allowed to decrease the offset again."} +{"_id":"q-en-http-extensions-02f0a2c97302bc09a26d384af4014954fe9712b8564a264049e488ac76c969d3","text":"This addresses the reference issues, comments and nits from the AD review (see )\nNAME would you mind taking a quick look before I publish a new draft?\n(tried using ietf-comments but ran into issues so just tracking manually in one combined issue)... cc NAME Thank you for the work on this document. Almost all my comments are about references. I think a new version is necessary before starting IETF Last Call, to avoid process issues along the way. Francesca The boilerplate is duplicate, please remove the second occurrence. Can you please update the reference to 8941 to draft-ietf-httpbis-sfbis ? That doc is with the RFC Editor so should not be holding this document up. Also, I believe the reference to draft-ietf-httpbis-sfbis should be normative, not informative, since terminology from that doc is used. Alternatively, if you want to keep the ref informative, you can import the part of the terminology that is necessary for this doc. I think that's a uglier solution, so I highly prefer sfbis to be made normative, but won't block on it. 
[URLPattern] \"URL Pattern Standard\", March 2024, URL needs to be indicated as Living standard (see RFC 9110 or 9421 for eample of whatwg specs references). Please fix this so that the Fetch spec is properly referenced (normatively is needed, I believe). RFC 8792 should be (informatively) referenced. I agree with Mark's write up, 5861 should really be informative. There is several occurrences of {Origin}, please fix. Please add a reference. It would be good to have an informative reference to 6265 (or even 6265bis). This review is in the [\"IETF Comments\" Markdown format][ICMF], You can use the [ tool][ICT] to automatically convert this review into individual GitHub issues. [ICMF]: URL [ICT]: URL"} +{"_id":"q-en-http-extensions-cc4728cfb16ca7bc9d02f23a3728200142d3fec756769444aa2865b619d3b6e8","text":"I do not agree with this suggestion. Upgrade is intended to support a not-yet-defined future. Limits placed upon it must be evaluated in terms of what will be excluded from future use, not what is currently being used in practice. If you want to limit what can be sent in a body, do that instead. The entire premise of this document is that a client will send something incredibly stupid and uncontrolled in a request body while asking for an upgrade. Just forbid that and be done. In any case, Upgrade with GET, HEAD, and OPTIONS has no such issues. Upgrade should also be possible to use with QUERY with any compliant server. If you can find a server that fails to process a valid request body and then proceeds to treat it as a pipelined request, that is a vulnerability in their code.\nThis PR originally assumed the deprecation of \"TLS\" in . I've removed that deprecation there, so other HTTP methods are necessarily allowed. Due to that change, I've rewritten the recommendation here, removed the PR stacking, and removed the \"draft PR\" flag.\nFormally, Upgrade can be attached to any HTTP request, in which case the request is processed in HTTP\/1.1, and the server has the option to Upgrade before returning the response. This includes requests with and without nonempty bodies. Handling of Upgrade with a nonempty body is well-defined, but the Upgrade paths that are actually implemented today (WebSockets and MASQUE) only use GET requests with empty bodies. NAME raised concerns that this is something of a \"sharp edge\" for security. For example, a server implementation might assume that there is no body in an Upgrade request, leading to a mismatch between the client and server when a body is present. We could make this problem go away by mandating that Upgrade only be used with \"Content-Length: 0\", and perhaps only with \"GET\". However, this requirement would appear to contradict RFC 2817, which says \"A client MAY offer to switch to secured operation during any clear HTTP request\", so that document would have to be Updated or marked Historic.\nFWIW HTTP\/2 upgrade has some statements that might conflict with the proposal. From URL before the client can send HTTP\/2 frames. This means that a large request can block the use of the connection until it is completely sent. important, an OPTIONS request can be used to perform the upgrade to HTTP\/2, at the cost of an additional round trip. On the other hand, RFC 9113 , so the topic of this issue is compatible with that notion with a bit of wordsmithing.\nWell phrased. More concretely, for servers which read (and buffer) the full request -- including request header and request body -- before processing the request, this is a non-issue. 
However, many servers make routing and other decisions after the request headers are received. To handle upgrade after the request has been handled, all requests would need to check for Upgrade at the end of handling requests, not at the beginning, which is more common in code that I have seen and in code that I have written. I think it would be ideal if Upgrade requests for new Upgrade tokens were sent to endpoints for which an alternate resource is not provided if the Upgrade does not succeed. If a request with Upgrade is rejected, the HTTP status could reflect that. At the moment, a client can request \"GET \/altresource.mp4 HTTP\/1.1\" with \"Upgrade: websocket\" and the RFCs specify that \/altresource.mp4 should be delivered in its entirety before the server switches to WebSockets, and the HTTP response headers included Upgrade: websocket, along with Content-Length or Transfer-Encoding: chunked for \/altresource.mp4. The combination of request\/response bodies for resources tangential to the Upgrade request is what I would like to see avoided in future Upgrade tokens. As NAME worded so well, I think there are sharp security edges here, and complications in implementations. HTTP\/2 extended CONNECT is a better approach as it is only the :protocol change request, and does not include a request body or a request for an alternate resource, at least as I understand RFC 8441 to describe HTTP\/2 extended CONNECT for . Another example of where request bodies can potentially be mishandled is if an origin server passes the Upgrade request to a backend via CGI, FastCGI, SCGI, reverse proxy, or something else. Even if the origin server handles the request body properly with the Upgrade request, I certainly do not trust an amateur PHP program to handle the request body properly before switching protocols. (Yes, I am sure that somewhere, someone handles this properly, but I am willing to wager there are also places where an empty request body is incorrectly assumed by naive scripts.) To be proactive, lighttpd 1.4.74 and later will by default ignore Upgrade in all requests with a request body unless lighttpd 1.4.74 or later is explicitly configured to enable request body along with Upgrade via URL (gw == gateway in lighttpd, for CGI, reverse proxy, and other backends) tl;dr: I am not asking to restrict HTTP\/1.1 Upgrade only to the GET method. I would like to request that optimistic-upgrade add guidance suggesting that future Upgrade tokens should suggest\/recommend use in requests without a request body, and note that some proxies or origin servers may ignore HTTP\/1.1 Upgrade when a non-empty request body is present. Thank you for your consideration.\nSince we are discussing this publicly, I hope these details are appropriate. lighttpd 1.4.56 and later supports the HTTP\/1.1 (now deprecated by the RFCs), but lighttpd ignores unconditionally if a request body is present. Among the reasons for doing so -- in addition to simpler implementation -- is that if an attacker controlled the request body content and wrote it in such a way to start with HTTP\/2 Connection Preface and an HTTP\/2 request, and if the server failed to handle the HTTP\/1.1 request body before switching to HTTP\/2 protocol, then an attacker could inject HTTP\/2 requests via that mishandled (on the server) HTTP\/1.1 request body.\nAs I understand, this problem is specific to HTTP\/1.1. 
I'm not sure if we could add new requirements, unless we change the core HTTP documents and can modify all existing HTTP clients that use upgrade to adopt the new behavior. At the same time, IMO it would be perfectly reasonable to point out the \"sharp edges\" and provide guidelines to new extensions that would leverage the upgrade mechanism. In that respect, I like the idea of recommending use of GET but not any other HTTP method for new extensions, as suggested in the original post from NAME\nNAME Yes, the request header is specific to HTTP\/1.1. I can imagine a future Upgrade token which requires POST with some sort of (short) request body containing authentication and\/or payment information. Even so, I still like the idea of recommending that any new HTTP\/1.1 Upgrade tokens prefer to use GET without a request body, or to use other methods if a request body is required. Aside: NAME you might be interested in adding some Upgrade tests to URL, specifically for request smuggling where servers support and might not properly handle an unexpected request body. (lighttpd always ignores if request body is non-zero length.)"} +{"_id":"q-en-http-extensions-d99b04ec9b322b71da91e39c3fd9079306a37bd77d247290bd8a7c4b141d8aff","text":"A number of minor misc changes and corrections to improve readability, usability, and consistency throughout the spec.\n6265bis is nearly ready, but after years of changes there are probably some minor editorial issues that should be corrected. Someone (me) should proofread it and fix any problems found.\nThe RFC Editor will help with that.\nAh, good to know. I'll still give it a first pass to clean it up a bit.\nLGTM."} +{"_id":"q-en-http-extensions-8ce148ee2fbe29c7bc75ec5634f529c3d5779ddb4bfd351e3867943879f8f49c","text":"This PR clarifies a few things around the Upload-Limit field from : The last upload append request may now be smaller than the minimum append size specified in . For example, if a 510 MB upload is split into 100 MB requests, the last request can just be 10 MB. It clarified how a client can indicate the entire upload length when creating a new upload via the Content-Length header. The PR also mentions that no resumable upload resource may be created if the minimum size is not met, but the server might still accept and process the data (although without offering resumability) The Upload-Limit header is added to informational response in an example"} +{"_id":"q-en-http-extensions-51d5d6379a38d890d6cc3ddc81d1487baa54c09948b043c8fab5a787585c404e","text":"This avoids having to determine whether is an actual existing standard or not. This is an alternative to\nMerging for draft cutoff.\nThe lists an entry for HTTP\/.\\, and specifically identifies \"2.0\" as a possible version value, as instructed by . However, this flow seems to have been superseded by \"Upgrade: h2c\", which was introduced in RFC 7540 and deprecated in RFC 9113. I am not aware of any implementations of \"Upgrade: HTTP\/2.0\" in clients or servers. I think we have two main options: Status Quo: Regard \"HTTP\/.\\\" as a mechanism that is formally defined but not implemented anywhere. Provide formal analysis of its security for completeness. Deprecation: Update RFC 9110 in this draft to remove the definition of this upgrade token. Mark \"HTTP\" as \"obsolete\" in the Upgrade Tokens Registry.\nMoving discussion over to the issue, perhaps. Upgrading to TLS is a defined thing that gets used; I'm not clear that we need to deprecate that. 
Upgrading to future HTTP versions is a little more interesting. Certainly it's illegal to upgrade to or because upgrade isn't defined by those specifications. RFC 7540 defines a distinct token, RFC 9113 deprecates it, and RFC 9114 does not define an Upgrade path for obvious reasons. It certainly looks like we've decided not to use this mechanism for changing HTTP versions. Does that mean we should eliminate it? I'm not opposed, but I'm also not certain we need to go there.\nMoving to TLS with upgrade (i.e., RFC 2817) is now so much of a niche thing that I think that deprecation would be wise. We've heard that it is still in use, but it's certainly not inoperable in quite the same way as other parts of the protocol. Either way, I think it would be better to write a separate document deprecating Upgrade (either completely or more narrowly) than to sneak it in here. This document can point out that it is unsafe in the ways in which it is unsafe, but it doesn't need to make the deeper cut to achieve that.\nIt seems clear that we don't want to deprecate in this document, so I've removed that text from . I've kept open to discuss the fate of . The WG seems to believe that this mechanism is already deprecated, if it was ever defined at all, and the IANA registry entry is just misleading or incorrect.\nThe trouble is that exists in a sort of quantum superposition of standardized and deprecated, but that will collapse if we mention it in this draft in any way. The options I see are: Don't claim (as the current text does) to present an exhaustive analysis of the defined Upgrade Tokens, and silently ignore the registration of . Treat it as a live standard. Declare that it was already deprecated. Deprecate it.\nI'd say deprecating isn't particularly useful or interesting, since that doesn't impact implementations. If we can't agree on which quantum state we are in and want to be in at the same time, then I'd leave the cat in the box. The more interesting question is: should implementations do something to protect against a dangerous ? For h2, the preface should protect against smuggling. Where do we stand for other versions, is there a risk we should warn implementers about?\nMy expectation is that implementations of do not exist, and never will, so I don't think it's very useful to provide guidance to them. There's a question of whether one can even speak about a \"downward Upgrade\" from HTTP\/1.1 to 0.9-1.1. Assuming that is allowed, I think 1.0 and 1.1 are safe, if we assume that the HTTP-Version line is sent\/repeated at this point. (Does Upgrade: HTTP create a new HTTP connection or re-version the existing one???) They both start with \"HTTP\/1\", which is not a valid method name due to the \"\/\". HTTP\/0.9 is possibly more interesting since it does not send an HTTP-Version line. I don't see a way for the server to figure out whether the client is speaking HTTP\/1.1 (after the rejected upgrade) or HTTP\/0.9 (optimistically). However, I don't think this poses a security threat, so long as the attacker-controlled data is the same in HTTP\/1.1 as it is in HTTP\/0.9. There could possibly be a problem if the client, believing it is speaking HTTP\/0.9, allows untrusted code to manipulate headers that were not special in HTTP\/0.9 but are special in HTTP\/1.1 (e.g. Content-Length, Transfer-Encoding). But as I said, this strikes me as counting angels on the head of a pin. \"Upgrade: HTTP\/0.9\" is just not a thing, and we should all agree on that and move on.\nI totally agree with you. 
I was more thinking about guidance for intermediaries that support other Upgrades (e.g., WebSocket) for what they should do if they encounter unknown\/unexpected upgrade tokens like these. If there is no such guidance, than we can just go back to pretending that doesn't exist\nHTTP intermediaries can't really do anything with unrecognized Upgrade tokens in HTTP\/1.1. (Not even the useful thing, which would be to convert them to\/from for Extended CONNECT.)"} +{"_id":"q-en-http-extensions-19693e3b7c0491f0336e03ab1572df6beddc0060d80ab6f40ac4022348f074a5","text":"Relax DNS requirements when ORIGIN is received Change the initial origin set to just SNI\nwait - no. You wouldn't send the alternative in the ORIGIN frame, you would send the origin that you are the alternative for, right? that would make sense - if you want to use an alternative you can't rely on the SNI default.\nthis is indeed looking goodminor nitsThis is looking good."} +{"_id":"q-en-http-extensions-aa1783ddcf260622b276185049f0ba1b63b9591cbb921f1a5a0855f8d554ab46","text":"Fixes URL\nNAME could you take a look?\nComment by NAME This is rather unimportant, but I just wanted to mention it in case the authors find it useful. Feel free to ignore. The document states that there are no new security considerations, but that's perhaps not quite true. I think it might be useful to call out that an implementation cannot rely on its peer behaving correctly, so implementers will have to take into account they may still receive oversized frames from misbehaving clients. This is arguably no different from the situation today, so it can be argued that the current considerations are accurate. I just thought it might be useful to call it out so some engineer doesn't remove validation checks since the other side is supposed to behave now. Just because we have standards, doesn't mean that everyone complies.\nRFC8878 says in its security considerations: Perhaps we could add something like: \"Decoders still need to take into account that they can receive oversized frames that do not follow the window size limit specified in this document and fail decoding when such invalid frames are received.\" NAME Thoughts?\nWe could do something explicit like that, yeah. But I'm partial to just broadening the existing statement a little: This seems sufficient to me because this issue is called out explicitly in para 4 there (URL). What do you think, NAME NAME\nDon't have strong opinions, but I think maybe adding a sentence explicitly about the window size limit is good, given the document is specifically about enforcing the window size limit. This will ensure nobody misses it even if they don't click through the reference to RFC8878."} +{"_id":"q-en-http-extensions-cff4c597b1296c36d3fe142afe2ebaa03df46b6203e95b113f6a9578b9891666","text":"This is a partial revert of 5469d51 from PR\nTalked with NAME a moment ago. I think there are some additional changes that will be necessary, but I don't object to the idea of this revert. Seems like a mistake in the original PR.\nHappy to amend the PR if you can give me an idea of what additional changes are required.\nI wrote up a change that should cover all the points we want to make. Namely, we still want UAs to accept that larger character set while encouraging server to use the more limited character set.\nThis looks reasonable to me, thanks to you both."} +{"_id":"q-en-http-extensions-57290cc7fb71f0e1c406140d12ec50cf17ff7b269b4d6052be175dcb58133e72","text":"Editorial stuff, mostly line wrapping and adjusting terminology. 
Happy to chat through it if you like.\nthanks for this"} +{"_id":"q-en-http-extensions-8a52c3aa4c568f505c33c4b4f887e893140eaf665cc63f31c2b9ef3056a03eb5","text":"This makes it clearer what standard is being depended on, while still deep-linking to the right section of that standard. This generates citations like \"see of ]\". The \"RequestDestination\" part is free text in the attribute. I don't see a way to change the \"Part\" text. We can use the to rearrange it to \"see ], )\" if y'all prefer.\nGreat, thanks for the PR. It's a lot cleaner than a bunch of independent references.\nNAME just filed, thanks. URL URL\nNAME have you asked the WHATWG folks for stable anchors to those sections? See URL for details of how to do that."} +{"_id":"q-en-http-extensions-fe29aa2ef684fb9f6b50f02573da6e7988c1514a8989be322e7161e3b9b5756a","text":"adds more used definitions from HTTP.\nrepresentation metadata, content, ... in Conventions and Definitions."} +{"_id":"q-en-http-extensions-1817ccc858b89a94715b20cf3118ebcd23100047c2aa57caa2d635a655841aac","text":"splits C-E and T-E sections clarifications quoted from HTTP and MESSAGING link to DIGEST-FIELDS appendix This PR contains some quotes from HTTP and MESSAGING. While they can be redundant, the previous wording made me think that they were not.\nTransfer-Encoding does not affect the selected representation data, see URL Since it only apply to MESSAGING, it should be probably decoupled from a section addressing representation metadata (e.g., content codings): URL\nThank you very much!"} +{"_id":"q-en-http-extensions-44ea626c0f137a6ec6926c2920e30bb30b73fbb4f816cce8390af0430c86786e","text":"Mark moved this note in his edits, but I think that we don't need it any more at all. The security considerations will suffice."} +{"_id":"q-en-http-extensions-40dfe2c88761b56c0eca4c0d0cdaab3b0ed38908beff8f35fbe51c0e90c2df9b","text":"Mostly editorial, but touches normative text.\nI'm looking for an additional qualification on the statement, something like: Originally posted by NAME in URL\nNAME also suggested a reframing to \"SHOULD\": Originally posted by NAME in URL (Also, should we rename the \"safe-method-w-body\" label to \"query-method\", now that that naming thing is settled. It took me a while to remember the right label.)\nI'm not confident that implementers will always know what might be sensitive and probably should not expose any of the payload in the URL. I think the qualifier doesn't reduce the suggestion's practical scope at all, so if it gets us to consensus, let's add it.\nI'm less concerned about this ending up in logs than you are, it seems :) For me, the risk that a server log might have sensitive information is already at 100%, which means that log processing needs to include considerable care as far as privacy and security risks go.\nBoth variants work for me, and IMHO are better than what we have now. Mike, pick one :-)\nQuery media types could evolve whose syntax allows for specification of elements that the user\/user-agent considers \"sensitive\", and also the server could know from the schema of the underlying data. But really I think a big hammer is needed here: the default should be that servers encode none of the query contents into the resulting user-agents should be able to request that behavior with a request header (e.g., ) But user-agents should also be able to request that the resulting encode as much as possible of the request body so that the user can end up having a URI that can form part of the UI. 
This should be completely as far as the server is concerned, and will not always be possible due to URI length constraints and\/or the difficulty of encoding complex queries in URI local-parts and q-params.\nI believe that should be completely up to the server.\nI'm inclined to agree with Julian here. From the client's perspective, the server is a monolith -- in the course of handling the request, the server necessarily sees the contents and the data (multi-party compute schemes notwithstanding) and the client trusts it to do so according to whatever privacy policy exists. It's the server operator's issue whether they wish to keep certain information out of their logs in order to reduce how tightly access to their logs must be managed. Plus, it's implementation guidance at this point, not protocol conformance. We point out the concern and let people in the real world figure out how they navigate it.\nAn optional header by which the client can request a URI that encodes the query (which may not be feasible, but often will be) vs. not, seems like a good idea to me -- the server doesn't have to grant the client's wish, but having this request header specified will result in better usability. (To go with this an optional response header by which to indicate which option the server went with would be nice.)\nThis is all so conditional that I wonder whether it even needs a SHOULD (NOT). Would it be enough to say instead. The other options are fine."} +{"_id":"q-en-http-extensions-36950483a22ee1086e8c67b3c2bc1d6a4a09d7d93d94dac869a21d59bab244ca","text":"The definition of upload creation requests allows the client to include the entire file, some file data or no data at all. While we provide this flexibility, we don't give guidance on when which approach is best suited. This PR adds this missing piece and explains how an optimistic client can attempt to upload the file in one request and how a careful client uses a separate request for creating the upload. In addition, it also mentions how to transparently upgrade to resumable uploads, which wasn't explained in detail before. Let me know if you have any feedback on this.\nAre \"Optimistic Upload Creation\" and \"Upgrading To Resumable Uploads\" the exact same way? Maybe the 3rd way could be checking first before uploading?\nThe way they are described right now they are similar, yes. In theory, an optimistic upload creation can also be used where the client doesn't upload the entire file, but just a large part of it (think 500 MB of a 10 GB file). When upgrading a non-resumable request to a resumable upload, this act of division is not allowed. But I am not sure if we should go into that much detail here. If not, then we can merge these two sections. Upload limits are a good point! With the careful upload, the client also knows the limits before sending the first PATCH request. The optimistic client must be prepared to receive a 104 response with limits that it has to adapt to. We can mention that a careful client may also send an OPTIONS request before hand if it deems that necessary.\nI think upload-limit can just be enforced on the server side. If it sends a 104 and a 5xx response, the client would automatically retry. We have deployed a client-side implementation that is not aware of the upload-limit header field.\nThanks for all of your comments! I updated the PR to put the section about upgrading to resumable uploads under the optimistic upload creation since they are similar. 
I also added a brief mention of Upload-Limit and shortened the text a bit. Let me know if you think the text needs more adjustments.\nJust a minor nit. Otherwise I think this is a very good addition and agree with NAME that we should add some text with regards to the . Proposal for optimistic: The client needs to adapt to the header provided in the intermediate response and fail the request if the file is larger than the maximum allowed size. Proposal for Careful: The client can use the header provided in the to verify that the file is allowed to be uploaded before sending any data.Some editorial suggestions, but one big question: What is the material difference between the first and third strategy? Could these be the same strategy?"} +{"_id":"q-en-http-extensions-9f40f9e0f24d65b1a1afe81659134e05c4eaaeca41d6a7f2d7762f0738d95894","text":"This change allows client to discover support for resumable uploads and potential limits via an OPTIONS request. Closes URL A brief comment on this sentence: Once the server has created an upload resource it might have more information available (user details, file metadata etc) and apply different limits than it would have known upfront for an OPTIONS request where this information was not available. We allow this difference but recommend that the upfront announced limits are not looser than the limits after the upload resource has been created. The client can then assume that it can create an upload resource if it obeys to the upfront announced limits even if looser limits will then apply later. Does this make sense? Or is this addition rather unnecessary?\nIs OPTIONS a required feature on the server side? What about requests?\nYes, I think a server should be required to support this. If it only wants to provide resumable uploads for specific users, it does not have to announce this for unauthorized users. If a server supports resumable uploads for all target URIs, then it should respond with to an request. But I am unsure how often this actually happens.\nintroduced for the server to indicate limits on the upload resource (e.g. maximum size). These limits are announced to the client during upload creation in the informational or final responses. However, the client cannot discover these limits without attempting to create an upload resource. A client may want to avoid creating an upload resource if it knows the the server limits cannot be satisfied. A possible solution is to allow clients to send an OPTIONS request upfront to the URL from upload creation. The server might then include the Upload-Limit header in its response, so that the client knows them before trying to create an upload resource. However, a client is not required to send this OPTIONS request upfront. This would also be handy in a possible future where browsers natively implement resumable uploads. The response for a CORS preflight request might then already include information whether the endpoints supports resumable uploads and, if so, under which constraints. The browser can use this data to optimize the uploads.\nEarly thought: Would it be beneficial to include some kind of marker that this endpoints supports resumable upload in the OPTIONS response? For client's that does not support upgrading requests (e.g. due to lack of support for 1xx responses) who whishes to know upfront if the upload can be split (Upload-Complete: ?0) or if a regular upload (without resumability) should be performed.\nInteresting idea! 
The presence of the Upload-Limit header could be used as an indicator for support of resumable uploads. In theory, its value can be an empty dictionary as well if the server does not impose any limits. If the response does not include the Upload-Limit header, the client can either attempt an upgradable upload or directly fall back to regular uploads.\nURL\nToo bad we can't use the empty dictionary header as an indicator. It's probably an edge case anyway that a client, not knowing the support of the server, would issue an OPTIONS request to find out? Or maybe not, given CORS? What do others think?\nThanks for the pointer, Lucas! If we cannot use an empty value, a server could also fill in a dummy value, e.g. . This is only necessary if the server does not apply any limit at all (not even ), which I suppose is very rare in reality. In general, the situation where a client wants to know upfront whether the server supports resumable uploads appears like an edge case to me. I assume the client will often know that the server supports it or otherwise will attempt to transparently upgrade a traditional upload into a resumable one. So I don't think we have to spend too much energy on this. How do you feel about this, Stefan?\nNAME Yeah I think that is fair enough. The use cases where a generic client (without prior knowledge of server support nor support for informational responses) would send an extra OPTIONS request seems slim to none. The CORS example is interesting but mostly (always?) happens in a browser which does support informational responses and thus would use the 104.\nOpened URL for this."} +{"_id":"q-en-http-extensions-1721e08b35c925fab09427e4b3444eab1b05c9c7fdf628e8bf230a1a6684cd8e","text":"Closes URL A client can express its interest in digests using during upload creation. The server can then respond with after POST and PATCH requests to share its digest with the client. The client computes its digest as well during the upload and compares the digest to spot issues. The representation digest of an upload resource covers the entire bytes that have been uploaded so far (i.e. right until the upload offset). NAME NAME Is this in line with the semantics of RFC 9530?\nNAME Need a week for the review :)\nPreliminary feedback: I'd simplify the language\/split text on multiple lines; Describe processing in separate subsections, e.g.\nThank you for the feedback, Roberto. I added subsections for representation and content digests in URL I think this division helps the reader and also makes sense since representation digests can cover the entire upload, while content digests only cover one request. Let me know if you think differently.\nURL added some guidelines on how a client may include integrity fields to upload requests for protecting the integrity of a single request or the entire upload. However, this either requires the client to compute the digest upfront (before sending a request or starting the upload), or the use of request trailers, which is not very widespread and not always possible (e.g. the ). Instead of requiring the client to compute the digest upfront, the server and client could calculate the digests while the upload is ongoing. Once the upload is completed, the server can send its digest to the client in the response to the request that completed the upload. The client can then compare the server's digest to its computed value. If they don't match, it can raise an error to the end user and decide to not continue using the uploaded data. 
The client should indicate its interest in the server's digest when an upload is created, so the server is not required to compute the digest if the client has no interest. I am wondering if the from can be used for this. Can a client include in the first upload creation request, but the server delivers the digest potentially later in a response for a subsequent request? An upload may be spread across multiple requests.\nShould also be supported? For multi request uploads, there's really no use to continue if request 3 of 100 got corrupted.\nAs NAME mentioned in URL, is not directly applicable here because it asks the server to send the digest corresponding to the response content, not the received request content. Good point that a client may validate digests while a multi-request upload is progressing. A server could respond with the latest digest of the entire uploaded data (not just the data from the last request) using after POST and PATCH requests. The client can then validate and decide to abort if the digests don't match. This usage would be in line with the PATCH example in . The server responds with the digest of the representation after the PATCH has been applied. The example also notes that the server is not required to respond with the representation itself and could also have returned a 204 (as is done for resumable uploads):\nMakes sense, thanks for clarifying. is most likely good enough given that it indicates the current checksum of what the server received.\nNAME please, stalk me for a review in the next week :)\nNAME It took me a bit more than a week, but here we go: URL Happy to hear your feedback :)"} +{"_id":"q-en-http-extensions-d6f2aa5db78f4a866dd3ef8cb65f0fe8e3d8f5c2ed57c877e5dc7bb3be5a1d87","text":"This increases the complexity of a minimal server implementation and decreases the complexity of a rich client implementation. Addresses\nAnother option would be to say that connect-tcp always uses capsules, that would reduce complexity even further\nThat would reduce the complexity of a rich server implementation, but it would increase the complexity of a minimal client implementation. I don't think that is a wise trade.\nDo we think minimal client implementations will exist? I was imagining that they'd use regular CONNECT\nThe spec probably wants more language to describe what a client should do when a server does not enable the capsule protocol, since thst a failure mode for this optional behaviour.\nNAME Template-based configuration is valuable with or without the capsule protocol. Also, if you're not actually using capsules, it adds bandwidth overhead without any benefit. NAME We don't usually say \"the server MUST Y\". Is there a reason why that is necessary in this case?\nIt seems weird to me that the server echos something back that the client doesn't check. Since the server has a MUST requirement to process whatever format the client tells it, we could just drop the response header and state that a 2xx is the signal that server accepted connect-tcp with or without capsules.\nThe response header is for the benefit of intermediaries. Clients don't rely on it. This is also true for CONNECT-UDP.\nThat's a good point and might be worth saying explicitly in the text. However, we do have a pattern in RFC9298 that levies requirements on request and response message and says \"if any of these requirements are not met, the client MUST treat the procying attempt as failed and abort the request\". 
Since connect-tcp levies a MUST about capsule-protocol on the response message, using similar text seems consistent to me.\nTo move this discussion away from a merged PR and into an issue, I filed\nAs of draft-ietf-httpbis-connect-tcp-05, connect-tcp now optionally supports the capsule protocol. However, this optionality adds non-trivial complexity to implementations. As currently written, if a client wishes to use capsules, the server can reject the use of capsules, and now the client needs to cache that this server doesn't support them. However it's undefined for how long the client should cache this information, and within what scope. We could potentially figure out recommendations for all these questions, but fundamentally it might be best to instead make capsules mandatory for connect-tcp, like they are for connect-udp and connect-ip. That would simplify implementations and improve consistency.\nIndividually, I think it would be interesting to try out this direction — probably based on implementation experience for how easy it is to add. On one hand, this makes connect-tcp less of a trivial drop-in. On the other, it opens up more possible use cases.\nI heard a good deal of support for this direction in the HTTPBIS meeting.\nThanks!"} +{"_id":"q-en-http-extensions-9dd9ad2cef10f61d879fb3a71c74e89c9730f1ce317113788955b0fb6f3c6959","text":"We started this discussion in the comments of but it deserves its own issue. Now that has landed, capsule support is mandatory for servers but clients can decide whether or not to use it by sending the header. Another way forward would be to just say that connect-tcp always uses capsules. Some reasons why: it simplifies server implementations it simplifies client implementations that want capsules it would be consistent with every other that uses capsules it avoids reusing to mean something different from what was intended in RFC 9297 Since I don't see any real-world reason to use without capsules, this sounds like the best path forward to me cc NAME NAME NAME\nI'm sympathetic to the argument that there may be cases where someone wants templates tcp without the overhead of capsules. So how about we move the capsule negotiation to the upgrade token? I.e. one token if you want capsules, another if you don't. Then it makes the use of capsule-protocol header more consistent with how it's already used in other connect-foo docs.\nThat's a good compromise. That allows me to only implement the upgrade token that enables capsules and not have to deal with the complexity of negotiation. Another option would be to say that if then the capsule extends to the end of the stream. That works because stream offsets only go to UINT64_MAX so that length cannot ever be valid. It's a shame we didn't do that in 9297 though\nI think this approach works for the sender to indicate its disinterest to use capsules. But it does not allow the receiver to refuse use of capsules by the sender. By saying using connect-tcp without capsules, I think we are referring to endpoints that do not want to implement capsule support at neither send-side or receive-side. 
Assuming that is the case, I think using different upgrade token is the way to go, if somebody asks for connect-tcp without capsules.\nOK, PR up: URL Please review.\nWfmThanks!"} +{"_id":"q-en-http-extensions-5dd5e70e6b21c7fef79794e030a6019bffdb6dc86862952755d8571c22511167","text":"This leaves only the Capsule Protocol mode\nI do think we should add FINAL_DATA, but I'm fine with merging this first and then discussing that in\nAs of draft-ietf-httpbis-connect-tcp-05, connect-tcp now optionally supports the capsule protocol. However, this optionality adds non-trivial complexity to implementations. As currently written, if a client wishes to use capsules, the server can reject the use of capsules, and now the client needs to cache that this server doesn't support them. However it's undefined for how long the client should cache this information, and within what scope. We could potentially figure out recommendations for all these questions, but fundamentally it might be best to instead make capsules mandatory for connect-tcp, like they are for connect-udp and connect-ip. That would simplify implementations and improve consistency.\nIndividually, I think it would be interesting to try out this direction — probably based on implementation experience for how easy it is to add. On one hand, this makes connect-tcp less of a trivial drop-in. On the other, it opens up more possible use cases.\nI heard a good deal of support for this direction in the HTTPBIS meeting.\nThanks!"} +{"_id":"q-en-http-extensions-a8c5de81ff2efc09c6273d9d14d5a9c8f99100440122d90d3552b8c6675e0dde","text":"The current text could be interpreted as applying to other HTTP versions, which is not the intent. cc NAME"} +{"_id":"q-en-http-extensions-c922a5a5126d7747a1cf5b09cbca3538a89b4ec8663fca7d8ebb76d3b1e69f77","text":"Adds an example to the Examples section to clarify and illustrate that cookie names are case-sensitive. Additionally adds a clarification that cookie names could be empty to improve readability.\nAgreed\nWhat should a user agent do with the \"set-cookie-string\" ? The current RFC says that the name string can be empty and then in a subsequent step to ignore that cookie. URL\nHi, instructs >If the name-value-pair string lacks a %x3D (\"=\") character, then the name string is empty, and the value string is the value of name-value-pair. Otherwise, the name string consists of the characters up to, but not including, the first %x3D (\"=\") character, and the (possibly empty) value string consists of the characters after the first %x3D (\"=\") character. which covers the case of a nameless cookie such as . I think clarifying, similar to the value string, that the name string could be possibly empty would improve readability, I'll make that change.\nThank you NAME This means there is a change in the recommended parsing algorithm: now empty named cookies are ok. Is that intended?\nThat's correct. Even though it's strongly not recommended we found that servers still produced unusual set-cookie strings, including empty cookie names, and many UAs accepted them. So the parsing change is meant to standardized behavior so that UAs can at least all do the same thing with them. I still strongly recommend that any sites only use cookies with a non-empty name that conforms to 's syntax. Same for the value field too, now that I'm mentioning it. It'll probably work otherwise, but why make your life harder?\nI think the spec would benefit from getting this fact explicitly specified. 
and are two different cookies.\nIf you had asked me if this already existed I would have guessed yes. Seems like an obvious improvement\nLGTM. It'd be very nice if we could remove the notion of empty cookie names, but since we can't we should document them clearly. :\/"} +{"_id":"q-en-http-extensions-58441c8da704c7a29c181adbe02b2c0a8b85345d37b33024d183fa554c217b4e","text":"If the client gives Upload-Complete=false then the server must respond with Upload-Complete=false and not Upload-Complete=true.\nThank you for this correction!"} +{"_id":"q-en-http-extensions-6d216fda611cb9b0d582830f95aa857c1186e310fc4b0e69be5d9f680015463b","text":"It's customary to put the terminology and BCP 14 block in the same section, but this works."} +{"_id":"q-en-http-extensions-0436614295fcfbed5def06d022a793239922a78ab5582dc55bb3f62692374ef8","text":"NAME thoughts?\nSeems fine, with the edits Mark suggested. I'll merge and make those changes unless Paul gets to it first.\nLooks pretty good to me, some tweaks here."} +{"_id":"q-en-http-extensions-5eb4fcb4f9a2a67deeda37a149da105a76d7a9965b6799f5b4d1ee916e5bb334","text":"Related discussions: URL URL\nNAME thanks for the review, updated.. ptal.\nThanks!\nLGTM"} +{"_id":"q-en-http-extensions-b9906a5cb966a0144747536940a312d739beee8bee1fff5607c90feca88b05af","text":"This changes the grammar and text to permit multiple ALPN tokens in the header. This is necessary for HTTP tunnels (\"http\/1.1\" and \"h2\") as well as some WebRTC cases. I made some editorial changes at the same time. I'll split those out if we don't accept this."} +{"_id":"q-en-http-extensions-ef1bb7164586fafcc3f342cd8fdc290c73c0f6efa397b77b7789dd449ab2e178","text":"and .\nLooks good overall. changes the semantics of . Not only does it effectively put on subsequent responses from the same origin for the given lifetime, it also suggests that Client Hints should be added to requests to that origin, no matter where the requesting page has come from. Not a problem, but right now it's conveyed almost as an aside. This needs to be clearly specified so it isn't missed \/ misunderstood.\nNAME updated, PTAL.\nThanks Mark!\nThis is better."} +{"_id":"q-en-http-extensions-e2349b19cd059310cf3f75d89a0e3df4b3c0c92f47b27053b56120a8a54fa9e0","text":"While testing firefox's implementation of this draft Vlad determined that some versions of Chrome throw PROTOCOL_ERROR and close the connection if it receives a type 0xB frame with a non zero length on stream 0. Other frame type numbers, such as 0xC do not seem to have that problem. testing indicates this is fixed in version 59 and is a problem previous to that. (57 is in release as I write this). presumably this is related to historical code around BLOCKED which was proposed to use 0xB. Prudence says to just use a different code point for the moment.\nQuestion for NAME and NAME should we somehow reserve 0xb to avoid this problem?\nIf this is fixed in Chrome 59, why bother? Actually, I'd also revert this change.\nNeither. There are still old Chrome versions deployed (some people lag, for various reasons), so it's not safe to use now, but it should be in the not-to-distant future; we just need to wait a while to use it (presumably for another extension). Reserving it for all time would be a bad precedent."} +{"_id":"q-en-http-extensions-adec970eb715e2aebac220da0e94d43e61a1f7c93f490653dd538aa89f650500","text":"This changes Expect-CT to be a comma-separated list of directives (which may be name=value pairs), and adds some examples. 
Also allows the server to send multiple header fields so that they can be combined, as per URL Fixes issue ,\nWould it be possible to include an example in the spec? Something like... Expect-CT: enforce; max-age=3600; report-uri=\"URL\" As I believe most developers like to work with examples, rather than just using the spec definition as found under the heading \"Response Header Field Syntax\". :-)"} +{"_id":"q-en-http-extensions-7277073ede753fd0693ad45171d3cadf6677df9fa4a4a8fc09396d3b4d901d90","text":"... for your consideration; feel free to take, discard, or do whatever else you like :)\nThank you so much for the improvements. This PR is a tremendous opportunity for me to improve my skills (both in terms of writing a specification and in terms of English)."} +{"_id":"q-en-http-extensions-20e6b0e691ca0eaf6699b3be266806499e1345f9e09ce942d8beadd07ea12266","text":"Fix one typo. Technically, a set is always a subset of itself. Therefore current wording implies that if two connections have identical Origin Sets, then neither should be used for new requests, and both should be closed. Fix this by saying \"proper subset\" instead of \"subset\".\nThanks!"} +{"_id":"q-en-http-extensions-3e8b444ee71cb2185f788bf48ff8892c55caeec5259db076fdd6e1960aac795d","text":"The PR introduces a new SETTINGS parameter that can be used by a server to notify if (and how) it makes use of the CACHEDIGEST frames. You might find the discussion about how to use the parameter interesting. The text of the PR, under the assumption that the underlying transport will be TLS 1.3, suggests the client: to wait for SETTINGS parameter when doing a full handshake retain the value of the parameter in the TLS session cache and reuse the cached settings parameter value when doing 0-RTT resumption\nLooks good to me, delta the one comment I made."} +{"_id":"q-en-http-extensions-8f07ba3650e5536526dad80768978aa03a5a3ed5dfc7d66cb4b27583dbb6a744","text":". Since I wasn't sure if we can make normative references from an appendix, RFC5234 (ABNF) and RFC4648 (Base64) has been added as informative references.\nNAME Thank you for your review. I believe that I have addressed all four issues in the commit above.\nNAME Would it make sense to also describe the cookie-based method in this spec?\nWhile we could do that, I am kind of negative against it. We need a specification for Cache-digest using HTTP\/2 frames or HTTP\/1 headers, since it needs to be defined as an interaction between a server and a client (that has the knowledge of what is being cached). OTOH, we do not need a specification for a Cache-Digest cookie, since it is totally controlled by the server. So it is entirely up to each server implementation to use whatever format it wants. Also, it is not easy to find out the right way to implement a cookie-based Cache-Digest that would work on all web browsers, since the timing the browser consumes the cookies sent from the server depend on the timing of the response. 
To summarize, the relatively low necessity and the technical difficulty makes me hesitate in pursuing the idea."} +{"_id":"q-en-http-extensions-ff0fa17ece8ff7d9985589e8e68a64038518d65421a23be226fd91757a91ea57","text":"The consequences for ORIGIN + Alt-Svc with a port change aren't awesome, but this should at least remove any confusion.\nAre you going to take another go at this?\nSorry, I let this one drop because I missed your suggestion, which is good.\nThanks!"} +{"_id":"q-en-http-extensions-1cd98a0ea231a84c666a59e285e3f1db9cc43151e3a9700031943d1699271baa","text":"The header needs to be registered if it's going to appear in an RFC. Looking at 7540, I think the frame type needs to be registered as well, since \"Experimental Use\" is private \/ local only.\nThank you very much!"} +{"_id":"q-en-http-extensions-8e710b3c8f663973858f7b733f2ce6d8819ab17aa38207309e173e35b091304f","text":"This is a PR that essentially rebases the change proposed in . To me the change proposed by NAME is a clear improvement. I believe that we should merge this.\nRegarding The requirement in the last sentence needs to be re-worded, since includes parsing and the client is clearly parsing those header fields and using them.\nI would rewrite the paragraph as\nCollision with my suggested edits; closing. Please ask to reopen if you still think there's a problem; we can address it in WGLC."} +{"_id":"q-en-http-extensions-f6eb9bda0a0d73cb4e884ab7da1dfd0e63f35b3c0ac5fa6a3c0f840387103f45","text":"removes statements that are either redundant or contradicts with RFC 7230, 7231 clarifies how a server is expected to do when generating a response\nWhile it is not required to specify how an intermediary might behave, doing so might be beneficial as suggested in URL We could possibly have something like the following below the example.\nworks for me.\nNAME Thank you for the answer. Adopted the text (with nits) in d0903d6.\nNAME Thank you for the review!\nLooks good to me!"} +{"_id":"q-en-http-extensions-8fddca51e25cefeb7c0a92f594a97b2809c2d2d2c1e9fef2910b88855180ea32","text":"Clarify that the disappearance of a header field (that once existed in a 103 response) from the following 103 responses does not indicate the retraction of the expectation that the header field will be included in the final response.\necb2c7e clarifies the general rule that this PR relies on (i.e. the nonexistence of a header field in the 103 response cannot be used as a signal that the header field will be absent in the final response), at the same time addressing Spencer Dawkins' suggestion to clarify that \"the server can add header fields in the 200 that were not present in the 103.\"\nIn section 2. \"103 Early Hints\" the last paragraph describes that 103 can be send multiple time. The given example reads as if a server may correct its previous 103 that is made from a cached resource. So the following 103 are correcting the older ones, they are only adding headers to the previous ones or are they replacing them completely? Maybe sentence clarifying this would be good.\nThis is an interesting question. I think that we should agree on what the expected behavior is before discussing how (or if) we should update the text. Regarding the question, I think that we should consider two (or more) Early Hints responses as expectations from different sources, rather than considering the following one to update the earlier ones. 
The reason I think so is because nonexistence of a header field in the 103 response does not imply that the header field will not exist in the final response, since we are not required to include in 103 response all the headers that are expected to be included in the final response. So consider the following example: a caching intermediary might generate an Early Hints response from a stale-cached response. That Early Hints response could contain a header field that will never be included within an Early Hints response sent from the origin. In such case, considering both of the Early Hints responses as genuine makes the most sense."} +{"_id":"q-en-http-extensions-8d91a045fed6ac3797f8ee0dbe8e040cccf3e55fec94a1b5aff3fd4864ed5663","text":"I left the term \"request-uri\" untouched (it was renamed in the base spec because it's not always a URI). We may want to change that separately, or even refer to the effective request URI (URL)\nNAME - ping?\nLGTM, thank you for the patch, and for the ping!"} +{"_id":"q-en-http-extensions-2a93b17f2af5185fc0a58b42f3230952bd97ed6814f6fb737609ee24bb3cf49d","text":"(I accidentally put this on master without noticing earlier, sorry.)\nCan we just have 425 now that it's official-ish? From martinthomson\/http-replay"} +{"_id":"q-en-http-extensions-658c6ed0e01dcb96d741a2e305a411b899031d7f37643e2d8b83a29ead482f2b","text":"NAME was confused by the text here and I confess that I don't remember why it was added. There are actually esoteric cases where you get server data that you can read before the handshake is complete (QUIC tends to enable that sort of mess), but there is no circumstance where the client can read the 425 but can't also determine whether that the handshake is not done to its satisfaction. The client might decide that it wants to abort and not provide a certificate or something, but most implementations don't really allow that sort of messing around."} +{"_id":"q-en-http-extensions-7d485ea136e72cc98210bb3d770f89569772b069c288d6d162e92a8b6e20ec54","text":"The best and easiest plan is to discard, but we know that throwing out perfectly good packets can have bad effects on performance. It sends a bad signal to a congestion controller for instance. So we need to say either discard, or ensure that the data is properly attributed and consistently processed.\nI think you covered it all this way. +1.\nThe conclusion in the draft is that early data is relatively \"safe\" (for some definition of the term, see the draft for details) if the handshake is complete. Early data in a transport that provides ordered delivery will always arrive before the handshake completes. But this is not the case in QUIC, or anything like DTLS. We should offer some advice for these situations. From martinthomson\/http-replay\nBut can this really happen ? I mean, my understanding is that the handshake happens in response to a Hello message which also conveys early data. Can we have early data outside of a hello message, or may we receive a handshake response before a hello message ?\nWith reordering of packets in QUIC, you can get a packet that was sent before the handshake completes after the handshake completes. The order of sending might be \"ClientHello, 0-RTT1, 0-RTT2, Client Finished\", but the server can get \"ClientHello, 0-RTT2, Client Finished, 0-RTT1\". Early data is sent in separate packets (this is a hard rule right now, but we might change the format so that the packets can be sent together in future). 
This is unlike TCP fast open where you only get to send the \"early data\" in the SYN segment. You can send an awful lot of early data if you want.\nOK thanks for the explanation. I tend to think that such behaviours are out of the current scope of the draft but will fall into it as QUIC progresses. Maybe all of this can be summarized (for now) as \"In case a TLS implementation supports receiving early data after the handshake completes, such early data has to be silently discarded\". That covers this case for QUIC and any possible TLS implementation specificity or even bug.\nI think that we might add \"... discarded unless the data can be unambiguously identified as being early data\" or something like that. After all, it's not replayed data if the handshake completes, so I don't want to prohibit its use. The main problem is that it's hard to properly identify and handle. FWIW, we debated this for our DTLS 1.3 implementation and chose to discard. That is the easiest answer.\nI'd still be tempted to simply discard for now, just to avoid the problems of confusing the client with an out-of-order handshake that may make it think the \"late\" early data were lost while in the end they are considered. I'd rather avoid a client sending a request again as regular data and seeing the server concatenate it with the late arrived early data for example. I really think it's something we can improve later as the out-of-order use cases become more obvious. It would also offer us a safety belt allowing more fancy stuff in QUIC, otherwise we could end up saying \"nah don't do this or that, it could cause trouble with early data\". The rule might then be relaxed. It will not affect TCP-based implementations anyway."} +{"_id":"q-en-http-extensions-3a12ffd27f41d5e214da8526cb4dd1f5389a8fd02d941ff00560a6c065bcbbbd","text":"This text from Kyle Rose highlights the decision process. If any node might make a different decision about processing early data, then this node has to wait.\nI like it, +1"} +{"_id":"q-en-http-extensions-e3d2e638ba601e0ad054d726511b9cfe623d6361fd180fcfa8c6004eb05e34f5","text":"Resolves URL\ncc NAME NAME\nHey Yoav, Thanks; will take a look. Two immediate things: You're getting a error in the markdown; . I see you've added yourself as an author. That's generally the decision of the chair - NAME in this case.\nHopefully fixed. Is there a way to test it locally? Apologies for the noobness. Removed myself.\nSee URL for build info.\nNAME Thank you for working on the proposal. Am I correct in assuming that changes other than the switch to Cuckoo filters and the introduction are unintentional? For example, I see flag of the frame being removed. Do you have a working code that implements Cuckoo filters? I am curious to see it working. The concept of makes sense to me. Maybe we might want to adjust the codepoints and the naming in relation to .\nOh, I now understand the intent of removing the flag. The motive of the proposal is to build a digest without referring to every response object stored in cache. The fact means that it is not be easy for the client to determine the freshness of the entries that is going to be included in the digest. I am sympathetic to the idea, but I am afraid if the approach works well with the current mechanism of HTTP\/2 caching. My understanding is that browsers that exist today only consume a pushed response when it fails to find a freshly cached response in its cache. Otherwise, the pushed response never lands in the browser cache. 
Unless we change the behavior of the browsers to respect the pushed response even if a freshly cached object already exists in its cache, there's a chance that servers would continually push responses that gets ignored by the client (due to the existence of a freshly cached response in the browser cache with the same URL). NAME Assuming that I correctly understand the motive of removing the distinction between a fresh digest and a stale digest, I would appreciate it if you could clarify your ideas on the problem.\nThanks for reviewing, NAME :) My intent was to include all stored resources in the digest, regardless of them being stale or fresh. Entries are added to the digest when a resource is added to the cache and removed from the digest when a resource is removed. The reason is that I think the distinction doesn't make much sense, and maintaining it adds a lot of complexity, basically forcing browsers to recreate the digest for every connection at O(N) cost. Under this premise what servers should do is: Push all the resources that are known not to be in the cache digest Push 304 responses for resources that are in the cache digest, but are likely to be stale (short freshness lifetime, etc) Don't push resources that are in the cache digest and have a long term freshness lifetime or are immutable. Does that make sense? I'm not sure I understand your reference to the push cache vs. the HTTP cache in your comment. In light of my explanation, is there still an issue there in your view?\nURL is the reference implementation. Happy to change it. Do you have any specific changes in mind?\nNAME Thank you for the explanation. I now understand the intent better. I think that we need to consider two issues regarding the approach. First is the fact that a browser cache may contain more stale responses than fresh resources. Below are the numbers of cached objects found in my Firefox's cache (to be honest the date is from 2016, I haven't been using Firefox in recent weeks and therefore cannot provide up-to-date data). total ---: 2,273 1,003 As you can see, large scale websites tend to have more stale objects than fresh objects. In other words, including information of stale-cached objects increases the size of the digest roughly three times in this case. Since performance-sensitive resources (that we need to push) are likely to be stored fresh (since they are the most likely ones marked as immutable, or near-immutable), transmitting only the digest of freshly-cached responses makes sense. Second is a configuration issue on the server side. One strategy that can be employed by an H2 server (under the current draft) is to receive a digest of freshly cached resources only, compare the digest against the list of resources the browser should preload by only using the URL, and push the missing resources to the client. It is possible for a H2 server to perform the comparison without actually fetching the resource (from origin or from cache) since only the URL would be required for calculating the digest. The proposal prevents such strategy from being deployed since it requires the ETag values to be always taken into consideration (should they be associated to the HTTP responses). In other words, servers would be required to load response headers of the resources to determine if it needs to be pushed, which could be a huge performance degradation on some deployments. Fortunately, servers could avoid the issue by not including ETags for resources that it may push. 
I think such change on the server-side configuration would be possible, but we need to make sure if we are to take the path (of removing the fresh vs. stale distinction). Let me explain using an example. Consider the following case: client has with and on server-side, the resource has been updated to When receiving a new request from the client, the server cannot determine if the client has URL in its cache. Therefore, would be pushed. The client, when observing (or equivalent tag), tries to load the resource. Since the fresh resource exists within the browser cache, that would be used. The pushed version is ignored and gets discarded (). This would be repeated every time until the cached object either becomes stale or gets removed from the cache. My understanding is that the browser behavior (explained in ) is true for Firefox and also for Chrome. Am I wrong, or missing something? Thank you for the link. I will try to use it. OTOH, do you have some working code that can actually calculate the cache-digest value taking a list of URLs as an input (something like URL)? I ask this because it would give us a better sense in how the actual size of the digest would be. One way to proceed would be to split the discussion of from Cuckoo filters into a separate issue or a PR. I do not have a strong opinion on the naming or the codepoints. What do you think? NAME\nNAME Have you considered the approach using Cuckoo filter to generate GCS? I can understand the fact that you do not want to iterate through the browser cache when sending a cache digest. Per-host Cuckoo hash seems like a good solution to the issue. OTOH, as I described in my previous comment, it seems that sending the hash directly has several issues. That is why I am wondering if it would be viable to generate GCS from the per-host Cuckoo filter that would be maintained within the browser. I can see three benefits in the approach, compared to sending the values of Cuckoo filter directly: the size of the digest will be smaller we can keep the distinction between fresh vs. cache. Sending digest of fresh resources only would end up in even smaller digests. Retaining the distinction lowers the bar to deploy cache-digests on the server side. note: you can store the time when the cached object becomes stale in the data associated to the Cuckoo filter entry (assuming that you would have associated data to handle resize, as we discussed in URL). That information can be used when builiding the GCS to determine if a particular object should go into a GCS of fresh resources or that of stale ones less change to the browser push handling (no need to handle pushes of 304 or replace a freshly cached object when an object with the same URL is being pushed) In case of or in the comment above, sending fresh-only digests using GCS would be about 1\/3 the size of sending fresh & stale digests using Cuckoo filter. The biggest cost of calculating GCS from Cuckoo hash would be the sort operation. But I think that the cost could be negligible compared to the ECDH operation that we would be doing for every connection, considering the fact that the number of entries that we would need to sort would be small (e.g., up to 1,000 entries of uint32_t), and the fact that sort algorithms faster than O(n log n) radix sort can be deployed (e.g. radix sort). WDYT?\nSo have a cuckoo filter digest and then put its fingerprints in a GCS? I have not considered that. Need to give it some thought... At the same time, it's not clear to me how that would enable a \"stale\" vs. 
\"fresh\" digests, or handling of improperly cached resources (fresh resources that were replaced on the server).\nI would appreciate it if you could consider. To me it seems it's worth giving a thought. Under the approach proposed in this PR, structure that stores the per-host digest would look like below. is required for resizing the filter (e.g., when doubling or halving ). What I am suggesting that you could change the structure to the following. In addition to the hash value, each entry will contain the moment when the entry becomes stale. The moment can be calculated when the entry is added. For example, if the entry represents a HTTP response with a , can be calculated as . If the entry represents an immutable HTTP response, then should be set to a very large value (e.g.. assuming that underlying type of is ). When building a GCS digest, you would do the following: step 1. prepare an empty list that would contain hashes of fresh responses step 2. prepare an empty list that would contain hashes of stale responses step 3. foreach entry in cuckoofilter: step 3-1. check if the entry is fresh or not, by checking the value of step 3-2. if the entry is fresh, append of the entry to the list of the hashes of fresh responses step 3-3. otherwise, append of the entry to the list of the hashes of stale responses step 4. sort the list of hashes of the fresh responses, encode as GCS, and send step 5. sort the list of hashes of the stale responses, encode as GCS, and send You can skip the operations related to stale objects (i.e. step 2, 3-3, 5) if the server is unwilling to receive stale digests. Whether the approach can be implemented depends on if a client can determine the moment a response becomes stale. I anticipate that it is possible to determine that when you register the entry to Cuckoo filters (which is when you receive the response from the server).\nNAME NAME I'm planning to attend the IETF 100 hackathon this weekend in Singapore. (First timer here. ) I'm happy to collaborate on a (URL) implementation of this spec if either of you are around and interested. I'm fairly familiar with the current spec, having implemented it and .\nNAME Wonderful! I'll be attending the hackathon on both days (i.e. Saturday and Sunday). I do not think that I would have time to work on Cache Digests, but would love to discuss with you (or help, if you need) about your work on Cache Digests.\nI've got an incomplete initial reference implementation at URL It doesn't yet include removal and querying (that's what I'll be adding next), but I did run it on a list of ~3250 URL (which I got out of my main profile chrome:\/\/cache\/) and it seems to be creating reasonable sized digests. One more advantage, the digests seem to be highly compressible when sparse. Results so far: Digest with 1021 entries (so room for ~4K URLs): 5621 in-memory, 5233 gzipped (when filled with 3250 URLs). Digest with 2503 entries (so room for ~10K URLs): 13772 in-memory, 6879 gzipped (same 3250 URLs). Digest with 7919 entries (so room for ~31K URLs): 43560 in-memory, 9984 gzipped (same 3250 URLs). In practice, I think ~1000 entries is most probably enough, but it's good to know we can increase the digest size (to avoid having to recreate it), without significant over-the-wire penalty.\nOK, I now have a complete reference implementation and it seems to be working fine. It also exposed an issue with the initial algorithm, forcing table allocation to accommodate a power of 2 number of entries. 
Latest results for 3250 URLs taken from my cache: One note: the 1021 entries table had 35 collisions, so it seems like it's insufficient for that number of URLs, unless we're willing to absorb extra pushes for ~1% of the resources.\nNAME Interesting! It's good to know that we have numbers now. What is the value of P (the false positive ratio) that you used?\nP=8 (so 1\/256 false positive)\nNote that the numbers here may be possible to further optimize. One example is semi-sorting of the buckets which the Cuckoo-Filters paper mentions, and which I have not yet implemented. It adds some runtime complexity, but can reduce the fingerprint size per resource by a full bit, so could have resulted in ~9% smaller digests in this case.\nNAME Awesome! Me not being familiar with these data structures (despite reading Wikipedia article ), why does the 1\/256 probability (~4\/1000) result in 35 collisions?\nThe 35 collisions are on top of the false positive rates, and represents resources that we failed to put into the table to begin with (due to both their buckets being full). That rate of collisions seems high compared to the results in the paper, so I need to dig further to see who's wrong...\nThe collisions are now fixed. It was an algorithm issue, where the entry to be pushed was always the same one at the end of the bucket. I've change that to be a random fingerprint from the bucket, which significantly improved things. The ref implementation is now collision free almost up to the point where the digest is full."} +{"_id":"q-en-http-extensions-0b0abdec1081f29a91911dd72538620e1741288fc1451b25e61bcd8f68a021c7","text":"mention the syntax update the reference type the citations consistently\n1) It should be RFC 5234, not RFC 4234 2) If it's there, it should actually be mentioned somewhere in the spec."} +{"_id":"q-en-http-extensions-053f2404a1721add16c20d7fd11580086c168bceb9628e1cd7f5e7489e85f251","text":"This makes mitigation mandatory. Either receive a signal that early data is safe for the given resource, or apply the mitigations. That's consistent with the other text.\nLGTM"} +{"_id":"q-en-http-extensions-2576a4f4eae20a9f9445896c75e26911ab22b577ebee4e5024fa74e53dbd275c","text":"I think that these are the main places where we weren't clear enough about the relevant TLS role.\nLGTM"} +{"_id":"q-en-http-extensions-a8ae2befaeb5f1c160d3c91b7183ec529885a76781a3851acdcf0b637b3f7045","text":"John Mattsson pointed out that we didn't really discuss the risks of use of early data. That's fair, but we don't need to. This rewrites the abstract.\nJust one fix : s\/mechanisms that allows\/mechanisms that allow\/ Nice overall."} +{"_id":"q-en-http-extensions-57f6471da372c39c5c53fd4ab250c6a0800a86ac5dd2cd503a9c583ce3ada5ab","text":"Hopefully none of this collides with other PRs in flight.\nI think it's fine with this. 
Adding a bit of pragmatism as you did in the last commit also reminds that the low cost of retrying is not worth a lot of trouble :-)"} +{"_id":"q-en-http-extensions-a632bb22f3cdef4ced49ad17ce123b4259cf2790dd441767cb03dae7186970a2","text":"Thanks NAME and NAME for pushing on this.\nI think this is good and can even emphasize how to consistently disable 0-RTT across a cluster for any reason then consistently re-enable it."} +{"_id":"q-en-http-extensions-bca4d830d00b967fcb5b8c77d900dabf9a311a50978579a10e813377fea7ec5a","text":"Note this was manually incorporated into draft-ietf-httpbis-rand-access-live-03"} +{"_id":"q-en-http-extensions-b9c33dd1bf30cc7bbaf8260ed8d2f2c39488f97cee88eb3497702aa10e3eb928","text":"Add SETTINGS prefix to parameter name ENABLECONNECT_PROTOCOL. Fix one typo (\"s\/a entry\/an entry\/\"). Remove \"(type = 0x8)\" which is ambigous (what is type?) and redundant with code listed in IANA Considerations section.\nthanks"} +{"_id":"q-en-http-extensions-6876759721d0d463bff550f00a04c7f565114832f5915a33e6373e501517872a","text":"Herve, could you please open an issue for this, so that we can assure that we track the underlying issue? It's OK to merge the proposal if the editor feels that it's helpful, but we need to make sure the WG gets consensus here. Thanks,\nLinked to issue ."} +{"_id":"q-en-http-extensions-5f33f4b455188a141519056da5d689c3b0a9f80a2cfbbf45d13281e24bb471f2","text":"Port over of PR from mnot repo. Modified last sentence to avoid to much usage of the word \"further\".\nThanks!"} +{"_id":"q-en-http-extensions-ef14e483e02b788ab5cafb3e2cd9c18299d5d6b39fcaf565bca5f49de2d3bac0","text":"Mention obsoletion in the abstract, tune section title in appendix, and actually cite RFC 3205."} +{"_id":"q-en-http-extensions-69bb39f30c67330c4de31b8066c352cb6795a987c290d6c6f22eaca3749c8214","text":"Aligns with other SETTINGS parameter names defined in other docs.\nNAME icymi\nThank you for the fix!"} +{"_id":"q-en-http-extensions-e47e0c3ea87249490f3bb97d24503ac3ee2e94f96bf6e90e4c8d8e3620994b0a","text":"This adds a section regarding falsification of Tunnel-Protocol. I didn't expand this to note that WebRTC won't suffer from this problem because the application won't be able to set the header field. That seemed like too much detail."} +{"_id":"q-en-http-extensions-b01066f08af22c3ad024404dadb455221791bc8604e796cab6a8051c4f0d8833","text":"for encryption. I had these in my local repo. They weren't pushed, but I wanted to make sure that they are OK before merging them."} +{"_id":"q-en-http-extensions-ecb14597e3594a8cec0f2c0c7de0ba7f2d5ed816cbd2f1110a89a4a58aa5299a","text":"This algorithm automatically enforces the ABNF syntax, including length limits. It still punts on the actual strtoi\/strtof part. ?\nMuch better, thanks!"} +{"_id":"q-en-http-extensions-aa48b5f18d7605b495f584310dd08e886fd8f46c0aa8ebcc96a6a5b90121a2e9","text":"This just fixes a typo, but this text has more problems. I'm having trouble connecting this final sentence to the text that precedes it. I think that the point is that including a body on GET is unwise because it might cause explosions. A point about side effects needs far more exposition, especially since GET is defined to be free from overt side effects (safe, idempotent, and all that business). If this were just another statement and the \"As a result\" is dropped, that's different.\nBarg, I think that's a copy\/paste error, will fix. 
thx."} +{"_id":"q-en-http-extensions-a7b2cb610009454e487f650c9cfc8ed134d34eef426e09ef1789b2e236a44fc7","text":"For\nAllow a response to associate itself with more than one variant. (moved from mnot\/I-D)"} +{"_id":"q-en-http-extensions-af867fd3924747288a06ff6780d24bfc9c4e32bcdfa917b66878393b1bbdc3ca","text":"There is currently a discrepancy in the spec regarding invalid values in the SameSite cookie attribute. For example, this cookie: Set-Cookie: foo=bar; SameSite=bogus is expected to be dropped entirely according to the \"Server Requirements\" under Section 4.1.2.7: If the \"SameSite\" attribute's value is neither of these [ \"Lax\", \"Strict\" ], the cookie will be ignored. whereas under Section 5.3.7 of \"User Agent Requirements\", the cookie is to be kept but the attribute is ignored: If cookie-av's attribute-value is not a case-insensitive match for \"Strict\" or \"Lax\", ignore the \"cookie-av\". Additionally, the end of Section 4.1.2 also matches the behavior described in Section 5.3.7: User agents ignore unrecognized cookie attributes (but not the entire cookie). From a forward-compatibility point of view, the behavior described in section 5.3.7 is the ideal one since it allows for future expansion of this feature such as: Set-Cookie: foo=bar; SameSite=medium\nNAME\nDone. Let me know if I missed anything else.\nURL Section 3.1 of the spec states that \"SameSite\" alone (without an attribute value) is a valid token: URL Section 4.1 URL states If \"cookie-av\"'s \"attribute-value\" is not a case-insensitive match for \"Strict\" or \"Lax\", ignore the \"cookie-av\".\nAh. URL draft is outdated vs. URL The latter no longer allows a bare \"SameSite\" token, and it calls for the cookie to be dropped in the event that the samesite-value isn't either \"strict\" or \"lax\". However, I think this text isn't quite right: If 's is not a case-insensitive match for \"Strict\" or \"Lax\", ignore the . The in this case is 'SameSite=whatever' and we want to ignore the whole cookie, not just the invalid attribute?\nis my proposal to fix this. We ran into this issue in the Firefox implementation: URL\nThanks!"} +{"_id":"q-en-http-extensions-65c8089743948da0b93c1f785c24974ea828b8828085a5d225cfb830472de222","text":"Also considering if \"Security Considerations\" can be \"Compatibility Considerations\".\nI haven't renamed \"Security Considerations\" in this version. And I have no issue with calling this \"Compatibility Considerations\". But I have some recollection that this is a required section...\nYeah don't do that or various people will start to scream at you :)\nCall me a perfectionist, but on reread, I realized there was a cleaner\/shorter way to clarify. Let me know if you have any objection...\nThanks Martin!"} +{"_id":"q-en-http-extensions-8e34f2c4eaac1fe117eddf3c0969289dc0a62d16f33bed475d94333e2ea330ce","text":"According to RFC 7231, Section 8.3.1. I didn't include anything here about Vary because I wanted to discuss that first and these seemed uncontroversial. It seems OK to say that Vary is permitted here because we expressly talk about varied reactions based on the presence of Early-Data. If that's right, I will amend this with a second commit. (I'm jetlagged, so being a little cautious.)\nYour proposal looks very clear to me, despite your jetlag :-) I agree with you that Vary should be OK. 
I don't even think we need to say anything about it then."} +{"_id":"q-en-http-extensions-952ac8a5dcbb7ccfc8c472cda49b89d11299c7fa187936a8fc8cb5f3592fcdd9","text":"Our basic list of mitigations missed a fairly obvious one. Mentioning it should help with the class of confusion Magnus N. had with the draft. I decided not to include a note about the server being unable to examine early data before making this decision. That's just something people will need to discover for themselves. Generally, you have to decide whether you want 0-RTT without seeing any of it. Partly this is because it avoids a potential deadlock, but mostly it's because the TLS stack will not even decrypt 0-RTT if it is rejected.\nLooks good to me, and indeed clearer. Thanks.\n+1"} +{"_id":"q-en-http-extensions-88ecf244523f8825e07a49a2f1b10e545594301507ca73a1b812fbaa13d5faa1","text":"This was a little obtuse in the previous iteration, likely as a result of trying to be parsimonious. Expand it a little and clarify the attack.\nIt's indeed better like this in my opinion."} +{"_id":"q-en-http-extensions-ebd2c1b952f4a15bdc0665eac3b9be2a409ea9267e1548a98f06c0448483c689","text":"URL too wide artwork by changing the second example to use multiple field instances (which, in itself, is good to have in the examples anyway)\nThanks!"} +{"_id":"q-en-http-extensions-4e276f056dcea9c299f2080cbd8e23ba9e185af518ced67d62c3e0f3fff8bd3b","text":"Last call discussion identified this as a solution to the confusion that exists around the intent of the header field.\nApparently, this is confusing. Picking a new name might help."} +{"_id":"q-en-http-extensions-19d90c2af567586699d5630f81b269984fadd7ff3189cbdc13e8966138bdba2b","text":"They were both special, but for some reason the first reference wasn't getting brackets in the text output. Putting whitespace between them doesn't affect that, but putting other characters does.\nLGTM. (And yes, I tried whitespace as well :-)"} +{"_id":"q-en-http-extensions-55a2d506fecda086ea8ad659e278b37ab63968eef6daff239badca299e7ef636","text":"NAME had a few comments in the review of this that are easy to solve.\nAll changes are OK to me as well. Thanks!\n43d2c16 does a great job capturing the point that I made poorly off-list; thanks again!"} +{"_id":"q-en-http-extensions-b48a53a4921f7ea713195a333c5abb10f9d3236e9c1afb6aff09550dd2ec6f95","text":"Just editorial suggestions here, but all good ones.\nNice wording, I really like this new version. Thanks!"} +{"_id":"q-en-http-extensions-2111948a36e21f3d183902d52f1a6199595ef76bf99424baaae184eec631b6ad","text":"This will conflict with the changes ekr suggested, but I integrated them, so merging shouldn't lose the important changes.\nFor \"sent in early data on another connection\", I think Spencer's proposal \"sent in early data on a previous hop\" would make it even clearer regarding the hop chain so as not to confuse this with another connection from the same user-agent. OK for the rest."} +{"_id":"q-en-http-extensions-72f1c8e067dcc6d1983276b5a5b360eb36de62e7ab0218eceed6a2cbb0cce307","text":"Follow-on to ; in the process of fixing this, I noticed that I'd added a \"server\" header but missed adding a corresponding \"client\" header, so the client discussion appeared in the server section.\nFor some reason, that seems to be endemic in this doc. Let's do that cleanup separately.\nI appreciate the addition of the section heading. 
Though Section Headings Use Title Case."} +{"_id":"q-en-http-extensions-617c620f3c6f4d9b301fc596be66b0b8f755e1508efd02bd4f1a75aa2f335647","text":"Per discussions in fetch [1], update the opt-in example to apply to same-origin resources only, with Accept-CH-Lifetime being processed on navigation requests. [1] URL \/cc NAME NAME"} +{"_id":"q-en-http-extensions-1589b923694b432e5d5ea0e14034c85e4e68fe7d2a2c5099291b3b13484848e3","text":"Step 4.4 equates \"the range %x00-1f or %x7f\" to \"not in VCHAR\", but VCHAR doesn't include %x20; this change adds \"or SP\" and imports SP from RFC5234.\nThanks!"} +{"_id":"q-en-http-extensions-21435ace0a1bd2c0d1b1894b9f83e3fbf9f4b1856d25aad47fc168123a6c3790","text":"Serialisation algorithms for dictionary and parameterised-list didn't include commas (?) This copies the equivalent steps from the algorithm\nJust noticed that this is\nThx."} +{"_id":"q-en-http-extensions-c44ff30c99d6d697e34a7fdb43e1ca1a14218036dadeecb31920f6b32427903a","text":"Makes \"Parse a Number from Text\" not be greedy, and return (not fail) when it encounters a non-digit character.\nI'm pretty sure it works. Steps 5 & 6 ensure there's always at least one digit. All the calling contexts of \"Parsing an Item from Text\" (including the top-level algorithm in §4.2) inspect the next character, and handle it appropriately.\nSGTM\nSection 4.2.6 (Parsing a Number from Text) is ambiguous. It makes no mention of the exit condition from the while loop in step 7 on parsing a number from an input buffer.\nI don't see it. The loop terminates with \"fail parsing\" in 7.4 (invalid character), or 7.5\/7.6 (too many digits); or terminates successfully by virtue of \"input_string is not empty\" becoming untrue.\nAn example might clarify what I find ambiguous. Let us suppose that the input string is \"-11-1, 4, 3\", and that we are parsing a list. We'd call parseItem(), which would then invoke parseNumber(). When should parseNumber() stop reading in characters? Should it read in \"-11\" and then leave the rest to the next parsing function, or should it read in \"-11-1\" and then fail parsing? The other sections are unambiguous as to when to stop reading in characters (for instance, section 4.2.7 step 4.3 states that parsing of a string would stop when you encounter a DQUOTE, and section 4.2.9 step 3 states that parsing of binary content would stop when you encounter an ending asterisk).\nOh I see what you're saying. Yes, step 7.4 shouldn't be so aggressive."} +{"_id":"q-en-http-extensions-77a9e3464e2011aef6549418ce8db65cf61fbf39762e6f5999fc6b6f9cfc4242","text":"Serialised wasn't getting appended to\nWould it be better to just have each of the steps before return their result?\nI did wonder that when writing it. Or make them all . Or however you think looks best. I probably should have just filed it as an Issue. :thinking:"} +{"_id":"q-en-http-extensions-2b09a71e50088c424be1a9bd2207a363679bfebd04dcbe814da69ca35ecdba99","text":"When parsing a number, multiply back in to before testing against the range defined in Section 3.5.\nThanks!"} +{"_id":"q-en-http-extensions-1407f2ee7afa971c248ebb8674cb30c6b3d63904283bfb9126e50f9d4bd8f1f6","text":"How's that? It looks kind of like the equivalent case in Parse Dict\/Parse List, but more descriptive than it was."} +{"_id":"q-en-http-extensions-c601d0adaa7cc78ede2b8a369e8eae1e6af6d0224afe9e819088d8906bb528f5","text":"Closes URL NAME mind taking a look?\n(NAME would you mind adding NAME to the repo as a collaborator so I can ask him to review via the GitHub tool? 
:) )\nPing.\nInvite was sent a while back. Still listed as \"pending.\"\nIt is an internal flag, but I don't think the PR needs additional documentation. The change we're making is in the middle of an algorithm, and is basically a variable that was set earlier in the process. I think folks reading the document will understand that. :)\nThanks for the review! I'll merge this, and look forward to your PR adding yourself as an editor.\nRFC 6265 section 5.3 defines a cookie by the name, domain, and path. However, given these two headers in a response from a request to URL Set-Cookie: mycookie=nothostonly; URL Set-Cookie: mycookie=hostonly Most browsers will have two cookies since they include host-only-flag in their definition of a unique cookie. I'd like to update section 4.1.2 and 5.3 to reflect the behavior of modern browsers with regards to host-only-flag.\nDiscussed in Berlin; seems reasonable.\nI probably need to read more of the context but would it be useful to point out that this is an internal flag for the user agent? It is, right? Inferred from whether the setting of the cookie included the domain or not."} +{"_id":"q-en-http-extensions-8c3090516e0259b0ffadfbcdd296268046123cd058ba4f186fba6503f33a5fb2","text":"For\nfrom the list Maybe we should be talking about a Variant-Key syntax that lets you specify alternative sets of values, e.g.,"} +{"_id":"q-en-http-extensions-f58e1f2e8d7dd7e4161646563c5858b3fe9a699e27acee54d259ff592a97ee72","text":"Takes a dependency on URL\nI think that it would be better to explicitly reference CERTIFICATE_REQUEST in the cases where a CERTIFICATE is generated directly in response to one. Using the \"get context\" API from exported authenticators is ugly. This is more important now that exported authenticators folds the request into the response. We probably want to use a flag for whether this is spontaneous so that we can avoid reserving a Request-ID for spontaneous CERTIFICATE messages.\nI'm a little dubious about this. Having the same value transported in two different fields and then checking that they're equal seems potentially more error-prone than just transporting it in one place, even if that one place is by calling into a library to interpret for you.\nFor posterity, discussion in Montreal: Nice not to have to dig out the Request-ID. However, if you duplicate it, then you have to compare and check that they're equal, so you're still calling into TLS. There is an existing duplication in the CERTIFICATE_REQUEST, where the Request-ID is carried outside the EA."} +{"_id":"q-en-http-extensions-0f2d1393d37ee8376dfce5ab004865b92d3c6252e042fa81ce2933cb1ceea7c0","text":"but I'd prefer a better recommendation here than \"educate your customers.\"\nSuppose a site is hosted on a particular provider (or CDN, equally). The customer flips the DNS entry to point to a new provider. However, the old provider doesn't know that and might continue to serve traffic via ORIGIN + Secondary Certificates for requests that originally reach the provider via other DNS records. It's current industry practice to leave the old provider configured and hot during a transition to ensure the graceful cutover. However, the DNS record change is considered to be sufficent. We might want to recommend that servers periodically ensure the DNS record points to them before serving Secondary Certificates \/ ORIGIN references. 
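One possible shape for the periodic check suggested above, before a server keeps advertising an origin via ORIGIN frames or secondary certificates: re-resolve the name and confirm that at least one answer is an address the operator knows belongs to this deployment. Everything here (IPv4 only, the caller-supplied list of owned addresses, how often to run it) is an illustrative assumption rather than anything the draft requires.

#include <stdbool.h>
#include <stddef.h>
#include <string.h>
#include <netdb.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Does `host` still resolve to one of the addresses this server owns? */
static bool dns_still_points_here(const char *host,
                                  const struct in_addr *own, size_t n_own)
{
    struct addrinfo hints, *res, *ai;
    bool found = false;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;            /* IPv4 only, to keep the sketch short */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, NULL, &hints, &res) != 0)
        return false;                     /* resolution failed: stop advertising */

    for (ai = res; ai != NULL && !found; ai = ai->ai_next) {
        struct in_addr a = ((struct sockaddr_in *)ai->ai_addr)->sin_addr;
        for (size_t i = 0; i < n_own && !found; i++)
            if (a.s_addr == own[i].s_addr)
                found = true;
    }
    freeaddrinfo(res);
    return found;
}

An origin that fails the check is simply dropped from the advertised set until a later check passes again, which keeps the behaviour consistent with the DNS cut-over practice described above.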
I think this is Security Considerations prose, not protocol elements or requirements, however."} +{"_id":"q-en-http-extensions-93b2de4e966e8d5a794908cdcca6c7bf1422a342661698912dbb2d5c0ccebee2","text":"Serialisation of dictionary and param-list currently say instead of This fixes that.\nBlah, good catch; thanks."} +{"_id":"q-en-http-extensions-25e4a284ac1eea511ff78597507d3ed316dbdc648e2aee4f7e5e60d406498647","text":"Suggested elaboration \/ re-focusing in the introduction to clarify how CH fits into things.\nCI failed due to rfc-URL DNS failure; should be transient.\nFeedback addressed, I think.\nThanks for working on this! A few comments...Looks great, thanks Mark! Based on recent direction and discussions, the sections that mention \"defines initial set of hints\" and reference to User-Agent header may need to be updated, but we'll tackle that in a separate thread."} +{"_id":"q-en-http-extensions-9fa5f55b0a43da2f9c40566ceba64293c904ec70ae1af2236663a8da28f5397e","text":"This change passes 'null' to the content-negotiation algorithm if the request header is absent. For list headers, the algorithms can always turn that into the empty list, but for non-list headers they need to come up with something appropriate to do. This also tries to do better about , but may not completely fix it.\nThanks.\nURL says \"... For each variant in variants-header ... Let request-value be the field-value(s) associated with field-name in incoming-request.\" I don't see any handling of the case where there is no header in . The algorithms in URL then also don't handle the case where request-value is empty. Sorry if I've just missed where you handle the empty case."} +{"_id":"q-en-http-extensions-7b2c9b171a2d820a909c37068d5dcff30b4f304b4d01f206151a122f15790d6b","text":"I've also added examples of the new behavior. This is an attempt to\nURL shows an example of an origin server that contains English, French, and German versions of a resource, a cache that has cached the English and French versions, a request that asks for German or Spanish, and, I believe, permission in the spec for the cache to return the English version instead of asking the origin server for the German version. This happens because URL appends the first Variant to the list of requested languages even if the list of requested languages overlaps with the available variants. Is this a mistake in the algorithm, or intentional for some reason?\nThe language algorithm does that so that an origin can denote a 'default' language -- i.e., what to serve if there isn't any overlap between the client's preferences and what's available. For example, if a user configures their browser to send and the page is only available in English and French, there needs to be a way to say \"send English by default.\" I don't know of any use case where someone would want to serve a hard error when the language doesn't match, but if that's in scope, we could figure something out. If you're looking for something more deterministic in caches (here, the distinction between \"these ar the responses that are available and preferred\" and \"here's the default\" is blurred), I'm interested. P.S. in that diff, the line with the comment \"prefers French, will accept English(?)\" seems wrong.\nI think establishes the default language without confusing it with the responses that are available and preferred. I am also not interested in a hard error when the language doesn't match. 
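For the Accept-Language behaviour discussed above — the first available value acting as the default when nothing overlaps — a minimal selection routine might look like the following. It ignores q-values and treats the client preferences as an already-parsed, ordered list; both simplifications are assumptions made purely to keep the sketch small.

#include <stddef.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/*
 * available[0] is the origin's default; the rest are the other variants.
 * prefs[] is the client's Accept-Language list, most preferred first.
 * Returns the language to serve (and key the cache on) for this request.
 */
static const char *select_language(const char *const *available, size_t n_avail,
                                   const char *const *prefs, size_t n_prefs)
{
    for (size_t p = 0; p < n_prefs; p++)
        for (size_t a = 0; a < n_avail; a++)
            if (strcasecmp(prefs[p], available[a]) == 0)
                return available[a];
    return n_avail > 0 ? available[0] : NULL;   /* no overlap: serve the default */
}

With available = {"en", "fr", "de"} and a request for "de, es" this picks "de"; with a request for "ja" it falls back to "en", which is exactly the distinction the cache in the example above has to get right.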
And thanks, I've fixed the example.\nI'm not entirely happy with the optionality of behaviour on the cache's part with those MAYs. The most explicit thing we could do would be to break this information out into a separate header (e.g., ), but sticking with the current syntax for the moment, what if we had special values that, when occurring in that first slot, indicate that the default should NOT match? E.g., where does the magic.\nI'm personally happy with turning those MAYs into MUSTs. They're MAYs because that's what was in the older RFCs, but I agree that I don't like making things optional. If I do this in my update tomorrow morning, do you still want to explore the token? I don't really like it because it seems like always the wrong thing for a client to send. I also don't see a particular need to pull the defaults out to a separate header.\nI'm a bit interested in , but not adamant. I think it'd be fine if they flipped to MUST as long as the implications were noted."} +{"_id":"q-en-http-extensions-1a4fec4cc0279c27bc77429c7600788d8af9fbfe8dd7eaf18dbdd7e36ad8a1bf","text":"I think this also but I might have missed something. Misc changes that aren't actually necessary to use Structured Headers: \"variant-item\" became \"variant-axis\". I defined what clients (\"caches\"?) MUST do when a server violates one of its MUSTs. I removed the (incorrect) definition of available-value in favor of allowing arbitrary . It wasn't clear what excluding characters like was gaining us, and the restriction would have required extra words when parsing the headers. I removed {{gen-variant-key}} entirely and had Compute Possible Keys return a list of lists that could be directly compared against the Structured Header parse of . It would be difficult to keep the old behavior of returning a list of strings without re-adding some restriction on an available-value, for example by requiring that they don't contain spaces. I simplified Compute Possible Keys a bit.\nURL shows a Variant-Key of But that's now This also has implications for the URL algorithm, which currently ignores semicolons.\nlooks like a parameterised list is currently a list of strings, but if is adopted, its syntax will need to be changed, as it doesn't fit into parameterised list particularly well (although I guess it could).\nCurrently waiting on resolution of .\nNAME is closed now. Any hints?\nHi NAME - getting to it. Are you planning on implementing or using?"} +{"_id":"q-en-http-extensions-69c8927a24988a16825cc35b3c7e1affc374635b6db94ad87ffe832d2c029c4f","text":"This change switches from \"URL\" to \"URI\" specifically for two contexts: URI schemes URI syntax (It might be good to also add an explanation somewhere in the Introduction; I can make a proposal if needed)\nIt is a PR, no?\nWeird, it didn't show the UX for PRs before, but does now. shrug\nSee URL"} +{"_id":"q-en-http-extensions-76b48b91463d4fa53efacb3d4d95b97bd0f99d8e9cd19beeaa97af67a51358c3","text":"Tweak\/cleanup in light of\nNAME or NAME can you push the green button?\nLGTM!"} +{"_id":"q-en-http-extensions-858f23886f11d3b301b7fdda871f14ee20079d7d4200667fe2b6c22368dcdb39","text":"and filed by NAME Given , the AUTOMATIC_USE text is entirely superfluous, and given , the correct wording would be \"for a stream.\"\nNot sure if this is an issue or not. Emphasis added in the below quote. This seems to have been left out of the change to stream 0 for all frames (URL)\nSection 5.4 contains some left over text that refers to the now removed AUTOMATIC_USE flag. Cf. 
URL"} +{"_id":"q-en-http-extensions-17ae5a3d59386705e0e68853138cfb244227c287f6993b817b671c1d9e8071e7","text":"Signed-off-by: Piotr Sikora\nThis is effectively editorial, since the error restriction was removed in URL, where this type was renamed from to , but the description didn't reflect that change. cc NAME"} +{"_id":"q-en-http-extensions-6459f52f75d596c7baa66d0cb1a0f13b33aee2989606203abbf84b9ba6a991ca","text":"This modifies parsing to consider empty or not-present header fields as empty containers; serialisation omits them from output.\nThe SH syntax does not directly allow empty field values, such as defined for Accept, although this might be allowed by field definitions like MyField = [ sh-list ] Likewise, the list types do not allow for empty lists. sh-list = list-member *( OWS \",\" OWS list-member ) list-member = sh-item I think this is a very odd choice for a generic syntax because the empty set is often given semantic meaning in generic communication. It's hard to do math without zero. If this is intended, the rationale should be explained in the spec, and there should be some discussion with examples about defining fields as an optional value.\nFWIW, even if empty lists may be problematic top-level, the situation is different when they are nested...\nietf104 - interest in the room in supporting empty (perhaps null?) syntax\nI agree this is something important. It may be used to mean \"I know I must pass the info but I don't know the info\", which is different from \"I don't care about the protocol\" or passing a dummy value just to fill the hole.\nIn the original base material of header definitions, empty values were generally just not sent at all. Having a specific Null value may or not make sense, not sure.\nI personally like the distinction between \"known to be empty\" and \"not presented either because I don't know or because it's empty\". Some semantics might explicitly require the presence of some fields which we would enforce by the protocol. This doesn't mean they can't be empty, they must be present, for example, to make sure they weren't dropped in the middle. For example, \"Connection: foo\" gets rid of the field, not just the value. Just my two cents, Willy\nDo we have any RFC'ed headers where there is a difference between a non-present and an empty header ?\nThere's an accept-something if I remember well. Also I've been using \"Host:\" in many occasions with HTTP\/1.1 requests when I didn't know the host name and this tends to work where no Host field fails ;-) Willy\nI'm not against a '?N' null token (along the lines of true\/false), but I have a hard time thinking of a use for it.\nURL\nWouldn't an empty list be equivalent to an empty value, thus we don't need a new syntax?\nPutting an empty list where for instance an integer is expected, just to say that there will be none, sounds counter-intuitive to me.\nOn Fri, Apr 05, 2019 at 01:37:11AM -0700, Julian Reschke wrote: Indeed, probably.\nOn Fri, Apr 05, 2019 at 03:11:00AM -0700, Poul-Henning Kamp wrote: Agreed in general. My point was that sometimes specs use \"MUST send\" when the sender doesn't know, resulting in stupid values appearing. That's exactly what led a number of agents to place a content-length with stupid values (0 or huge ones) with the CONNECT method a long time ago. 
An empty value where an integer is needed by the application is more of an application-level error than a protocol-level one in my opinion : it's up to the application to decide how to recover from this missing piece of information (is it really needed to process the request or not). For me it can mean \"don't care\", \"don't know\" or \"not representable\". After all floats support \"NaN\" for the same reason :-) Willy\nIf there is a \"MUST send\" requirement, there must also be something available in the same spec that can be sent ?\nThere is not, and never has been, an interop problem with sending empty field values. They are a list value of zero items or a string item of zero length. Accept and Host, respectively.\nNAME how do you know that?\nBecause if a recipient drops an empty Host field the request will be rejected with 400. There have been no reports of such breakage in 25 years. It isn’t even a likely scenario given how message fields are parsed and reproduced.\nIf a recipient doesn't drop an empty host header, the request will still be rejected with 400, surely? Regardless, I'd note that many headers address this situation with a reserved value like . Is there value in standardising on one such value for future headers? I'm not convinced, but I'd note that we also have many headers that use to denote \"all\"; it seems to me that if we're going to do null, we should do that too (but again, I'm not convinced this is adding value).\nOn Mon, Apr 08, 2019 at 05:16:41PM -0700, Mark Nottingham wrote: It depends on the header field that was empty ;-) I've used the empty Host header field quite a bit (even in health check requests). My observation has been that while HTTP\/1.1 without Host naturally causes a 400, HTTP\/1.1 with empty Host is often processed similar to HTTP\/1.0 without Host or as 1.1 with an unknown Host. I'd say \"why not\", but it's useful to consider lists and the iterative processing that can happen, removing one item at a time. Having to place a specific value to denote emptiness would require specific processing (i.e. check for emptiness after removing any item just for the sake of replacing it with the special value). By the way, speaking about lists makes me think we may complicate interop when a header field appears multiple times. For example if we have this in a request : List: a,b,c,d Foo: bar List: e,f,g,h It's not unreasonable to think that some simplified implementations may replace the first occurrence of List with \"none\" after processing it, leading to this : List: none Foo: bar List: e,f,g,h Or possibly this : List: none Foo: bar List: none Which may be processed as \"List: none,e,f,g,h\" or \"List: none,none\" respectively. Just my two cents, Willy\nWilly, I think that would be clearly broken, and we should be able to catch it with tests. I think we're talking about all of the container types here (dict, list, list of lists, parameterised lists), correct? Julian, you gave as an example. Given that the behaviour recommended is exactly the same as if the header was missing, that's not very compelling. Besides the potential stripping issue (I have heard of implementations that strip empty headers, but don't have details on hand), some header libraries may not make a distinction between missing and empty headers. Overall, I don't see a lot of value in accommodating empty headers directly, and I don't think we should be promoting it as a practice.\nAgree, I'm not convinced either. 
I see the potential, but Gettys rule applies: \"The only thing worse than generalizing from one example is generalizing from no examples at all.\" (May actually be from Phil Karlton originally, sources differ.)\nHm, no. \"A request without an Accept-Encoding header field implies that the user agent has no preferences regarding content-codings.\" vs. \"An Accept-Encoding header field with a combined field-value that is empty implies that the user agent does not want any content-coding in response.\"\nYes, but in the real world, the outcome is always the same.\nNo. HTTP\/1.1 introduced an IESG requirement that Host must always be sent and recipients must always reject with 400 if Host is not received. When I pointed out that not all URIs have a host, the response was that clients must send an empty Host field in that situation. It is a MUST requirement in Semantics section 5.4: This is ingrained in HTTP semantics and commonly used by non-browser HTTP implementations. There is only one example of an HTTP header field using \"none\" and that is Accept-Ranges. It isn't a good example for anything. To be clear, it would be a bug for an HTTP message processor to drop a received header field just because the value is empty. Nothing about SH will change that. What we are discussing is whether SH is capable of representing an empty list or an empty header field, both of which it is currently incapable of doing.\nIf that spec specifically says \"empty field-value\", a SH \"empty\" token will not be usable for Host: anyway, and we're back in the \"generalizing from no example at all\" again.\nOn Thu, Apr 11, 2019 at 03:53:15PM +0000, Poul-Henning Kamp wrote: After having thought a bit more about this, I now think that we don't need empty list elements in the end because lists are not strongly ordered, they're not arrays with a position so an empty element in a list brings no value. I'd further say that removing an element from a list must result in removing the list once the last one was removed as it is a set. So I think I'm fine with not dealing with empty lists. This then leaves us only with the empty header field alone, likely where a string or a token are expected. I don't see a reason not to support an empty string just like any other string. A header field is not just a value, it's a name+value combination. With a void value the name still serves as a boolean (present or not) and this is exactly how it's used when the field is made mandatory to distinguish between agents generations or as a signal to detect support for something. We've all filled administrative forms where the requested information was irrelevant but mandatory in the form and just left it void so that the person in charge deals with the particular case (on paper) or filled with whatever was accepted (on web forms) hoping it would just be ignored since irrelevant. For me the Accept and Host fields are exactly in this category. My concern is to be certain that a valid HTTP request can be transmitted over SH. And ironically, the only mandatory HTTP header (\"Host\") is also the one requiring the support for an empty value due to its mandatory nature. Willy\nI have lost count of the number of packages I have received from USA where part of the address is \"Denmark does not have states\" :-)\nAgain, the purpose of SH is not to allow all possible HTTP header to be represented in the data model -- it's to solidify the 80% case and promote best (and interoperable) practice. 
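The resolution recorded at the top of this thread — parse a missing or empty list-typed field as an empty container, and omit empty containers when serialising — costs very little in code. The list representation below is invented purely for illustration:

#include <stddef.h>
#include <ctype.h>

/*
 * Count the members of a comma-separated list field.  A NULL pointer
 * (field not present) or an all-whitespace value (field present but
 * empty) both yield zero members instead of a parse failure.
 */
static size_t list_member_count(const char *field_value)
{
    size_t n = 0;
    int in_member = 0;

    if (field_value == NULL)
        return 0;
    for (const char *p = field_value; *p != '\0'; p++) {
        if (*p == ',') {
            in_member = 0;
        } else if (!isspace((unsigned char)*p) && !in_member) {
            in_member = 1;
            n++;
        }
    }
    return n;
}

Serialisation takes the complementary branch: a zero-member list is simply not emitted, so an empty container round-trips as an absent field rather than as an empty value.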
Representing a boolean as an empty header field is not good practice.\nSee PR above.\nI've put in the changes for the list-based header fields. It's less clear to me that it's desirable to default a missing or empty item to an empty string (strings and tokens) 0 (integers and floats), False (boolean) or a 0-length binary array (binary data); to me, those are all going to be application-specific defaults that are better specified explicitly. I'm happy to adjust the spec to make that more clear, if folks feel it already isn't."} +{"_id":"q-en-http-extensions-a1ec12710eb0e14d05a729332c35f68ca6aa3cb554d7911543ca4ff90fd6f4ad","text":"in , NAME asks about having Parameterised Lists whose parameterised identifiers are things other than Tokens. Because of the way SH is put together, I think this would be a fairly easy change, in terms of the spec and implementations; it would require a few more tests. The only downside I can think of immediately is that specs would need to constrain the type of the parameterised identifier themselves, and specify error handling (if it weren't the default \"blow up\"). Thoughts? I also wonder if we should come up with a better name than \"parameterised identifier\", especially since we've ditched \"identifier\" elsewhere. member name? member id? primary identifier?\nThat works, since splitting lists from parameterised lists. Are you thinking , or just adding a few types? It also changes my thinking a tiny bit on , although not much. I like \"parameterised element\" but that introduces a new word. \"parameterised item\" is a nice shade, if any item is allowed there, but \"primary identifier\" works too. :art: :paintbrush:\nThe downside here is that it would be very hard to represent in JSON -- e.g., in the . At least some programming languages support hashes\/dictionaries\/objects with arbitrary key types (e.g., Python, JavaScript, Ruby), but I'm not sure how widespread that is -- especially if folks represent the different between and with an object for . NAME is this a nice-to-have or something that's a blocker?\n(Fwiw I do think we should call this a Parameterised Token)\nAt this point do we just have to accept that the SH data model is essentially a subset of the JavaScript model? Numbers are bounded by JavaScript's limitations, object keys have to be stringy, nobody likes 'token' because it isn't a native JS type... I don't mind if that's the case, but it would be good to just say it outright somewhere. It would also give us a solid position to start from when considering other proposals. Hmm, C++ and Go could be have issues mixing key types within a single map, and Perl is definitely not going to like non-string keys.\nIt doesn't have to be -- it's just easier for me to write tests this way :)\nThis was a nice-to-have for me. I’d worked around it by putting an unused string as the identifier. It looks like that workaround is no longer necessary?"} +{"_id":"q-en-http-extensions-90b81c1a32cf596dff5f9e0a9cb6bc91c8c20decf76142787d8ddc05ea8463e7","text":"fixes typo in reference\nFYI, there is (at least) a 180-4 now."} +{"_id":"q-en-http-extensions-9cf500ac5bc72930d1a900ffc8f4378186b9fb2de6f3bc90f8bb20084bc6092f","text":"Fixes unused references. Removed reference to FIPS180-4 and opened issue Removed unneeded references Uses required references.\nShould we update the FIPS180 reference to the latest one? NIST updated the document defining SHA-* algoritms. 
The latest reference is digest-headers references preserve the ones cited in RFC3230 and subsequent.\ncross-posting from elsewhere, I said I think we probably want to use FIPS180-4 for SHA-256 and SHA-512, and FIPS180-1 for SHA-1.\nThere is also .\n[x] Warning: no \\ in \\ targets \\ [x] Warning: no \\ in \\ targets \\ [x] Warning: no \\ in \\ targets \\ [x] Warning: no \\ in \\ targets \\ [x] Warning: no \\ in \\ targets \\\nFixed in"} +{"_id":"q-en-http-extensions-f7281fb5efbb4487ebec56e3d847916dbe8b94684f4cbb130081ffbcde7210ea","text":"Adds -latest to docname\n[x] WARNING: The Internet-Draft name 'draft-ietf-httpbis-digest-headers' should end with a two-digit sequence number or 'latest'. (at line 20)"} +{"_id":"q-en-http-extensions-51355b5c7419c4fcaf07c19f8e2236f2af5a9207371a09f314f10c6eee4be4dd","text":"consistent use of instead of \"http\" make S1.1, P3 a bit more readable"} +{"_id":"q-en-http-extensions-213fd24ec12446b3179bf8934331df52c816cc4d52df5526d04a9568cfcad3a0","text":"NAME - can you take a look and see if that clarifies things?\nThis seems helpful given what's currently in the spec. However, I think some of the material I was talking about when I filed is no longer there. But I suspect that material may now be in other specs, and could perhaps still gain from being clarified at its current location.\nI've been reading Client Hints (both and the ), and I'm having a little trouble understanding how things fit together in terms of which header fields are sent when and lead to what. My confusion started when I looked at the section and noticed (to my surprise) that was being registered as both a request header field and a response header field (when, having read the spec quickly, I was expecting it to be only a response header field, and wasn't sure what it would mean as a request header field). Then I looked back over the spec, and noticed that section 2 of the spec is called but that the entirety of the section appears (I think) to be about response header fields. I think it would help the spec be easier to understand if it: more carefully distinguished between request header fields and response header fields, and more clearly explained what the request header field is.\nlgtm, thank you!"} +{"_id":"q-en-http-extensions-4fe9f28c437068df577594b852f0f8d2307207ad274769539e1f8fc491ccd06b","text":"NAME - can you take a look?\nMigrating .. Julian:\nNAME NAME per URL, how about... Does that make sense? Would appreciate your guidance on this one :)\nI'm starting to wonder if Client Hints should adopt Structured Headers to avoid having to deal with these sorts of questions. Ilya, WDYT?\nNAME that would hit a major reset on all existing CH implementations and set us back another couple years, I don't think this is the right call for CH. \/cc NAME Surely CH is not the first instance where we had to spell out behavior for selecting one out of N header values? That said, if we don't have established verbiage for this, I'm happy to just make up some language ourselves to clarify how it should behave.. what we're aiming for ain't rocket science. :)\nI wouldn't want to set CH back that much (although it is lingering on :). I only mention it because all of the defined syntax seems like it would easily fit into SH, and Chrome already has a good start on a SH parser, thanks to NAME work on signed exchanges, etc.\nAs an aside, the same clarification should also apply to response headers. 
Currently Chrome treats multiple response headers as an error and ignores both.\nNAME my preference is to get this over the finish line without introducing dependency on SH. For one, we're also relying on Feature-Policy for delegation and we decided against SH there. WDYT of suggested text in URL — warmer? NAME ack.\nNAME I hear you. My thinking is: We've already waited a good while for CH, and it's not clear that it's really really done yet (I suspect we have at least one more round of doc review). People are already using CH as a framework for new hints, so getting SH in would be best now if it's going to get in. If CH are going to be commonly sent, having well-defined parsing behaviour and the possibility of better serialisations (which SH gives us) is very attractive. To be clear -- this is me as me, not me as chair.\nNAME - could you clarify what you mean by CH adopting SH? Do you mean that the \/ headers would be defined as \/? Or that they'd be defined as with extended processing model (which guarantees that the last item in the list \"wins\")? Or something else altogether?\nBoth and could be s. would be a of s. Both and would be s. would be a . I'm not really sure why its current ABNF is a list. NAME Considering that parsing and serialisation algorithms for these haven't been defined yet at all, I don't think it would take a lot of work (and would be happy to do a PR).\nGood catch, I think that's an oversight and we can fix that. Stepping back, my preference is (still) to proceed at this stage without introducing dependency on SH. First off, introducing SH does not address the issue we're actually trying to solve in this bug. Second, it adds another significant dependency to CH adoption that I would like to avoid at this stage, as it's yet another reason for a browser vendor to drag their feet and delay integrating support for SH. To get CH over the last call barrier, I believe we have two outstanding tasks: Clarify Accept-CH-Lifetime definition, per above. Provide some minimum guidance on how to deal with aggregated values — see URL I don't think we'll land on perfect text for (2), but this is not a unique or new gap vs other and existing specs, and I don't think we should block on this. NAME does this sound reasonable?\nfriendly bump :-) NAME NAME PTAL, any thoughts or guidance on the above text?\nThe issue is that it's defined to only have one value. I think you'd need to say something like: Or, to really define it, define an algorithm.\nThis was discussed at the HTTPWG meeting today and it was decided to go ahead with Structured Headers!\nBased on group discussion at IETF, everyone is on board."}+{"_id":"q-en-http-extensions-71026c109698af50b9a786ef6d95056139b0505a7c97b94a3b91886cbfb1d649","text":"Signed-off-by: Piotr Sikora\ncc NAME\nThis field uses for a specific purpose, but that's already in the generic parameters. Suggest or similar. \/cc NAME\nLooks like has the same issue. Should we use for both? Note that the identifier is context-based, since if proxy configured SHA256-based validation, then would be , if it configured SPKI-based validation, then would be , etc. We could also use and instead of (and for ), but that doesn't seem very future-proof. What do you think?\nHmm, that's awkward. Maybe two fields? E.g., : [, ... ] : ... Is there a registry we can refer to, maybe?\nI'm afraid that there isn't any registry that we can refer to, at least none that I know of. (opaque type) and (opaque value) seem a bit awkward. Wouldn't it be easier to just have and params?
Also, in Envoy, we actually configure list of acceptable and values, and matching against any of them works, so we should probably emit both of them anyway. I could envision merging and into a single status type that lists all 3 params, e.g. See . What do you think?\nBetter than what we had :)"} +{"_id":"q-en-http-extensions-f0ba5cfb9ee11d42a83c99055208f3afff98fc78838fa04f6a16686e75986a30","text":"States that MUST NOT be used. Sets the MD5 status to deprecated.\nLooks good, thanks!"} +{"_id":"q-en-http-extensions-98c2f6151b9f327bd47b74ce215fdc37a8c3c99a4740c04e26dcd2a92d9a67fe","text":"Simplifies references to sha algorithms pointing to RFC 6234 instead of FIPS180 documents.\nNAME ping :)\nShould we update the FIPS180 reference to the latest one? NIST updated the document defining SHA-* algoritms. The latest reference is digest-headers references preserve the ones cited in RFC3230 and subsequent.\ncross-posting from elsewhere, I said I think we probably want to use FIPS180-4 for SHA-256 and SHA-512, and FIPS180-1 for SHA-1.\nThere is also ."} +{"_id":"q-en-http-extensions-67a2ec90ba0fec4ae0f7910aacda07c5a7672d4ce0a9e13f474c63fd77b27882","text":"Obsolete ADLER-32 but don't forbid it.\nNAME will this work for you?\nI think the pull request is OK as-is, but might be improved slightly. In particularly, I'm wondering if SHOULD NOT is too strong for the server. A server could support ADLER32 without that support impacting on cryptographic data integrity provided that server also supports a cryptographically secure hash; e.g., SHA-512. Clients that wish to protect against malicious actor changes would use the SHA-512 hash and not request (or ignore) the ADLER32 value. Therefore, I would suggest that, for the server, the document says a server MAY implement ADLER32. I think it is perfectly reasonable for the document to say the client SHOULD NOT use ADLER32, for the reasons stated. That said, I can't decide if this distinction is really helpful, so I defer to your judgement, here.\nI'm not so sure we want to make a distinction between client and server as you write it. Both client and server can attach Digest header to request or response. It is the receiver of Digest that is exposed to the risk of using a weak digest. The current language avoids mentioning either endpoint so we leave it to implementers. I'm not opposed to spelling things out more clearly but we'd really need to nail down what we mean by \"using Digest algorithms\" because that relates to requesting them, generating them, regenerating them (proxy), validating them (endpoint or proxy) etc.\nThat's a good point about client\/server. Sorry, I had that wrong. As you say, it's the recipient of a Digest that must decide whether the algorithm is sufficient. This recipient may be either the client or the server. I think the document is right to point out the dangers of accepting a weak digest. Perhaps there is a useful distinction between generating a digest and accepting a digest; e.g., an agent MAY support generating a weak digest but SHOULD NOT accept a weak digest.\nNot wrong, just partially true. That could work for me. I'll leave it to NAME to decide what he things makes sense to do.\nNAME NAME as this issue is for ADLER, I moved the discussion to As it's in the listed algorithms, the server MAY already implement ADLER32, though it SHOULD NOT be used by both sides of the communication. 
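The position this thread settles on — ADLER32 stays registered, MD5 is off the table, and whichever endpoint receives a Digest decides what it will trust — maps naturally onto a receiver-side allowlist. The table and helper below are illustrative only, not taken from the draft:

#include <stdbool.h>
#include <stddef.h>
#include <strings.h>   /* strcasecmp (POSIX) */

/* Digest algorithms this receiver is willing to validate against.       */
/* Weak entries (md5, adler32) are deliberately absent: a Digest that     */
/* offers only those is treated as if no usable digest was given at all. */
static const char *const acceptable_digest_algorithms[] = {
    "sha-256", "sha-512", "id-sha-256", "id-sha-512",
};

static bool digest_algorithm_acceptable(const char *alg)
{
    const size_t n = sizeof(acceptable_digest_algorithms) /
                     sizeof(acceptable_digest_algorithms[0]);
    for (size_t i = 0; i < n; i++)
        if (strcasecmp(alg, acceptable_digest_algorithms[i]) == 0)
            return true;
    return false;
}

A sender that consults the same table before generating a Digest keeps the two sides consistent, which is the point made in the next comment.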
I think sender and receiver should behave consistently (see again).\nMerging as for the positive feedback from NAME and NAME comment URL"} +{"_id":"q-en-http-extensions-778cc3be071da3656d4b51df294a5b7fe8415f3efcb0c740642c1bdbe27c7c16","text":"Based on discussion at IETF 105, removing explicit lifetime in favor of implicit opt-in registration and persistence. NAME can you please take a look? In particular, current language states the preference should be persisted but doesn't explicitly state for what the lifetime is.. Modulo, in privacy section we do talk about MUST criteria for clearing it. Any suggestions or recommendations for how to best approach this?\nLGTM"} +{"_id":"q-en-http-extensions-59dd11b4a3638a95a900da885b37fde850a8d420f88efc0ba5f2e7574d6c5a7f","text":"Adds id-sha-* algorithm examples throughout the document.\ndraft-ietf-httpbis-digest-headers includes examples of sha-256 in the Digest header value, but no examples of the two new algorithms it introduces: id-sha-256 and id-sha-512. Examples would be extremely useful to illustrate the difference between sha-256 and id-sha-256.\nThat sounds reasonable to me.\nNAME WDYT?"} +{"_id":"q-en-http-extensions-2f930191afd36bc3a1d1390110587e61fb15d2ee803cc9351a57acfc5eac05d7","text":"Align the I-D title with the .md file name suggested by NAME before the adoption.\nWe can consider to change the to the more direct ."} +{"_id":"q-en-http-extensions-ec5b3792c493fc102ac1ca1304ffbf9037744349d673f0f203166024cd7c20ed","text":"Now that we are generalizing how the types are used, I think it might be a good idea to allow a dictionary value to be a mapping (i.e., a list of parameters). This would be a generalization. At the moment, top level elements can contain scalars, lists, or dictionaries. However, it is at the moment impossible to use a dictionary at the second level even though scalars and lists are permitted. In terms of use-case, the proposed change would allow for example something like a cache-control directive to take a mapping as an argument (see URL). Logic-wise, I assume that the added complexity would not be significant, as we can reuse the definition for the parameters of a parameterised list.\nNAME are you just asking for parameters on dictionary values, in a manner that's equivalent to list values?\nNAME Sorry for the belated response. Yes. I'd be fine with member-value being exactly list-member."} +{"_id":"q-en-http-extensions-26bdb942e8b96eb40a155df698ce90f46d394440ec058aa8766d38bcf461ea30","text":"Is this PR meant to include ?\nYeah, oops. Need more coffee.\nTry now.\nOtherwise, looks alright to me.\nThe current definition of sh-float is surprisingly hard to serialize correctly, the best C-code I could come up with is: int serializeshfloat(char buf[18], double d) { int p = 0, l, dig; buf[0] = ''; buf[1] = '\\0'; if (isnan(d)) return (-1); if (fabs(d) >= 99999999999999.95) return (-1); if (d = 99999999999999.95) return (-1); if (d < 0) { buf[p++] = '-'; d = -d; } p += snprintf(buf + p, 18 - p, \"%-16f\", d); while (buf[p-1] == ' ') p -= 1; buf[p] = '\\0'; return (p); }\n+1 from me, our primary focus should be on data types that will have good interop. That level of precision isn't necessary for any current use of HTTP headers I'm aware of, off the top of my head.\nProposal: Call these decimal. Allow a maximum of 9 digits on the left of the decimal place and 6 after. 
This matches the definition for int and allows storing this in the same sized slot with a or conversion factor.\nABNF gets much simpler.\nNAME That would probably curtail the range too much. I think it is desirable to be able to move posix time_t timestamps in these.\nIf this really is an integer with fixed point semantics, why do we need it at all?\nBecause it's easier than making people specify \"divide the number by a million to get its real value\"? That said, I agree that 9 digits to the left of the point is probably not enough, so a floating point representation is more useful. This is the ABNF equivalent of NAME C code, yeah?: That easily gets us a current 32-bit time_t plus microseconds, and a whole range of other values besides.\nWhy do the arguments about size of integers not also apply here? is 64-bit on my system, which doesn't fit in the sh-integer definition either.\nThe current time is ish. You don't need to prefix it with 30 bits worth of zeroes.\nI've read the IETF105 minutes, and thought about the fixed-point discussion. All the uses for \"decimal number with fractional part\" I can think of\\ would reasonably served by at least 10 digits to the left\\ and at least 3 to the right of the point. So I would support any fixed- or floating-point representation that supports that range of values. \\ that's all that matters, right? \\ a ten-decimal-digit time_t gives us until November 20, 2286\nUsing the max-six-decimal definition, and allowing 15 digits with no period or decimals, this C-code will do the job: static char * sh_float(double d) { static char buf[40]; int n; if (fabs(d) < 1000000000) { n = snprintf(buf, sizeof(buf), \"%f\", d); while (buf[--n] == '0') buf[n] = '\\0'; if (buf[n] == '.') buf[n] = '\\0'; } else { n = snprintf(buf, sizeof(buf), \"%.15g\", d); } return (buf); }\nSorry, wrong button.\nIf I take , then I can almost represent that in a JS float in the most horrific way imaginable: But that doesn't quite fit. 69999999999999999 doesn't fit into a double. If you want to remember the number of additional decimal places so that you don't lose any precision on those trailing decimal places (things like 0.3 are surprisingly tricky to re-encode), you are left with supplementary values. Can someone make a better case for any more than 10^10 values here? I couldn't see a good one. You can fit a 32-bit value in that. If you really think that time_t is valuable, which I'm totally not convinced of, then fewer steps might be acceptable. I can pack this:\nPR updated. Any further comments, or ready to merge?"} +{"_id":"q-en-http-extensions-f3bab18185f78ea4296fe188cb0fdae698cba336c3ae8ce82e7c4302a87db599","text":"thx!\nBoolean values can be conveyed in Structured Headers. The ABNF for a Boolean in textual HTTP headers is: sh-boolean = \"?\" boolean boolean = \"0\" \/ \"1\" Either say \"0\" is false and \"1\" is true, or maybe rephrase the ABNF to: sh-boolean = sh-false \/ sh-true sh-false = \"?0\" sh-true = \"?1\""} +{"_id":"q-en-http-extensions-266e476648ea2367d6972e9c360ce62dcfe79596a2c712a6c08e3c6d45b48193","text":"This addresses feedback from the list: URL URL NAME NAME does this look good to you?\nVery nice, thanks. You almost managed to do this with a net reduction in total lines. The only additions here are my fault. Respect."} +{"_id":"q-en-http-extensions-98c298a957e0fec362e956634ebc80693ee879f0a1e178b390720e7fdb8dd924","text":"(Should I go ahead and stamp a -02 now, or wait a bit? 
Not sure how one usually handles such things.)\nWGLC ends on Thursday, so let's wait until then to see if there's any other feedback.\nIt is Friday, so I've now pushed -02. It just contains this PR. URL"} +{"_id":"q-en-http-extensions-fe0c664b9cd72d4254e7083f0227a7a73d8ee143788282c953e236a71adcad3e","text":"Fixes artwork formatting.\n[x] requests and responses to use different artworks [x] artworks not containing trailing\/heading carriages"} +{"_id":"q-en-http-extensions-6c1dc6e5cf01d4d463b80dc6f4422e136273345e90000cc21daf180ad3594178","text":"to use the new \"validator\" name. See: URL URL\nNAME I'm quite weak on that side: is this PR correct? If not feel free to fix at will :)\nNAME I agree... this comes from the original RFC 3230. Merging the change as a simple terminology update. The relation b\/w validators and the resource is discussed in , where I tagged you as your guidance will be great. See too.\nto clarify the following passage for defining the behavior of digest with caching (eg. when Digest is returned together with Last-Modified and ETag). and to better extend the definition of Digest to POST and PATCH requests. This is because though only when they include explicit freshness information. If we go on the new http-core definition (which is cleaner) we have that eg. According to URL POST responses identify returned resources in different ways according to the following: the primary resource is identified by Content-Location the enclosed representation matches the primary resource eg. 1 eg. 2 primary resource identified by enclosed representation matches the status of the request according to the resource's own semantic eg.\nre-reading the old RFC my understanding is that validator fields should just be aligned with Digest (that is: digest-value should be aligned with the values of last-modified and etag). While it is trivial, the original editor could have considered that a potential issue.\nThe change itself looks right. I wonder however about the whole paragraph: what does it mean to say \"resource is specified\"? Doesn't that also need request header fields upon which the returned representation varies?"} +{"_id":"q-en-http-extensions-7bae4953fa75cd1f3e0e486dee22ce5000c5e0b27bff9be5772c23475d351f27","text":"Rationale for 503 is that the service is unavailable due to too many connections. 429 could potentially be an option as well, but that seems more specific to a single user hitting a rate limit, while seems more related to a global limit. \/cc NAME NAME"} +{"_id":"q-en-http-extensions-4f2b894c2b18df7ca91f168c5c97cdcdf131693a33d953a188d9a45009fd7c4e","text":"Replaces the old with from URL The relations of Last-Modified and ETag with Digest are examined in"} +{"_id":"q-en-http-extensions-0c10f98e796c77947e24a919c615ead8fe3c0163013a74af09f80b4050fe3e2b","text":"Clarifies the \"returned value\" expression.\nThere are two occurences of the fragment This is not the most helpful of statements, when we say \"returned value\" what exactly are we trying to convey. I think it would help to be a bit more exact.\nThanks. Longer term, I'd like to do an broad editorial sweep to tighten up document and this is an enabling step that I think this takes us most of the way there. I made some suggestions to move words around a bit."} +{"_id":"q-en-http-extensions-e756b01f003787ad0d8e548e763faebac9d947fe30ca6194beafef7dc3cf8c38","text":"I want to republish the CH draft as it expired, and the change logs didn't include a bunch of the recent changes. 
NAME NAME - Can you take a look?"} +{"_id":"q-en-http-extensions-da0ae89632a38c550cdbf1e94a453724b29a377ada7abbeeb2ae14ebedc16068","text":"An alternate can send an Alt-Svc field that overrides that sent by the origin and is cached on an equal basis."} +{"_id":"q-en-http-extensions-c75392ef0dceca9dbcb56fb52f04c7d054ac131db8642d05a6ac891f67ed1e89","text":"clarify editorial rendering of binary payloads\nin this example (and other), URL something like that:"} +{"_id":"q-en-http-extensions-ac719dea10063504f3502bc009bf6a6a1c75275f616181b01b88ce43fff0a088","text":"For . Note that this makes missing values ; if we like that approach, I think we should do it for missing parameter values too (currently they use , which isn't part of our type system). I used because it aligns with the semantics of current uses nicely.\nOK with me.\nNAME This is a keen observation. Assuming that we are fine with requiring that knowledge on what the top type is, maybe we should also consider adding support for rule. It has been my understanding that the only reason we do not support rule in SH is because can only be applied for certain top types (but we are now requiring that knowledge).\nWe've discussed requiring knowledge of top-level types extensively, and I think we have agreement that it's OK. I\"m not sure what you want regarding support for ; can you give an example (or open a new issue)?\nI'm running my SH implementation against the HTTP Archive dumps (400,000,000+ HTTP requests) to see how the from BSH parse in something like the real world. A number of headers are failing because they have valueless dictionary members; e.g., , , . It should be pretty easy to accommodate these, as we already have optional values on parameters.\nPTAL at the PR, especially the note about using True for missing parameter values.\nPing NAME NAME\nSeems like a reasonable change. I'm happy to remove the 'null' from parameters too.\n(already OK'ed in the PR)"} +{"_id":"q-en-http-extensions-d7dbdcd7389f36b24bdcca28ac632f49eb8961c257013d8a3981d2cc2e010034","text":"I'm really not sure what happened there, but don't have time to investigate. I suspect GitHub didn't actually merge."} +{"_id":"q-en-http2-spec-dba9844c65fba0443cc4e98381f6cd1f7c61325144ce5679df756b72739e1cb6","text":"As they relate to changes in SETTINGSHEADERTABLE_SIZE. This is the strict, by-the-book interpretation. That is, if the value changes, you have to send the update, even if it is pointless (that is, you aren't changing the size of the table). Note that I'm eliding the point about multiple instructions, which are necessary if the decoder reduces the maximum below the current size. The only nod in that direction is \"at least one\" and a reference to the section that talks about the need for multiple updates. This pulls in more of RFC 7541 than I might have liked, but we have to deal with the integration somewhere and this seems like a reasonable place to do that. It might have been better to put this text in a -bis revision of RFC 7541, but we decided to leave that document as-is.\nLGTM, this is a nice clarification that removes the possibility of a different interpretation that's known to have caused interoperability issues (nghttp2 vs haproxy). Let's merge it!\nOh I see this PR already being merged. Opened .\nI think this PR is in the correct direction. Endpoints have to process instructions using dynamic table, even when they advertise SETTINGSMAXHEADERLISTSIZE of zero. That is because the default is 4096. 
We did fix this problem in QPACK, but HPACK had already shipped. OTOH, I have concerns regarding new text suggesting Dynamic Table Size Update be sent even when there is no change in the size. I agree with Willy, this is a really good clarification."}+{"_id":"q-en-http2-spec-7033c0151524e6bd8c33a7441b75acd444bf3fff60c0f82b9cbfe9ca351675af","text":"This reverts commit 290e381719b2fefe4e6182d3be288c48e8f5fd7e. See for most of the discussion. The key change here is to address by only requiring that the change be signaled when the table size is reduced. When I dug more into this, it's a more disruptive change, so I've made a bunch of additional changes in support of that.\nI have one suggestion about a point that seems to be possible to infer from all this but is not explicitly written and may condition the sequencing of operations. Do we want to support multiple settings updates on this parameter, hence announce in the middle of a communication that we changed our local decoder size ? It seems to me that this is the case but it does not appear in the original text where it more likely sounds like the initial size. If we want to support repeated size changes from the decoder, then I agree that it's better to consider that 4096 is initial and that the setting changes it and Kazuho's wording sounds fine. But then it possibly ought to be explicitly mentioned such as: This makes sure that it's safe for an encoder to only process the settings frame once and to send a single DTSU instead of one per SETTINGS frame.\nNAME I struggled a little with your text, because I think that is what this already says. However, to address your question about settings changes, I don't think that we do really want multiple settings changes, but that is the protocol we have. So multiple settings changes are possible and implied by the design of SETTINGS. Multiple changes are possible within the same SETTINGS frame, in case you were interested. The designers of this protocol seemed to love unnecessary complexity. There are some good reasons, but it's not clear that those reasons are sufficiently strong to justify the headaches they cause. I would really like it if implementations could just signal once in the preface, handle the preface from their peer and be done with settings. That's how HTTP\/3 works and while we might have been forced into it there by the complexities of the QUIC integration, it turned out to be a good outcome in my view. Here, however, we need to deal with the fact that the protocol permits changes.\nHi NAME I'll have to clone your repo and run the diff here by hand then because I can't figure in this horrible interface how to view the complete set of changes at once resulting from multiple subsequent changes. And it will be easier to reread Kazuho's proposal in context. Regarding \"do we want\" I didn't imply that we love this situation, just \"do we want in order to support this interpretation of the protocol\", given that the current text never explicitly states that (as Cory mentioned a while ago, the settings change is acked by the settings ack) and that even the protocol's original co-author back then already stated that it wasn't meant to be interpreted that way ( URL ). Sure, the result is that we have what we have and we have to deal with it in the best way possible.
In that spirit I think that Kazuho's proposal tries to be as closed as possible to the current state of deployment by indicating how some implementations behave vs what others expect and suggesting not to play in the gray area there.\nHervé's post there is quite useful, yes. That supports this overall view. I've cleaned up the comments on the diff view; let me know if what you see works. The diff overall is fairly clean (it's mostly additions).\nI'm just a bit confused, where am I supposed to see the unified diff ? the only way I know is by clicking on each individual commit, which is the solution I find inconvenient :-\/ Do I have to add a magic \".patch\" in the URL bar somewhere ?\nThere's a \"Files changed\" tab toward the top of the page: ! Adding \".patch\" to the URL is a neat hack for finding something you can download, but usually I find that pulling from the branch is nicer to work with.\nThanks for pointing that one, I always thought it only contained the original PR's changes, not the latest proposed ones :-) I'm seeing this part which remains confusing: \"Any change to the maximum value set by the decoder takes effect when...\" which I'm reading as \"the decode first sets a size, then any change to this size ...\", thus reinforces what most servers currently do. I think Kazuho's proposal to move the sentence about initial values solves this: \"The dynamic table has a maximum size that is set by a decoder using the SETTINGSHEADERTABLESIZE setting; see XXX. The encoder at both client and server is initialized with a dynamic table size of 4,096 bytes, the initial value of the SETTINGSHEADERTABLESIZE setting. Any change by the decoder to the maximum value takes effect...\". This makes it clear that the initially advertised value is a change, which currently is the main point of disagreement. Proabably that we should wait for the end of new year holiday before merging it, so that other implementers for whom this is a change of behavior can have a look.\nNAME NAME it would be nice if you, as implementers, could have a look at the proposals here, given that your implementations will need to be updated as well. In haproxy this results in a little ugly trick where the H2 layer has to inject an HPACK opcode (0x20). That's trivial but ugly enough that it warrants a second thought before jumping on any final solution.\nNAME Apache is lazy and uses libnghttp2 for h2 protocol handling. It does not change the dynamic table size and all its handling and ACKs are done by nghttp2 transparently. If any changes are necessary, I'd expect them to come with an updated version of libnghttp2 and, since that is linked dynamically, no need to us to release anything new.\nNAME I actually wonder if you failed to push the branch here, I saw new commits being pushed to URL instead.\nUgh, my bad. Having two remotes sometimes bites me. Sorry about that Willy. (And thanks for noticing Kazuho.)\nNo problem Martin, don't be sorry, you're the one doing the job :-) I'm almost fine, almost everything is OK but there remains one ambiguity in that sentence: \"A decoder MUST treat a field block without a sufficiently small Dynamic Table Size Update instruction that follows an acknowledgment of a reduction of SETTINGSHEADERTABLESIZE as ...\". It ought to be \"a reduction of SETTINGSHEADERTABLESIZE below the current size of the header table size etc\" (which is slightly different since we don't want to have to re-emit a DTSU if the previous value was already smaller). 
Given that this principle was explained in the previous sentence, I think we can simply address this by adding the word \"such\" before \"a reduction\": \"... follows an acknowledgement of such a reduction of SETTINGSHEADERTABLE_SIZE\", or probably even simpler, write \"acknowledgement of such a change\".\nAhh, yeah, it's hard to balance correct and clear here.\nI think it's OK now. NAME NAME , care to recheck ?\nGood catch indeed!\nOK for me as well now. Let's hope it will not cause difficulties (e.g. for those who would pre-queue stream frames).\nEnqueuing stream frames (or any frame) and then allowing a settings change to occur ahead of those frames is very risky. Anything that you do in those frames might violate a changed constraint. If you must, enqueue all frames, SETTINGS included.\nI generally agree, but it's a matter of design choice (or constraints). Before this change nothing implied a possible change of headers frames representation after the requests started to flow, so that was technically possible. There are not that many H2 implementations anyway so we'll see. But I also agree that here at least we keep the option of postponing the SETTINGS ACK frame, which leaves some options open if this issue is raised. Thanks!\nHi Cory, I suspect, as you suggested in your last sentence, that writing \"HPACK encoder\" and \"HPACK decoder\" here would remove any ambiguity. Of course we know it's not the HPACK encoder in terms of code that acks for example, but it's the same endpoint and that is sufficient to declare who does what.\nI've taken Mike's suggestion and mixed in a few \"HPACK decoder\" and \"HPACK encoder\" statements, using \"endpoint\" for protocol actions (settings, connection errors). Tell if this makes more sense to you NAME or if I've messed it up.\naside the typo above, LGTM. Thanks Martin!\nNAME I guess we've left it long enough for reviews, it's probably okay to merge it by now, what do you think ? As a bonus it would close the last issue.\nOpening a new issue as I notice that has been merged already. Text added by states: An encoder sends a Dynamic Table Size Update instruction after acknowledging a change of SETTINGSHEADERTABLESIZE even if it is not changing the size of the dynamic table or an increase to the maximum size is subsequently reverted before the field block is sent. I am not sure if this correct interpretation of RFC 7541, and also am concerned that this text might cause new interoperability issues. states: A change in the maximum size of the dynamic table is signaled via a dynamic table size update (see Section 6.3). This dynamic table size update MUST occur at the beginning of the first header block following the change to the dynamic table size. In HTTP\/2, this follows a settings acknowledgment (see Section 6.5.3 of [HTTP2]). As can be seen, this paragraph of RFC 7541 starts \"a change.\" When an HTTP\/2 endpoint is using an HPACK encoder with a dynamic table of size 4,096 (the default) and receives SETTINGSMAXHEADERLISTSIZE of 4096, it is not changing the size of the dynamic table. My interpretation of RFC 7541 would be that it does not require Dynamic Table Update Size to be sent when the table size remains the same. Interoperability-wise, do existing servers respond with a Dynamic Table Update Size instruction when the client sends SETTINGSMAXHEADERLIST_SIZE no less than 4096? It seems to me that this newly added text suggest such behavior is required, but I'm not sure if many servers behave as such. 
added text indicating that endpoints MAY error-close the connection when not seeing such behavior. Even if we are to agree that Dynamic Table Size Update MUST be sent in all cases, I'm concerned that we might see more interoperability issues by suggesting that endpoints can error-close.\nYup, I think this works well. Nice work NAME"} +{"_id":"q-en-http2-spec-afc9131f5575cab7e585f51252be004eaa570b0958c37ac34ead1814d0f08a48","text":"draft-ietf-httpbis-messaging is now going to be known as \"HTTP\/1.1\". Follow suit."} +{"_id":"q-en-http2-spec-4022b6bc768fd3178d5302bdc2863b800d26ba4834e3b5ae3342c8624cd8b0cd","text":"It's going to happen anyway, but we can at least warn people. This doesn't specify what validation is necessary in any detail (that is somewhat involved), it only notes that simple concatenation is almost certainly not secure.\nOK that's fine by me. At least they're warned, and those who want to know more will easily find info on the subject, including these conversations. Thank you Martin!\nBefore we definitely forget about it, could we please have a consideration for that one: URL There is nothing in H2 that prevents pseudo headers from being abused, and it's suggested that they have to be reassembled to make the whole request line, that is usually parsed later to extract the relevant parts, leading to the trouble reported in the portswigger article. And semantics only apply to the recomposed parts, which is too late. I would really like that we suggest controls there so that future implementations do not get caught by this uncovered area, and the comment above proposes the minimalist controls that protect against such attacks.\nWilly, this document is now through IESG evaluation and is about to go to the RFC Editor. Adding substantial new text with requirements would require going back to the working group.\nSorry for letting that comment pass at the time NAME It's a reasonable request, but I don't think that we can go as far as specifying specific validation rules. URIs are just too complicated for that. I suggest that we add a special note to which already talks about how intermediaries might be forced to pass on bad stuff. See .\nNAME I understand and your approach looks reasonable, many thanks! I've added a tiny comment so that we don't forget to mention \":method\" that was also severely abused there, and it's OK for me. NAME I'm not trying to perform substantial changes but have been trying to reload that one over the last 6 months so that we don't miss it. It has been the main cause of generalized vulnerabilities coming from word-for-word implementation of the spec into code, the least we can do is warn implementers.\nLGTM."} +{"_id":"q-en-http2-spec-d102e30de466bb19bd12a44d1679306c214b0b39bb7d49eaa4800a3b11ae1926","text":"Not \"of the HTTP protocol\".\nand 9: To avoid the appearance of \"of the Hypertext Transfer Protocol protocol\", should these two instances of \"of the HTTP protocol\" be \"of HTTP\" (or perhaps, in the case of , \"HTTP attributes\" instead of \"attributes of the HTTP protocol\")? Original: The frame and stream layers are tailored to the needs of the HTTP protocol. ... This section outlines attributes of the HTTP protocol that improve interoperability, reduce exposure to known security vulnerabilities, or reduce the potential for implementation variation.\nYeah, \"of HTTP\" is probably best here."} +{"_id":"q-en-http2-spec-826c616b237fe2b46d35ec1244b05635c9e5f376f73855678cc0013fb3c3a420","text":": We had trouble parsing this sentence. 
Is a word missing? If neither suggestion below is correct, please clarify what \"any frame carrying ... SETTINGS\" refers to. Original: A frame size error in a frame that could alter the state of the entire connection MUST be treated as a connection error (); this includes any frame carrying a field block () (that is, HEADERS, PUSHPROMISE, and CONTINUATION), SETTINGS, and any frame with a stream identifier of 0. Suggestion (... and a SETTINGS frame): A frame size error in a frame that could alter the state of the entire connection MUST be treated as a connection error (); this includes any frame carrying a field block () (that is, HEADERS, PUSHPROMISE, and CONTINUATION) and a SETTINGS frame, and any frame with a stream identifier of 0. Suggestion (... or a SETTINGS frame): A frame size error in a frame that could alter the state of the entire connection MUST be treated as a connection error (); this includes any frame carrying a field block () (that is, HEADERS, PUSH_PROMISE, and CONTINUATION) or a SETTINGS frame, and any frame with a stream identifier of 0.\nSuggestion 1 is correct.\nI think that the list structure is supposed to be... includes: any frame carrying a field block (...), a SETTINGS frame, and any frame with a stream identifier of 0. Which suggests option 3, in my PR.\nLGTM."} +{"_id":"q-en-http2-spec-60215168b72077dcf4933ca23e8da80db490303be788c6b3587179a0c60616b5","text":"This looks terrible, but it's the form that the RFC we cite uses (also, RFC 1323).\n: We changed \"Flow Control Performance\" to \"Flow-Control Performance\" per the title of (\"Flow-Control Principles\"). Please let us know any concerns. Original: 5.2.3. Flow Control Performance Currently: 5.2.3. Flow-Control Performance\nand 5.2.3: Should \"bandwidth-delay product\" be written as \"bandwidth * delay product\" per RFC 7323? Original: Note, however, that this can lead to suboptimal use of available network resources if flow control is enabled without knowledge of the bandwidth-delay product (see [RFC7323]). Even with full awareness of the current bandwidth-delay product, implementation of flow control can be difficult. ... If an endpoint cannot ensure that its peer always has available flow control window space that is greater than the peer's bandwidth-delay product on this connection, its receive throughput will be limited by HTTP\/2 flow control.\nI think \"bandwidth-delay product\" is the standard representation of the term. It's not a term specific to this document but a broader term of art. I recommend not taking this suggestion.\nThanks, I hate it."} +{"_id":"q-en-http2-spec-5eaa0b01df97661cabc0356e004a00aa24299fd62b6de6ab4b186188e869d558","text":": As this sentence seemed to indicate that the sender itself was sent, we updated the text to clarify that the GOAWAY is sent. Please let us know if this update is incorrect. Original: Once sent, the sender will ignore frames sent on streams initiated by the receiver if the stream has an identifier higher than the included last stream identifier. Currently: Once the GOAWAY is sent, the sender will ignore frames sent on streams initiated by the receiver if the stream has an identifier higher than the included last stream identifier."} +{"_id":"q-en-http2-spec-c647b91991330d680050c72ac1746f7e43fc232cffd70a49791bfe9929837e48","text":": We had trouble parsing this sentence. If the suggested text is not correct, please clarify what prevents cookie-pairs from being sent on multiple field lines. 
Original: This header field contains multiple values, but does not use a COMMA (\",\") as a separator, which prevents cookie-pairs from being sent on multiple field lines (see Section 5.2 of [HTTP]). Suggested: This header field contains multiple values but does not use a COMMA (\",\") as a separator, thereby preventing cookie-pairs from being sent on multiple field lines (see Section 5.2 of [HTTP])."} +{"_id":"q-en-http2-spec-796313485b38112b0c853e99c1de5c49434e6ef10532197d20ef08501b317706","text":"I had to add to one use of :authority.\nand subsequent: Please note that, per the rest of this cluster of documents, we have placed the terms beginning with a colon (\":method\", \":scheme\", etc.) in double quotes where they appear in running text. Please review, and let us know any objections."} +{"_id":"q-en-http2-spec-738ff9a9666a0003012610ce5cfbf30b66cce6f7cfad66c8569f510b3a2f8218","text":"Also, use .\nand 9.1.1: We quoted \"https\" where preceded by \"resources\". Please let us know if this is incorrect. Original: CONNECT is primarily used with HTTP proxies to establish a TLS session with an origin server for the purposes of interacting with https resources. ... For https resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI. Currently (best viewed in the .html or .pdf output): CONNECT is primarily used with HTTP proxies to establish a TLS session with an origin server for the purposes of interacting with \"https\" resources. ... For \"https\" resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI.\nFolded into .\n: We made quoting consistent around lowercase instances of \"http\" and \"https\" when describing URIs or schemed URIs, per quoted instances in , 3.2, and 10.1. Please let us know any objections. Original: http and https schemed URIs http or https schemed URIs http or https URIs Currently: \"http\" and \"https\" schemed URIs \"http\" or \"https\" schemed URIs \"http\" or \"https\" URIs"} +{"_id":"q-en-http2-spec-1715ed4d195b7566aeb4ad80212b5cfa31ca3fe35d5ade081e3557797917e490","text":"CLoses .\n: Does \"the same sequence of frames as defined in \" mean \"the same sequence of frames as that defined in \" or \"the same sequence of frames, as defined in \"? It appears to us that it means \"URL that defined in \", but please advise. Original: The server uses this stream to transmit an HTTP response, using the same sequence of frames as defined in\nIt means \"as that defined in Section 8.1\"."} +{"_id":"q-en-http2-spec-6ca6c54ccc7d9650edd620eb1f958877953a667878258349f3168f99f7485b46","text":": Does \"as\" mean \"while\" or \"because\" in this sentence? Original: Connections that remain idle can become broken as some middleboxes (for instance, network address translators or load balancers) silently discard connection bindings.\n\"As\" means \"because\" in this sentence."} +{"_id":"q-en-http2-spec-080a1238cc1bb028d16b16629483d73edeb427d72357688de92a0e8ec43ab2ea","text":"This is awkward, but this is better.\nand 8.8.4: Because \"control data and a request header and message content\" reads a bit oddly, we changed it to \"control data and a request header with message content\" per \"control data and a request header with no message content\" in Please let us know if this is incorrect. 
Original: An HTTP POST request that includes control data and a request header and message content is transmitted as one HEADERS frame, followed by zero or more CONTINUATION frames containing the request header, followed by one or more DATA frames, with the last CONTINUATION (or HEADERS) frame having the ENDHEADERS flag set and the final DATA frame having the ENDSTREAM flag set: ... A response that includes control data and a response header and message content is transmitted as a HEADERS frame, followed by zero or more CONTINUATION frames, followed by one or more DATA frames, with the last DATA frame in the sequence having the ENDSTREAM flag set: Currently: An HTTP POST request that includes control data and a request header with message content is transmitted as one HEADERS frame, followed by zero or more CONTINUATION frames containing the request header, followed by one or more DATA frames, with the last CONTINUATION (or HEADERS) frame having the ENDHEADERS flag set and the final DATA frame having the ENDSTREAM flag set: ... A response that includes control data and a response header with message content is transmitted as a HEADERS frame, followed by zero or more CONTINUATION frames, followed by one or more DATA frames, with the last DATA frame in the sequence having the ENDSTREAM flag set:"} +{"_id":"q-en-http2-spec-55089ce8aa9923e83a24426b44cad2360c052472584402c607b1821f9d696dbc","text":": This sentence reads oddly. As it appears that the problem in question causes TLS handshake failures, we updated accordingly. Please let us know if this is incorrect. Original: To avoid this problem causing TLS handshake failures, deployments of HTTP\/2 that use TLS 1.2 MUST support TLSECDHERSAWITHAES128GCMSHA256 [TLS-ECDHE] with the P-256 elliptic curve [RFC8422]. Currently: To avoid this problem, which causes TLS handshake failures, deployments of HTTP\/2 that use TLS 1.2 MUST support TLSECDHERSAWITHAES128GCMSHA256 [TLS-ECDHE] with the P-256 elliptic curve [RFC8422]."} +{"_id":"q-en-http2-spec-c35943e78409720feea1b1e3f6d7d779fd4096321ac25d2d13daa3e36e49b3b4","text":": Should \"Content-Length\" be enclosed in \"\" elements in the XML file? We ask because we see three instances of \" header field\" in the XML for Original:\nProbably, yes."} +{"_id":"q-en-http2-spec-b7ce40876c2e652b6941355d106fc423ab65e89b902bcf8117951d9808ec1646","text":": We had trouble parsing this sentence. Does \"The use of field section compression and flow control depend\" mean \"The use of (1) field section compression and (2) flow control depends\", or does it mean \"Flow control and the use of field section compression depend\"? Original: The use of field section compression and flow control depend on a commitment of resources for storing a greater amount of state. Possibly (\"Using ... depends\"): Using field section compression and flow control depends on a commitment of resources for storing a greater amount of state.\nThe former is the correct variant, I think.\nSuggesting instead:"} +{"_id":"q-en-http2-spec-85ec284522ec08afc642ea65fc83b44624bf6aeab1f548b1f781c7c4716b1ae3","text":": Please confirm that \"i.e.,\" is correct here (in which case \"SETTINGS changes, small frames, field section compression\" should be \"SETTINGS changes, small frames, and field section compression\") instead of \"e.g.,\". 
Original: Most of the features that might be exploited for denial of service - i.e., SETTINGS changes, small frames, field section compression - have legitimate uses.\nI think that this should probably e.g., not i.e."} +{"_id":"q-en-http2-spec-84bed8dad78807d26aeffd7e9a32a59bd02a18e0c55e7e6976db1f4fb5addb91","text":"Values is correct in the first case, but the second should be \"a value\" by which it means \"any value\".\nand 10.9: Please confirm that these instances of \"value\" should not be \"values\". Original: These include the value of settings, the manner in which flow-control windows are managed, the way priorities are allocated to streams, the timing of reactions to stimulus, and the handling of any features that are controlled by settings. ... Ensuring that processing time is not dependent on the value of secrets is the best defense against any form of timing attack. Possibly: These include the values of settings, the manner in which flow-control windows are managed, the way priorities are allocated to streams, the timing of reactions to stimulus, and the handling of any features that are controlled by settings. ... Ensuring that processing time is not dependent on secret values is the best defense against any form of timing attack.\nThese should both be \"values\".\nValues is correct in the first case, but the second should be \"a value\" by which it means \"any value\"."} +{"_id":"q-en-http2-spec-2747c4e3eec3f11fb4d4e344b84bdc9f0dca1d969544e19aa79c7529e38ebaf8","text":"Error when stream identifier field is 0x0 is not stream error but connection error because it is impossible to send RST_STREAM of 0x0."} +{"_id":"q-en-http2-spec-18bf6bd713bb0da4e594f4a571e1ffe60c9ff8586aa98d27b168f3d8b95bdc4e","text":"It is no longer used after d5a8faeaa605bdff7b96287d72913b4b742104cf\nI've avoided renumbering of codes when removing things. We probably need to have a big renumber event at some point, but there is no harm in having gaps for now.\nThanks Martin. It makes it easier for us on the SPDY side :) On Wed Jun 12 2013 at 9:37:50 AM, martinthomson EMAIL wrote:\nNAME I've updated the commit just to remove only the unused error code. Please don't forget to renumber others in the future."} +{"_id":"q-en-http2-spec-354008099f21d943a6ca9723471e15222856519823ed381186f23f6507df6716","text":"As discussed in the interim, 8-byte PING frame, opaque octets that MUST be included regardless of whether they are used."} +{"_id":"q-en-http2-spec-7dc6492d88e2a4f66b45d2ddf654dc0bc61346638bfffdecdd42b6dd6442df67","text":"Curious why the connection flow control window initial size is 64K (65536 bytes) but the stream flow control window initial size is 65535 bytes. The negative flow control example (taken from the SPDY spec) states that the initial stream flow control window is 64K (the SPDY\/3 value).\nI have to stop trusting Will's edits. He, rightfully, challenged the use of SETTINGS to end connection-level flow control because WINDOW_UPDATE provided another way to do the same, and two ways to do the same thing is bad. I'll fix that up."} +{"_id":"q-en-http2-spec-a39d6b9cfea6d3b2aa6df37013bd84c9bfeb131fde45e794010af15f4ba1288a","text":"All other frame descriptions specify explicitly if the frame type does not have any type-specific flags. 
The PRIORITY frame description should do the same if there are no flags."} +{"_id":"q-en-http2-spec-67ce7b98e10a77ce8fb3622646ad70af2cc1e0f29fa6ed4acec7d6ec95329393","text":"This branch also renames the FINAL flag to ENDSTREAM and the CONTINUES flag to ENDHEADERS.\nAddress\nI'll take it, and we can continue to discuss the redundant bit."} +{"_id":"q-en-http2-spec-983a98f965eee8f5fd611d0edc2808b8328a0ccad70cb3a125560e359bb3540b","text":"Also reservers flag 0x2 in DATA and HEADERS frames and renumbers flags accordingly.\nPlease feel free to exercise editorial privilege with respect to wording. Reserving 0x2 at NAME requests."} +{"_id":"q-en-http2-spec-5f5492ed582ee864b40ae7f2a64eed5a96752465045622ee979ffdaa1bcd25b3","text":"As discussed in the interim... (I had previously submitted this in a batch against the Layering branch... redoing against the Master)\nLooks good for the most part, consider the comments as advisory only."} +{"_id":"q-en-http2-spec-e4acf6b46cb1a509eb765fb4a880b4946c3d68688c8944b5b068e455460238e4","text":":Client advertising settings during Upgrade dance URL\nI assume that this header field is hop-by-hop in the HTTP\/1.1 sense and should appear in the Connection header field.\nOoops, yes! Just checked that in.\nThis issue is to track whether the client should advertise its settings (e.g. contents of SETTINGS frame) as part of the Upgrade GET (e.g. as HTTP\/1.1 headers in the GET request). Normally, the first HTTP\/2.0 frame the client emits is the SETTINGS frame. This means the server will receive the client's settings before getting the SYNSTREAM from the client. However, in the Upgrade Dance the server receives the GET and has to respond with a 101 HTTP\/1.1 response followed by the HTTP\/2.0 SYNREPLY. The ugliness here is that the server will be in a situation where it has to send the SYN_REPLY (and possibly DATA frames and possibly start push streams) without knowing the client's settings. This means the server may blow the client’s flow control buffers, or emit a pushed stream even though the client is incapable of processing pushed streams, etc. If we include the settings as part of the initial Upgrade GET then 1) Server receives the pre-requisite initials settings and the initial GET request in a single package. Makes life simpler. 2) Brings us to parity with the SSL route where the server will have the cilent's SETTINGS before it starts working on the initial request.\nDiscussed at SF Interim; targetting 1st implementation draft. Gabriel to make proposal.\nAddressed via \"HTTP2-Settings Header Field\" section."} +{"_id":"q-en-http2-spec-d2442f797aed45dc9daec41e6d1184b4e3a2119a5b440a45d375549c0efc4547","text":"half closed (client\/server) is no longer used. :scheme header MUST include HTTP Request\nNAME The fixes of the commit of 73dce91fcccecce7079d7ccf6294e80350d5ce19 were somehow lost after ea5abcf90460dc300b94eb27d82162f56a1dc526 . I'm not sure why it happened without conflicting. Please fix them by your hand.\nWill do. It's easy enough to fix."} +{"_id":"q-en-http2-spec-f10fe02635ee7c89b581f3d3262cb784a0ce38ad012ac2b3683752c6c213c06f","text":"I think this is strictly an editorial clarification, not a technical change, but something that came up in discussions within the team. If there's any question about that, feel free to push it back and we'll discuss on-list.\nThis looks like , sort of. I think that we need a little more than what you include here. 
This covers the initial SETTINGS frame, but there are bits in 3.8.5 that need fixing too.\nSimilar issue, different layer…. is talking about multiple instances of the same setting value in a single SETTINGS frame. This commit is talking about multiple instances of the “HTTP2-Settings” header in an HTTP\/1.1 request that’s offering Upgrade. From: martinthomson [mailto:EMAIL] Sent: Thursday, July 18, 2013 3:49 PM To: http2\/http2-spec Cc: Mike Bishop Subject: Re: [http2-spec] Clarify no duplicate instances of HTTP2-Settings () This looks like URL, sort of. I think that we need a little more than what you include here. This covers the initial SETTINGS frame, but there are bits in 3.8.5 that need fixing too. — Reply to this email directly or view it on URL\nAhh, I don't see this as a big deal, since \"a header field\" and \"exactly one header field\" seem equivalent to me. I'll pull it in."} +{"_id":"q-en-http2-spec-5011a5e5183c5221e8df2f3d7044e31e3edbd0f4501892ef8eb6bdf622338bf1","text":"Fixed the word \"client\" to \"request\" on \"HTTP2-Settings Header Field\"."} +{"_id":"q-en-http2-spec-94549539e3ed7cd2c2ce4f129223ce559dbb5a5e08e0eb9c4f07230661149621","text":"We've seen varying amounts of advantage from having header table substitution enabled. Most are fairly minimal, even assuming perfect foresight. Some tests show single digit percentage improvements, others are much less. The feature adds some complexity. Is this complexity justified? Should we remove this feature?\nOffline discussions: NAME and NAME agree that if this isn't removed, we should swap the opcodes so that substitution gets a worse (the longer) opcode.\nDiscussion in Seattle resulted in a conclusion to remove the substitution opcode from the draft.\nFixed in ."} +{"_id":"q-en-http2-spec-fc2eda3a5f5f65e071cc6897cca3dfa4d5908ae76291c2bae3117bf6426bdb0f","text":"On Sun, Oct 20, 2013 at 05:23:50PM -0700, Roberto Peon wrote: Here's a proposal for that."} +{"_id":"q-en-http2-spec-dbef60a7132a64ef4c3010155dbd26151088d1fb17c1cb6d8066720c40f63b4d","text":"Instructions for Indexed Representation of entry already in header table say to add entry to header table again, don't mention reference set."} +{"_id":"q-en-http2-spec-8d0a812e8398a40a8b43a08a39d5bf726ce62eaa2797721b47ecf3adf4fdbee4","text":"We've long since agreed that full request headers are sent in the PUSHPROMISE frame; found a lingering reference to pushed resources \"inheriting\" headers from the original request. Revised it to reference the request headers in the PUSHPROMISE frame like everywhere else."} +{"_id":"q-en-http2-spec-1bd005ce8d1b607828ab5b3756930fd67edddcb01936f53199e94655930364f0","text":"This updates the concurrent streams in the XML and updates the HTML too (which picked up some other changes).\n\"For implementors it is recommended that this value be no smaller than 100\" - what is the significance of \"for implementers\" here? I'm concerned this language is ambiguous enough to cause trouble. E.g., some may read it as pure advice; others might take away that values smaller than 100 can be assumed to be 100. Best way to specify is with MUST\/SHOULD\/MAY, identifying who it applies to (sender \/ recipient), and ideally specifying what should happen on both ends of the wire (when generating and receiving).\nI agree this is ambiguous. I snuck in a short explanation there. I feel like we should take this back to the working group mailing list. 
I propose we either commit as is, as my extra explanatory clause is innocuous, and open an issue to resolve the ambiguity. Or I can withdraw this pull request for now, go back to the mailing list, and proceed with the flow control pull request first.\nI'm happy to merge now and note an issue; just trying to minimise the number of such edits we cause with each change, as they have a tendency to stack up."} +{"_id":"q-en-http2-spec-2f7622272eec482bcbb98994488fa8b3031658b98053ac524ced6334fea3db82","text":"TLS Client Hello's between 256 and 512 bytes need to be rounded up to 512 for compatibility reasons. This incents us to keep Client Hello parameters as small as possible to avoid the rounding - this change shortens the ALPN token of HTTP\/2.0 to be simply \"h2\"\nDiscussed in Zurich; we will take this and hold back the eventual heat death of the universe a byte at a time."} +{"_id":"q-en-http2-spec-3ae446ed79904a545242f88a7c789cdcd310a27fcef2aef349bc3bdf43c6ef54","text":"In URL;list=ietf-http-wg, it was proposed that the PRI method be reserved in the HTTP\/1.1 method registration document to avoid possible future reservation and ensure that the magic string remains a reliable way of differentiating HTTP\/1.1 connections from HTTP\/2.0 connections. There was a +1 and some offline discussion along the same vein, no dissent that I heard or saw. However, I don’t see it in URL, presumably because no RFC exists to reference. The appropriate doc is probably HTTP\/2.0 rather than breaking the glass on the prepopulated registrations.\nThere was a question whether \"SM\" should also be reserved; I personally don't think so, since that line will never be interpreted as a potentially-valid HTTP request line or header. However, I don't think it would break anything if we did.\nBreak things, no. Squat on prime name-real-estate, a little."} +{"_id":"q-en-http2-spec-0639a840411aa3219c1dbbb8f733938de77cd263c2e27975c7c2b074d9e1986d","text":"The sentence in 8.3: \"CONNECT is primarily used with HTTP proxies to established a TLS session with a server for the purposes of interacting with https resources.\" ... does not read. I believe it should be: \"CONNECT is primarily used with HTTP proxies to establish a TLS session with a server for the purposes of interacting with https resources.\" This pull request fixes this incredibly minor issue."} +{"_id":"q-en-http2-spec-b56606253b929b82ebde6b5b41609ad1315734f54677f78466fa8474c7955308","text":"This is almost a straight revert of the various GOAWAY -> GTFO changes. It seems like this is the popular move as per the discussion on the httpbis mailing list. Arguments for the revert: GOAWAY is self-descriptive and serves it purpose well. Editorial changes that don't clearly improve the spec should be discouraged so we can reach a stable spec sooner (granted this change doesn't affect the wire at all, but documentation churn counts too). GTFO is crass and doesn't belong in internet standards. GTFO, if it really stands for \"General Termination of Future Operations,\" has a surprising definition and may produce confusion. We should strive for absolute clarity above all else in editorial changes.\nBoo! You guys are no fun.\ngoaway is just as bad as gtfo for the same reasons as the argument against gtfo... just called it disconnectclient or endcommunication if you're going to be a grumplestiltskin.\nQUENCH may be a better, more descriptive word that has been used before in RFC's (for TCP Source Quench messages)\nI think we should call it BIKESHED. 
, Sean McElroy EMAIL:\nHere we go.. useless HN circle jerk.\n:cat2:\nHow about STFU?\nBoo!\n:-1:\nThis commit should be reverted. It's time we stop behaving like oversensitive automatons and display the beautiful human quality that is a sense of humor every once in a while.\n^ this\n^ that\nhow about something like PLZSTOP? it's slightly humorous, but it actually reflects what is wanted (if I understood the spec correctly)\nGOAWAY is clearer for non-native English speakers, easier to pronounce, and more professionally appropriate than the common connotational definition of GTFO. There are times and places for pushing the boundaries of professional speech, but the HTTP2 spec isn't one of them.\nAt least the pendulum didn't swing the other way NICETALKINGTOYOU\nThis revert is for the express purpose of stopping endless bikeshedding. We shouldn't encourage trivial changes to frame names. If the name is good enough, it should stay. I'm all for fun and games, but clarity and concision (as well as spec stability) come first. For those that prefer GTFO, please explain how it is clearer than GOAWAY (please consider an international audience too).\nso mean spirited, this change.\nCould always name it SHUTDOWN (yay SCTP) , Adrian Cole EMAIL:\n\"International audience\" here. I'm Armenian, English isn't my native language, not even my 2nd language. GTFO to me is more meaningful and more concise than GOAWAY, or any other option mentioned in the comments. GTFO to me doesn't produce any confusion and its actual definition of \"General Termination of Future Operations\" is actually nice, smart, and descriptive rather than \"surprising\" in any negative sense. It also formally de-crasses the word. Of course you can tell me that I'm just a single example, but then again you can tell that to any amount of people. Pretty much the only valid argument out of the posted ones is that GTFO is crass (see I didn't know what \"crass\" meant before reading this). Everything else about descriptiveness and understandability by \"international audience\" are in my opinion totally irrelevant and unfair.\nNAME thanks for the internationalization datapoint and on-topic discussion! Can you elaborate on how GOAWAY is not clear, and how GTFO is clearer? To me, the main argument is that GTFO is not meaningfully more clear than GOAWAY.\nIf folks have serious contributions to make, they should be on the mailing list, not here, as per URL Now I get to try out NAME\nNAME START\nNAME Neither option is appropriate or clear... they're both childish choices and neither are semantically appropriate. I've seen some much better options in this thread. if this change were actually about caring about professionalism, clarity of meaning, or necessity, the innumerable amount of better choices (SHUTDOWN, ENDCONNECTION, QUENCH, etc...) should have been used over GOAWAY or GTFO... But its not.. this is about personal preference. this change made nothing better, it made nothing worse. In my opinion, for that reason, it should be reverted.\nFWIW, and for lack of anyone else talking about a pro of GTFO. When at the working group, a lot of thick topics were discussed at length and everyone there seemed 100% dedicated to having the best spec there is. GTFO, as a word, is harmless to implementation for reasons including the opcode is the same. IOTW, the binary representation is the same. There's no technical reason why it matters. I am one of the implementors of this specification. 
When the change was suggested towards GTFO, I felt motivated I mean the audience of this spec are implementors, some of which may be uptight about crassness others less so. If you look at github (ps this is on github) there's ample evidence that implementors are motivated by words that aren't boring. For example, there's a popular package manager called \"fpm\". Guess what that stands for? I'm not saying go back and re-word everything to be fresh, rather have patience with those who are literally implementing this, in open source, and are ok with the choice. Expect many more implementors to arise from github, a place relatively unburdened by crass-ness or location.\nNAME I'll split the answer to 2 sections. AWAY vs. Out -- AWAY implies a certain longish physical distance, whereas Out seems to imply just \"out\". AWAY vs \"General Termination of Future Operations\" -- obviously the second is much more clear and descriptive.\nGTFO would be my vote. No future operations, full connection termination, it describes the action best.\nNAME if you feel strongly, mail on this thread as not everyone there are on github to see your pov URL\nNAME - will do, thanks. I need to dig my W3C mailing list info; haven't used it in years. Cheers!\nFrom the W3C mailing list: Thanks, NAME :) Cheers!\nAll made points aside, I couldn't find a very important one; to me actually being the most important. If you think about what \"Get Out!\" could also mean (the offensive TF part is actually interchangable...), it becomes even more confusing. An expression of disbelief is surely the very last thing this acronym should imply. But it did to me; even as a non-native speaker (German). \"General Termination of Future Operations\" is, also in my eyes, a made-up term to fit the joke. It is more fitting than GOAWAY, but once you have read GTFO your mind is sort of made up, since you spelled GTFO in full in your head, as soon as reading the acronym. My suggestion on this one is: Simply get rid of \"General\" \/ \"G\" and add \"of\" to the acronym! TOFO still has a nice ring to it, is quick and easy to pronounce and \"Termination Of Future Operations\" would be sufficiant descriptive. regards (yes, that's my real lastname :P ) On Jan 29, 2014 8:31 AM, \"Eugene Ciurana\" EMAIL wrote:"} +{"_id":"q-en-http2-spec-ebf3a3a46c264bc266a36e3dbded246c3088b1d81d3c85b7457214627fd79555","text":"Felt that this order was clearer since the requirement then back-referenced the previous flag instead of forward-referencing it. Feel free to ignore if you disagree.\nI considered that option. I think that it's easier if the order of the bits is the same as the order the fields appear. I'll reopen and take this if others point this out though.\nConfused by that. In the as-written docs the flags are: [R, R, R, R, PADLOW, PADHIGH, R, ENDSTREAM] but the frame layout is: [PADHIGH (8)], [PAD_LOW(8)] so the order of the bits is opposite the field order.\nLet's chalk this up as yet another case of blindness on my part."} +{"_id":"q-en-http2-spec-cf272d68181914767d51c7279c8fdb816c35f411f62357620416f913b1f3e7e1","text":"Editorial changes through Section 3.1.2. Further updates forthcoming.\nRebased onto current master."} +{"_id":"q-en-http2-spec-16c053f22bebd78d9dee40051085028b748c692723b73d5b41adbb9bcb88f2ed","text":"NAME did you just forget to create the pull request?\nDidn't realize you couldn't merge from the referenced issue. 
What do you need from me to merge the commit?\nJust some indication that you think that this is ready, that's all.\nDo you think we need to add a requirement for must emit? or is the not coalesce sufficient?\nNow that you mention it, MUST preserve is really what we are going for.\nadded \"MUST preserve\"\nThere is some usefulness from both an extensibility perspective and a load balancing perspective to indicate multiple messages over an individual stream. This issue is to un-reserve the 0x02 END_MESSAGE flag to ensure that intermediaries that are deployed proxy the flag so that they are agnostic to the layered protocol and can still successfully load balance messages appropriately.\nNote that this places a requirement on proxies that they cannot coalesce data frames across \"END_MESSAGE\" boundaries.\nDiscussed in Zurich with no real conclusion. Support and opposition were weak.\nDiscussed in Zurich; accepted in principle. Jeff to do a pull request.\nRenamed ENDMESSAGE to ENDSEGMENT to avoid any confusion with HTTP messages. Feel free to re-name. Couldn't find any text that stated intermediaries could combine frames but may want to add a requirement that intermediaries must re-emit the flag at the same byte position."} +{"_id":"q-en-http2-spec-76f8723d470c968210df216f4abb4717ccb38ccd0b194a449511a15711e25ff6","text":"This is a really minor thing, but as I was reading the spec, I was wondering for a bit what \"extensions\" meant (user defined? something else?). I think the intent is to reference extensions defined by future RFCs that revise HTTP\/2 protocol."} +{"_id":"q-en-http2-spec-96e12ec254e20e4f69440097a43f415d34cab5d2614d804d2a69ea17f8017ca8","text":"Padding is made to a frame and other frames (DATA and CONTINUATION) have these 2 bytes right after the frame header. It is easier for implementation to put the same constructs in the same position."} +{"_id":"q-en-http2-spec-50c9693a7b35793c528514ee3014d867f45a07d63cd3ea2f75dd7506b43ac135","text":"This simplifies the text and makes flag consistent with CONTINUATION which only bears ENDHEADERS. Technically, both have the same meaning of ending sequence of header block fragments. Therefore there is no reason to use the different flag name for the same purpose. This will also make the implementaion easier because most likely HEADERS and PUSHPROMISE handling share the same code and dancing with ENDPUSHPROMISE and END_HEADERS is not fun. Although currently they have the same number, it is not guaranteed to the last. If they are guarenteed to be the same, then it is simpler to use the same name for same value."} +{"_id":"q-en-http2-spec-5f335040010384d7a6f2dfd66ff2d0acb16ea9319abf5a2b9a578d92edf86bd0","text":"l.815 before: An endpoint MUST NOT send frames other than than HEADERS\/x:ref or fix: An endpoint MUST NOT send frames other than HEADERS\/x:ref or"} +{"_id":"q-en-http2-spec-459f459823658c7297e20700977e5c14c01bf55bfa39f8493d64939fd914de6b","text":"I realize this is somewhat redundant given section 5.1.1, so I don't mind if this pull is rejected. The main reason I think this might be useful is since there are two different stream identifiers involved in PUSHPROMISE: the promised stream id and the stream identifier in the common header. The extra sentence just makes it explicit how PUSHPROMISE is linked to an earlier request. 
It also puts the following sentence in better context."} +{"_id":"q-en-http2-spec-87f3f7ec8e52493fc78d4660231359d7d3c6e2b8dc21c4bb33f7da1d755a26d0","text":"I believe that PADLOW and PADHIGH bits in DATA frame were not updated when the dependency based priority was merged. It is quite a headache for implementors if these bits differ across frames.\nYeah, my bad. I missed one."} +{"_id":"q-en-http2-spec-22dc003ce331f79040639991a1aef96c10ec2d8852c4c5f2c22c4e82ee2fde74","text":"HTTP\/2 has different properties to HTTP\/1 regarding server-side load, because load balancers etc. currently exploit HTTP\/1's short connections. HTTP\/2 has much longer-lived connections, making it more difficult to load balance traffic. A mechanism to direct the client to use a different server\/port would mitigate this. While GOAWAY could be used to do this, it introduces latency as the new connection is set up.\nAlt-Svc?\nURL\nNAME is this really in the next milestone? It doesn't seem like we're ready yet.\nShould get out a draft momentarily; if we don't make it, that's fine, but if we can, it'd be good to have a bit of discussion first.\nNAME this missed the boat. Can I request that you put together something we can discuss in London instead.\nYes, it's on the list for discussion there. The boat technically is delayed at dock...\nDiscussed in London; agreed to using ALTSVC frame for this."} +{"_id":"q-en-http2-spec-ada2e194c26af371a379d48d651781cd50c5f7517156df0f0d9ae7910fdb93a3","text":"This wording once existed in priority branch in github, but missing in the current draft."} +{"_id":"q-en-http2-spec-03450726678ce1cd76f7b159013470ff581d9ba91808b7d289c7d8fd6a90ae27","text":"The priority groups are abandoned in favor of a single weighted dependency tree. The root node of the tree is the connection, assigned the stream identifier 0x0. By default, streams are assigned a stream dependency of 0x0 with a weight of 16.\nJust read through it, I think it looks good. I'd like to understand how weight of the stream is calculated and changed. Here is my understanding: The effective weight of each stream is determined by traversing tree node, taking into account of distribution of weight of parent stream among siblings. This effective weight is temporal on the tree and initial weight is unchanged for a stream, so when reprioritizing that stream, its weight is initial one. For example, suppose we have the following dependency tree: X<--(w)--Y means that Y depends on X with weight w (and w is initial weight). Calculated effective weight is like this: If parent stream is closed, then its initial weight is distributed to among the child streams and this updates initial weight of child stream. So, after A was closed, the initial weight of streams becomes like this:\nNAME thanks for this. I'll have to read through this more carefully before I comment. NAME that seems right. Note that this means that a server might want to maintain more precision on it's weight values than the 8 bits we mandate in the protocol. That said, if there are deep dependency trees and you are killing off nodes that are high up, at some point you will need to round off, so I think that how much precision is allocated is largely a discretionary thing.\nLooks good. , Martin Thomson EMAIL:\nIt seems that we can make a stream to depend on stream 0x0 exclusively, which makes all existing connections its descendant. This is not possible in the current draft version. 
I wonder this can be directly proxy-able since stream 0x0 is imaginable stream.\nYea, the idea is that '0' is a reserved stream ID, and so it is safe to use it as a flag for saying that (effectively) any stream with '0' as a parent has no parent. , Tatsuhiro Tsujikawa wrote:\nWell, then we should explicitly say that receiver must treat exclusive bit as 0 if stream ID field is 0.\nNAME I would add an explicit hint to implementers that having more than 8 bits of internal precision is a good thing. NAME I think we should leave open the possibility to insert a new stream between the connexion and all its children, even if it's probably a large re-prioritization.\nI think large re-prioritization is not a problem. If proxy gets such requests and it aggregates several frontend connections to single backend connection, inserting under stream 0x0 with exclusive flag set seems to be impossible in backend.\nIf it aggregates to a different backend connection it is not impossible to proxy. Consider streams A and B depend on stream 0x0 and are proxied to some backend connection as streams X and Y. If we get a new streams, C, that exclusively depends on 0x0, we can proxy it as stream Z by dropping the exclusive dependency and then re-prioritizing streams X and Y to depend on stream Z.\nYes, it works like that. But if we just don't allow 0x0 with exclusive dependency, we can do all priority stuff via proxy without doing any extra handling except for stream ID mappings.\nYep, true -- I don't have much of a preference -- if clients feel like it is not an important use case than I am perfectly happy to disallow it.\nIs it possible to move this discussion to the mailing list? If you need comments from a client, I'm happy to take a look later. On Tue Apr 15 2014 at 9:39:56 AM, Jeff Pinner EMAIL wrote:"} +{"_id":"q-en-http2-spec-d4e06bab18309b7ea3932de9220019b94a735ff9da701aec0d85a8827f48e135","text":"A proposal for adding GZIP compression to DATA frames. One possible resolution for .\nThis looks good. Thanks for putting this together. We'll see how the discussion progresses.\nI realise we might also want to modify Section 8.1.3.5 Malformed Messages, to explain that Content-Length either cannot be sent if any frame is compressed, or measures the length after inflation, depending on how closely this maps to transfer encoding (URL). I suppose that's a follow-up issue for the WG to discuss.\nNo. You are right. Content-length applies pre-compression or post-decompression. We need to that clear. On Apr 22, 2014 4:28 PM, \"Matthew Kerwin\" EMAIL wrote:"} +{"_id":"q-en-http2-spec-2bca9867cd812c92d8d0590ec07ede36e57800932feb68f57d8ae173d205a686","text":"We left open pending some explicit text about the double GOAWAY graceful shutdown. Here is some proposed text that we can iterate on. One difference from what we discussed is the first GOAWAY has last-stream-id of 2^31 - 1. The calculation based on MAXCONCURRENTSTREAMS isn't really needed and just opens up implementations to bugs.\nI first brought this up in URL but I wanted to open an issue to track this as I believe this is a design flaw with HTTP\/2 as it is today. Consider an HTTP\/1.1 connection to a proxy that speaks HTTP\/2 to the server. If the server needs to shut down, it will send a GOAWAY acknowledging the last request it processed. However, there is a race condition where a request could be in flight or queued. 
When the server eventually sees the delayed requests, it will not process it and the proxy will eventually return an error (most likey 5xx) to the client. Requiring proxies to buffer the request in order to replay for this case is not feasible. One solution is to have the server send an ALTSVC frame, but this is not ideal as it implies each server must keep track of a list of alternatives. It may not make sense for the origin server itself to maintain this (potentially quickly changing) list. Another solution is to allow acknowledging unreceived streams in the GOAWAY. For instance, the server could acknowledge last-stream-id plus SETTINGSMAXCONCURRENTSTREAMS. This might work in practice, but seems hacky. Another (and in my view best) solution would be to introduce a two phase shutdown mechanism. Some strawman language: \"If a server needs to shut down for maintenance, the server SHOULD immediately send a DRAINING frame on all its client HTTP\/2 connections to indicate that it will soon shut down. This allows the client to stop issuing new requests and avoids unnecessary retries. After a reasonable amount of time, the server SHOULD send a GOAWAY with status NOERROR on all its connections and then close the connection. Clients MAY may open new connections to the server after receiving a DRAINING frame.\"\nI have added some text in URL that covers some of this. I'm going to leave this one open, because I don't think that this is completely done yet. We need to discuss whether we want to be more explicit about this particular use case.\nDiscussed in NYC; double GOAWAY sounds good."} +{"_id":"q-en-http2-spec-a53116fc5ec08a97d46273f5633684d13bd37ce2dc75f03b4fd6ce0fb02b47fb","text":"A proposed explanation for what to do with large header blocks.\nDiscussed in NYC; this can be considered editorial."} +{"_id":"q-en-http2-spec-3c8a3c55c69e628b859fe75aec7988ef8012adb5a970e4c84b03063ff4385eff","text":"... as new priority scheme demands and implies, but does not expressly permit. For ."} +{"_id":"q-en-http2-spec-78d44a6d37dc2f0b8ac67a7085a02ebd3e013dfa3d97aa775866841260c67501","text":"Proposed text for .\nThis would be consistent with the plan for TLS 1.3.\nNAME Firefox hard fails here, which I think is the right decision. I'll confirm with the working group."} +{"_id":"q-en-http2-spec-9394397e13f0ee2e4717ee4f1b142d892ad3fb90087e5f9d7815fdd918188043","text":"HTTP\/2 requires clients to support the gzip content-coding. The justification for this has always been that it helps avoid intermediaries and other interposed software (e.g., virus filters) that strip Accept-Encoding to avoid the pain of decompressing responses. However, it has been pointed out that doing so means that intermediaries that translate from 1 to 2 are now required to synthesise new entity tags for decompressed responses, breaking semantic transparency and\/or losing significant HTTP functionality.\nSee also .\nIt also requires to rewrite ETags in request headers (conditional ones), which is an open-ended list. Furthermore, ETags also appear in payloads (WebDAV properties, for instance).\nThe outcome of this issue also affects\nDiscussed in NYC; will remove implicit gzip from spec; may create new content-coding required status code or other strategy (non-blocking)."} +{"_id":"q-en-http2-spec-c0af6c2e2953048617bc0d7fbd662374763e92c607649e3fe69edde5de19e41a","text":"Frames can be coalesced or split -- this is antagonistic to end-to-end, per-frame compression. 
This requires decompression \/ recompression in proxies that need to coalesce or split frame.\nDiscussed in NYC; no implementer interest, security concerns about proxying it. We believe it can be addressed as an extension now if someone wants to scratch that itch (and can find someone to interop with); however, fixing this properly means doing it at the HTTP semantic layer (e.g., with a new range type). In the meantime, remove frame compression."} +{"_id":"q-en-http2-spec-82b244c6b4217d3587dd7157bb07018b45f8ccb41f649779ea061a0b0e7bb668","text":"Two minor things here: more strongly recommend the use of the frame type in HTTP\/2 over the header field add a frame type number (no need to defer the choice now, this lets people build code based on this)"} +{"_id":"q-en-http2-spec-a7ce13e14cf33bee4633f7a3a1110e5c57cc0fe63a85b1950cd5ee694d6597d0","text":"I have written what I think is a slightly more robust Security Consideration for HPACK. This covers: the attack in the general sense, how the attack might apply in HPACK and HTTP, particular areas of concern, how HPACK inherently mitigates these attacks, what environments might need additional mitigation, and some suggested mitigation strategies. Mitigation strategies that I have described are: actor-based isolation (a generalized application of the origin isolation principle) destroy values on failed guesses (thanks here to Adam Barth for the idea), either probabilistically, or based on a count, with a recommendation that shorter values be made harder to guess specific protection for \"special\" header fields There are a few formatting changes in the first commit, sorry."} +{"_id":"q-en-http2-spec-86f996ce76d6adef39231ee4395451da925808384b50f79d8f5e5e569b67ed30","text":"Two small changes here. First, adding unnegotiated semantics to reserved bits in existing frames feels like a REALLY bad idea, since you're almost guaranteed to have collisions, so I'm suggesting we remove that as something that extensions can do unannounced. Second, we permit negotiated changes to existing semantics, so I'm suggesting that instead of saying only DATA will ever be flow controlled, we say that making anything else flow-controlled is a changed semantic that has to be negotiated.\n(And as a side-note, section 6.9 implies that there may be frames other than DATA which are flow-controlled defined elsewhere: \"Flow control only applies to frames that are identified as being subject to flow control. Of the frame types defined in this document, this includes only DATA frame.\")"} +{"_id":"q-en-http2-spec-d74fa585f14b31a346c159190e2c90a6cebd036bb2ed678800cba43f697171b9","text":"Mostly minor stuff from a readthrough. Of particular note: Regarding headers, \"name-value pair\" implies only one value; either it's a pair, and collections may have multiple instances with the same name; or it's a name with a collection of associated values. Either is reasonable, but both is confusing. Adjusting phrasing in SETTINGSHEADERTABLE_SIZE to better reflect that HPACK now lets the encoder change the table size at will. 
Removing suggestion that a GOAWAY frame can get \"lost\", which can't happen in TCP; actual issue is that recipient may have acted on the previous GOAWAY and retried the requests the server says it didn't process."} +{"_id":"q-en-http2-spec-ef5b1e72b8c98a1c3567e08d5e9b65386afeae1dea7117cda48a565a63f24891","text":"Could have been \"a\" just as easily, but I felt like a two-character pull request was just overkill."} +{"_id":"q-en-http2-spec-6cd7346a23ce54761d79e25facad61b274ed98989d3d741a5a732d7f4ddef7ba","text":"Note: adds downref to Informational spec.\nIt has been noted that there is an opportunity for interoperability failure with the rules we have regarding ephemeral key exchange. e.g., client has only DHE, server has only ECDHE, can't use HTTP\/2 Do we want to specify a mandatory to implement cipher suite so that we can avoid this?\nYes, I think you do. TLS 1.3 will be mandating ephemeral key exchange, so I think you just want to use whatever their MTI is (presumably either DHE or ECDHE with AES-GCM). I expect to have a preliminary answer to this in YVR. Will that be soon enough?\nTOR, I hope. But I think that should suffice. We can discuss whether we try to preempt that decision or leave a placeholder.\nSorry, I meant YYZ.\nDiscussed in NYC; will make a decision about it in Toronto. Under discussion - ECDHE \/ DHE + RSA + AES-GCM + SHA256. Maybe another too.\nWe need to settle this for WGLC. NAME any chance of getting a decision earlier?\nis editor-ready."} +{"_id":"q-en-http2-spec-0cd4bd26bb406b5602953d709e7ec6d0d0b4ec6e2aa074096e72e0fc134299d5","text":"Now that we have decided to remove the reference set, the hacks we added to address header field ordering are no longer entirely necessary. This change does the following: remove the requirement to concatenate with '\\0' require pseudo-header fields to all appear before regular header fields change 'header set' to 'header list'\nThis has implications beyond ordering w.r.t. compression efficiency, especially now that the number of items incurs a cost per item. , Martin Thomson EMAIL wrote:\nnice, more red than green :)\nIncluded into HPACK. Encoder and decoder SHOULD preserve ordering. :headers MUST appear before normal headers (in both encoded and decoded lists)."} +{"_id":"q-en-http2-spec-57ef9165ad5bf6f9915937b3ff165244b81c9c27c60f204aea9696eb7d561d05","text":"Looks like I caught some trailing whitespace at the same time.\nThe trailing whitespace came from an automatically generated part of the spec and will reappear if I ever re-generate the examples. In addition, it really should be here ;-)."} +{"_id":"q-en-http2-spec-66128cf79ea01cccff4c0fc6eab7f2c8bd29d80095ee72163ff3f66e6c8ad4dd","text":"Editorial suggestions for WGLC. I don't think there's anything controversial or semantically significant here, but please have a look over it before accepting."} +{"_id":"q-en-http2-spec-6b04b683cf02aefd3e147836b411331de1d453ca8942fd698ce9a16b8e336620","text":"I think the section could benefit from rearrangement some after this, but I'll leave that until later.\nOnce that's done, I'll have a look to see if there is anything that can be done to improve the section structure. The expanded material might need a new section heading and maybe some re-framing. Suggestions are, as always, very welcome.\nI'll tweak this to soften the SHOULD and MAY."} +{"_id":"q-en-http2-spec-e029a98525f7809660fd5c348153cb26e4ed48de95a47c774fd66267cc4b1738","text":"A couple nitpicky things that jumped out at me in another readthrough. 
3.2.1: Says that the SETTINGS parameters don't need defaults because of HTTP2-Settings, but they have defaults. Removing this claim. 3.3: Says that ALPN is used for HTTPS if you don't have prior knowledge, but 3.4 says you have to use ALPN even if you do have prior knowledge. Removing the caveat in 3.3 to reconcile. 5.1: Allowing WINDOW_UPDATE to be send in reserved (remote) state. This has been discussed in-person and on-list as a reasonable client strategy (low initial window, unchoke the pushes you want), but per the current text this is actually prohibited until after receiving the HEADERS frame. 5.2.1: Fixing sentence fragment. 5.3.2: Possibly just a personal\/regional style, but from\/to or between\/and works for me; between\/to doesn't. 6.1 and 6.2 use different definitions of the padding field. Copied 6.1's to 6.2.\nThanks Mike."} +{"_id":"q-en-http2-spec-ea0d77af771a013da7634a0a4adc4d6fa26e980ce5efd9e75bf085de1a385b61","text":"If an http2 implementation receives a frame with an unexpectedly large size, what should it do? As one example, a WINDOW_UPDATE should be 4 bytes. Or should it be at least 4 bytes? The Extending HTTP\/2 notes at URL aren't clear about whether an unexpectedly long frame is okay. The closest I can find is: \"Implementations MUST ignore unknown or unsupported values in all extensible protocol elements. \" But I can't find a definition of an \"extensible protocol element\". It seems that the spirit of Section 5.5 would be to permit overly long frames.\nI forgot to mention: there is FRAMESIZEERROR, but it only says: \"If a frame size exceeds any defined limit, or is too small to contain mandatory frame data, the endpoint MUST send a FRAMESIZEERROR error.\" But only PING and SETTINGS mention it or a limit. What is the intended behavior for PRIORITY, RSTSTREAM and WINDOWUPDATE?"} +{"_id":"q-en-http2-spec-ae96d90b4b465f6fced8347484969c1363b5be639206efb3a4ea1c8c609c0154","text":"For . Here, I moved the text that covers the calculation of the dynamic header table size up into a new section. Then, I reworded the text to reduce the direct dependency on HTTP\/2 and its settings. The emphasis is now on the using protocol setting a maximum, of which the HTTP\/2 setting is just an example. Apologies for the diff, I'll try to split the move commit out to make it more readable.\nOK, the first commit does the rewording, the second only moves text around."} +{"_id":"q-en-http2-spec-4f0e2c50ab8c4d251a01db3105f485df0b1a70d342fa4b561ca5ac45b627ed27","text":"This is an update of :\nThere have been a number of discussions about scenarios where we may want the client to fall back to 1.1 (a server forcing gzip, APIs incompatible with HPACK, etc.). HTTP\/1.1 defines status code 505 as indicating that the server \"refuses to support the major version of HTTP that was used in the request message [and] is unable or unwilling to complete the request using the same major version as the client [...] other than with this error message.\" However, that doesn't define a programmatic way to tell the user agent what version to retry with, simply a representation to be shown to the user. We should define a way to inform the client what HTTP version to retry with (presumably 1.1).\nDiscussed in NYC; task force-ish to come up with a proposal either using 505 or a new status code + header to point to the version(s) to bounce to.\nMark to write draft. 
use 505 extra header what version to down\/upgrade to what the scope of applicability is (server vs resource) cache-control applies\nDiscussed in person, but capturing here: Scope could also be a path prefix, not just a single resource.\nMarking this one as non-blocking. We need to discuss adoption of a mechanism for this; and that's not going to happen in the sorts of time frames we're talking about.\nCan this be closed -- i.e., do we need to actually reference the new draft in HTTP\/2?\nYep.\nReopening (preliminary) based upon feedback from Mike."} +{"_id":"q-en-http2-spec-6165b6525b0f6b297ae59c745464efdecd2b06dd7ffa34bb656d49c5279179dc","text":"I think \"an\" is correct.\nAn is generally used only before a vowel. As in, \"A, E, I, O, and U\". An can be used before a \"vowel sound\" as well. But \"h\" in \"http\" is neither in my book. See URL for more info.\nIn (British?) English, 'h' can be pronounced either as 'aytch' or as 'haytch' so I would say that both 'a' and 'an' HTTP something should be considered fine.\nLet's at least be consistent.\nIf you pronounce 'a' as 'ahh' then \"a HTTP\" sounds ok. If you pronounce 'a' as 'aye' then it sounds wrong as you have two 'aye' sounds in a row unless you pronounce your 'h' as 'haytch'. American English would generally expect \"an\" to be used here, I think. Dialects might disagree. RFC 2616 & RFC 7230 use \"an\".\nWe had this fight (almost verbatim) for bis; \"an\" won. As Martin says, let's be consistent."} +{"_id":"q-en-http2-spec-bb1c929fa18c17405729797f3614b2ec93da6120e54b4676147214f877941a58","text":"Bits seem to be numbered from 1 to N, which is very confusing since almost every software starts at zero. Also it's common to find macros like : #define BIT(N) (1 << (N)) and here that would introduce mistakes since one constantly has to think about shifting bits before using the macro (eg: BIT(3) to check bit called \"4\" in the spec, corresponding to value 0x8). Since there are not that many places where this happens, this commit aims at fixing them. Note that the hexadecimal values for each of them were already correct and non-ambiguous."} +{"_id":"q-en-http2-spec-465775590b4a84dca192e7aa78ca992096bec9fc11a511898071072a280bdb5c","text":"Two changes here, one editorial and one probably-editorial: The endpoint might send a connection or a stream error of HTTP11_REQUIRED, depending on the scope of the requirement. Note that this was non-normative language, so removing the implication that it would always be a connection error simply makes clearer what was already permitted. The server might know as soon as the connection is established that it will want the cert, but want it protected by the handshake. Either endpoint, not just the client, may choose to renegotiate before sending their preface."} +{"_id":"q-en-http2-spec-29d55ee657f2e84d630c64a671941344c10baa9cd29520bdb025168eeb6d972e","text":"As discussed on the mailing list.\n:+1:\nMake it so that PRIORITY can be sent on a stream in ANY state. i.e., change so that PRIORITY is permitted in the \"idle\" state.\nDiscussed in Honolulu; seems reasonable and no objection. Take to list to confirm."} +{"_id":"q-en-http2-spec-14ecacd45a20c014a9232693298098b0f139b0eb61ecdcb1ec42a7c0b6a1765f","text":"This is based on the long set of negotiated changes added to . 
This adds the blacklist as Sam Hartman proposed at IETF 91.\nSection 9.2.2 places restrictions on the ciphers that are acceptable for a http\/2 connection that are different to the acceptable ciphers for a https connection that may be offered over the same handshake. To comply with 9.2.2, a server accepting an ALPN connection must either: a) influence the cipher selection to ensure an acceptable h2 cipher is selected; b) be informed of the cipher selected and if it is not acceptable then select http\/1.1 instead of h2 as the protocol. Neither of these capabilities is required of an RFC7301-compliant implementation. Specifically there is no requirement for an ALPN extension to be able to influence cipher selection, nor is there a requirement for an ALPN to make the cipher that will be selected available to the protocol selection.\nEven if ALPN does allow arbitrary selection of cipher\/protocol pair, it is impossible for a server to select an acceptable cipher\/protocol pair because clients' interpretations of 9.2.2 will vary over time and available ciphers. If a server selects a cipher\/protocol pair that it believes is h2 acceptable, but the client disagrees, then communication failure results. Thus the moment there are different interpretations of 9.2.2 in the client population it will be impossible for a single h2 server to accept connections from 100% of the population.\nThe server need only support the minimum requirements in 9.2.2 at the TLS layer, i.e. enable TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 overall and make it a suitably high preference. If the 9.2.2 requirements are increased in the future, then that requires an ALPN bump too and thus it can be deployed without worry.\nNote that I think discussion on this topic has moved beyond the specific concerns raised in this issue. This issue should probably be decomposed into: Should we be restricting TLS ciphers within the application protocol draft document The usage of the ALPN handshake is fragile as any differing interpretations of 9.2.2 against future ciphers can result in communication failure. The ALPN RFC does not require that an ALPN implementation provide functionality needed to implement 9.2.2, specifically either to a) use protocol selection to influence the cipher selection; or b) be informed of the cipher selected to influence protocol selection.\nIdeally, no, the HTTP\/2 spec should not be setting TLS details. HTTP\/2 should simply require TLS 1.3, or at least a 1.2.1 with the required improvements done in the relevant spec. This, however, requires a level of coordination between working groups that isn't currently available. This section is just a hack to make a TLS 1.2-h2 variant that's the best that's currently attainable. It's a transitional spec until TLS 1.3 is eventually complete some time in the indeterminate future, just as HTTP\/2 is likely to be a transitional spec until HTTP\/3 gets written. The cleanest route would be to cut out sections 9.2.1 & 9.2.2 and make a new TLS 1.2.1 spec that satisfies those requirements, then separately publish that and require its usage here. 
(though, you'd have to come up with some way to handle the minor version number; maybe implement TLS 1.2.1 entirely as an extension) It's probably too late in the HTTP\/2 development process to do this, unfortunately, though it would be a cleaner spec with less fiddly bits that can break interoperability.\nDiscuss on-list, please.\nActually, it's about ethics in game journalism.\nConsensus to adopt ."} +{"_id":"q-en-http2-spec-c32ffdb0e47646cecc5d9ea5b5dd73861346d4886daedf734ee72219599214b9","text":"Proposal for\nTaking Mark's base, taking Herve's suggestions and removing some text. I think that this is good.\nFolks in the Tor community point out that coalescing connections is a genuinely new vector for tracking users in HTTP\/2. This should be pointed out in the Privacy Considerations subsection of Security Considerations, along with mitigations. While we're there, more specific text about mitigating HTTP\/2-specific fingerprinting would be nice.\nNAME I've marked this as Design, but I think we can prepare a proposal pretty easily. You want to take a crack at it, or should I do a pull?\nI'm happy to take input. Since I'm basically offline this week, getting a start would help lots.\nMarked as editor-ready; please consider list feedback."} +{"_id":"q-en-http2-spec-490f4783b06b22cbac92ac202feba3173c311edd210f152ba9eba1c7ab76ec31","text":"Rather than just PROTOCOLERROR, allow REFUSEDSTREAM. REFUSED_STREAM is a more lenient error. It allows clients to retry requests. Servers might also set the GOAWAY stream number to a lower value if they upgraded.\n\"5.1.2 Stream Concurrency\" says about SETTINGSMAXCONCURRENT_STREAMS: \"Endpoints MUST NOT exceed the limit set by their peer. An endpoint that receives a HEADERS frame that causes their advertised concurrent stream limit to be exceeded MUST treat this as a stream error (Section 5.4.2)\" But which stream error code should be used? Almost all other references to stream errors in the spec say \"treated as a stream error of type XXX\"\nAlso, if this is a stream error, what happens if the client has a CONTINUATION already in flight right behind the current-streams-violating HEADERS frame? The server would then return a stream error to the HEADERS frame, and then see the CONTINUATION frame that's not associated with an in-progress header block. I suppose a server should maintain enough state to remember that it sent a stream error on that stream ID and simply ignore any following CONTINUATION frames instead of returning a connection error?\nThe server knows that a continuation frame is coming as it is flagged in the HEADERS frame. Regardless, to ensure against corrupting the compression state, even if it is thrown out, all HEADERS\/CONTINUATION data must go through the compression context unless you're closing the connection. This is even true for stream IDs that are not in a valid state (though I'd hope in those cases that the receiver would close the connection after immediately sending an error). , Brad Fitzpatrick EMAIL wrote:\nNAME good point. My tests don't cover the case of a CONTINUATION having valuable compression context needed for future valid streams. I will update the test suite I'm building.\nDiscussed on-list; let's go with ."} +{"_id":"q-en-http2-spec-411e9cd5bca08c16a41305516e90c35a9236e908f239c5c8b17db8dde3890c50","text":"Two issues were raised about the use of the words \"existing\" and \"current\" to refer to streams. These were a little squishy, so I tightened the language a little. 
The text no longer uses either word to refer to streams, instead using stream states to describe where the relevant text applies and favouring \"identified\" as a fallback.\nuses some underspecified terminology: For example is for a PRIORITY frame which can be sent\/received in any state the same as in the following contexts: Can be translated to the stream's state? For example does it roughly mean ? If so then the usage in the PRIORITY frame section is a bit more confusing because a PRIORITY frame could be the first valid frame sent\/received for a stream.\nNAME - Thanks for the clarification!\nNAME - I do have a question regarding your clarifications to the SETTINGS frame (specifically the SETTINGSINITIALWINDOWSIZE portion). It seems a bit confusing if a PRIORITY frame has been exchanged on an IDLE stream for the SETTINGSINITIALWINDOWSIZE not to apply to it. Consider the following example: 1) Stream is in the state (no frames exchanged) 2) A PRIORITY frame for stream is received. 3) A SETTINGS frame is received to update SETTINGSINITIALWINDOW_SIZE. ... 4) Stream transitions into the state. It seems a bit counterintuitive that stream 's window will be the window size prior to step 3. Just imagine there are many more SETTINGS frames exchanged (and other events) before step URL stream 's window will be even more difficult to understand how it got to where it is.\nThat's a fair point. The current text is: From this, you might infer that a reserved or idle stream isn't affected. Implicit in here is that you don't actually have an active flow control window until streams are in those states. There is no sense in performing any accounting until you are actually sending, right? But then, it's only accounting and that 32-bit number might be allocated and used in some implementations. So, I think that this can be clarified, rather than rely on people only accounting in those states, the text might be better saying: Does that make more sense?\nThis does make more sense. On a related URL is the rationale to prohibit WINDOWUPDATES on the IDLE streams when PRIORITY frames have been exchanged? It seems like if a stream is allowed to be put into the priority tree (via a PRIORITY message) and its window size controllable (via SETTINGS) then why would a WINDOWUPDATE also not be allowed? I look to the following statement and think that accounting may already be required to track the PRIORITY and SETTINGS updates. The WINDOW_UPDATE accounting seems like it is very similar (or may be close to identical depending on the implementation).\nWe want to strictly limit the amount of state that is maintained for non-open streams. Currently, that is limited to priority. Opening that to more state isn't really necessary. For WINDOWUPDATE, there is only the case where you want to open the flow control window immediately after opening a stream, which currently causes a potential stall on the remote\/sender side. As specified, you can create a priority graph that is separate from the other stream state (flow control in particular). If we allowed WINDOWUPDATE in idle (or reserved), then you need to commit more state. We already acknowledge that priority information needs special handling outside of the key stream states, so the risk is both accounted for, and already present. The most important factor though is that priority is expendable. Under DoS, you can drop priority on the floor. You can't do the same for flow control. Therefore, any state commitment needs to be strictly bounded for that.\nRight. 
So for these cases the peer creating the stream will have to wait to expand (or shrink) the window until after the stream has transitioned out of the IDLE state. So state may not be required; however, if a peer wants to change the window for that stream then state will be necessary to keep and act upon when the state transition occurs. I'm not sure I follow. WINDOW_UPDATE is allowed in reserved according to . Priority is an interesting portion of the spec. It almost seems like you could ignore priority altogether and be compliant :)\nHmm, strange how you forget these things. Yes, you can send WINDOWUPDATE in \"reserved (remote)\". This allows a client to affect the flow control window of the server that has promised a push. This is something that I know Firefox uses so that it can resume stalled pushes when it knows that they are needed. No extra credits over the initial window size are given to pushed streams until they are recognized as necessary. You can't send WINDOWUPDATE in \"reserved (local)\". After all, the other side (the client) won't be sending anything on that stream. So none of this really affects processing of the setting :)\nI think I agree with you in general (it is not necessary). I agree that minimizing state on IDLE streams seems like more of a driving force, and I also recognize that the peer can wait to send WINDOW_UPDATES until transitioning out of IDLE. I can also imagine a scenario that if an endpoint puts the node into a priority tree, it is possible they have an estimate of what they are going to be sending. It may then follow the theme of \"a more flexible expressions of priority\" (from ) to also be able to control the window of those streams (at a finer granularity than the SETTINGS frame). Anyways thanks for the discussion and clarification.\nEditorial problem, I hope: The spec uses the phrase \"current stream\" in two places (both related to PRIORITY), but without any definition of what that means: 1) \"5.1 Stream States: ... half closed (local) ... PRIORITY frames received in this state are used to reprioritize streams that depend on the current stream.\" 2) \"6.3 Priority ... Note that this frame could arrive after processing or frame sending has completed, which would cause it to have no effect on the current stream.\" I don't have alternate wording proposals because I'm not exactly sure what those sentences are trying to say. Note that I'm only talking about \"current stream\" in the singular. I see no confusion with the use of \"current streams\" (plural) in \"6.9.2 Initial Flow Control Window Size\".\n\"subject stream\" or \"identified stream\" would work effectively well. I'll take suggestions.\nBoth sound fine, but \"identified stream\" is already used often in the spec and \"subject stream\" is never used.\nExcuse me for trolling this issue but I couldn't help but notice how evil the issue number is."} +{"_id":"q-en-http2-spec-0a13abf1b3a70cc4092c38eb4acfc4accbc2a4136cd80967e30edd3c1a3afaf2","text":"Hey so I have a bunch of typos I found and this is one, \"an\" –> \"and\" in the \"Limits on Header Block Size\" section"} +{"_id":"q-en-http2-spec-09fda89caf8287ca3279975ac7fe57c40b38a4e85e652feecdf480e77b0c451b","text":"New safe methods can be defined. Does this mean that the client needs to respond with a stream error for methods not known to be safe? See URL\nThat would be the only logical conclusion. If the client cannot determine if the method is safe, it must assume that it is not.\nIndeed. 
We just need to fix it."} +{"_id":"q-en-http2-spec-e66635bb0343d189db4073de1ec4fd473b7663f247cc8d7f328a22aa267d3a0d","text":"The latter part is vestigal, from the time when gzip C-E was mandated. Remove it. Here, I'm less concerned."} +{"_id":"q-en-http2-spec-70bc5cc6e9199f0e74c8513cf672a142e6805a0b8bd1ddbaf37580b8db798375","text":"Only if it makes sense to do so. It's a perfectly good word that has been spoiled by an unnecessary inclusion in some popular RFC (2119, I think)."} +{"_id":"q-en-http2-spec-78318a5f439a3dd4da1a24f6e846601b1896e88d14a3ef276e1a2b710cfa521a","text":"There seems to be some disagreement about this, but I think that Roy is right. It's not critical how it is registered, since it shouldn't appear in a valid request anywhere, but it meets the criteria.\nI really don't think it matters, so I'm fine with it either way. Theoretically, there may be some compat value in having it listed as safe\/idempotent since an intermediary may be more likely to let such methods pass. But the time slice in which this registry has existed and HTTP\/2 hasn't will be miniscule -- I expect any HTTP\/2-unaware intermediary won't look at the registry values either.\nTo the extent that it matters (i.e., very little, as you point out), I think that the intermediary example weakens the case for safe+idempotent. I really don't want HTTP\/2 to pass an intermediary without triggering some deliberate recognition; that has been the source of some very serious bugs in the past."} +{"_id":"q-en-http2-spec-6683291f67d88da4adef80977240f190388e8e28cbcda6b0901b88487e2583ec","text":"There were lots of small changes as a result of . The only change of note is URL, which adds text on what a receiver should expect and what they are obligated to do for streams in the \"half closed (local)\" state."} +{"_id":"q-en-http2-spec-ce384c31d5b59c6b5d7ee69e1ffeed11e0818b88c12e8f3ec001ef5b184de077","text":"had a few actionable comments that seemed relatively easy to handle. In the absence of feedback from NAME and NAME I responded. Of note: the introductory text in URL about the choice to make the format inflexible."} +{"_id":"q-en-http2-spec-80908227d8a542b32bdb3e7256a89aa14b061b3847ddc11c59178fb0d9f0fa6b","text":"Kathleen Moriarty's IESG review noted some confusion that I think derived from the use of the word \"prohibited\" in relation to the cipher suite black list."} +{"_id":"q-en-http2-spec-aac120704501e7b004077299fcdd8096942c0e02aaf6f0422eb2be3956248045","text":"Proposal to replace . I think that this is a little clearer this way. Also, it's strange that this was not added earlier, but then sometimes the most obvious things get missed. cc NAME\nSo I'm not seeing how the new sentence is any different than this one two paragraphs below: A receiver MUST terminate the connection with a connection error of type COMPRESSIONERROR if it does not decompress a header block. My chief concern (merely editorial) is that HPACK consistently uses the phrase \"decoding error\" whereas here we are using the phrase \"decompress.\" I think it good to make clear that any and all decoding errors MUST result in connection errors of type COMPRESSIONERROR. Picky, I know. ;-)\nI think that Martin's proposal tries to specifically address your concern. The sentence two paragraphs below address the case of an endpoint deciding for whatever reason not to decompress a header block: this also has to result in a COMPRESSION_ERROR. 
So I think that both are useful.\nAhhh - thanks NAME Makes perfect sense and I agree."} +{"_id":"q-en-http2-spec-2f5d2c193515ab8c4bcf961df66a12d65af1c5cce586e26c617baf651cb3eaf9","text":"Expands section 7.1.3 to describe characteristics of headers needing the protection offered by the never indexed literal representation.\nI think that this looks good now."} +{"_id":"q-en-http2-spec-145cab2a026da8c716aaf26ab28c7a3b5f87617f4bdfb703706a44e47ffbaeff","text":"Fixes for those things that were obviously busted. Which (I hope) addresses the comments, with one exception (Richard's discuss contained suggested edits that should clear that one up)."} +{"_id":"q-en-http2-spec-579689571c7b98e697aec60724f0f59fe164500cfb5ffaf60891fb0eae6e6542","text":"Multiple editorial commits....\nJust an fyi... I have additional pending editorial edits, some of which are a bit more extensive. I will hold off committing them until this pull request is processed.\nOk, try now...\nNAME - please don't do ANY reformatting in pull requests; we need to be able to precisely identify changes and their provenance. Thanks,"} +{"_id":"q-en-http2-spec-29d35172e949f4597778d3c081761c4bae156603b8092d9e275a9490003af3f9","text":"Moves considerations of defining an ALPN token for a non-TLS-based protocol to 3.1. Changes \"token\" to \"protocol identifier\" in 3.3."} +{"_id":"q-en-http2-spec-47ea61d48c276f4cca8d1483bea56193fc196f81030f1ca019bc0c2085aee9ee","text":"LGTM\nMerged by RFC editor.\nThis wasn't obvious, but all descriptions of padding reference DATA: However, this is subtly wrong, as NAME notes. The actual text should be:"} +{"_id":"q-en-http2-spec-db2b1a90561be8a5f22ea7f84f13f631cf80628f9f9e6a5bf8f67446d83da1ff","text":"This was confusing a few people who weren't clear if the prohibition on frame sending for CONTINUATION trumped this or not. This isn't a perfect fix, but it should be enough.\nLGTM\nMerged by RFC editor."} +{"_id":"q-en-http2-spec-eda8b981215a927642e333df58202df058e0566f2872974658a399b442b52924","text":"This is a minimal proposal for solving .\nLGTM\nLgtm as well. On Mar 17, 2015 9:47 AM, \"Martin Thomson\" EMAIL wrote:"} +{"_id":"q-en-http2-spec-6ccce857f32ea132fbc37894380e9bfc8154011873ff5fa8064f7a57b6a4479f","text":"After some sleep, I've read through the section and noticed a few other minor inconsistencies. This alters the structure slightly, as well as replacing the first sentence with a more accurate one. Supersedes . .\n+1, thanks\nLGTM\nMerged by RFC editor.\nWe are working on a HTTP\/2 library and we are a bit confused and we would like to get it right. Is it correct that after a sender sent a frame with some to a receiver, the sender may still open new streams, even with stream identifiers higher than the sent in the frame? That is the frame and the only limit the ability of the receiver? If this is correct, and I believe it is, then I find the following sentence in the first paragraph at confusing. In my opinion it should say something of the like Thanks.\nI am also interested in getting clarity on this topic. It seems what the \"spirit\" of the is something like: Each endpoint maintains the goaway last stream id from its peer. When a goaway is received then no local streams can be created (i.e. streams can only transition to ), and all local streams with > from the goaway frame are closed (or as the spec says ). 
Remote streams can still be opened, and to ensure a graceful connection shutdown a goaway frame should be sent in response.\nNot quite: Upon receipt of GOAWAY, new streams can be created, but with the following restrictions: those with stream identifiers higher than the will be ignored. Those with stream identifiers lower or equal are subject to the usual risk of loss, but they might be processed. Success is determined by what happens on the stream, of course.\nNAME - Can you clarify the following from : This is what led me to believe that streams cannot be \"created\" after receiving a ?\nHmm. I'd forgotten that. That is in conflict with the graceful shutdown we've specified. I think that we need to take this to the list. This seems like a genuine bug.\nThanks for the quick response NAME Would you prefer to bring the issue to the list?\nI'm just trying to formulate something.\nhmm I see NAME As a follow-up, does the in the GOAWAY have to be the identifier of a stream that actually exists or could it also just be a \"random number\"?\nThe intent was to say that the receiver of a GOAWAY can't create new streams higher than the value in GOAWAY. Let me check back in tomorrow; maybe fresh eyes will add perspective.\nSo the intention of the change is to allow the receiver of a GOAWAY to still create streams lower than or equal to the received last-stream-id, accepting the fact that those streams are ignored by the peer? In the graceful shutdown process described in draft 17, the server first sends a GOAWAY with (2^31) - 1 and starts to ignore any new streams created by the peer; after some time (this ensures that the GOAWAY has reached the client), the server sends another GOAWAY with an updated last-stream-id reflecting where processing has actually occurred.\nNAME the intent of the change is to not change anything :) The process has been unclear, but the point is that if you receive GOAWAY with a last-stream-id greater than anything you have initiated, you can use what space remains for new streams. What that means is that you can issue new requests on that connection while you take the time to create a replacement connection. This was considered critical for . That means that 2^31-1 serves only as a warning of impending closure, though in theory it need not change any behaviour for the existing connection. I'd take steps to make a replacement connection though.\nThank you for the clarification!\nHere is a link to NAME doc from the mailing list which provides additional clarity to this issue URL\nLatest PR is much more comfortable to me. Thanks!"} +{"_id":"q-en-http2-spec-177875b96af84c90c0a03d68df09450e37bf32e319db57dc4bfa8550541a7b5f","text":"LGTM\nMerged by RFC editor.\nMinor bug that needs to be tracked... I assume the text in section 6.2 needs to be modified to add \"idle\"?"} +{"_id":"q-en-http2-spec-5670fcd20d8f41cc4e3e4ae6a4908a67b5eae11d9f8d0a692ad3d57b23cc4f95","text":"Thanks for the PR NAME Should we move the issue discussion (URL) to this PR?\nLGTM\nNAME if you have comments, please let me know ASAP.\nMerged by RFC editor.\nhas the following text: Draft 17 added the changed to . Can someone clarify why streams in other states would be excluded?\nThose are the only states in which data (or DATA) can be sent.\nDoes your stream have to be eligible for data to be sent to have its windows manipulated? For example are IDLE streams intentionally excluded? Would this mean that the initial settings frame wouldn't apply to any streams? 
I think it may make sense to be able to alter window sizes of streams in other states because of dependencies\/priorities and how the windows are distributed amongst the tree.\nWhile it is true that an idle stream is affected, the text there is to highlight the fact that streams with active flow control windows (i.e., ones that might be in use) might be affected.\nIs it possible to clarify this? I think there are a few options: If we are going to explicitly call out states: we should call out all states where it could possibly impact streams. If we are not going to explicitly call out states: Use a description which implies stream state which is defined elsewhere in the spec. Something like: Note this is just an example... The current text in the specification is in conflict with this clarification you have provided in this issue. It seems like the following sentence in the spec may be sufficient by itself?\nNAME - Ping.\nThe inclusion of stream state in your clerical response leads me to another question related to the priority tree, stream state, and flow control windows. indicates that . The flow control window is a resource which can be divided amongst nodes in the priority tree. However the frame can be sent\/received in all states, but the frame is more restrictive in its allowed states. This discrepancy makes controlling the resource (the window) difficult (i.e. providing a mechanism to avoid deadlock). Is there any way to clarify or allow the resource (the window) to be controlled adequately along with priority?\nAs far as it goes, yes, a server might allocate flow control window based on priority. But that only applies to the logic that an implementation uses to decide how to allocate credit to flow control windows, which is intentionally (and expressly) unspecified. I don't think that it needs a special call-out.\nI agree that it doesn't need to be specifically called out that the window can be treated as a resource, and that this is an implementation detail. However calling out the stream states when referencing frames that impact the flow control window may be overly restrictive. For example the are not the only states in which a stream may be in the priority tree. Here are what I believe to be the relevant portions of the specification which relate to flow control window and stream state. seems to be very careful about stream state and doesn't necessarily limit to just is much more restrictive and declares that is not allowed in , or . For there is special mention that these frames must be ignored, or may be an error (if it arrives ). If the clarifications in this issue are applied then it seems like the impact on the flow control windows is further limited to .\nBTW, I consider the state associated with having a stream in the priority tree to be necessarily separate from the state that is maintained for an \"active stream\" (i.e., one with active send or receive state). Lumping them all together exposes you to some potential attacks.\nCan you be more specific (particularly about the )? I guess what I'm after is twofold: Consistency amongst the sections that discuss window update and stream state. Not necessarily preventing window_update being controlled when a stream is in the priority tree, or a rationale as to why it makes sense to do so.\nThe amount of state you commit to a stream that is active is potentially large (flow control windows, primarily). That is strictly bounded though by your value of the MAXCONCURRENTSTREAMS setting. 
If that state has to live on the same lifetime as stream priority, then it is not covered by the setting and you are potentially in a position to commit a much larger amount of state. You can (obviously) manage this by careful management of the transition to and from the active states, but I believe that it is better to instead have separate tracking of priority.\nOne way to implement this cleanly is to have a (small) priority state for streams that is created and managed by a garbage-collector. Active stream state can be a separate object that maintains a reference to the priority state (that might be nullable, or you could require that only closed or idle streams be exposed to the GC).\nI think we are straying into implementation details a bit, and we are in agreement that the state could be managed, but in the scope of this issue....I re-read your PR and I am fine with it :)\nNAME Does \"old value\" here means the old initial window size?\nYes"} +{"_id":"q-en-http2-spec-34f33c90add0767f60792fc06725e7ac20f2cf61ee8b6a5d3ed8a9c31f87dbd4","text":"This makes it clearer that they can't be sent otherwise.\nLGTM\nMerged by RFC editor.\nhas the following clarification added in draft 17: Should this be instead of ? If not I think we have an inconsistency with which has the following for the section:\nNAME - Ping.\nThis is merely a problem with perspective. As in, there is none. I can provide that thusly: Does that work?\nThis is an improvement. I'm not sure if we can avoid it but it is a bit confusing with respect to :\nI don't think that's a problem. The (remote) peer isn't sending, but the (local) peer can.\nAgreed. Ship it :) Thanks!"} +{"_id":"q-en-http2-spec-e83d5e0e6d4cc2de02c71c2a0b4e250caa677b896eb94eb25bf608efd87072c6","text":"Same edits as before with additional tweaks based on feedback. I've adjusted the formatting on my edits to match the current styling as much as possible (must... resist... OCD... tendencies ;-) ...). Specific edits can be reviewed in the commit log.\nCareful, you could end up with a new job if you keep this up."} +{"_id":"q-en-http2-spec-c8be626a90fc39b11f5f8cc82095bb185d7404b97b3812eff25ed6b0753063fd","text":"A lot has happened to TLS since we did RFC 7540. This tries to capture that succinctly. It rolls in changes from RFC 8740.It mentions early data and RFC 8470.It updates references, citing TLS 1.3 rather than 1.2 for a generic TLS reference.\nA few small changes are included in , which could be integrated easily.\nI won't create the pull request just yet, but here are some proposed changes: URL NAME do you think that you could review these? I've paraphrased some of RFC 8740 and I want to make sure I haven't missed anything.\n(The GitHub link did something silly, but I'm assuming 837d46a5ff9e23a62cb719f69be7f1d10bf8da80 is the right commit and just looked at that.) s\/the handshake\/\/ Though I wonder if this sentence can just be omitted. I needed to address it in RFC8740 because RFC7540 didn't have a TLS version dispatch. But now the TLS 1.2 features section is explicitly mentioned as being 1.2-only, so that exception already doesn't apply. Otherwise LGTM. (I think the default values in server settings is implicit from the fact that we haven't received anything from the server yet, but whatever.)\nThanks. I've updated my branch to remove the renegotiation piece. 
I've updated that link (which I should have checked)."} +{"_id":"q-en-http2-spec-d47577aa920883d427d69692283209c19d69326307c1662929879115d95f3ee7","text":"The thinking has evolved considerably since the original reference was made. This is the right way to refer to this concept.\nSo the re-request review button doesn't appear to work; NAME any objections to landing this with the new reference target?\nThe current reference is in the WHATWG \"infra standard\": URL To use this, we need to ask for the \"tracking-vector\" anchor to be made permanent.\nURL filed. I have changes for a pull request .\nI'm a bit surprised you chose infra; the fingerprinting guidance document in the W3C is much more complete. There are also tons of academic references that have better descriptions; to start, Eckersley's original paper.\nI was looking for a direct replacement, which is what that is. I can cite URL if you think that is better. Or URL if that makes more sense. I never know which is better.\nProbably the TR - whatever bibxml says, I'd think.\nI'm going to go with something simpler. RFC 6973 has an adequate definition.\nNone whatsoever."} +{"_id":"q-en-http2-spec-d07dbe3b8f566fd57b940fdcba8abd68b4fa31ea5e56f311bc35edeab838d13f","text":"This is a first stab at . I've gone for a minimal change here, simply clarifying that only HEADERS and PUSHPROMISE frames have the effect of implicitly closing streams. While I was writing this I did have a question that may need to be raised on-list: do HEADERS and PUSHPROMISE only implicitly close streams if they are accepted? That is, if a client sends a HEADERS frame that triggers a stream error, does that HEADERS frame still implicitly close all streams with a lower stream ID that are in the \"idle\" state? This may also be worth clarifying.\nLike it. Take the first, maybe take the second, and we can merge this one."} +{"_id":"q-en-http2-spec-76f70be117e2b5350a5e2b7c62eb0cd11a60606c9faec82162cbb296f46322f7","text":"Thanks NAME ready for re-review I think.\nOh, you proposed changing \"handling\" to \"reporting\": did you want to do that pervasively in this document?\nLet's take that up separately.\nReady for merge I think.\nI agree, but as this is substantive, let's run this by the working group.\nAs there is a draft deadline, I will open an issue for my niggle, and ask for forgiveness rather than permission.\nIt's not clear to me from RFC 7540 how an endpoint should behave upon receiving a frame that is erroneous in more than one way. For example, suppose a server receives the following frame when it has stream id 1 in half-closed (remote) state: According to §4.2, the frame \"MUST be treated as a connection error\" and the server \"MUST send an error code of FRAMESIZEERROR\" (because the length of 4 \"is too small to contain mandatory frame data\", namely the weight and header block fragment fields); According to §5.1, the server \"MUST respond with a stream error (Section 5.4.2) of type STREAMCLOSED\" (because a frame \"other than WINDOWUPDATE, PRIORITY, or RSTSTREAM\" was received for a stream that is in \"half-closed (remote)\" state); and According to §5.3.1, the server \"MUST treat this as a stream error (Section 5.4.2) of type PROTOCOLERROR\" (because \"a stream cannot depend on itself\"). That is to say, according to the RFC, the server MUST (at very least) respond to this frame with one GOAWAY frame and two RSTSTREAM frames. 
However, sending all these frames would violate other requirements: According to §5.4.1, \"after sending the GOAWAY frame for an error condition, the endpoint MUST close the TCP connection\". This obviously prevents other frames from being sent. According to §5.1, sending a RSTSTREAM transitions the stream to closed state, in which \"an endpoint MUST NOT send frames other than PRIORITY\". This prevents any further RSTSTREAM frames from being sent for the same stream. In this particular case, it seems reasonable to me that the frame size error should halt further processing of the frame such that neither of the stream errors are even detected: after all, an incorrect frame size may indicate corruption (in which case the rest of the frame's data must also be suspect). But this is neither clear from the spec nor might other cases be so clear cut. I see three possibilities, with a reasonably strong leaning toward the first: Clarify that further processing of frames SHOULD (or MUST?) be halted upon encountering an error (I suspect this is what most implementations currently do, irrespective that the RFC says they \"MUST\" behave otherwise?). But then the order of processing becomes significant: is it right that this should be left as an implementation detail, or should it be specified? After all, the above frame might trigger a STREAMCLOSED stream error from one server and a FRAMESIZEERROR connection error from another, with very different consequences: how might these differences impact intermediaries, caching and retries? Better I think for the order to be specified. Define a precedence to errors, and mandate that that with highest precedence be sent (for example, connection errors might have higher precedence than stream errors). But then endpoints must continue processing frames they know to be erroneous simply to discover whether any higher precedence errors also exist (at least until an error of the highest possible precedence is encountered). Permit (require?) all errors arising from a single frame to be transmitted irrespective of any other errors the frame may also have triggered. Again, endpoints would then continue processing frames that they know to be erroneous—but the peer would then be fully informed of all problems in order that it is better able to recover.\nI think the simplest explanation is that connection errors are those that suggest there's no way to (safely) continue attempting to parse anything. I suspect most implementations would find that this frame is not long enough to actually be a HEADERS frame and not attempt to parse it as a frame it cannot actually be. Thus, while a given implementation could possibly discover one of the stream errors first (thereby sending a RST_STREAM), once you discover the connection error, you're sending the GOAWAY and throwing the connection state away. The logical implementation is likely to find the connection error immediately and not read far enough to find the others.\nI agree, but this is not specified. Is it okay to leave as an implementation detail, even if most implementations will do the \"logical thing\"? Also this was just one example of a frame with multiple errors... others may not be so straightforward.\nThe question we should ask is: Is there any behavior that would change that would cause it to not interoperate here? I suspect that, regardless of how one interprets this, processing must stop, and the client may or may not receive an error, thanks to the vagaries of packet-loss. 
If this requires any clarification, I'd pick the path of asserting that any connection error stops any other processing.\nActually I also found error handling quite tricky to do, especially regarding when and what to send in GOAWAY frames. One really needs to think in terms of error severity to implement a form of funnel so that more severe errors can still be emitted after less serious ones. One example that quickly comes to mind is that we can send a GOAWAY frame to indicate a graceful shutdown of a connection yet this one is not an error and may be followed by other GOAWAY reporting protocol violations. I also faced an annoying situation related to TCP where you can't send a GOAWAY frame due to a buffer full situation, and closing after this causes a reset to be emitted due to pending incoming data. So we actually have to enter an input drain state in error situations, which further adds to the implementation complexity.\nQUIC has that I think h2 would benefit from.\nSee for some proposed text to address this. We probably need to workshop it a bit.\nThanks!"} +{"_id":"q-en-http2-spec-7e7bc84172951932738fc9d21adbca859547ec6200003ee8c8d4fc4f40a0fb02","text":"HTTP\/1.1 to HTTP\/2 Upgrade was never widely deployed and has no particular defenders. This change removes the text explaining and defining the mechanism. This change still requires discussion on the mailing list.\nNAME I've returned the two IANA registry entries and pointed them backwards at RFC 7540, as the sections that define those fields were removed from this document. I'm not actually sure that's the right IETF process there. Happy to take guidance as to what I should be doing instead (e.g. putting in a pair of dummy sections saying that the field and token don't have semantics any longer and referring to those instead).\nOk, I've replaced the registrations with tombstones, this is ready for another go around.\nIt would be good to more explicitly say that the upgrade mechanism is NOT RECOMMENDED, both because we're encrypting everything and because of the it enables (even if they're mostly shoddy implementation, it's more surface area).\nSounds like a reasonable addition, but it might be better to track that with a separate change.\nHappy to add that here or make a separate change, either way.\nLooking into this a little more, I think that Section 10.3 () is probably where some extra warnings might be worthwhile. The reason I might prefer a separate change is that we can hit Upgrade, Transfer-Encoding, Content-Length, and other Connection fields at the same time if we do it right.\nThis has not been implemented, so can likely go. See the discussion on URL for more context.\nI’m in favour of removing this. It’s clear that the ecosystem has not found a use for it.\nAs a co-author of this one, I agree as well. This resulted from a misunderstanding of the use of the preface that everyone happily uses in clear to distinguish between H1 and H2 (and hence serve as a much easier upgrade).\nSeems to have support; does anyone have an issue with this?\nAs stated in today's meeting, this sounds good to me, but it's worth posting on the list."} +{"_id":"q-en-http2-spec-eac425a06ad6a90bcbe640b3bf3a1a87b0e44b8e4f5bc5374220e9ca562de27c","text":"After carefully reviewing the Netflix advisories, it was clear that the problem was that implementations didn't read the original text. The TCP window reduction attack in CVE-2019-9517 was not mentioned, and it is a clever exploit. The rest were clearly already documented. 
I took the opportunity to restructure things a little bit. The first few CVEs were about causing the victim to enqueue a bunch of frames and then exploit the fact that this queuing was badly inefficient. Those got a list of their own. I purposefully left CVE-2019-9516 out. Though I am sad that people didn't free() after malloc(), this is pretty much just a systematized memory leak and I don't feel like calling that out in an RFC. I've cited the disclosure page, so I think we're covered there anyway.\nThe spec includes language on this, but it turned out that many implementations failed to properly safeguard themselves. Clearer descriptions of problems based on this experience would be good.\nI don’t think it hurts to tighten the text up here and add more examples, at least so new implementations are better able to learn from past mistakes. However, we should probably keep in mind that this cannot possibly be an exhaustive list of exhaustion attacks.\nThis is the Netflix write-up, which might be a good reference: URL I can only imagine citing a markdown file on GitHub will raise eyebrows, but it's not a bad summary of issues.\nOne minor suggestion, otherwise I’m happy with the new text."} +{"_id":"q-en-http2-spec-c05e3c26ee0be6a851ab2c18e97685091a74392b34928f4339d99a435a2721b9","text":"Additionally, section 4.2.3 contains a lot of text that is similar to 4.2.2, but there may be a reason for that.\nI'll try to remember to fix the content-length thing."} +{"_id":"q-en-http2-spec-5df1a8508ce93b71854766bf7470188515b272e445ecde1332267eabc2f4221f","text":"The previous text implied that a value of 1 was a problem, which was somewhat in tension with the default being 1 and servers mostly not sending the setting. So clearly what was being checked by client was the frame, not the value. This changes to checking the setting as it arrives, changes where that text lives. Also, this changes from \"endpoint\" to \"client\" or \"server\" as appropriate to match actual practice.\nThe definition of ENABLEPUSH implies that it can be set to 0 by a server, but the effect of that is meaningless as it results in treating PUSHPROMISE as an error. PUSH_PROMISE can't be sent by a client. Should we update the definition?\nI don't think it's a good idea to start forbidding ENABLEPUSH = 0 from servers. It's clearly acceptable from the perspective of the RFC, and while it's a silly waste of bytes it doesn't introduce any protocol ambiguity. I do think we should clean up the text in § 6.5.2, which currently reads: However, Section 8.2 implies the default value for servers is 0: Arguably these two sections should be reconciled. In principle we could simply say that the value of ENABLEPUSH has no effect for servers, as the protocol forbids servers from pushing anyway, and remove the requirement that clients error.\nI think that you mean clients are forbidden from pushing, but otherwise I would agree. The initial value for ENABLE_PUSH is defined as 1, which muddies things further. I think that we can change the definition to say that the setting has no effect when sent by a server and drop the connection error requirement.\nWould we leave the connection error as an option?\nI'm inclined to remove it as being inconsistent with reality. Though we might have to note that some implementations might do that (because they haven't updated) and so advise strongly against servers sending any value for this setting.\nIs it inconsistent with reality? 
Presumably some implementations actually do police the MUST here.\nTwo things are in tension: policing probably happens in the sense that if a server sends ENABLEPUSH with any value (especially 1) a client might choke on it; the initial value is 1, so those clients should probably choke on a connection preface from the server as well. Removing the requirement that ENABLEPUSH result in an error avoids the second, but you need to keep a recommendation not to send it or you hit the first.\nI suspect most implementations treat initial values somewhat specially, and while they enforce that they cannot receive a value of 1 from a server, they don’t prevent the server’s setting value having the value of 1.\nThis might be one worth circulating with the WG tbh, if only to get a sense of what existing clients actually do. We can also investigate putting together a test suite.\nOne minor language nit, otherwise seems great."} +{"_id":"q-en-http2-spec-ce978e639a4e50db0eaaaa03fa31a282d6a2db72ddfcc19f981ef2410d0dad5a","text":"NAME says: It's not very widely used and many implementations either don't support it at all or don't support it in a way that improves end-user performance. This sparked a lively discussion at URL\nI’m open to dropping this, but I think the case needs to be made for what the burden is. It’s one of the easiest protocol features to just ignore: send SETTINGSENABLEPUSH=0 and then you can remove all awareness of PUSH_PROMISE from your state machine altogether.\nCrazy thought: what if we made part of HTTP\/2, and then factored server push into an extension?\nOr what if we minted a new ALPN () and flipped the default value?\nOr left it as is, since it is used outside of browsers. The browsers have an API problem that has not been solved, and it is unlikely that push will be effective there (except as a competitor to inlining) until that changes.\nI have no particular objection to flipping the default under a new ALPN. Factoring it out to an extension is harder, but not impossible -- it's pretty embedded in the state machine, but we might be able to abstract that away. Removing it entirely seems challenging for several reasons. The purely process one is that RFC 8030 requires Server Push, and removing a feature used by a Standards Track RFC presumably requires deprecating that other RFC.\nI should note that my preferred outcome here is \"do nothing\". I'm just spitballing ideas.\nFlipping the default if we mint a new alpn seems sensible based on the comments above. This would also align with HTTP\/3. Moving it to an extension is probably more work than it's worth, I'll admit. I wasn't aware of RFC 8030. Is that an actively used RFC?\nPush is being used outside of browsers (curl supports it, once implemented for a particular company that uses\/used it pretty widely across the globe), although I suspect that outside of browsers we also have no real means of figuring out exactly how much or little. Saying \"do it in extension\" will probably equal killing it, but I also think that removing it from the spec is more work than it's worth and just toggling the default seems like the best middle ground.\nWe may want to be a bit relaxed about flipping the default. As the setting is client-only the default value never really exists on a connection: the client’s SETTINGS are the first frame it sends. The reason to flip the default is to save 6 bytes on all connections. 
This is not nothing, but it’s not a huge win either.\nI agree that it's not worth changing anything here, except if we want to be sure to cause some confusion between implementations. The first frame sent is the one indicating support (or lack thereof) by the client. Maybe we should just add a paragraph in the PUSH section mentioning that 5 years after the release, PUSH has still got very low adoption among implementations and should not be assumed as a granted feature for protocols sitting on top of H2.\nI think NAME suggestion of adding a note about deployment and not expecting wide support would be very helpful. My biggest concern with keeping it in the spec is that people will expect it's widely supported and its use is recommended for most use cases.\nFeeling here seems to be supportive of no normative changes, but possibly some editorial additions clarifying the status of push. If you feel otherwise, please say so soon.\nSGTM. FYI, Chrome is expected to drop support for HTTP\/2 server push in the near future and has never implemented push for HTTP\/3.\nNAME has it been announced somewhere? I can't find it.\nSeveral popular web frameworks actively use Server Push including (mostly for assets) and (for API relations). Server Push is also (currently) supported by most browsers as well as by NGINX, Apache and Caddy. On the APIs field, , which aims to replace GraphQL-like document compounding, is also gaining adoption and relies on Server Push (even if it can fall back on preload Link headers and Early Hints too). I just published a benchmark showing how useful Server Push is for the specific use case of web APIs. Under certain conditions, relying on it can be 4x faster than using compound documents: URL As pointed out in martinthomson\/http2v2, for these use cases, Server Push could be replaced by Early Hints or maybe by something using WebTransport, but these specs aren't currently implemented while Server Push is broadly available. Removing support for Server Push in Chrome will hurt all these use cases. What's really missing with HTTP\/2 Server Push is a way to prevent pushing resources already stored by the client. could have been a solution, but the work on it looks stopped.\nURL\nNAME suggested calling out the current status: lack of support in browser APIs for push, pointy edges in the usage, and the limitations in the spread of its usage. I think this is a reasonable editorial change.\nNAME to draft a document to encourage users to set the value to 0 by default and move on.\nDiscussed at Feb 2021 interim; intent to document concerns \/ caveats, recommend sending setting = 0. NAME to PR.\nLGTM. Very nice. Thanks Mark."} +{"_id":"q-en-http2-spec-f70c244996816eb565db63d53a02e036e8be294a0d07c6f0d80fe2420a6a4239","text":"In the spirit of the naming simplifications of the recent HTTP changes, we can talk about settings in this revision. This aligns better with HTTP\/3, though that still uses \"settings parameters\" (with that ugly double-plural). cc NAME\nI mean, it's not too late to Auth48 the same change into H3.... 
Not quite, anyway."} +{"_id":"q-en-http2-spec-c763f8cb64d2ab471b55344cac4c378dea71dfdffd4ffea5a93fb891f4b799cd","text":"From IANA: As we have the patient open already, it might be OK to change the registry name to follow modern conventions.\nTo clarify, IANA would like us to delete the word \"Parameters\" from the following text?"} +{"_id":"q-en-http2-spec-55e75b40b68c30b2d9eebc2f031c8d028780f44b61a5f45f26f1f7e7144b05fa","text":"The current rules state that you need to include text in abstract and introduction. This will result in redundant entries for the specs with other changes that add 7540 references, but we can sort that out later.\nPicking my battles more judiciously (as much as I'd like to fight this one, it's a low reward, high risk situation).\ninformatively, as they are obsoleted by this spec\nRules are boring. Be a rebel without a pause. Omit that text."} +{"_id":"q-en-http2-spec-b34d6f355e7b90e93187946a8a3307dfb0e8a838af0d678b85431be51daca457","text":"This required a little careful rewriting of text that only recently rewrote. Hopefully this retains the improvements of that rewrite.\nRight now, we have a lot of 1.1-specific language. In theory, it should be possible to remove 1.1 references and use core HTTP semantics to describe the protocol elements. That would be much cleaner, but it could be tricky.\nNice. I think the retains the original meaning of the text well enough, and is as clear as it can be without ever needing to normatively invoke HTTP\/1.1. Needle well threaded."} +{"_id":"q-en-http2-spec-bbb49e413b6fe446e876b14d07dcaaf368fcb3291ab63f59906165aeb6e6fbb1","text":"This encodes the conclusions from interim discussions: CR, LF, and NUL cannot appear anywhere in field names or values. SP and HTAB cannot appear at the start or end of field names or values. COLON cannot appear anywhere in a field name, except for the colon at the start of a pseudo-header field name. The strong requirements about validating fields according to ABNF has been replaced. The text is clearer about how the pieces fit together: it is HPACK that allows any octet, whereas HTTP\/2 makes certain choices invalid.\nThis allows whitespace and control characters in field names. Is that intentional?\nThis was intentional, but it can be revised by expanding the set of characters we prohibit. I had imagined that this would not be different from HTTP\/1.1 where \"Foo\\tBar: ?1\" might parse, but is not valid in the same way that \"Foo\\Bar: ?1\" is not. I thought that we had agreed that a close policing of the field name grammar was not required from HTTP\/2 implementations.\nNAME take another look. I've refactored your suggestion.\nYes.\nNAME if you can suggest text that would address your remaining concerns, that would be good, but I think we're going to include this text in the next draft. Of course, there will be ample time to improve on this; we just want to make sure that we have an updated draft to discuss at the upcoming interim meeting.\nNAME What I think is missing is a description of what this change is trying to achieve. The text identifies that there is a set of , but notes that that is a super-set of of . It then gives text description of what the set of is, but it is not clear exactly what the intent of that set is meant to be? The possibilities are: == union . But if this was the case then why not define it in terms of the HTTP ABNF? == union union . This seems a strange thing to do, but the motivation was originally given that this was describing what existing implementations do? 
If this is the case then it should be described as such. == intersection . I think this is what the text is actually striving for, but a) it would be good to say so; b) it is unclear to me why h2 needs a superset of option 1. above... if so it should be said why. If it is indeed option 3. then I assume that there are some fields in the set that are nether in the nor sets. What are those fields and what should an implementation do with them? We are told later in the update that if we are an intermediary then we can validate strictly against the HTTP ABNF and that if we don't we risk invalid fields (and by inference of the section title encapsulation attacks). But what do we do with those fields if we are not an intermediary? Do we pass them onto our applications and let them work it out that they are not members of ? There is nothing there that says we MAY treat such messages as malformed - so at the very least, it would be a good addition to the text to say that non-intermediaries can do stricter validation. So in short, I can see lots of text defining , but it is entirely unmotivated as to why it is > . Finally, to harp on my original complaint, I still do not understand why is defined in 4 bullet points of prose that have already confused some readers. There is a paragraph about how field names have to be lowercase and then the bullet points ignore that and define all visible characters except space (so if space is visible does that mean tab, CR, LF are also visible?) and except colon. Some simple ABNF would be so much better: Or better yet, just go with HTTP field names, but with names in lower-case: Edit: removed comment about ESC... I would looking at an ASCII table from before I was born!\nNo it doesn't. It gives a text description of a set of invalid H2 fields. The text does not say these are the only limitations on h2 field validity (noting that there is also an ABNF description that applies when -semantics applies), neither does it say that any field that doesn't contain any invalid characters as described in this section is valid. I do think the text could be clearer in saying that this is a non-exhaustive list of reasons why a header field may be invalid, but at the same time I'm reluctant to have this document spend too much time re-hashing reasons that exist in other specifications that are presumed to apply. This question is one for -semantics, not for HTTP\/2. -semantics gives the answer to this: you MUST NOT emit (§ 2.2), but as a receiver you are cautioned that (§ 2.3): Additionally (§ 2.4): This gives you the permission you want: you are encouraged to parse the field defensively and you may attempt to recover, which implies that by default you may fail, and that -semantics does not specify exactly how you will handle these failures. HTTP\/2 continues to grant your application that freedom. The specification does not bind you: you can handle this as you want.\nOK True. Which is actually worse, as it leaves the set of undefined other than we know it is a subset of the compliment of the set of invalid fields define here. Also worse, as it means the document only vaguely defines validity. There is actually no definition of what is a valid h2-field. This reason could be used to say that there should be no additional invalidity check in this document. Just allow any hpack-valid field and leave the rest to be a matter of semantics. 
The fields excluded by this definition are only dangerous if forwarded, which is already covered by the \"Intermediary Encapsulation Attacks\" section that suggests that the full HTTP ABNF grammar should be applied (in addition to the vaguely defined exclusions of \"HTTP Fields\"). So I'm back to why this document is imprecisely defining valid\/invalid h2-fields. It is not for intermediaries as there is a section in the document dedicated to that. It is not for locally interpreted HTTP as you say HTTP semantics should be applied. Why does this document say that a field name containing space is invalid, yet one containing a double quote is not? Neither are valid HTTP, both are valid hpack. There is no good reason I can see for either and whilst I don't know of any specific attack that either could be used for, I would not be surprised if both could be used to some evil ends. This vague definition is going to have a significant carbon footprint. Many implementations will ultimately end up double validating field names: once to exclude the s in their h2 layer and then later in their semantic layer to include only the s. Double validation could be avoided if this document either didn't restrict field validity or precisely defined valid fields as a subset of s so that semantic layers could trust the fields received from their h2 layers as being valid HTTP. Maybe there is a reason for this double validation and the resulting CPU cycles, but I have yet to see why validity is only partially defined.\nIt suggests it but does not require it. The other text requires validation. Most intermediaries will not apply the full ABNF validation because a) it requires perfect understanding of the ABNF of all header fields, which they will not have, and b) even if they did, the -semantics ABNF is sufficiently costly to validate that they won't do it. Cheaper validation steps are more likely to be implemented. This document (and -semantics) is imprecisely defining them because a . The ABNF you're citing represents, fundamentally, guidance. If you want to reject things that don't conform to the ABNF you are free to do so (as -semantics says), but there is no document that obligates you to reject them. This is deliberate, because there is (IMO) absolutely no point in adding normative requirements that, if followed, would harm interoperability. In this case the document is calling out specific cases that MUST be rejected. Practically speaking the intention of this is to encode requirements from other messaging formats, such as -messaging, to ensure that HTTP\/2 implementations do not encode header fields that cannot be safely translated to HTTP\/1.1. This provides a lower bound on validation: so long as everybody does this, we can safely rely on HTTP\/2 messages being represented in HTTP\/1.1's framing. Note that this doesn't mean they'll be valid HTTP\/1.1 messages, just that they are not going to parse incorrectly. With that in mind, the constraints added here are, in order: (1) a cheap bitwise check that can be safely vectorised and applied to reject most invalid field names (this is sufficient to get close to the ABNF, but is cheaper); (2) a check that reserves pseudo-headers and is a single-byte comparison; (3) three vectorizable field-value checks (NUL, LF, CR) all ensuring valid HTTP\/1.1 framing; (4) four single-byte comparisons against the first byte of the field-value. This problem is solved by not doing that. The ABNF in -semantics, if enforced, is a strict superset of the requirements here except for (2). 
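(To make the shape of those checks concrete, here is a rough Python sketch of the kind of minimal per-field validation being described. The names and structure are invented for illustration, and this is deliberately not a full check against the HTTP ABNF: for instance, DQUOTE in a field name passes, exactly as discussed in this thread.)

    # Visible ASCII only, excluding colon and uppercase letters (HTTP/2 field names are lowercase).
    _NAME_OK = frozenset(chr(c) for c in range(0x21, 0x7F)) - {":"} - set(chr(c) for c in range(0x41, 0x5B))

    def minimally_valid(name, value, pseudo_allowed=False):
        if name.startswith(":"):
            if not pseudo_allowed:
                return False
            name = name[1:]                      # a single leading colon marks a pseudo-header
        if not name or any(ch not in _NAME_OK for ch in name):
            return False                         # reject space, colon, uppercase, and control characters in names
        if any(ch in "\x00\r\n" for ch in value):
            return False                         # NUL, CR, LF never allowed anywhere in a value
        if value[:1] in (" ", "\t") or value[-1:] in (" ", "\t"):
            return False                         # no leading or trailing SP/HTAB in a value
        return True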
If your implementation will enforce the requirements in -semantics, you can choose to not enforce these ones at the h2 layer, knowing that the -semantics layer will cover you. The RFC does not bind your implementation, only your observable behaviour: if your implementation ends up rejecting header fields that meet these characteristics, it doesn't matter why you did it, only that you did.\nI don't get that. We're still talking about field names, right?\nNo, all of this applies to names and values:\nWell. Validating names and values are very different things. It would be best to clearly separate these topics.\nI get it that there is a minimum standard being set. In section \"HTTP Fields\" it says that h2 implementations MUST treat as malformed any fields that violate any of the described conditions and an intermediary MUST NOT forward any such headers. Then again in section \"Intermediary Encapsulation Attacks\" is says the validations of the \"HTTP Fields\" MUST be implemented, which is kind of a duplication as they are already at MUST strength. I don't see any normative text that says that an implementation MAY or SHOULD validate against the HTTP ABNF and that if they do so then that is a superset of the conditions in section \"HTTP Fields\" I would very much prefer to see text in the \"HTTP Fields\" section that clearly says h2 implementations MAY validate against the HTTP ABNF and that any implementations that do not MUST validate fields against the following conditions. I'll have a try at this later today.... stand by....\nNAME I'm reading your comments as supportive of the technical change, but a strong preference for a different way of presenting the information. I want to clarify that this change is about the mandatory validation that endpoints perform on fields. It looks like we all agree with stipulating a minimum amount of validation that applies to all implementations, especially those that would otherwise just forward fields without processing. Additionally, we all agree that additional validation is permitted, up to the point that knowledge of the semantics for the fields is applied in validating the values. Where we might disagree is in the way that these requirements are described. I've chosen to use words; NAME would prefer ABNF. As we agree on the technical substance and the question of presentation is an editorial decision in which Cory and I have discretion I'm going to merge this pull request. We would like to publish a revision ahead of the upcoming interim. If you disagree with this decision, especially the technical aspects, then I encourage you to open a new issue. One aspect that seems like it might worth discussing is whether we encourage additional validation rather than simply permitting it. That is, \"SHOULD\/MAY fully validate\".\nNAME that's a correct summation of my position. I would have preferred that some text for SHOULD\/MAY fully validate to be resolved in this PR (with or without ABNF), but will open another issue.\n: While most of the values that can be encoded will not alter header field parsing, carriage return (CR, ASCII 0xd), line feed (LF, ASCII 0xa), and the zero character (NUL, ASCII 0x0) might be exploited by an attacker if they are translated verbatim. Any request or response that contains a character not permitted in a header field value MUST be treated as malformed (Section 8.1.2.6). Valid characters are defined by the \"field-content\" ABNF rule in Section 3.2 of [RFC7230]. 
There are multiple issues here: it's confusing to say \"HTTP\/2 allows\" when next they are actually forbidden with a MUST requirement; it might be better to say something like that the wire protocol in theory enables the transmission of these forbidden characters this puts a normative requirement on top of the base spec; but so does the revision of the core specs (see URL). We need to make sure that there are no conflicting requirements. It also should be checked whether the currently present requirements are actually implemented in UAs (and if they are not, what could be done about it)\nIt would presumably be more accurate to say that \"HPACK allows\", right?\nHPACK also allows encoding a field line with an empty name. I'm not sure this needs to be mentioned explicitly or not.\nWell, that's consistent with HTTP\/1.1 :-)\nI hate having to do this, but what is possible and what is permissible need to be more clearly separated.\nI'm going to tag this as editorial: the possible\/permissible split is clearly editorial. And we've decided to depend on the core docs, and whether or not it was clear before what was allowed or not, this is now very crisp in the core semantics. I want to wait until we get in before doing this though.\nDiscussed in the context of leading and trailing whitespace in -semantics. I think that we should be clearer here about a few things: CRLF and NUL are strictly prohibited leading and trailing WS are strictly prohibited Maybe loosen the requirement a little so that it is less strict about adherence to the ABNF That makes this non-editorial, I think."} +{"_id":"q-en-http2-spec-e1fec64b5f67e60443a8b1401f95920907ef5548cab55a85d8ef9d8c4934cfdc","text":"The language doesn't properly acknowledge the possibility that some processing might have been performed before an error is detected. This could use some tweaking.\nProposal here: URL\nSeems like a reasonable change.\nSeems like a nice clear improvement"} +{"_id":"q-en-http2-spec-e5f635b4ca1b79e186a6a45235ae4a86c5460f2f9abe410382679fa4cd20ac5c","text":"This is part of the fix for . This part removes the experimental range for frame types and settings. Also updates to use section references to RFC 8126, which replaces RFC 5226.\nNice clean change. No notes."} +{"_id":"q-en-http2-spec-3f265317a090d9eae5db7479f399c818c51e763449f49bae4a12745310f63f59","text":"QUIC includes the following statement: I like this. (No accident. I wrote it.) Should HTTP\/2 include a similar statement? We have PROTOCOL_ERROR, which has a definition compatible with this statement, so I guess we're already there on as far as allowing a generic code goes. Any sense in saying it explicitly?\nYeah, I think this is sensible text to include.\nReads clearly, captures the intent."} +{"_id":"q-en-http2-spec-b43ba7c5370564fb8bb3befa5ec8b37b24c17850456e9e27d78523af3edb3d9a","text":"Not sure that this needs a citation.\nURL describes how HTTP\/2 can be used to remove some network-induced jitter from the timing measurements. If your server has timing side channels, then putting requests that can be compared in the same IP packet ensures that the server receives them at the same time. This doesn't help if timing differences are masked by server queuing, CDNs or load balancers, or response delivery, but it can help reduce measurement noise. Consider documenting this.\nI'm reading this as editorial not normative (or at least not changing the wire protocol); seems uncontroversial.\nYeah, I'm not sure what else we can say here. 
There are risks here, there are not really many mitigations available."} +{"_id":"q-en-http2-spec-12144375a744b9100e4ec3c5a223793518d93c51fe6f6b1eaa52d54a9640c46c","text":"Here is some draft text. In principle we don't need any of this at all: as this text shows, all of the relevant guidance is already present in both semantics and H2. I've elected to throw up some possible text, but if we wanted to say that some or all of it is entirely redundant with existing specifications then I'm happy to do so.\nUpgrade, Transfer-Encoding, Content-Length, Connection, and anything listed in Connection MUST NOT be forwarded without modification. See also .\nNote that this probably goes in\nI have no personal knowledge of this issue, but it came up on the #whatwg IRC channel. URL describes Safari rejecting responses that contain the header field, while other browsers accept it. It'd be good to converge the major clients on a single behavior and, if that's not the behavior currently specified, update the spec to match.\nTo be clear, that's nginx being spectacularly broken by forwarding the header field. I'd characterise the philosophy of H2 as being strict with errors that endanger the whole connection -- and counts the server asking for an upgrade in a context where it doesn't make sense as qualifying as such. While I'd be surprised if folks wanted to relax the former (the strictness is a net win for H2), the latter might be worth revisiting, not because nginx is broken, but because H1's use of headers for connection-level mechanisms turned out to be so broken. I'm also not sure that aligning all clients to be so permissive is such a great idea; accommodating brokenness like this is a race to the bottom, and while it might be the only practical path when you have to interoperate with a large and extremely diverse pool of existing deployment (HTML), there aren't that many HTTP implementations that can move the needle. Interop might be better served by holding the line and forcing broken servers to address their problems.\nAdditionally, it seems broken for Apache to send the header when the connection between nginx and Apache is over TLS.\nFiled .\nApache treats the effects of sending over TLS as a UA-side bug. :-(\nYeah, this looks bad. The spec is fairly unambiguous about the requirement to ignore Upgrade in one place. But it also prohibits the use of Connection, which implies that you can't use Upgrade. I think that we should fix this, potentially along with . I hope that we are able to convince NAME to change Apache here. There is no value in sending this value.\nFYI analysis,"} +{"_id":"q-en-http2-spec-af333512222711b0942996c2c08c8951a1deccc6c5fb349870e1d75f072661ed","text":"Brought over the QUIC text about flow control performance. cc NAME\nThank you Cory. After the first sentence, I'd like to add the following because I'm pretty certain it is easily overlooked, especially in gateways where there's basically no way to figure the really available room:\nUnder what circumstances can that happen?\nNAME basically any time on the forward path: if you only count on the local buffer you're using to receive a stream, the window will always be ridiculously low, while you know that you have plenty of buffers on the other side (e.g. forwarding to H1, which is dedicated to your stream, you can count on the H1 buffers and system socket buffers). When forwarding to H2 you can theoretically at least count on min(streamwindow,connectionwindow,buffer room+system buffers), though it can be quite hard to figure. 
The problem with a multiplexed protocol (when doing fctl-over-fctl like H2 does with TCP) is that your per-stream buffers are always smaller than the connection-level buffers (that include the system's huge socket buffers), which are usually automatically tuned to accommodate the BDP. So in the end relying on stream buffers only implies you can never reach the BDP and will be forced to limit the single-stream bandwidth. This problem doesn't happen with other protocols like QUIC simply because there's no need for in-order buffering in the system that you have to rely on. In the end, a reasonable approach consists in having a configurable margin on top of the per-stream buffer room to figure what to advertise, but we need to warn about the risks of advertising too large a window.\nHmm, sorry NAME but I don't see how that relates to the original note you gave. Where is there a head-of-line blocking issue? Having small stream window sizes prevents a single stream reaching full network performance, sure, but that isn't a head-of-line blocking problem. It's a window management problem. If you have multiple streams, all with small windows but all of which can make progress, you can in principle avoid any issues with window management. What am I missing that brings HOL blocking into play?\nSorry Cory if my explanation was not clear. What I meant is that when you start to use reasonable (i.e. small) windows for your streams, you immediately see the network performance become horrible, despite all the buffering capacity around (kernel etc), only because of the stream's window. I've had users complain about 5 Mbps uploads for example with 64kB windows :-\/ As such it's extremely tempting to overprovision the stream window to make it cover some of the extra buffers available around that you're almost certain exist (i.e. the output buffer for where that frame is going to be forwarded), but when starting to play with this you risk HOL because it's often very hard to be certain about really available room (i.e. if you forward that over H2 and several streams consider that same room as available etc). Window-in-window is a very well-known anti-pattern and we knew it was going to hit one way or another and we have to live with it here. For most use cases where a browser is the sole owner of a connection, it's trivial to advertise huge windows and deal with it this way (no real risk that the browser blocks itself). When coalescing multiple agents it becomes quite a visible problem. I hope I managed to make it clearer this time, otherwise do not hesitate to let me know :-)\nThanks Willy, this is definitely clarifying things. So let me try to verify my understanding in bullets. The default per-stream window size (64kB), and other small per-stream window sizes, have performance problems over fat links. This is true. It is not necessarily true if you have multiple streams in flight and your connection window is larger, but it is definitely true for single streams. To that end it is tempting to advertise larger stream windows, to improve throughput. However that window may not be available in the target logical stream, such that if the user actually consumes it your application would have to buffer it. Possible reasons for this include the TCP send buffer not having room, the target H2 connection window not having room, the target H2 stream window not having room, etc. So this is definitely a resource management issue. 
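(As a back-of-the-envelope illustration of that first bullet, using figures already quoted in this thread: with a window of W bytes and a round-trip time of RTT, a single stream is capped at roughly W/RTT regardless of what the underlying link can do. The helper below is purely illustrative.)

    def max_stream_throughput_mbps(window_bytes, rtt_seconds):
        # A stream can have at most one window's worth of data in flight per round trip.
        return window_bytes * 8 / rtt_seconds / 1e6

    print(max_stream_throughput_mbps(65_535, 0.004))  # default ~64 KB window, 4 ms RTT  -> ~131 Mbps
    print(max_stream_throughput_mbps(65_535, 0.100))  # same window, 100 ms RTT          -> ~5 Mbps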
Presumably the reason there's an HOL issue here is that, in response to needing to buffer on the outbound side, you stop reading from the H2 connection. I can think of a few ways to manage this, but this isn't really the forum for that kind of engineering chat. :wink: Instead, I'll say that I think the current text covers this. Specifically, the text says \"bigger windows improve performance, but you need to bear in mind the resource exhaustion concerns elsewhere in this document\". Should we be saying anything else more specific?\nWell, I only partially agree, in that it's not exactly a resource exhaustion in the pure sense of the term as there can be plenty of resources available (socket buffers everywhere), but more a matter of difficulty\/complexity to reliably measure available resources at any instant (i.e. the TCP buffer you have on one side is not necessarily dedicated to a single stream, so what is doable during a POST upload can suddenly change before you even have the option to advertise a window size change). I agree that this could be discussed elsewhere in the doc in more detail, but my point was to at least put a short warning like the one I proposed to make implementors aware of the risk of pushing this setting too far because \"it seems to work much better\". For the record, in haproxy what we've done to mitigate this on the download side is to avoid mixing multiple client connections over a single outgoing H2 connection. This way a slow client cannot block a fast one and it remains possible to advertise slightly larger stream windows that more realistically take into account available room on the path. For uploads, users generally consider that an uploading browser doesn't perform other operations in parallel over the same connection so it's okay to increase the advertised stream window as well. But all these mitigations resulted from experience, were not obvious and will not necessarily apply to other implementations, which is why I thought a small warning could be useful.\nGreat, so do you think the current text covers it?\nLooks great like this, thank you :-)\nThanks for the constructive discussion. No doubt we missed something (that's a constant risk here), but it looks like this change is a useful one. We'll track further improvements with new issues as the opportunity arises.\nH2 strongly suffers from head-of-line blocking when streams do not progress at the same speed, typically when they come from coalesced connections from different clients. Per-stream flow control is generally ineffective against this as practical window sizes matching a BDP often require impractical buffering between the intermediary and the client. As such we should mention in the recommendations that intermediaries that coalesce connections either have very large buffers or try to group streams from the same client connection together on a server connection. Suggested by NAME\nI'm not sure that I'm following the problem statement very well, so I'm not sure what sort of recommendation would go with it. Is the problem that when aggregating requests from multiple clients you expose all of those clients - and all of the requests on the corresponding connections - to head-of-line blocking on the shared connection? That doesn't seem quite right to me - though the requests that share that connection might suffer, the way that a stalled request on one connection might affect other requests on a different connection is limited. 
If a request affects others, is it because it would consume shared resources like flow control window and connection-level flow control? Keeping per-stream flow control small relative to the connection limits would be the way to avoid that exposure, would it not? That said, larger connection-level buffers might reduce contention, but in general more buffering just means more latency. NAME am I chasing the right idea? Maybe you can provide a paragraph on the problem and I can try to massage it into shape for the draft.\nThe issue is that the intermediary that coalesces many client connections into a single H2 connection cannot have infinite resources to buffer the response data for these clients, yet will have to advertise a significantly large stream window to the other end to maintain an acceptable bandwidth. In practice it most often goes well because your advertised window is lower than what can be stored into system buffers. But once a client stalls, all the chain stalls, and you quickly end up in a situation where the advertised stream window is ahead of what you're really able to store. Note that looking at how TCP buffers are filled in front is not always effective. You may very well be using H2 on the front with other streams making it hard to figure how much you could really push, you could have SSL under it having its own buffers, you could be applying compression in the middle, also making it hard to figure how much the output gives you on input, etc. In haproxy we've finally addressed this by deciding that by default multiple clients are not coalesced over a server-facing H2 connection. We have multiple strategies of connection reuse so this remains configurable, but this was the best compromise we could find.\nPerhaps we could propose something like this: Also, please note that the same situation happens the other way around, with multiple uploads over a single H2 connection to multiple servers behind working at different speeds. It's generally less of an issue though, but the large windows necessary to permit fast POST uploads may very well block the retrieval of small objects by the same client within the same connection.\nIs this just an over-commitment of flow control then? That is, you advertise a window of N when you only really have space for M < N? What is interesting is the implication here that you might be forced to advertise the larger window because if you don't you can't get decent throughput, but to do so you are betting that most of that capacity will be used by in-network state (data that is on the wire or in routers and whatnot, along with all the in-system buffers you don't see). Of course, if that stalls, all that stuff in those buffers will all be forced into your buffers. That can affect all connections that are somehow entangled with the affected stream. That means all requests on the same connection, but it could affect other connections. All that said, that's much more involved than your text here. You appear to be looking for something much simpler. We should have advice about the connection-level flow control credit being backed by real resources. You say stream, but the purpose of connection-level flow control is to limit resource allocation. 
While the stream-level values more directly determine stream-level throughput, the general idea is that they can be set with minimal regard to resource management if there is a good connection-level limit (the main consequence of not jointly managing these is a range of complicated priority issues). The current document is not very good about saying much about any of this. So any addition along these lines, including a version of that text, would be worth adding.\nWhen flow-control is set to values that the endpoint will respect, it minimizes retransmits due to buffer overruns. Minimizing retransmits does not always guarantee highest bandwidth; The more conservative one is with respect to preventing retransmits or buffer overruns, the more probable it is that some resources will be unused. This is similar to the tradeoff space of congestion control, however the buffers here are visible to the endpoints.\nThe thing is that due to the window-in-window you don't even benefit from the outer buffering (including the wire) if you have too small stream windows. With a single stream (or even H1) this problem does not exist at all as TCP acks at the edge. Here it's different, the data need to be delivered to userland and wait for its turn before the window can open again. I've seen POST requests slow down to 1 Mbps where the equivalent in H1 were 20 times higher using the same buffer size and TCP stack. So users bump the advertised window to compensate for this horrible performance and face random trouble. And on the server side the issue is the same but less visible, it mostly appears when the coalescing gateway and the servers are located on different datacenters a few milliseconds apart. But even at 4 ms with 64k windows it's only 128 Mbps per stream, so you can be sure that users are quite tempted to increase this for certain use cases where they see their devices twiddling thumbs on their 10\/40\/100G NICs. This is where it is possible to use the whole pipe's length when it's reserved or all streams of a given connection. What I'm seeking is not to fix the protocol, it is how it is and we all knew about this limitation during its design, but rather to warn users against the risks and effects of not accurately calculating the advertised stream window size in case of combined streams.\nBy the way, one easy trap to fall into is to advertise a window corresponding to the buffer used at the connection level, while what matters is how much you can extract from this connection to be able to immediately parse subsequent frames. And often you have no precise ideas of how much space there is outside.\nLet's try to pull something in, maybe copy from QUIC: URL"} +{"_id":"q-en-http2-spec-69c3603a066beb13c15236db1fda18907a76b73f3e4ee6ef56a125aed92cdc8d","text":"I haven't adjusted for vs. until that war is over. Note that I've explicitly added a format for the SETTINGS frame itself, which was missing.\n+1 . This would make it easier to write extensions for both h2 and h3. Priorities draft suffers from the mixed presentation format right now.\nOK, thanks for doing this NAME I merged this manually after fixing a few things up. From my commit: Use all caps for frame names Add a value for Type in frame definitions Change order of flag descriptions to match order in layout Make Exclusive field of PRIORITY optional Move PING ACK Flag to the right place Add SETTINGS ACK Flag Double-checking FTW. Of course, there's a good chance I also messed something up here. 
So feel free to check my work also.\nI think the language about ordering is a bit weird, but it is clarified somewhat by the hex representations shortly after. I think those hex representations are doing the heavy lifting, as I’m not aware of any issue with interoperability caused by getting the flag indices wrong in practice. Nonetheless, if we’re really worried about it we could just delete “bit n” and replace it with “the x bit”, e.g. delete “bit 0” and replace it with “the bit”."} +{"_id":"q-en-http2-spec-c4331d86d85aadb3fe90e03e65c4dbb290fc32a523869cdd7eec43856f6dcd49","text":"The validation for uppercase characters is no longer listed separately, but instead included with the minimal validation.\nNAME : factored out non controversial changes from"} +{"_id":"q-en-http2-spec-a3f014936e8c32f91681518164d25e98f5d52e351644711224eb6b31cce84651","text":"See as an alternative.\nThanks everyone for your patience here. It's good to see this work out :)\nValidating fields for characters has raised a concern where we don't have a clear delineation between framing and semantics. So the way that endpoints handle those cases is unclear. For requests in particular, the choice between resetting a stream and sending a 4xx response is unclear. It would be good if we could have more consistent overall behaviour here, but we have seen from the character composition issue (see ) that different implementations will enforce the rules at different layers. We probably need to allow for some flexibility. NAME has volunteered to help us navigate this somewhat nuanced issue.\nI think this is reasonable. LGTM"} +{"_id":"q-en-http2-spec-31b69cf73eb45d43b39abf3947595df755b39bb4e637cdb5771dc59190e255e2","text":"new registry for fields; see\nNAME removed upgrade and changed the registration of this header field to: Is that OK? Or would you prefer a complete registration update, as in: Field Name: HTTP2-Settings Status: Standard Ref.: Section 11.5 of RFC 7540 Comments: This field is obsoleted (this document).\nThat should be fine."} +{"_id":"q-en-http2-spec-8af7a69fec8d00d0e8716474a72635429d1288792af155d57f9874f9fc3f73d7","text":"This is a rearrangement of the HTTP-related sections, in preparation for actually addressing 867.\nThis doesn't actually address the issue -- I came to a point where I realised this would add a bunch of new requirements and prose that would need heavy scrutiny, and if we want to finish soon, that may not be advisable (although maybe we can figure it out during WGLC). In the meantime, I think this rearrangement is much clearer than the original ordering -- see what you think.\nThe section talks about how to generate the various pseudo-header fields from a message, but not not how to create a message from the abstract constructs in HTTP semantics, such as the target URI. This includes security considerations around trusting the contents of . I'm happy to do a PR for this. Also, this is a weird paragraph: \"HTTP\/2 shares the same default port numbers\" implies that it could choose to do otherwise; while that might have been an open question pre-core, I think this needs to be re-worded. Likewise, \"requests for target resource URIs\" is odd; is this meant to be \"requests with target URIs\"?\nYeah, most of this is just OLD. For instance, the \"discovery\" aspect really only relates to the need to use ALPN, but we have a better nomenclature now. 
If you are happy to do a PR, I'm more than happy to review rather than write."} +{"_id":"q-en-http2-spec-a54fa606e7309e9ab8b58d91e2d030ec4bd5ff801bb749b953d2a7f0f622276d","text":"RFC 4492 is obsolete. This is a TLS 1.2 section, so it is OK to refer to the TLS 1.2 document here."} +{"_id":"q-en-http2-spec-b7e30cf1191d6202b13454cccbf795efd0187c05c2e8fb1961dc882b379f83b0","text":"URL is frame formats. I've chosen to put blank lines before and after both frames and the stream identifier part of the stream header. That should make this a lot clearer. NAME was right that this was a little hard to process without that. Hopefully this is a tiny bit better.\nI think this helps. The only other clarification I can think of is to pull the flags out into a separate illustration, but let's see if this is sufficient first.\nBah I didn't respond at the right place. One day I'll understand github's interface. Or not :-) Both changes look good and possibly sufficient to me. Let's just wait a bit to see if anyone in the WG has any better proposal. Thanks for your quick reaction!\nFor the flags (and only the flags), what about trying this:\nShould I add the octet values of flags? I would only do that for non-unused flags though.\nI believe the prose text contains them, so there's at least a defensible reason for leaving them off.\nThey're indeed present but not trivially spottable. I have a dead-tree copy of 7540 here with them overlined in pink to spot them faster :-\/ We could save a tree by noting them there :-). I agree with Martin that we can avoid the unused ones though, because what matters in tests is not to know the unused ones but rather to figure them as the complement of all known ones. I won't roll over the floor crying if that's rejected, but as an implementer I find that it significantly helps during debugging.\nAs I don't know how to comment in this format and I don't want to start inventing something (else), I think I'm going to take the coward's option.\nWhat are the semantics of a blank line in this format? \/me ducks"} +{"_id":"q-en-http2-spec-fbcc7d82f780f28483ad4c5acd4db48084badedb7fd2b41f6369008fc329badf","text":"The Priorities section itself has been replaced with a discussion of why priorities aren't there, which is all well and good (). So I found this still being in the introduction a bit odd: If we're talking about the structure of the protocol, I'd think either we don't talk about a feature that's no longer in the protocol, or we explicitly mention it's been cut out."} +{"_id":"q-en-http2-spec-b1ea5b0b210b0a28d7ff31eee5bc18e4c7a2db9b50aad1c7ede644f01a625a37","text":"This fixes issue .\nI agree on 1 that there's no reason to exclude response pseudo-headers from this requirement. (I was worried about contradicting something regarding multiple interim responses, each of which needs a field, but each of those are in a separate field section, so a rule of no duplicates within a field section should work for pseudo-headers, both request and response.) However, I think that a general prohibition is a good thing. If a future (negotiated) extension wants to introduce a new pseudo-header that can be duplicated, it could override an otherwise general prohibition in this draft. If a future extension doesn't do that, it gets the more sane\/safe default for free.\nHmm, yeah, I think I can see a justification for that argument. Given that adding new pseudo-headers needs to be negotiated anyway, I don't think that's a big deal. 
So I'm happy to go ahead with the change for (1) and skip (2).\nNAME is \"this\" the current text, or revised text to address Cory's comment?\nThis pull request. I think that a blanket \"at most once\" is fine and what you have is enough. Future extensions can deal with the consequences of that if they want >1.\nNAME Do you want the change to cover responses as well?\nThat would mean moving it up to the \"HTTP Control Data\" section. That is a good idea. I missed that this was under requests only.\nI moved the language up to the HTTP Control Data section and slightly tweaked it."} +{"_id":"q-en-http2-spec-eba62f3a140ffc4d23f6b5a27c9a33e897aa3dbe358b6762fa15ef1d5f8953d6","text":"The abstract says This is taken directly from RFC 7540, and seems a bit old hat given where we are at now.\nYeah, probably not needed any more. Propose text?"} +{"_id":"q-en-http2-spec-9784c749248895824ab427c46a94c1c19a092a88a6ffa78e6c8737b28f80f682","text":"By removing the word negotiating and avoiding the actual identifiers, we can get the important point across\nMaybe a hangover from trying to tear up the photos of the \"http\" URI and upgrade dance. But this seems to read oddly. On one hand: on the other: I don't have a solid suggestion to improve things.\nHow about we just remove \"h2c\" from the latter statement.\nOK, that isn't going to fly. I just tried it. Pulling that string is unwise. We agreed to keep \"prior knowledge\" in the document, and the \"h2c\" ALPN token is part of that. The \"h2c\" upgrade token has been removed already. (Too subtle? Probably.)"} +{"_id":"q-en-http2-spec-c42752673fbf0e79dc392358cb240cb888a98b19dc6b6aee15a01e71ca670be4","text":"The old text stated padding things were identical to DATA frames. That might not be entirely true. So just spell out the requirements expicitly.\nI just realised RFC 7540 was always this way, but it doesn't seem right to me. The DATA frame definition includes normative requirements on sender and receiver using the field. I presume the HEADERS frame is supposed the treat padding the same but it isn't written down. edit: plus PUSH_PROMISE\nAh, it's stated in a separate paragraph at the end of each frame definition section. That's annoying but I guess its fine.\nYeah, so the short answer there is that they behave differently in some ways (at least with respect to flow control). If you think there’s a better way to structure the section I think you’re welcome to take a crack at it.\nI didn't actually make the leap to flow control, that makes me more inclined to avoid the cross reference to DATA frames. I made to just make everything explicit at the cost of a few more lines of text."} +{"_id":"q-en-http2-spec-b42b6a9cf4dffff848bfd7feb69c82958ad31f6b1529f2d67e26d823e57d3fd5","text":"thanks both, good suggestions\n... and HTTP\/2's positives too! protocol. However, the way HTTP\/1.1 uses the underlying transport ([RFC7230], Section 6) has several characteristics that have a negative overall effect on application performance today. a time on a given TCP connection. HTTP\/1.1 added request pipelining, but this only partially addressed request concurrency and still suffers from head-of-line blocking. Therefore, HTTP\/1.0 and HTTP\/1.1 clients that need to make many requests use multiple connections to a server in order to achieve concurrency and thereby reduce latency. I think time has shown these claims to not be great. 
How about cutting cruft and saying and then a tweak to a later paragraph in the intro to state that no one is perfect\nIt looks like you have a pull request that is dressed up as an issue :)\nIf you're receptive to change, I am willing to try and edit horrible XML\nOne change and I'm happy"} +{"_id":"q-en-http2-spec-66eeef3c3ec56a5d1c79d3592dbab39641b66d99087888ea5149637ec888dedf","text":"At two points in the document, it refers to \"previous versions of this document,\" which I take to mean RFC 7540, since we don't discuss differences from previous drafts when we publish the RFC. Is there a reason not to simply say \"RFC 7540\" in these places, similar to the discussion in 5.3?\nProbably not."} +{"_id":"q-en-http2-spec-636827a88287145974ac89a52a10f60161852cb940eb525e896ac15f40ce5a86","text":"Which changes to\nThe new reference is to the frame type (not the settings themselves), so I might have preferred the original, but I don't mind. The section is about more than the frame anyway."} +{"_id":"q-en-http2-spec-ab49fda7276115058134125fddafd0304dd8bf7e99c65f670bec3e2737767021","text":"We could try to make every statement precise. We could add \"of the frames defined in this document\" to a bunch of places. Or we could make a (better) blanket statement. That is what this does.\nAs discussed in , the current language of 5.1 prohibits all frames on closed streams other than the ones indicated. It is not clear whether the requirement in Section 5.5 constitutes \"more specific guidance.\" My suggestion in the erratum was to scope \"any frame\" to \"any frame defined in this document\" where we might allow an extension frame to appear, since that was the minimal change to the RFC 7540 text, but the editors might choose a broader rewording.\nSo the text already says: \"Frames of unknown types are ignored.\" right next to the guidance you refer to. Maybe we just need to be more direct about this.\nBe careful Martin, you removed the sentence \"Frames of unknown types are ignored\" so I think we're losing one piece of information when looking at the patch. I should re-read it withiin its context to make sure we don't miss anything, but I preferred to warn.\nNAME yes, that was a deliberate change. I replaced that text with a complete paragraph and a reference to the extensibility section, which already covers the rules for unknown frames clearly: \"Implementations MUST discard frames that have unknown or unsupported types.\"\nI think this is fine."} +{"_id":"q-en-http2-spec-7e13d4f5b5d07beddd6917ef9079c58a8de7f31ad5ec064337ba41ff295ef3c9","text":"Going to let NAME have a look at this as well.\n(Warning, I may have reached peak pedantry.) In Section 5.1, there are two references to frames which \"contain\" an ENDSTREAM flag. All HEADERS, CONTINUATION, and DATA frames contain this flag, which might be set or not. Most other references in the document are to the flag being set, which is more precise. There are also two references to frames \"bearing\" ENDSTREAM, which could be taken either way.\nIf you care enough to write the issue, maybe you also care enough to offer a pull request."} +{"_id":"q-en-http2-spec-bf137c6a857405661990ecce5911c80da2f45a6a78a1af5de643254d58638d85","text":"by copying the text from DATA\nsays: I think the intended reading is that when the parser reaches the point that only the FBF and the Padding remain, the length of Padding is given by Pad Length and the fragment is the balance. If Pad Length exceeds the remaining length of the frame, it's an error. 
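(A small sketch of that intended reading, in Python; the names are invented and the error type is only illustrative. Here "remainder" stands for whatever is left of the frame once Pad Length and any other fixed fields have been consumed.)

    def split_field_block_and_padding(pad_length, remainder):
        # Padding is the trailing pad_length octets; the field block fragment is the balance.
        if pad_length > len(remainder):
            raise ValueError("padding exceeds the remaining length of the frame")
        split = len(remainder) - pad_length
        return remainder[:split], remainder[split:]   # (field block fragment, padding)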
There is a less-likely reading also allowed by the current phrasing, that Pad Length can't be more than half the remaining bytes (else Padding would exceed the size of the field block fragment). That's very unlikely to be the intended reading, but we should be clear here. I'd suggest simply taking the note from the end of DATA and putting it here verbatim.\nLGTM, will let NAME take a read as well."} +{"_id":"q-en-http2-spec-8c30e2d4acd2fd29b99abed80ef1d8a96adb522eb4e1b6f45eddcb3c31617a66","text":"... or the analysis in the security considerations. We had to walk a more careful path when RFC 7540 was published, so we were careful to legitimize h2c. In the years since, things have changed and we can be more direct. This isn't a \"don't use h2c\", but more of a disclaimer as far as security goes.\nMaybe a hangover from trying to tear up the photos of the \"http\" URI and upgrade dance. But this seems to read oddly. On one hand: on the other: I don't have a solid suggestion to improve things.\nHow about we just remove \"h2c\" from the latter statement.\nOK, that isn't going to fly. I just tried it. Pulling that string is unwise. We agreed to keep \"prior knowledge\" in the document, and the \"h2c\" ALPN token is part of that. The \"h2c\" upgrade token has been removed already. (Too subtle? Probably.)"} +{"_id":"q-en-http2-spec-e9fa8c8a3a8146ad638a818e909c29e20edd01b792db12ab8860c3824fcf3a1a","text":"As stated in , it's misleading and contradictory. Better to concentrate on configuration for this, as that is the common usage.\nSection 3.3 says the following about cleartext HTTP\/2: I had previously conceived of Alt-Svc as requiring a certificate and TLS, but that's not strictly true. RFC 7838 says that a client MUST have reasonable assurances, and a TLS cert is one way to do that. It also says that the connection must have equal or better security properties, which implies you can't go from TLS to non-TLS. This statement suggests that it's valid to offer on an http:\/\/ URI if your server can accept direct-to-H2 connections, but doesn't explicitly say so. Is that the intended reading here? But RFC 7838 says: Perhaps that doesn't apply, because the alternative is the same server on the same TCP port. Still, I feel like this waves at Alt-Svc for a use which is, at best, under-specified; at worst, not actually possible."} +{"_id":"q-en-http2-spec-9c36f7385d599ddafaf48bff1ec3f2d28c3aa02fe66739d76ea5c15bacc3588c","text":"This just moves the reference to the canonical definition of what is valid up to the top of the section. That's the canonical text. This emphasizes the point that HTTP\/2 is placing additional requirements on endpoints with respect to validation. This does not really in the sense that it leaves validation of DQUOTE and friends to the core semantics implementations. Note that any \"MAY\" requirement for rejecting a messaging can have the desired effect on those generating those messages: if an endpoint puts DQUOTE in a field name, that message will have little hope of being successfully handled.\nMy 2 cents: it seems that there's disagreement about whether the validity requirements from the core specs need to be repeated. Of course, DRY. Semantics currently says: The issue here is that H\/2 actually extends that ABNF (without doing that as ABNF), which makes the link somewhat weak. 
Maybe formally (==ABNF) defining \"h2-fieldname\" as \"lowercase-http-fieldname \/ h2-pseudofield\" would be helpful.\nSo the proposal would be:\nJulian, YES PLEASE for ABNF (yeah I know that is editorial, but it makes it so much more concise and exact). I think everybody agrees that an h2 impl MUST NOT generate invalid HTTP fields. The differences appear on the receiving side as the current document allows that h2 impls MAY accept some invalid HTTP fields. I've never understood the use-case for this? cheers\nWell, AFAIU, we do not have a hard requirement on recipients in the base spec (and HTTP\/1 either): So this seems consistent with what the base spec says.\nI have no objection to adding ABNF to define H2's fields. I would say that we should keep the prose, as the prose does a good job of expressing the why of our choices: for example, explaining the expectations around lowercase field names. However, I agree with NAME and my past self that we do not need to add a \"MUST validate\" section.\nThanks Martin. I'm having a bit of difficulty parsing this sentence: \"These checks are the minimum necessary to avoid accepting messages that might avoid security checks\". Maybe \"These checks are the minimum necessary to forward or process the message as an HTTP message\" ? Also \"A recipient can treat a message ...\" why not \"s\/can\/SHOULD\" ? I think that should be enough to solve it.\nMy objection to ABNF is the same as before: we risk that being mistaken as defining what is valid. Something that meets that ABNF is not necessarily valid. The point of this section is to define what the absolute minimum is with respect to any form of message handling. Our attempt to require more complete validation roundly failed, so this is where we end up. Willy, I've tried again with the wording. But I want to avoid normative statements regarding the core semantics documents. Those already define what a receiver might do. Those requirements aren't strict enough, at least in the \"acting as a tunnel\" case, for HTTP\/2. The point of this text is to point to HTTP and note that these checks won't guarantee you a valid message, but to allow those implementations that do apply those checks the option of using HTTP\/2 signaling (reset streams) to indicate the error condition. I don't think that needs a SHOULD, but I'm OK with adding it.\nNAME I think this is a flawed approach. By only defining a subset of invalid fields we create a set of fields that are not valid HTTP but may be valid H2 fields (by merit of being not defined as minimally invalid - just describing it is convoluted). I have yet to see any use-case for a field to be valid in H2 but invalid in HTTP? has identified that the example we have frequently used for such a field (double quote in name) will be a concern if passed on to a CGI scripting language. So we have no known uses for such fields and security concerns about at least some of them. So why not just define valid H2 fields as a proper subset of valid HTTP fields plus pseudo fields? We can avoid defining what is valid by referencing other specifications (which do use ABNF). As suggests a field should be valid in H2 if and only if it is: I.e. the set of valid H2 fields is a proper subset of valid HTTP fields plus pseudo fields.\nThat is what RFC 7540 did. That failed. 
We can revert all of these changes (aside from editorial tweaks) and go back to what the previous version said, but would that be any more successful?\nI take it that you consider RFC7540 failed because of URL which ultimately references URL These indicate that there are cases of CTL characters in field values observed in the wild. The response has been to relax the validation on fields so much that now security problems have been created. I think this is because the minimal validation approach is essentially trying to be secure with a black list (in this case a list of known bad characters). It is far more secure to use a white list approach (i.e what is the minimal character set required to carry the HTTP semantic ). I don't really understand why no attempt was made to enforce the previous specification - I really hope it was not because of who was violating. But if enforcing the spec is seen as impossible, then rather than just leave the validity of fields somewhat undefined, surely a better approach would have been to make some explicit exclusions to allow just the needed control characters in field values. Or perhaps field names must be valid HTTP field names (or psuedo and lowercase), but h2 is more relaxed on field values? Deserting the field and not actually precisely defining what is valid just seams like an inviation to create more special cases and more unforseen security problems.\nThe real reason for this situation is that we all know that what is extracted from H2 then HPACK will need further processing and that this processing may already be fooled by the contents extracted from there. In addition, H2 being a standard multiplexed protocol on top of TCP is abused by some not really interested in the HTTP aspect of it (e.g. URL) , putting intermediaries at risk of breakage and giving them even more motivation for dropping essential controls and becoming insecure. While I would prefer a wording directly referencing the HTTP spec, like Martin I'm convinced that some will simply not even read it. So I think the proposed addition here can be welcome provided that it does not use the terms \"valid\" or anything else that might make implementations think \"OK that's sufficient\", and that presenting the root cause and the risk remains useful to convince those who are hesitating (or at least give them good arguments for not disabling the tests). Let's try again: It's not far from what Martin wrote in the last update, it just tries to better present the context and mention that this has to be done before HTTP parsing. And I'd like to use for treat as malformed.\nNo attempt was made to enforce the previous specification because the IETF and httpwg have no enforcement mechanism. What could have been done to enforce it? W.r.t. this patch, I agree with NAME that it seems that the conversation has come full-circle to where we were with RFC 7540. In that circle I think the wording here is closest to what I’d want: a clear list of specific rules with a note that these rules are a lower bound on processing, not an upper bound.\nI agree that misunderstanding the safety of the current spec is a real problem, but I don't think avoiding the word \"valid\" will help. Many developers will naturally think that a HTTP\/2 specification would only support valid HTTP fields. You can't fix this in the text because that assumption has already been made in existing software and will be made again without reading the text. 
Even those that do read the text might get lost in the fine distinctions we are trying to make here between a field that is valid and one that may not be invalid. What is a safe field? If the safety checks are not sufficient for consuming HTTP semantics, what are H2 fields safe for? What layers can consume them without extra validation? Why are consumers protected against CR, LF and ':' attacks but they are not safe from DQUOTE ones? Fields passing these safety checks cannot not be passed to any existing layers that are written with the assumption that they will only receive valid HTTP fields and new user will wrongly assume that H2 safety checks are at least as good as those in HTTP\/1. RFC7540 said that an imple MUST be invalid HTTP fields as malformed, but some impls ignored that. So how about we say that impls SHOULD treat invalid HTTP fields as malformed and then define the specific circumstances in which consenting peers can exchange invalid HTTP fields, perhaps indicating as much with a SETTING or new psuedo header that is needed to specifically allow h2 framing to be used for non-HTTP semantics? Yeah I know some will say \"you can't force those users of invalid HTTP fields to do something special\". So instead we make all users of valid HTTP fields need to do extra validation else risk being insecure? Anyway, sorry for re-re-re-re-litigating. My live-with criteria is that an implementation MAY treat invalid HTTP fields as malformed, so in that sense this PR is fine.... it just doesn't (and Martin says as much), so this is really just an editorial PR unrelated to .\nGreg, I'm not suggesting that an H2 field is safe, but \"safe to be parsed as HTTP\". The main reason is that some basic HTTP components have been broken into pieces in H2 and need to be reassembled before being processed by HTTP. :scheme, :authority, :path are two such examples. In HTTP together they're called a URI. Cookie is another one. Some H2-specific issues may happen at the H2 layer that are impossible by definition in HTTP since HTTP\/1. In my opinion this is the trouble that is being attempted to be addressed here. And yes it turns out that it diverges a bit from though it can address it if it mentions that consumed fields are subject to these checks. For sure, if we enforce strict validation of all H2 fields against HTTP ones it will address everything, except that we've already seen that implementors are not keen on switching between many specs at once and need some guidance. It's a delicate balance. Julian proposed to duplicate the ABNF there, I thought it was a good idea. And if we don't want to make it look normative we could write the minimum elements as a reminder.\nBut the spec currently as currently written and after this PR is not \"safe to be parsed as HTTP\". If that parsing is done by a scripting language then DQUOTE and other control characters that are current permitted by the spec may well not be safe. To take your example of a cookie header, how can you be sure that code which is currently safely the cookie header value (or set-cookie value) will be robust in the face of control characters within that value? Isn't the set of h2 fields that are \"safe to be parsed as HTTP\" a true subset of valid HTTP fields? What extra characters can we allow into fields that are not valid HTTP that we are sure will be parsed safely as valid HTTP by all existing layers currently running on top of H2 implementations? 
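As a concrete reference point for the "minimal validation" side of this argument, here is one reading of those checks in Python; the function name and scope are mine, not the draft's. The one thing both sides seem to agree on is that passing these checks does not make a field valid HTTP.

    _FORBIDDEN = {0x00, 0x0a, 0x0d}   # NUL, LF, CR

    def passes_minimal_checks(name: bytes, value: bytes) -> bool:
        # The bare-minimum checks under discussion: no NUL, CR or LF anywhere,
        # no uppercase ASCII in names, and a colon only as the pseudo-header prefix.
        if any(b in _FORBIDDEN for b in name + value):
            return False
        if any(0x41 <= b <= 0x5a for b in name):
            return False
        return b':' not in name[1:]

A field that passes this and nothing more is exactly the kind of thing the rest of the thread worries about handing to downstream code unvalidated.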
The intent of these changes have been to normalize the practise in the wild of h2 framing being used to carry control characters in fields for non-HTTP purposes. But if we nomarlize that, we are putting in jeopardy all existing code written to HTTP semantics that may not even be aware of the version of HTTP carrying the fields that they are interpretting. Surely there is some other way to allow h2 framing to be used to carry non-HTTP streams without putting all existing HTTP code at risk? If we don't want developers abusing the current protocol, then give them a proper solution for their use-case rather than corrupt the existing primary use-case.\nThey should represent the exact same level of risk as having the same code parse HTTP\/1. But actually that might in fact be what we're all seeking without precisely realizing it, which is making sure an H2 message can be at least reconstructed as an H1 message that is safe to parse. The rules on CR\/LF precisely have always been a concern to all of us for this. Same as above. This is always difficult to say if we'd try to extend what HTTP permits. However we're certain that some will almost always be dangerous along a chain or cause parsing issues even inside implementations. Not exactly. This is actually the current situation probably because some consider that the effort needed to comply with the rules in semantics is too high for little value the this level. This is something I can understand. For example in haproxy we have a semantics layer which performs a lot of controls, deduplicates content-length, checks Host against etc. But at the lowest level it is difficult to enforce such checks when you're just converting header lists to internal messages. However we already have the NUL\/CR\/LF checks that allows the upper layer parser to proceed (the initial implementation used to build an H1 message but that's no longer the case). Thus maybe we should replace all this with an approach centered around this tricky HTTP\/1 translation, which also does not infer how implementations should work internally nor suggest to put anything non-HTTP inside HEADERS frames: Just to be clear, I really do not want to see HEADERS frame being abused to carry non-HTTP because I know pretty well what it's like to be an intermediary developer whose product is pointed the finger at for breaking stuff when inserted in a working but non-compliant chain. This is also why I would like to see the SHOULD reject as malformed.\nI like this paragraph and I'm totally OK with a spec that has conditions like \"implementations which reassemble elements as an HTTP\/1-like message before ....\" However, it is not just HTTP\/1 as there are many HTTP semantic layers that are unaware of the protocol version, so it needs to be any impl that represents streams as HTTP messages must do specific validation whilst the h2 \"abusers\" are free use streams to the limites of HPACK capabilities, so long as they don't represent the results as HTTP messages - either on the wire or to layers above. Furthermore an impl that is representing HTTP messages cannot just do the minimal safety checks you listed. It MUST validate the fields against the general syntax of HTTP fields. For example, if I provide a h2 implementation of the Java class, then my impl MUST validate the fields against HTTP syntax. But if instead, I use my h2 impl to implement a gRPC API, then I'm under no such obligation as I'm not representing HTTP messages. More over, it is only the protocol impl that can do this validation. 
Another example is that my server is used on PAAS platforms where the application was deployed decades ago and the source code is probably lost and the dev team all retired. The app cannot be redeployed and it certainly cannot be reviewed for how it parses the Cookie header. It's not sufficient for me to switch them from HTTP\/1 to H2 and in the process expose those applications to non-valid HTTP headers in HTTP messages I deliver to them. If the h2 impl is not going to validate the fields are valid HTTP then who is? So how about something like:\nThe thing is, stacks that support multiple versions will certainly not implement the full header syntax in each version because that is exactly what ought to be done at the semantics layer and why the HTTP spec was split like this as well. We really have messaging (H1,H2,H3) and semantics. The transport explains how to extract elements from a byte stream and how to encode them into a byte stream. The semantics details their syntax and consistency. NUL\/CR\/LF purely come from the H1 world but H1 was HTTP before H2 existed, so it could be argued that they are legacy limitations that probably affects most stacks. The cookie header splitting is purely H2, and once recombined it must parse correctly according to the semantics definition. But whether your cookie header field comes from H1\/H2\/H3, it will have to pass through the exact same validity checks. What is certain is that these HTTP syntax checks must be performed somewhere. But the messaging layer is not always exactly the best place for this. Typically an H1 implementation will not have code to check for embedded LF characters because it's a delimiter, while H2 needs to have explicit check against this. The rules on the resulting cookie header field format apply after Cookie header reassembly, not before. And this reassembly is specific to the messaging layer which defines how to serialize fields. Last, the generic rules are not sufficient to safely reassemble the pseudo-header fields. Maybe in the end we could combine our two parts, starting with yours to indicate what every HTTP implementation must do, and putting the focus on the extreme care that is required when translating to HTTP\/1-like since we know that it's one of the most natural approaches (and even used in the examples in the spec). This could give roughly: With this I think it that the intent is clear, the guidance as well, it is exhaustive and doesn't leave any doubt on what needs to be done.\nOh, and of course: :-)\nI think extreme care is need anytime a HTTP message is represented, not just when represented as a HTTP\/1. If a H2 layer is delivering messages to a semantic layer that was developed against h1, then it will not be assuming HTTP valid charset in no matter if the message is passed as byte streams or lines or XML or JSON or Strings in classes. It seams wrong for one particular messaging layer (h2) to require changes in transport impartial HTTP semantic layers so the messaging layer can be \"abused\" for non HTTP purposes? As you say, the actual syntax validations needed to be done are dependent on the messaging layer. So they are best done in the messaging layer as it knows what it is and can sometimes do things efficiently like checking charsets during parsing. 
If the syntax is not validated by the messaging layer, then the semantic layer will have to do full validation, which will result in wasting CPU duplicating any checks that are done by the messaging layer (or are intrinsic to the transport).\nI think I'm OK with this. There's just one minor thing: It's actually \"LF and COLON\", but to be exhaustive I'd write \"CR, LF, and COLON\" since we've always kept extreme care on CR due to older H1 implementations. I think we really want to say something about the pseudo headers. They do not exist in HTTP\/1 and the recent portswigger report clearly shows that many of us were relying on the tests performed on the reassembled start line, but that was too late (in my case everything was already NUL\/CR\/LF clean but LWS were not imagined there). I think it would be appropriate to place the small enumeration I mentioned above somewhere (probably after the \"minimal validation\" part). But it's possible that it doesn't have its place in the headers section and that it would be better discussed in the security considerations. And maybe then the whole \"minimal validation\" part can be moved there with it, since after all, it's what all this is about, covering risks that are known for having already been abused. If you're interested I can propose a whole paragraph about all this, as being an implementer makes it easy enough for me to describe the traps. I'm just still having difficulties with all the toolchain and the XML docs which is why I don't send PRs.\n(I'll add LF) As for pseudo-headers, welcome back scope creep. Would a simple prohibition on SP (and maybe HTAB) for pseudo-headers work? I'm fairly confident that the substitution is performed as needed, so there are no valid uses for the character. Given that that is the real problem, would that do? I don't feel like a full enumeration is going to be effective, but just one more simple bullet might not hurt.\nI've initially thought about it when fixing my bugs (\"if I had filtered LWS as well...\"). But that's not enough for :scheme nor :path. If :path starts with any character that is valid in a domain name, or a colon, you can change the authority when concatenating values. E.g. or Would result in `URL\" or \"URL\". Similarly if \":scheme\" contains a colon, we're lost. In haproxy we've seriously hardened this part so that we don't have to deal with a variant of it any time soon. But I think that the absolute bare minimum is: no NUL\/CR\/LF\/LWS in any of the currently known pseudo header fields no colon in :scheme nothing but \"\/\" or \"*\" as the first char of :path I don't see how to fool a start line parser when reassembling something based on such minimum controls, but I do see how to fool them by relaxing any single one of the rules above.\nEditors, it looks like this is ready to ship; anything stopping that?\nOnly that discussion was continuing. But it seems to have settled now.\nThanks Martin. What about adding the point I made above in URL for :scheme, :path and :method which need further protection (that's not addressed in core since purely H2) ? I could propose this text (yeah I know my formulations are not always great):\nHTTP has always forbidden token delimiters in field names. It seems very odd that they are not forbidden in HTTP\/2, aside from the specific restriction on colon. 
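The three "absolute bare minimum" rules listed just above are cheap to apply before any start line is reassembled. A sketch follows, assuming the pseudo-header fields have already been collected into a dict; the helper names are hypothetical.

    _BAD_PSEUDO_OCTETS = set(b'\x00\r\n \t')   # NUL, CR, LF, SP, HTAB

    def pseudo_headers_safe(pseudo: dict) -> bool:
        # pseudo maps names such as ':method' to byte-string values.
        for value in pseudo.values():
            if any(b in _BAD_PSEUDO_OCTETS for b in value):
                return False
        if b':' in pseudo.get(':scheme', b''):
            return False
        path = pseudo.get(':path', b'')
        # CONNECT, which has no :path, would need its own branch.
        return path[:1] in (b'/', b'*')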
The token delimiters are DQUOTE and \"(),\/:;?NAME We really don't want those allowed in a field name because back-end gateways like CGI assume they have been blocked, and it would mean adding an impossible conversion step when forwarding h2 to h1. Also, it would be easier to read the Field Validity section if it first referenced what is valid for an HTTP field-name, then exclude uppercase because of lowercasing and permit the special-purpose colon for pseudo-fields, and finally list specific requirements on handling invalid characters because the length-delimited protocol elements might be used to carry invalid characters. Likewise, separating the error handling for field-name from that of field-value is useful because they are two different algorithms that are usually implemented separately.\nThis is really good other than a couple of niggles Expressing it this way makes it really clear the difference between recommended full HTTP validation and the minimal validation that must be imposed if full validation is not done."} +{"_id":"q-en-http2-spec-bb79eeaaea34a80ce87f579a5acd841a98feba5f6198afaafc00ec6611d82b3c","text":"This text wasn't actually changed in , but I noticed it while reviewing. This says that implementations which support TLS MUST use ALPN, but that's not really tied to what the implementation supports -- it's tied to what the current connection is doing. If you support TLS but are using cleartext TCP for the current connection (regardless of why), you're not required to use ALPN."} +{"_id":"q-en-http2-spec-a31048907f4b28b3280c179a819d9a2564387d8ac24054a911b9fdc5c61a241e","text":"The original issue here was concerned with Transfer-Encoding, but this was not a normative statement, it was a note. With a forward reference, this is better. The section on connection-specific headers probably could have been more specific. Now that the -semantics draft has a specific list of things (plus whatever lists), we can lean on that more directly.\nHTTP\/2 prohibits Transfer-Encoding, but doesn't actually prohibit other values: HTTP\/3 broadened this language to cover all transfer codings: Should H2bis follow suit?\nCan you take this to the list Mike? It seems like the only sensible answer, but I'd like to double-check.\nIn my mind, I was relying on Section 8.2.2: However, that probably isn't linked well enough to the . Unfortunately, there isn't a term we can just lift out; perhaps something like this:\nOn review with full context, the text that Mike cites is fine. It's an informational note. Mark's note here about the linkage to -semantics is good though. I will float this on the list to see if I messed up (again)."} +{"_id":"q-en-http2-spec-4dcd8aa32f698dc38c85bf328263f2d76f474b33637b8d8b035e83231a9b2931","text":"I know that Cory beat me to it with , but I had written text. I only forgot to make the PR. Cory can decide which he likes better :)\nIn the aside about HTTP\/2 deliberately not supporting Upgrade, it feels worth mentioning that we later partially reversed ourselves and brought back a moral equivalent of Upgrade with a less obvious name. Reference to RFC8441 appropriate here?\nYou put more things in your citation block, so it’s clearly better."} +{"_id":"q-en-http2-spec-8ad476af8fd57f0d4ebbe85a507d58223327c1ebad0cfc1de1e4f017c0cb5a36","text":"The code for the IANA policies in Section 11.1 looks like this: This document establishes a registry for HTTP\/2 frame type codes. The \"HTTP\/2 Frame Type\" registry manages an 8-bit space. 
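For the connection-specific fields point a few records back (Transfer-Encoding and friends), the usual translation step looks roughly like the sketch below. The set of names is illustrative rather than a complete list from the spec, and the TE handling reflects the "trailers only" rule.

    _CONNECTION_SPECIFIC = {'connection', 'proxy-connection', 'keep-alive',
                            'transfer-encoding', 'upgrade'}

    def strip_connection_specific(fields):
        # fields is a list of (name, value) pairs with lowercase string names.
        nominated = set()
        for name, value in fields:
            if name == 'connection':
                nominated.update(v.strip().lower() for v in value.split(','))
        kept = []
        for name, value in fields:
            if name in _CONNECTION_SPECIFIC or name in nominated:
                continue
            if name == 'te' and value.strip().lower() != 'trailers':
                # a stricter endpoint would treat this as malformed rather than drop it
                continue
            kept.append((name, value))
        return kept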
The \"HTTP\/2 Frame Type\" registry operates under either of the \"IETF Review\" or \"IESG Approval\" policies. That would be fine if that rendered as I would expect, as: Instead, it renders this way: That word ordering doesn't make sense. I'm not sure exactly what XML magic would be needed to produce the expected sentence order. Same thing occurs in Sections 11.2-4.\nappears to fix only Section 11.1."} +{"_id":"q-en-http2-spec-a41d6908157b5e2833e6fa9be41e5be82b2ceaccb44728bd0efa17ce84e2ded9","text":"Followup for .\nThe code for the IANA policies in Section 11.1 looks like this: This document establishes a registry for HTTP\/2 frame type codes. The \"HTTP\/2 Frame Type\" registry manages an 8-bit space. The \"HTTP\/2 Frame Type\" registry operates under either of the \"IETF Review\" or \"IESG Approval\" policies. That would be fine if that rendered as I would expect, as: Instead, it renders this way: That word ordering doesn't make sense. I'm not sure exactly what XML magic would be needed to produce the expected sentence order. Same thing occurs in Sections 11.2-4.\nappears to fix only Section 11.1."} +{"_id":"q-en-http2-spec-73f18a8e784d353f28447c10d5beb465e7e31d4d2c50df9d3122ff97e6fb4622","text":"Fix some typos\n\"overridden\" is the correct spelling; can you remove those changes? Thanks.\nNAME it was \"overriden\", I changed it to \"overridden\". Am I missing something here?\nNo, I just need to slow down when I read diffs :) Thanks!\n:smile:"} +{"_id":"q-en-http2-spec-be7d74c2d92c30d03677cb8e15e706152d2630c38450c244669542019a9bf0d9","text":"Just state it. No need to justify it so (especially as this invites questions about the claimed benefits). This is less than Julian wanted from , but I think that it is important to include a requirement like this very clearly and it isn't really said anywhere else. At least not directly.\nconverted to lowercase when constructing an HTTP\/2 message. I think this is somewhat misleading, it just provides the motivation why the lowercase format was introduced initially. I would just remove the sentence and potentially insert a note about lower-casing into the field name validity statements.\nAgreed, I don't think the spec needs to defend itself on this choice."} +{"_id":"q-en-http2-spec-3fb5227731250236ef98f7b6df9e4512ee286eab19cbb930d447d85f0d5bf08d","text":"The HTTP2-Settings header field and the h2c Upgrade token are both obsolete in the same way for the same reason, but it's rendered differently in their respective IANA sections. HTTP2-Settings is marked as Obsolete in the registry, with a reference back to the IANA Considerations of this document h2c gives no instructions to IANA, but includes explanatory text that the capability is removed and refers to Section 3.1. I would suggest that these should be aligned and combined, such that each one does two things: Instructs IANA to mark the entry as Obsolete Add a note that the element was used for Upgrade and reference Section 3.1 for more detail on Upgrade's removal"} +{"_id":"q-en-http2-spec-f5b0e8653a53888ac3481b7e12aca409d77add143edfbb25945eb4a996d6551a","text":"Only pseudo-header fields can be mandatory, so no point saying anything about fields more generally here.\nsequence of HTTP\/2 frames but is invalid due to the presence of extraneous frames, prohibited fields or pseudo-header fields, the absence of mandatory fields or pseudo-header fields, the inclusion of uppercase field names, or invalid field names and\/or values (in certain circumstances; see Section 8.2). 
Are there any mandatory fields that are not pseudo-header fields?"} +{"_id":"q-en-http2-spec-cd809870fc97b1287420f7f43e13bc323f42e53d1a7cb18ba1c177631d0b909a","text":"As noted in , we don't need to redo these. A backward reference will do. This cuts quite a bit of XML out of the document, which I'm happy about.\nNAME I disagree; IANA hold the registration data, not the document. My understanding is that templates need only appear in the originally registering documents; if there's doubt about that, we should check with the IESG and IANA.\nThat is true, but the registrations point to specs. The IANA Considerations need to take care of updating these references.\nThat's a good point; I will shelve that and change to adding a note that says \"references to RFC 7540 for X, Y, and Z are updated to point to this document\".\nAbsolutely. It just doesn't need to list them individually; a simple statements such as \"update the references to RFCxxxx to this document in Registry Foo\" will suffice.\nIf that's all which is necessary, yes. Hint: section numbers might change, and you don't want to handle that during AUTH48.\nThe current IANA Considerations for H2 bis has several requests to IANA that seem to be appropriate for the original document, but not a bis now that the registries are already filled out. Shouldn't these be reworded to say \"this document updates...\" or explain that the references for registries should point to this document, rather than the old RFC?\nI don't think this is a good idea. At the end of the day, the IANA registry should point to the updated document."} +{"_id":"q-en-http2-spec-c3dfc0b6c06404d66732232e036081d1b0a2796926643d82e7cfb7168386aa30","text":"Includes changes from , as that hits the same paragraph.\nwhy is c4d3528 included?\nHmm, poor rebase on my part I think. I made , you made the changes there, then I made a branch for this and then rebased on top of . It should go away when merged."} +{"_id":"q-en-http2-spec-1a4cec2fa1f4de97a225eed0be497b94e84c46398fb4d883458a133c78e387f6","text":"A recent xml2rfc update changed the way that the element was rendered in text. Take advantage of that by adding quotes to strings that really do benefit from additional quoting, such as the \"PRI ...\" preface string. Most of the changes shown in a recent diff are down to losing quotes on header field names and other protocol elements. These do not need to be quoted for their meaning to be clear and unambiguous. The use of for these items improves the HTML rendering, but the loss of line noise in text renderings is a net win (at least in my opinion).\nI agree with your reasoning, this seems like a good change."} +{"_id":"q-en-http2-spec-fbf04be1d11e68bceca0f1071c668a860831e9199ce3e8815bfa278c18201008","text":"I'm working on the wrong machine and my setup needs tuning. I'll just drop the errant commit.\n[x] 1. ----- FP: Is the working group aware that RFC 793bis is in IESG evaluation (URL) ? Was the choice of having a normative reference to 793 conscious, in order to avoid any delay that might come from publication of draft-ietf-tcpm-rfc793bis? (Just checking this was considered) [x] 2. ----- for HTTP\/2 over TLS. The general TLS usage guidance in [TLSBCP] SHOULD be followed, with some additional restrictions that are specific to HTTP\/2. FP: Given this requirement, I would have expected to see [TLSBCP] normatively referenced, rather than informatively. [x] 3. ----- layer. The frame and stream layers are tailored to the needs of the HTTP protocol and server push. 
FP: I would think the server push is part of the HTTP protocol, which makes this formulation \"HTTP protocol and server push\" confusing. [x] 4. ----- Type: The 8-bit type of the frame. The frame type determines the Flags: An 8-bit field reserved for boolean flags specific to the FP: I would find a reference to the IANA registry useful. [x] 5. ----- implementation of flow control can be difficult. When using flow control, the receiver MUST read from the TCP receive buffer in a timely fashion. Failure to do so could lead to a deadlock when FP: \"When using flow control\" might be rephrased to indicate \"when flow control limits are lower than the maximum\" (or something of the sort), since if I understand correctly the capability is always used, it is just the window size that changes (effectively implementing control of flow). Also \"timely fashion\" - I know that it is probably hard, but it would be nice to have a more precise qualification, or at least a hint of what is timely. [x] 6. ----- stream that it successfully received from its peer. The GOAWAY frame includes an error code that indicates why the connection is FP: \"error code\" - here as well I would have liked a reference to the IANA registry. [x] 7. ----- Exclusive: A single-bit flag. This field is only present if the PRIORITY flag is set. Stream Dependency: A 31-bit stream identifier. This field is only present if the PRIORITY flag is set. Weight: An unsigned 8-bit integer. This field is only present if the PRIORITY flag is set. FP: I would have expected to see some definition of how the fields are used. If this is defined somewhere else, a reference would be good. [x] 8. ----- SETTINGS Frame { Length (24), Type (8) = 4, Unused Flags (7), ACK Flag (1), Reserved (1), Stream Identifier (31), Setting (48) ..., } Setting { Identifier (16), Value (32), } FP: Is there any reason why the Stream Identifier line is not: Stream Identifier (31) = 0, [x] 9. ----- a server does include a value it MUST be 0. A client MUST treat receipt of a SETTINGS frame with SETTINGSENABLEPUSH set to 1 as a connection error (Section 5.4.1) of type PROTOCOLERROR. FP: This is just my curiosity: what is the reason for this stronger requirement - I would think it shouldn't be a problem for the sender if it wants to advertise that it would permit\/support server push. What am I missing? [x] 10. ----- A value of 0 for SETTINGSMAXCONCURRENTSTREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the FP: When is it ok that the 0 value is treated as special? [x] 11. ----- set. Upon receiving a SETTINGS frame with the ACK flag set, the sender of the altered settings can rely on the value having been applied. FP: nit s\/value\/values. Also I believe this could be misunderstood - can it be made more precise on the fact that only the values that are present in the received frame with the ACK flag set (and not those that might have been ignored because not understood) have been applied. [x] 12. ----- A receiver MUST treat the receipt of a WINDOWUPDATE frame with an flow-control window increment of 0 as a stream error (Section 5.4.2) FP: nit s\/an\/a [x] 13. ----- FLOWCONTROLERROR (0x3): The endpoint detected that its peer violated the flow-control protocol. STREAMCLOSED (0x5): The endpoint received a frame after a stream was half-closed. FP: would be good to add a reference to the relevant sections. [x] 14. 
----- set after receiving the HEADERS frame that opens a request or after receiving a final (non-informational) status code MUST treat the FP: Where is a \"non-informational status code\" defined? [x] 15. ----- FP: Are the following two sentences in Section 8.1.1. in contraddiction? request or response. Malformed requests or responses that are detected MUST be treated as a stream error (Section 5.4.2) of type PROTOCOLERROR. on the remainder of the request being correct. A server or intermediary MAY use RSTSTREAM -- with a code other than REFUSEDSTREAM -- to abort a stream if a malformed request or response is received. FP: In section 5.4.2. I read: An endpoint that detects a stream error sends a RSTSTREAM frame So the first sentence above implies RSTSTREAM MUST be sent, while the second sentence states RSTSTREAM MAY be sent. [x] 16. ----- their definitions in Sections Section 5.1 of [5.1] and Section 5.5 of [5.5] of [HTTP] respectively and treat messages that contain FP: References need fixing. [x] 17. ----- from the control data of the original request, unless the the FP: nit Remove one \"the\" [x] 18. ----- Note that request targets for CONNECT or asterisk-form OPTIONS requests never include authority information. FP: Please add a reference to 7.1 of [HTTP] as this is the first time \"asterisk-form OPTION\" appear in this document. [x] 19. ----- Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. FP: Do these two sentences contradict each other? In the first sentence I am reading that the server can't create the necessary stream to send the PUSHPROMISE frame, in the second sentence I read that it can? [x] 20. ----- on a DATA frame is treated as being equivalent to the TCP FIN bit. A FP: Can a reference be added to the section where the TCP FIN bit is defined? [x] 21. ----- Content-Type: image\/jpeg ==> - ENDSTREAM Content-Length: 123 + END_HEADERS FP: I think it would be good to add a sentence about the meaning of - and + (which I understand to be flag set or not set) in section 2.2. [x] 22. ----- treated as delimiters in other HTTP versions. An intermediary that translates an HTTP\/2 request or response MUST validate fields according to the rules in Section 8.2 roles before translating a message to another HTTP version. Translating a field that includes FP: is \"roles\" supposed to be there? [x] 23. ----- The CONNECT method can be used to create disproportionate load on an proxy, since stream creation is relatively inexpensive when compared FP: nit s\/an\/a [x] 24. ----- FP: I think it would be good to keep the 3rd paragraph in Section 11 instead of asking the RFC Editor to remove it, just to keep a trace of the registries that have been defined in 7540, since those registries will now reference this document, but this document does not contain all the definitions of the different fields. [x] 25. ---- Comments: Obsolete; see Section 11.1 FP: I would suggest to be explicit and add \"of this document\" (unless links can be maintained in IANA registries Comments fields).\nAll done, closing.\nAssuming the BCP stuff gets merged via the other PR first."} +{"_id":"q-en-http2-spec-38194abbac0e8613f48d0de43b9b458ac98122e1b31f55c5e3e1a103c762cc04","text":"Replaced 'body' with 'payload' in instances where it would be confusing. 
Added a flag in the HEADERS frame for header continuations. Added pointers in HEADERS+PRIORITY and PUSH_PROMISE which reference the flags in the HEADERS frame. This should\nThe continuation bit is inverted with respect to the FINAL flag. Is HEADER_END a better choice? Also, if an unbounded number of frames are permitted, then you need some way to place an upper bound on size. Especially since this requires* that all frames be assembled before processing. Currently, there isn't a default maximum, a setting or an error code.\nIt could be. I chose this polarity so that it was immediately compatible with what implementations do today for SPDY\/2\/3 (they don't know about such a bit). If I recall correctly, there currently isn't an explicit limit on header size in HTTP. Unless we're willing to impose such, it is up to the implementation to reject things which are too large by RESETTING the stream. That case probably does need more expounding, though.\nLet's move this discussion to the list.\nNAME suggested that his header compression () would include the ability to continue header blocks across frames. We should track this as a separate issue. Important consideration is the way that header blocks mutate session state (for header compression). Interleaving of continued header frames will cause issues if we don't address this. We should also consider whether this is a general facility or not.\nI believe we should fix this by declaring that, if there is a continuation happening, no new headers frame is allowed to be sent until the continuation is finished. It just gets too difficult to deal with otherwise. , martinthomson EMAIL:\nTo clarify - you mean that no headers frame on a different stream can be sent?\nyes. It isn't the only possible solution, but it is relatively simple and it seems like it should be a rare enough case... -=R , Mark Nottingham EMAIL:"} +{"_id":"q-en-http2-spec-1239a50cfd370548489c87fde6078588bc9e923453451543c1758d97308e95db","text":"Editorial change only. The idea is to read as soon as data arrives in the TCP buffer. In practice, this is not necessary, but leaving things in the TCP buffer indefinitely can deadlock. (Some implementations of over TCP-based protocols do this and in this case it is BAD.) For .\n[x] 1. ----- FP: Is the working group aware that RFC 793bis is in IESG evaluation (URL) ? Was the choice of having a normative reference to 793 conscious, in order to avoid any delay that might come from publication of draft-ietf-tcpm-rfc793bis? (Just checking this was considered) [x] 2. ----- for HTTP\/2 over TLS. The general TLS usage guidance in [TLSBCP] SHOULD be followed, with some additional restrictions that are specific to HTTP\/2. FP: Given this requirement, I would have expected to see [TLSBCP] normatively referenced, rather than informatively. [x] 3. ----- layer. The frame and stream layers are tailored to the needs of the HTTP protocol and server push. FP: I would think the server push is part of the HTTP protocol, which makes this formulation \"HTTP protocol and server push\" confusing. [x] 4. ----- Type: The 8-bit type of the frame. The frame type determines the Flags: An 8-bit field reserved for boolean flags specific to the FP: I would find a reference to the IANA registry useful. [x] 5. ----- implementation of flow control can be difficult. When using flow control, the receiver MUST read from the TCP receive buffer in a timely fashion. 
Failure to do so could lead to a deadlock when FP: \"When using flow control\" might be rephrased to indicate \"when flow control limits are lower than the maximum\" (or something of the sort), since if I understand correctly the capability is always used, it is just the window size that changes (effectively implementing control of flow). Also \"timely fashion\" - I know that it is probably hard, but it would be nice to have a more precise qualification, or at least a hint of what is timely. [x] 6. ----- stream that it successfully received from its peer. The GOAWAY frame includes an error code that indicates why the connection is FP: \"error code\" - here as well I would have liked a reference to the IANA registry. [x] 7. ----- Exclusive: A single-bit flag. This field is only present if the PRIORITY flag is set. Stream Dependency: A 31-bit stream identifier. This field is only present if the PRIORITY flag is set. Weight: An unsigned 8-bit integer. This field is only present if the PRIORITY flag is set. FP: I would have expected to see some definition of how the fields are used. If this is defined somewhere else, a reference would be good. [x] 8. ----- SETTINGS Frame { Length (24), Type (8) = 4, Unused Flags (7), ACK Flag (1), Reserved (1), Stream Identifier (31), Setting (48) ..., } Setting { Identifier (16), Value (32), } FP: Is there any reason why the Stream Identifier line is not: Stream Identifier (31) = 0, [x] 9. ----- a server does include a value it MUST be 0. A client MUST treat receipt of a SETTINGS frame with SETTINGSENABLEPUSH set to 1 as a connection error (Section 5.4.1) of type PROTOCOLERROR. FP: This is just my curiosity: what is the reason for this stronger requirement - I would think it shouldn't be a problem for the sender if it wants to advertise that it would permit\/support server push. What am I missing? [x] 10. ----- A value of 0 for SETTINGSMAXCONCURRENTSTREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the FP: When is it ok that the 0 value is treated as special? [x] 11. ----- set. Upon receiving a SETTINGS frame with the ACK flag set, the sender of the altered settings can rely on the value having been applied. FP: nit s\/value\/values. Also I believe this could be misunderstood - can it be made more precise on the fact that only the values that are present in the received frame with the ACK flag set (and not those that might have been ignored because not understood) have been applied. [x] 12. ----- A receiver MUST treat the receipt of a WINDOWUPDATE frame with an flow-control window increment of 0 as a stream error (Section 5.4.2) FP: nit s\/an\/a [x] 13. ----- FLOWCONTROLERROR (0x3): The endpoint detected that its peer violated the flow-control protocol. STREAMCLOSED (0x5): The endpoint received a frame after a stream was half-closed. FP: would be good to add a reference to the relevant sections. [x] 14. ----- set after receiving the HEADERS frame that opens a request or after receiving a final (non-informational) status code MUST treat the FP: Where is a \"non-informational status code\" defined? [x] 15. ----- FP: Are the following two sentences in Section 8.1.1. in contraddiction? request or response. Malformed requests or responses that are detected MUST be treated as a stream error (Section 5.4.2) of type PROTOCOLERROR. on the remainder of the request being correct. A server or intermediary MAY use RSTSTREAM -- with a code other than REFUSEDSTREAM -- to abort a stream if a malformed request or response is received. 
FP: In section 5.4.2. I read: An endpoint that detects a stream error sends a RSTSTREAM frame So the first sentence above implies RSTSTREAM MUST be sent, while the second sentence states RSTSTREAM MAY be sent. [x] 16. ----- their definitions in Sections Section 5.1 of [5.1] and Section 5.5 of [5.5] of [HTTP] respectively and treat messages that contain FP: References need fixing. [x] 17. ----- from the control data of the original request, unless the the FP: nit Remove one \"the\" [x] 18. ----- Note that request targets for CONNECT or asterisk-form OPTIONS requests never include authority information. FP: Please add a reference to 7.1 of [HTTP] as this is the first time \"asterisk-form OPTION\" appear in this document. [x] 19. ----- Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. FP: Do these two sentences contradict each other? In the first sentence I am reading that the server can't create the necessary stream to send the PUSHPROMISE frame, in the second sentence I read that it can? [x] 20. ----- on a DATA frame is treated as being equivalent to the TCP FIN bit. A FP: Can a reference be added to the section where the TCP FIN bit is defined? [x] 21. ----- Content-Type: image\/jpeg ==> - ENDSTREAM Content-Length: 123 + END_HEADERS FP: I think it would be good to add a sentence about the meaning of - and + (which I understand to be flag set or not set) in section 2.2. [x] 22. ----- treated as delimiters in other HTTP versions. An intermediary that translates an HTTP\/2 request or response MUST validate fields according to the rules in Section 8.2 roles before translating a message to another HTTP version. Translating a field that includes FP: is \"roles\" supposed to be there? [x] 23. ----- The CONNECT method can be used to create disproportionate load on an proxy, since stream creation is relatively inexpensive when compared FP: nit s\/an\/a [x] 24. ----- FP: I think it would be good to keep the 3rd paragraph in Section 11 instead of asking the RFC Editor to remove it, just to keep a trace of the registries that have been defined in 7540, since those registries will now reference this document, but this document does not contain all the definitions of the different fields. [x] 25. ---- Comments: Obsolete; see Section 11.1 FP: I would suggest to be explicit and add \"of this document\" (unless links can be maintained in IANA registries Comments fields).\nAll done, closing.\nYeah, good rewording here."} +{"_id":"q-en-http2-spec-7be43d5d974550c30d49979ad4dc993632a361f9142ae01ccdce56a4ec9725b7","text":"For .\n[x] 1. ----- FP: Is the working group aware that RFC 793bis is in IESG evaluation (URL) ? Was the choice of having a normative reference to 793 conscious, in order to avoid any delay that might come from publication of draft-ietf-tcpm-rfc793bis? (Just checking this was considered) [x] 2. ----- for HTTP\/2 over TLS. The general TLS usage guidance in [TLSBCP] SHOULD be followed, with some additional restrictions that are specific to HTTP\/2. FP: Given this requirement, I would have expected to see [TLSBCP] normatively referenced, rather than informatively. [x] 3. ----- layer. The frame and stream layers are tailored to the needs of the HTTP protocol and server push. 
FP: I would think the server push is part of the HTTP protocol, which makes this formulation \"HTTP protocol and server push\" confusing. [x] 4. ----- Type: The 8-bit type of the frame. The frame type determines the Flags: An 8-bit field reserved for boolean flags specific to the FP: I would find a reference to the IANA registry useful. [x] 5. ----- implementation of flow control can be difficult. When using flow control, the receiver MUST read from the TCP receive buffer in a timely fashion. Failure to do so could lead to a deadlock when FP: \"When using flow control\" might be rephrased to indicate \"when flow control limits are lower than the maximum\" (or something of the sort), since if I understand correctly the capability is always used, it is just the window size that changes (effectively implementing control of flow). Also \"timely fashion\" - I know that it is probably hard, but it would be nice to have a more precise qualification, or at least a hint of what is timely. [x] 6. ----- stream that it successfully received from its peer. The GOAWAY frame includes an error code that indicates why the connection is FP: \"error code\" - here as well I would have liked a reference to the IANA registry. [x] 7. ----- Exclusive: A single-bit flag. This field is only present if the PRIORITY flag is set. Stream Dependency: A 31-bit stream identifier. This field is only present if the PRIORITY flag is set. Weight: An unsigned 8-bit integer. This field is only present if the PRIORITY flag is set. FP: I would have expected to see some definition of how the fields are used. If this is defined somewhere else, a reference would be good. [x] 8. ----- SETTINGS Frame { Length (24), Type (8) = 4, Unused Flags (7), ACK Flag (1), Reserved (1), Stream Identifier (31), Setting (48) ..., } Setting { Identifier (16), Value (32), } FP: Is there any reason why the Stream Identifier line is not: Stream Identifier (31) = 0, [x] 9. ----- a server does include a value it MUST be 0. A client MUST treat receipt of a SETTINGS frame with SETTINGSENABLEPUSH set to 1 as a connection error (Section 5.4.1) of type PROTOCOLERROR. FP: This is just my curiosity: what is the reason for this stronger requirement - I would think it shouldn't be a problem for the sender if it wants to advertise that it would permit\/support server push. What am I missing? [x] 10. ----- A value of 0 for SETTINGSMAXCONCURRENTSTREAMS SHOULD NOT be treated as special by endpoints. A zero value does prevent the FP: When is it ok that the 0 value is treated as special? [x] 11. ----- set. Upon receiving a SETTINGS frame with the ACK flag set, the sender of the altered settings can rely on the value having been applied. FP: nit s\/value\/values. Also I believe this could be misunderstood - can it be made more precise on the fact that only the values that are present in the received frame with the ACK flag set (and not those that might have been ignored because not understood) have been applied. [x] 12. ----- A receiver MUST treat the receipt of a WINDOWUPDATE frame with an flow-control window increment of 0 as a stream error (Section 5.4.2) FP: nit s\/an\/a [x] 13. ----- FLOWCONTROLERROR (0x3): The endpoint detected that its peer violated the flow-control protocol. STREAMCLOSED (0x5): The endpoint received a frame after a stream was half-closed. FP: would be good to add a reference to the relevant sections. [x] 14. 
----- set after receiving the HEADERS frame that opens a request or after receiving a final (non-informational) status code MUST treat the FP: Where is a \"non-informational status code\" defined? [x] 15. ----- FP: Are the following two sentences in Section 8.1.1. in contraddiction? request or response. Malformed requests or responses that are detected MUST be treated as a stream error (Section 5.4.2) of type PROTOCOLERROR. on the remainder of the request being correct. A server or intermediary MAY use RSTSTREAM -- with a code other than REFUSEDSTREAM -- to abort a stream if a malformed request or response is received. FP: In section 5.4.2. I read: An endpoint that detects a stream error sends a RSTSTREAM frame So the first sentence above implies RSTSTREAM MUST be sent, while the second sentence states RSTSTREAM MAY be sent. [x] 16. ----- their definitions in Sections Section 5.1 of [5.1] and Section 5.5 of [5.5] of [HTTP] respectively and treat messages that contain FP: References need fixing. [x] 17. ----- from the control data of the original request, unless the the FP: nit Remove one \"the\" [x] 18. ----- Note that request targets for CONNECT or asterisk-form OPTIONS requests never include authority information. FP: Please add a reference to 7.1 of [HTTP] as this is the first time \"asterisk-form OPTION\" appear in this document. [x] 19. ----- Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. FP: Do these two sentences contradict each other? In the first sentence I am reading that the server can't create the necessary stream to send the PUSHPROMISE frame, in the second sentence I read that it can? [x] 20. ----- on a DATA frame is treated as being equivalent to the TCP FIN bit. A FP: Can a reference be added to the section where the TCP FIN bit is defined? [x] 21. ----- Content-Type: image\/jpeg ==> - ENDSTREAM Content-Length: 123 + END_HEADERS FP: I think it would be good to add a sentence about the meaning of - and + (which I understand to be flag set or not set) in section 2.2. [x] 22. ----- treated as delimiters in other HTTP versions. An intermediary that translates an HTTP\/2 request or response MUST validate fields according to the rules in Section 8.2 roles before translating a message to another HTTP version. Translating a field that includes FP: is \"roles\" supposed to be there? [x] 23. ----- The CONNECT method can be used to create disproportionate load on an proxy, since stream creation is relatively inexpensive when compared FP: nit s\/an\/a [x] 24. ----- FP: I think it would be good to keep the 3rd paragraph in Section 11 instead of asking the RFC Editor to remove it, just to keep a trace of the registries that have been defined in 7540, since those registries will now reference this document, but this document does not contain all the definitions of the different fields. [x] 25. ---- Comments: Obsolete; see Section 11.1 FP: I would suggest to be explicit and add \"of this document\" (unless links can be maintained in IANA registries Comments fields).\nAll done, closing."} +{"_id":"q-en-http2-spec-9e02f42fabd23faacb75be391ba996add546dc12ee15cade031304b9f4a0567f","text":"Two paragraphs up we mandate the use of PROTOCOL_ERROR, so this is not necessary. 
For ."} +{"_id":"q-en-http2-spec-959d9703908bdecb4dfc3bcc907e3c7f50a7281d9f3e8c4ab91ef4b5f99a58b6","text":"I improved the text along the lines Jörg suggested. However, I realized that the transition should really be in the diagram. That's basically impossible given the nature of the diagram, but it's worth including in the description at least. For .\nThis is a good clarification. One note about a whitespace change but the textual changes are :+1:"} +{"_id":"q-en-http2-spec-e2cc2860faf3b84d55fb62d982932844d7eb62c15b809f27e3449b47e27053d7","text":"For .\n[x] Section 5.1.1., 3rd para: When a stream transitions out of the \"idle\" state, all streams that might have been initiated by that peer with a lower-valued stream identifier are implicitly transitioned to \"closed\". Should one say \"that might have been but were not initiated by that peers\" to make clear that this applies only to un-used streams? [x] Section 5.3.1 Maybe update the section heading to read \"Background on Priority in HTTP\/2 as per RFC7540\"? [x] Section 6.5: The document implies that SETTINGS frames are ACKed in the order they are sent. Should one make this explicit? [x] Last para of 6.5, top of page 37: \"If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGSTIMEOUT.\" Should one provide some intuition on what \"reasonable\" could mean? An example? It seems, an HTTP\/2 sender of a SETTINGS frame could know when this was passed to the TCP layer and may have some insights on application layer RTTs. [x] Section 6.9.1: Concerning flow control, should one say something about avoiding silly windows? [x] Section 8.1.1, p52 1st para: Probably just me, but I am mildly confused by the statement about messages having a non-zero content-length field but still no data in the data frames. The reference section 6.4 of the HTTP semantics draft didn't resolve that. [x] Section 8.2.1, p53: parse error? \"When a request message violates one of these requirements, an implementation SHOULD generate a Section 15.5.1 of 400 (Bad Request) status code [HTTP], [...]\" [x] Section 8.4.2, 4th para: \"A client can use the SETTINGSMAXCONCURRENTSTREAMS setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. Confused: it disables server push, but the server can send PUSHPROMISE frames and then the client is supposed to reset a stream it wasn't willing to accept or capable of accepting? Which stream number would the server legitimately put into the PUSHPROMISE? [ ] There are a few cases where I am wondering if normative language should be used. Examples include [ ] -- 5.4.2, 2nd para, \"sends an RST_STREAM frame\" [x] -- 6.8, p43, 1st para, \"the server can send another GOAWAY\" [ ] -- 6.8, p43, 2nd para, \"the sender can discard frames\" [ ] -- 9.1, 3rd para, \"clients can create\" [ ] There may be further occurrences of \"can\" or similar.\nFYI NAME I've updated this post to have a list of checkboxes for \"addressed\".\nI've reviewed the changes here and I'm happy with 9\/14. 
The others I don't think need to be addressed."} +{"_id":"q-en-http2-spec-bebdc07cfa0f5a93d4ac8b686d4f1fd1b287bfbf5ec912c6b813ca4898f620af","text":"This isn't TCP RTT based. For .\n[x] Section 5.1.1., 3rd para: When a stream transitions out of the \"idle\" state, all streams that might have been initiated by that peer with a lower-valued stream identifier are implicitly transitioned to \"closed\". Should one say \"that might have been but were not initiated by that peers\" to make clear that this applies only to un-used streams? [x] Section 5.3.1 Maybe update the section heading to read \"Background on Priority in HTTP\/2 as per RFC7540\"? [x] Section 6.5: The document implies that SETTINGS frames are ACKed in the order they are sent. Should one make this explicit? [x] Last para of 6.5, top of page 37: \"If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGSTIMEOUT.\" Should one provide some intuition on what \"reasonable\" could mean? An example? It seems, an HTTP\/2 sender of a SETTINGS frame could know when this was passed to the TCP layer and may have some insights on application layer RTTs. [x] Section 6.9.1: Concerning flow control, should one say something about avoiding silly windows? [x] Section 8.1.1, p52 1st para: Probably just me, but I am mildly confused by the statement about messages having a non-zero content-length field but still no data in the data frames. The reference section 6.4 of the HTTP semantics draft didn't resolve that. [x] Section 8.2.1, p53: parse error? \"When a request message violates one of these requirements, an implementation SHOULD generate a Section 15.5.1 of 400 (Bad Request) status code [HTTP], [...]\" [x] Section 8.4.2, 4th para: \"A client can use the SETTINGSMAXCONCURRENTSTREAMS setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. Confused: it disables server push, but the server can send PUSHPROMISE frames and then the client is supposed to reset a stream it wasn't willing to accept or capable of accepting? Which stream number would the server legitimately put into the PUSHPROMISE? [ ] There are a few cases where I am wondering if normative language should be used. Examples include [ ] -- 5.4.2, 2nd para, \"sends an RST_STREAM frame\" [x] -- 6.8, p43, 1st para, \"the server can send another GOAWAY\" [ ] -- 6.8, p43, 2nd para, \"the sender can discard frames\" [ ] -- 9.1, 3rd para, \"clients can create\" [ ] There may be further occurrences of \"can\" or similar.\nFYI NAME I've updated this post to have a list of checkboxes for \"addressed\".\nI've reviewed the changes here and I'm happy with 9\/14. The others I don't think need to be addressed."} +{"_id":"q-en-http2-spec-c2f00d16a8305d6dcece0e881163e9ef0b7633d781192613a214152512c2d09b","text":"And an update to the section reference. For .\n[x] Section 5.1.1., 3rd para: When a stream transitions out of the \"idle\" state, all streams that might have been initiated by that peer with a lower-valued stream identifier are implicitly transitioned to \"closed\". Should one say \"that might have been but were not initiated by that peers\" to make clear that this applies only to un-used streams? 
[x] Section 5.3.1 Maybe update the section heading to read \"Background on Priority in HTTP\/2 as per RFC7540\"? [x] Section 6.5: The document implies that SETTINGS frames are ACKed in the order they are sent. Should one make this explicit? [x] Last para of 6.5, top of page 37: \"If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGSTIMEOUT.\" Should one provide some intuition on what \"reasonable\" could mean? An example? It seems, an HTTP\/2 sender of a SETTINGS frame could know when this was passed to the TCP layer and may have some insights on application layer RTTs. [x] Section 6.9.1: Concerning flow control, should one say something about avoiding silly windows? [x] Section 8.1.1, p52 1st para: Probably just me, but I am mildly confused by the statement about messages having a non-zero content-length field but still no data in the data frames. The reference section 6.4 of the HTTP semantics draft didn't resolve that. [x] Section 8.2.1, p53: parse error? \"When a request message violates one of these requirements, an implementation SHOULD generate a Section 15.5.1 of 400 (Bad Request) status code [HTTP], [...]\" [x] Section 8.4.2, 4th para: \"A client can use the SETTINGSMAXCONCURRENTSTREAMS setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. Confused: it disables server push, but the server can send PUSHPROMISE frames and then the client is supposed to reset a stream it wasn't willing to accept or capable of accepting? Which stream number would the server legitimately put into the PUSHPROMISE? [ ] There are a few cases where I am wondering if normative language should be used. Examples include [ ] -- 5.4.2, 2nd para, \"sends an RST_STREAM frame\" [x] -- 6.8, p43, 1st para, \"the server can send another GOAWAY\" [ ] -- 6.8, p43, 2nd para, \"the sender can discard frames\" [ ] -- 9.1, 3rd para, \"clients can create\" [ ] There may be further occurrences of \"can\" or similar.\nFYI NAME I've updated this post to have a list of checkboxes for \"addressed\".\nI've reviewed the changes here and I'm happy with 9\/14. The others I don't think need to be addressed."} +{"_id":"q-en-http2-spec-08a1dd25a353d6541421e2f1db53649c7b2be3a76995950d3225f6830744ef2c","text":"This is confusing; hopefully this rewrite is a little less so. For .\n[x] Section 5.1.1., 3rd para: When a stream transitions out of the \"idle\" state, all streams that might have been initiated by that peer with a lower-valued stream identifier are implicitly transitioned to \"closed\". Should one say \"that might have been but were not initiated by that peers\" to make clear that this applies only to un-used streams? [x] Section 5.3.1 Maybe update the section heading to read \"Background on Priority in HTTP\/2 as per RFC7540\"? [x] Section 6.5: The document implies that SETTINGS frames are ACKed in the order they are sent. Should one make this explicit? 
[x] Last para of 6.5, top of page 37: \"If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGSTIMEOUT.\" Should one provide some intuition on what \"reasonable\" could mean? An example? It seems, an HTTP\/2 sender of a SETTINGS frame could know when this was passed to the TCP layer and may have some insights on application layer RTTs. [x] Section 6.9.1: Concerning flow control, should one say something about avoiding silly windows? [x] Section 8.1.1, p52 1st para: Probably just me, but I am mildly confused by the statement about messages having a non-zero content-length field but still no data in the data frames. The reference section 6.4 of the HTTP semantics draft didn't resolve that. [x] Section 8.2.1, p53: parse error? \"When a request message violates one of these requirements, an implementation SHOULD generate a Section 15.5.1 of 400 (Bad Request) status code [HTTP], [...]\" [x] Section 8.4.2, 4th para: \"A client can use the SETTINGSMAXCONCURRENTSTREAMS setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. Confused: it disables server push, but the server can send PUSHPROMISE frames and then the client is supposed to reset a stream it wasn't willing to accept or capable of accepting? Which stream number would the server legitimately put into the PUSHPROMISE? [ ] There are a few cases where I am wondering if normative language should be used. Examples include [ ] -- 5.4.2, 2nd para, \"sends an RST_STREAM frame\" [x] -- 6.8, p43, 1st para, \"the server can send another GOAWAY\" [ ] -- 6.8, p43, 2nd para, \"the sender can discard frames\" [ ] -- 9.1, 3rd para, \"clients can create\" [ ] There may be further occurrences of \"can\" or similar.\nFYI NAME I've updated this post to have a list of checkboxes for \"addressed\".\nI've reviewed the changes here and I'm happy with 9\/14. The others I don't think need to be addressed."} +{"_id":"q-en-http2-spec-1a121b8c0e309e1c9fe67162817dc89a2ed64d3b80714616128db77b6fa353a6","text":"This is the one case in Jörg's review where I think we need text. In the other places, the text is secondary to other normative language and should not be restated in direct terms (or it would need to be far more precise than it is). For .\n[x] Section 5.1.1., 3rd para: When a stream transitions out of the \"idle\" state, all streams that might have been initiated by that peer with a lower-valued stream identifier are implicitly transitioned to \"closed\". Should one say \"that might have been but were not initiated by that peers\" to make clear that this applies only to un-used streams? [x] Section 5.3.1 Maybe update the section heading to read \"Background on Priority in HTTP\/2 as per RFC7540\"? [x] Section 6.5: The document implies that SETTINGS frames are ACKed in the order they are sent. Should one make this explicit? [x] Last para of 6.5, top of page 37: \"If the sender of a SETTINGS frame does not receive an acknowledgement within a reasonable amount of time, it MAY issue a connection error (Section 5.4.1) of type SETTINGSTIMEOUT.\" Should one provide some intuition on what \"reasonable\" could mean? An example? 
It seems, an HTTP\/2 sender of a SETTINGS frame could know when this was passed to the TCP layer and may have some insights on application layer RTTs. [x] Section 6.9.1: Concerning flow control, should one say something about avoiding silly windows? [x] Section 8.1.1, p52 1st para: Probably just me, but I am mildly confused by the statement about messages having a non-zero content-length field but still no data in the data frames. The reference section 6.4 of the HTTP semantics draft didn't resolve that. [x] Section 8.2.1, p53: parse error? \"When a request message violates one of these requirements, an implementation SHOULD generate a Section 15.5.1 of 400 (Bad Request) status code [HTTP], [...]\" [x] Section 8.4.2, 4th para: \"A client can use the SETTINGSMAXCONCURRENTSTREAMS setting to limit the number of responses that can be concurrently pushed by a server. Advertising a SETTINGSMAXCONCURRENTSTREAMS value of zero disables server push by preventing the server from creating the necessary streams. This does not prohibit a server from sending PUSHPROMISE frames; clients need to reset any promised streams that are not wanted. Confused: it disables server push, but the server can send PUSHPROMISE frames and then the client is supposed to reset a stream it wasn't willing to accept or capable of accepting? Which stream number would the server legitimately put into the PUSHPROMISE? [ ] There are a few cases where I am wondering if normative language should be used. Examples include [ ] -- 5.4.2, 2nd para, \"sends an RST_STREAM frame\" [x] -- 6.8, p43, 1st para, \"the server can send another GOAWAY\" [ ] -- 6.8, p43, 2nd para, \"the sender can discard frames\" [ ] -- 9.1, 3rd para, \"clients can create\" [ ] There may be further occurrences of \"can\" or similar.\nFYI NAME I've updated this post to have a list of checkboxes for \"addressed\".\nI've reviewed the changes here and I'm happy with 9\/14. The others I don't think need to be addressed."} +{"_id":"q-en-moq-requirements-746882014653e670be37e8d65cd5182841ace7f2119a77cc23c147a22b1ac091","text":"This adds a few requirements ripped from my blog post, and cleans out use cases and such.\nI have some thoughts about the content of this PR, but let's merge it, and see if my thoughts turn into text :-)"} +{"_id":"q-en-moq-requirements-775d921f6e5012fb72cc65c6619f1d677220a5a4cd88df30e16506e368958fe9","text":"NAME I think this is ready for review and merging .\nDuplicated \"without without waiting\" in Section 2.1. I wish I had spotted \"where a user wishes to observe or control the graphical user interface of another computer through local user interfaces\" in 3.1.2 - I'd suggest saying this as \"where a user wishes to observe or control the graphical user interface of another computer through user interfaces that are local to the user\". I think \"This may include audio from both microphone(s) and\/or cameras, or may include \"screen sharing\" or inclusion of other content such as slide, document, or video presentation\" in 3.1.3 might need adjusting. Doesn't this need to say, \"This may include audio from microphone(s) and\/or cameras, and may include \"screen sharing\" or inclusion of other content such as slide, document, or video presentation\"? 
(Also, we named everything except video as a media type, but if we're changing that paragraph anyway, we can add video as well) in 3.2, I had difficulty parsing \"As this has a much larger total number of participants - as many as Live Media Streaming , but with the bi-directionality of conferencing, this should be considered a \"hybrid\".\" I wasn't sure \"much larger than what?\" Is this intended to say something like \"This use case can have an audience as large as Live Media Streaming as described in Section 3.3.3, but also relies on the interactivity and bi-directionality of conferencing as in Video Conferencing as described in Section 3.1.3. For this reason, this type of use case can be considered a \"hybrid\". (if we're forward-referencing section 3.3.3 in Section 3.2, this might be clearer if we swapped the order of Sections 3.2 and 3.3) In section 3.3.1, are there any non-hypothetical media types beyond \"audio and\/or video\"? In Section 3.3.3, I asked you to distinguish between the live edge and the trailing edge (thank you for that), but I didn't realize that \"either because the local player falls behind edge\" was now unqualified. Is this \"either because the local player falls behind the live edge**\"? In 4.9.1, \"this is infeasible in many use cases where provisioning of client TLS certificates is unsupported or infeasible.\" has two \"infeasible\"s. Perhaps one could be \"impractical\"? Your choice which, of course.\nIn submitting the -00 working group version, we tripped over an incompatibility between XML2RFC and google-i18n-address 3.0.0. The details are at URL and at URL I took the shortcut of removing the country: attributes for the authors in -00, but (per the comments on those issues, a version of XML2RFC that has been deployed, so we can add our country attributes back when we're fixing the nits identified in this issue. And NAME says it's \"The Netherlands\", but I'll let NAME and NAME wrestile that one out ... :zippermouthface:"} +{"_id":"q-en-moq-requirements-e411cdc8af3065408c007d980f58b1d9bc6609349e4310e41acd328e138d18aa","text":"Note - this is a rough draft, although I have done some edits while adding this material. I do have notes for to-dos, and I'm pretty sure I want to figure out how to do the diagrams in some way that will format with straight lines, for starters.\nI'm happy to just merge this in now as is just to unblock other work and make progress, and make any adjustments later because there's nothing that stands out as controversial, NAME are you okay with this?\nWe have a plan on how to resolve our terminology migration for in this PR\nBased on review comments from NAME on the mailing list .\nDraft is in suhasHere\/moq-drafts.\nNAME - NAME and I are starting with the scenarios from URL\nNAME - I think it would be good to close this issue when we have a reasonable plan for where the material from the scenarios draft goes, and then handle further changes via issues and PRs as usual. Does that make sense? NAME NAME and NAME - I wanted to let you know that NAME and I are (finally, finally!) merging text from the scenarios draft in the requirements draft, as NAME suggested. I think I have homes for almost all of the text from the requirements draft, but I do have some work to do, so that the material is in the right place, and that it flows with text from the previous version of the requirements draft. I am, of course, always interested in comments. 
And thank you each for the work you did on this material.\nAwesome - let me know if something I can do to help\nThanks NAME .. look forward to the updates. +1 to NAME , please let me know if we can be of any help Here is my review. Firstly, I'd like to say that this document is badly needed. The requirements document doesn't get into the level of detail that this document does, and I think that this level of detail is important to make progress on the design. Terminology I think it would be useful to define some of the terminology, including terms like \"track\" and \"media switch\". Section 2.2 A few questions: Are the trackIDs \"t1, t2, t3\" separate MediaStreamTracks, or are they just multiple encodings of the same track? Are there scenarios in which it would be useful to be able to ingest multiple MediaStreamTracks (either audio or video)? I ask because the RUSH draft doesn't support ingestion of multiple tracks and I've been told that this is a problem by some customers. Section 2.4 A media switch, sourcing tracks that represent a subset of tracks from across all the emitters. Such subset may represent tracks representing top 5 speakers at higher qualities and lot of other tracks for rest of the emitters at lower qualities. [BA] The use of the term \"media switch\" suggests to me that it is forwarding tracks, rather than sourcing (e.g. transcoding) them. Can you clarify? subscribing to subset of the tracks [BA] I think there may be an implicit assumption here, which is that the receiver only indicates the track it wishes to subscribe to, rather than indicating the quality it wishes to receive. Above setup brings in following properties on the data model for the transport protocol [BA] The paragraphs that follow appear to describe different ways that the system could be implemented, rather than \"properties on the data model\". Do you mean to say that the design should be able to handle any of the approaches described? Media Switches to source new tracks but retain media payload from the original emitters. This implies publishing new Track IDs sourced from the SFU, with object payload unchanged from the original emitters. [BA] Is the term \"source new tracks\" implied by changing the Track IDs? If the payload is retained, this seems like forwarding to me, rather than \"sourcing new tracks\". Media Switches to propagate subset of tracks as-is from the emitters to the subscribers. This implies Track IDs to be unchanged between the emitters and the receivers. [BA] What does \"as-is\" mean here? Are you referring only to the lack of a change in the \"Track ID\"? If the switch drops some frames in order to reduce quality (and bandwidth consumption) is it still \"as-is\"? Subscribers to explicitly request multiple appropriate qualities and dynamically move between the qualities during the course of the session [BA] Is it a goal to be able to support such a \"receiver driven\" approach, as well as the other two approaches, which are sender driven? Section 3.1 For example, rather than waiting for 30 seconds before connecting, the user might quickly download the \"key\" frames of the past 30 seconds and replay them in order to \"synchronize\" the video decoder. [BA] I think this sentence points out that the term \"access point\" may need to be clarified, because in such a scenario, the term \"access point\" may have a different meaning in stored media than in media transmitted over the wire to speed up join latency. Section 3.2 It is possible to use groups as units of congestion control. 
When the sending strategy is understood, the objects in the group can be assigned sequence numbers and drop priorities that capture the encoding dependencies, such that: an object can only have dependencies with other objects in the same group, an object can only have dependencies with other objects with lower sequence numbers, an object can only have dependencies with other objects with lower or equal drop priorities. [BA] This text seems to be mixing distinct concepts such as dependencies and layering. In layered coding, there is typically only a requirement not to depend on higher layers, but it's possible to depend on frames in lower layers. Mixing drop priority (a transport concept) with groups seems confusing to me. The main drawback is that if a packet with a given drop priority is actually dropped, all objects with higher sequence numbers and higher or equal drop priorities in the same group must be dropped. [BA] Leaving gaps in the sequence number space does seem problematic, but that's a choice, not a requirement. If the group duration is long, this means that the quality of experience may be lowered for a long time after a brief congestion. If the group duration is short, this can produce a jarring effect in which the quality of experience drops periodically at the tail of the group. [BA] What does \"group duration\" mean? Are you referring to time between I-frames? This paragraph seems to assume that group duration is a constant rather than something which can be set dynamically. Section 4 Some video codecs have a complex structure. [BA] Are you referring to codecs, or encodings? It would send for example: [BA] A picture might help here. I think you're referring to the scalability mode known as \"L2T2h\": URL Of course, we could encode these dependencies as properties of the object being sent, stating for example that \"object 17 can only be decoded if objects 16, 11 and 7 are available.\" However, this approach leads to a lot of complexity in relays. We believe that a linear approach is preferable, using attributes of objects like delivery order or priorities. [BA] Why would this lead to complexity in relays in a receiver-driven scenario? The relay would only forward the layers that the receiver asked for, and wouldn't have to store state relating to the receivers. Also, the real issue here isn't dropping layers, which can occur at any point. The reason dependencies (and more) are needed (and why the \"linear approach\" doesn't work) comes when it is desired to add layers. It is of course possible just to start forwarding additional layers, but that doesn't improve quality unless the forwarded frames can actually be decoded. To figure this out, whoever is in charge (receiver or relay) needs to be able to determine the \"upswitch points\". This also implies feedback from the receivers to the encoder (e.g. LRR). Section 4.2 We propose to express dependencies using a combination of object number and object priority. [BA] How would this permit restoration of video quality after congestion clears up? I don't think you have enough information. Section 4.3 . Relay behavior In case of congestion, the relay will use the priorities to selectively drop the \"least important\" objects: [BA] Dropping is easy, it's adding layers back (and decoding them) that's hard."} +{"_id":"q-en-moq-requirements-3f0ff00aafd4f591d11ec0e8729fc934711dc4becb9c95019ec13e08555c51e5","text":"We have an expectation that media itself is encrypted between a client and a server. 
We have identified a need for MOQ relays to have access to at least some, but perhaps not all, media metadata, in order to make forwarding decisions. So, for instance, timing metadata, but not names of media. If we intend to support media translators as well, the media translators will also need access to the media itself. It would be useful to be clear about what security relationships will be needed in various deployment architectures.\nI think this is dependent on the conversation about MOQ entities and their functionalities, and what they need access to, in order to perform their functions, that we're having (especially starting at the January MOQ interim)."} +{"_id":"q-en-moq-requirements-36898ea9b45b016d2d7dcf8da3be7be5ebad3f44140497009a3b8fb13eceba8a","text":"I suspect this may require a little bit more thought, but want to sense-check the direction. NAME any feedback you have would be welcome.\nNAME NAME thank you both for your feedback, I've rewritten the block again based on your feedback and hope this is a lot more clearer.\nMoQ Transport, in combination with a MoQ Streaming Format, needs to support mechanisms to allow ad insertion. This ad insertion: Should be compliant with current IAB and SCTE standards such as SCTE-35, which specifies signals that can be used to identify advertising breaks, advertising content, and programming content. VAST (Video Ad Serving Template) VPAID (Video Player Ad-serving Interface Definition) VMAP (Video Multiple Ad Playlist) Should be able to participate in both server-side and client side ad insertion schemes. The server-side solution may have a mode in which the ad is stitched seamlessly into the primary content and the ad boundaries are indistinguishable to the client. Should be dynamic in nature, allowing for just-in-time decisions to be made for choosing which content to insert. Should allow access to, and compatibility with, a subset of existing ad inventory.\nSome of my initials thoughts: To stick to the separation model of \"church and state\", we should also have agility to support more than SCTE-35\/IAB specs in the transport layer Is it required that all streaming formats must support event tracks? Or is the requirement here that just the transport through the catalog enables it? Is there a benefit to describing these requirements more generically - although enabling advertising for live broadcast is clearly the obvious use case having this functionality could be useful for other reasons, such as: In teleconferencing, a presenter switching one (or all) viewer's video to a different source such as the slide show or the presenters camera Falling viewers over to another source when the current one ends e.g Twitch's \"raiding\"\nGiven the PR above is merged, I'm closing this noting the above issue on ingest will carry on."} +{"_id":"q-en-moq-requirements-b97d877192bbdc646905872fd7edbaa2142b4e18418dff5c3c4f45bf9c46f9a9","text":"Hi, NAME I think this looks mostly fine (it looks like the abstract is a tiny bit garbled, but we can fix stuff like that in an editorial pass). Based on the changed title, I'm assuming that we aren't worrying about SRT and RUSH in this draft - that's probably a good call, because I don't think their requirements are going to look much like RTP over QUIC requirements. There will be some places that I will need to tighten up what I wrote, because they were still in scope when I was typing. 
I'll merge this, and see where I end up in an hour or two of typing.\nI needed a bit more context for this: I'm probably confused by \"The protocol\". URL says it provides this authentication using TLS 1.3 handshake. Are you thinking of something in addition to the QUIC cryptographic handshake? (Assuming the definition of \"Media Transport Protocol\" in ), something provided by the Media Transport Protocol level? Or something else?\nSomething in addition to the QUIC cryptographic handshake (in which I am talking about TLS mutual authentication here, where client presents a valid certificate to server, and verifies the server certificate). Consider: Existing protocols in this space usually offer some \"shared secret\" of some variety as part of either\/or an authentication handshake, and encryption of media: which are just secret values set in the field during connection handshake (see § 6.4) - but note they also support a cert-based approach which more closely matches TLS mutual auth where the Stream ID and user define the authorisation Before RIST Main profile there hasn't been a need for a lot of certificate-based authentication and so existing folk in field would use these types of shared secret patterns, and that having to keep PKI for devices may be problematic. Consider things like encoding hardware deployed in remote locations. Other options could be considered, like using a more SIP-like approach, but we would have to be careful not to bring the rest of the SIP kitchen sink with its round peg that might not fit in the square QUIC slot. I get this requirement presently is a bit flaky in part because the requirements are a bit unclear. Perhaps I put a little more description on the requirement, or remove it from the draft now, and leave it for a \"ask later\". Thoughts?\nNAME I think (1) adding the explanation that this is beyond the authentication mechanism provided by QUIC, and (2) leaving it in the draft, with a note that more detail is needed, and will be added as discussion proceeds, is all that's needed for now.\nClosing this as I think it has been addressed, please by all means speak up if we still should do work here.\nI THINK what we are doing, is roughly Start with current \"Media - Media Transport Protocol\", where \"Media Transport Protocol\" might be RTP Encapsulate in QUIC, to give \"Media - Media Transport Protocol - QUIC\" If that's right, we should be using \"Media Transport Protocol\" to describe the first layer under \"Media\", and what we're doing might better be described as something like \"Media Transport Protocol Encapsulation\" (\"in QUIC\"). Whatever we choose, we should be consistent - the Abstract says which would make me believe that we're designing a replacement for existing media transport protocols, but then under \"Non-requirements, we have I think that the second quote is correct, so we just need to be clearer in the first quote.\nI agree with this, and it should be fixed. Would you like me to clean it up or do you want to roll it in with other changes?\nHi, NAME would it be OK if you did this cleanup? 
I think adding the picture of the two protocol stacks, in some form, will be super helpful for readers.\nI think we can close this as our nomenclature for the document is kind of done now the -00 is out.\nWe have been talking about \"Media Over QUIC\" as a broad term that would include any media transport protocol running over QUIC, which (apparently) would include RTP over QUIC SRP over QUIC RUSH, which runs over QUIC If we are doing \"Media over QUIC\", all of these (and potentially others) would be in scope. If the scope of this document is \"Media over RTP over QUIC\", that would be fine, but we should say that. Thoughts?\nI think we've fixed this one - I'll close it for now."} +{"_id":"q-en-moq-requirements-700e246ae631027f509d467d59172fee47ed7c1cfb4ac1ec011675be06109691","text":"One more use case \"Distribution withing a platform\" is suggested. It does not seem to be covered by other use cases, but might be worth describing it as well. Some minor suggestions like UDP multicast instead of QUIC as the last mile transport, etc.\nThanks!"} +{"_id":"q-en-moq-requirements-4a202ec2d31bc371d0759491f3cdbc5e42cbcf983191ab42411c52fecce1cdc8","text":"This is just a starting point, but brings the document closer to alignment with our spreadsheet. I've also added some additional details which should cover themes raised in the current BoF agenda."} +{"_id":"q-en-moq-requirements-8ed7d41199bca7caa585a14626381e4b56a93b5dae106a4914e391d3233acb2c","text":"NAME - Please let me know if this seems about right. One thing that's worth mentioning to you and to NAME - the latency characterizations that we wrote did not make use of the new \"ultra-ultra-low latency\" categories. I changed the one for gaming to ull100 (\"ultra-low-latency under 100 ms\") based on conversations at IETF 112, but other people would be better choices to review the characterizations of the use cases they're most interested in.\nThe opcons drafts uses 4 bands: ultra low-latency (less than 1 second) low-latency live (less than 10 seconds) non-low-latency live (10 seconds to a few minutes) on-demand (hours or more) I think these bands lose some granularity that is useful. For example, a functional video call or interactive game probably needs much less than 1s latency, while low latency live may span lower than 1s in some cases?\nNAME - that's very true. The opcons draft was scoped for \"streaming operators\", and although we've talked (in MOPS) about doing a separate draft for low-latency streaming, we haven't done that yet (in MOPS). That may be for the best, if MOQ turns out to be \"not ABR, less than 500 ms\", or something like that. I'm also one of the opcons drafts, so I'm thinking we should add at least one, and probably two, lower-than-ultra-low(!!!) latency bands, and explain why we're doing that. I'm assigning this to me, so expect a PR soon.\nThe industry is terrible at naming things. \"ultra-low\", \"low\", \"non-low\", I mean really? It's all terrible marketing rather than an attempt to classify protocols. I like that the opcons tries to reuse and simplify the terminology but... latency is a complicated. Some protocols have artificial delays, but for the most part, protocols are defined by their ability to deal with congestion. A protocol like RTMP will be \"ultra low-latency\" within a datacenter or for a user with a perfect network connection, while it will be \"low-latency\" for somebody on a cellular connection, while it will be \"non-low-latency\" for somebody going through a tunnel... 
all of which depends on the implementation. At the end of the day, these lines in the sand don't add to the discussion. I would rather focus on the use-cases like \"having a conversation with someone\" rather than arguing over millisecond budgets."} +{"_id":"q-en-moq-requirements-fd1d00f0e51011cbd98b531d0ed56f4aa630f55649460b1aff2fa769fc4a24b5","text":"The section \"Moving Beyond RTP over QUIC\" should cover the position of assessing the feasibility of the RTP family of protocols, and the size of work of any possible changes that may be required for them to be incorporated.\nThis We're going to need more work on RTP versus other transport protocols, but we definitely need to make this change!\nI've already ranted enough about this section but I figured I'd file a ticket so it doesn't get forgotten. :)\nI think what needs changing is that this is removed as a non-requirement, however I still think that we have to make it clear that a consideration \"to RTP or not to RTP\" must be described in the document.\nI think we've now covered this, please chime up if there was anything missed."} +{"_id":"q-en-moq-requirements-3b90d5bf03edc1e0f9860bf2ec6c9db27c1550a08f569d6ce90b1fc57ed71b7c","text":"This should address\nAlthough it's true that RUSH predates DATAGRAM, streams were an intentional choice as RUSH implementations can leverage standard QUIC library reassembly of messages longer than one MTU.\nAbsolutely. It's disingenuous to say that RUSH is using QUIC streams because QUIC datagrams didn't exist yet. QUIC streams provide fragmentation, reliability, flow control, and cancellation (error codes). In fact, you could (but shouldn't) use QUIC streams to deliver individual RTP packets without much downside.\nThe above commits should fix this, please let me know if I got it wrong."} +{"_id":"q-en-moq-requirements-4e316771d8b1280ee978913351656110bc112c5e79c3df3fc1514a72306a5405","text":"Although it's true that RUSH predates DATAGRAM, streams were an intentional choice as RUSH implementations can leverage standard QUIC library reassembly of messages longer than one MTU.\nAbsolutely. It's disingenuous to say that RUSH is using QUIC streams because QUIC datagrams didn't exist yet. QUIC streams provide fragmentation, reliability, flow control, and cancellation (error codes). In fact, you could (but shouldn't) use QUIC streams to deliver individual RTP packets without much downside.\nThe above commits should fix this, please let me know if I got it wrong."} +{"_id":"q-en-moq-requirements-9353312bac602d0e82cdf5a0be023eea951df855fb5523d599b643d9c7a18655","text":"(related to )\nNAME could you please take a look and let me know if it's an accurate description?\nI'm merging this, so we make sure that WARP is included. If we got anything wrong, please let us know - we'll make sure it gets fixed.\nLuke Curley has written . It's QUIC based, using CMAF for encapsulation, stream based, has no ALPN amongst other features. That said, it should be included in the \"prior art\" section of the document.\nAgreed!\nSome things worth considering, but you guys decide what is worth mentioning. Warp is the only protocol that is not frame based. Instead it attempts to map the GoP structure to QUIC streams. High quality media is basically linear within a GoP in order to maximize compression, which fits very nicely with QUIC's streams. There's no point delivering frames out of order if they cannot be decoded out of order. Warp uses prioritization instead of deadlines to achieve dynamic latency. 
This removes the need for feedback outside of QUIC ACKs. It avoids a whole class of problems caused by dropped\/delayed feedback. Warp uses CMAF segments for full compatibility with HLS\/DASH, avoiding transmuxing on edge or at origin. Warp uses the full suite of QUIC functionality. This includes fragmentation, congestion control, flow control, streams (multiplexing + cancellation), etc. Some other protocols use QUIC datagrams and disable everything else, effectively using QUIC as a complex DTLS replacement. Warp supports a bidirectional API using QUIC streams however this was not in scope for the first version of the draft. I wanted to avoid bike shedding to focus instead on the above features. The actual version in production allows the player to manually choose the rendition and pause\/play. Warp uses WebTransport at the moment, but we can decide if it's better suited for native QUIC (with ALPN) or even HTTP\/3. Warp would even work over HTTP\/2 (and Http2Transport) albeit with additional head-of-line blocking."} +{"_id":"q-en-moq-requirements-648018cad0e06b898587f276043bc261a58d1a396ce439138b2bfb112805b3f2","text":"Media over QUIC is just the name of a mailing list and a general area of discussion. In some places, the draft refers to it ambiguously where a read may interpret it as the name of a protocol or working group. Maybe can call out that it is not those things and disambiguate references with \"Media over QUIC mailing list\" or \"Media over QUIC problem space\"?\nI think the issue (or at least the resolution) is also tied to - the objectionable (or at least the obsolete) text in needs to change, and making the changed text also define all the ways we've used the label \"MOQ\" seems helpful. Expect a PR that attempts to resolve both.\nStreams are equally capable of moving media, though the number of different ways to design a protocol have expanded with the addition of datagrams.\nAbsolutely. It might be worth mentioning that QUIC datagrams in their current state have caveats. They are congestion controlled, elicit acknowledgements, and these acknowledgements . QUIC datagrams certainly have their benefits, but protocols need to be designed to use them properly. Otherwise SRTP and DTLS might be better options.\nThis term may imply latency constraints that may not exist in all use cases. Further, I wonder if the framing as 'encapsulation of a media transport protocol as a payload' jives with what Victor mentioned during our call -- that part of what's interesting is using QUIC as a transport protocol.\nNAME - two things (one each, for your two excellent points): this draft started out as a survey of what was being described as \"Media Over QUIC\" last summer (after the IETF 111 side meeting), then lurched downhill to being focused on RTP over QUIC, and then lurched BACK uphill, but not all the way uphill, to focusing on a subset of what's been proposed under the banner of \"MOQ\". Thanks for helping us spot places in the draft that haven't completed the trek! Agreed about reframing the framing, and if we suck any of the other use cases in - there was a lot of discussion around needing help with gaming for sub-100 ms latencies at the IETF 112 side meeting, so that might happen - the current framing might be even less helpful. Definitely worth fixing. Expect a PR soon.\nI've been using the phrase \"live media\", defining it as \"media that is actively being produced and consumed\". 
It's not perfect but neither is drawing a numeric latency number in the sand.\nNAME - just so that I understand (better) - are you saying that the media is being consumed at (roughly) the rate it's being produced? Any answer is fine, I just want to make sure I know what we're talking about here.\nWe have been talking about \"Media Over QUIC\" as a broad term that would include any media transport protocol running over QUIC, which (apparently) would include RTP over QUIC SRP over QUIC RUSH, which runs over QUIC If we are doing \"Media over QUIC\", all of these (and potentially others) would be in scope. If the scope of this document is \"Media over RTP over QUIC\", that would be fine, but we should say that. Thoughts?\nI think we've fixed this one - I'll close it for now."} +{"_id":"q-en-moq-requirements-36a766ee5dff4301f734e4f9d55feaaafdc97a01c3f532ed56b47725954b78d5","text":"Based on feedback from NAME\nThat looks fabulous to me - merging now. There's not an issue for this, right?"} +{"_id":"q-en-moq-requirements-492222ca00aab7261040376677ba5ea8fbb1efc1d651a5cf7f3ba6f49a3739c7","text":"It's likely that only NAME and I will care about this, but I think it matters :grin:"} +{"_id":"q-en-moq-requirements-20cdd18e449f775baa449010c8a67391bdb01f7cd2ce5213fffd7f07564436f7","text":"We should talk about whether the MOPS latency constraints make sense, or whether we should stop suffering and switch to something that's not about transport latency, but more about end-to-end workflow duration, which is what I think we're talking about in the \"On Demand Media\" section. But we can do that another day.\nI think it's hard to really define latency for on demand for a whole host of reasons: The non-functional requirements for a system to process an on demand asset can vary dramatically. For example, it's common for news clips to have an SLA of clip duration to ingest, transcode, deliver, and be playable. Whereas for high resolution content, this stage of ingest and processing can take many multiples of duration and may be done hours\/days in advance of its availability to playback. It's not really live, so the measurement of \"latency\" is a bit weird here. The latency between the origin storage serving the media to presentation on screen is, in general irrelevant from a playback perspective provided it doesn't stall out playback due to poor network conditions. Throughput and jitter have a greater affect."} +{"_id":"q-en-moq-requirements-8a48581034216fbd9637a004054873265ee8b55fbe349861e982f021b6167845","text":"From NAME\nNAME and NAME - I agree with Magnus in his email (cut and pasted in the description), where he said \"what about RMTP and SRT?\", but I'm thinking that since we already have a \"Why QUIC?\" section, we should probably have a \"Why NOT other protocols\" section, rather than trying to talk about that in (what was) Section 6.1. That could also conveniently map into saying why we're not looking at \"Interactive Media\" in this document (and, transitively, in the BOF), and why we're not looking at \"On Demand\" (Media) in this document (and in the BOF). Please let me know if this is a BAD idea soon (I'm typing PRs).\nNAME for media ingest, Magnus has a good point about covering not just HLS and DASH, but also making (even brief) reference to non-segmented protocols like RTMP and SRT (but really, this can be expanded to RIST, ZiXi, etc etc etc). 
This can just be a sentence or two; I'm happy to write it.\nNAME - I had a complicated day yesterday, which became less complicated last night, so I'm not coherent enough to type text. I'll see what I can do in the next couple of hours, and if I can include these NAME suggestions.\nThe current definition of latency targets in the draft is based on specific numeric ranges. Those do make sense (since they ultimately come from use case requirements), but I believe we should focus more on what consequences those have for the protocol design. Reflecting on our experience with QUIC used for live video vs QUIC used for video conferencing, my proposed definition of the latency target for live video would be \"latency an order of magnitude higher than one RTT\". Two major examples of why this is important: When the target latency is on the order of one RTT, it makes sense to use FEC and codec-level packet loss concealment. When the target latency is in the range of many, many RTTs, selectively retransmitting only lost packets is almost always better than FEC. When the target latency is on the order of one RTT, it is impossible to use congestion control schemes like BBR, since BBR has probing mechanisms that rely on temporarily inducing delay and amortizing the consequences of that over multiple RTTs. I think the first point illustrates nicely why protocols like RTP have low-level control over packetization, while the existing MoQ proposals defer it entirely to the QUIC stack.\nNAME - that's super helpful. It helps us explain a few things - why we need to know this, in order to explain why we're shooting for One Protocol, and why we think the first three use cases in the draft are different (they're all RTPish), and why we're not looking at RTP in the BOF - right now, the draft just says AVTCORE is looking at RTP over QUIC, which really isn't much of a justification, engineering-wise.\nVictor's points are well made, though I think picking the boundary of the number of round trips necessary to rely on standard QUIC loss detection is a challenge. 5 RTTs provides almost 5 opportunities to repair a loss, which in my experience is almost always enough. Similarly, I'd expect it's enough for BBR to complete various probing cycles. It's also important to understand what RTT we have in mind here. I think the RTT used to understand what domain you're in is approximately smoothed RTT (not min RTT), because the number of rounds for loss recovery or BBR gain cycling or Cubic packet conservation matter the most.\nI agree that the numeric ranges are not useful when designing a protocol but they're useful enough as product requirements. However focusing on the RTT completely misses the point. Network latency involves fixed delays (min RTT) but most of it is caused by queuing (primarily congestion control). This includes any queues within the application, the network layer, intermediate routers, and even the receiver. You can think of latency as the \"frame RTT\" which is decoupled from the packet RTT. Of course the packet RTT is important, but it just doesn't factor in how much data has been queued. For example, it's just useless to talk about RTMP and HLS\/DASH latency in terms of network RTTs. In my opinion, the problem with those protocols is that they impose unnecessary dependencies, making it slow to drop the bitrate to reduce queue sizes. 
Sending media over a single stream means that every frame depends on every previous frame regardless of the GoP structure (no frame dropping), and it also means that new GoPs are dependent on the previous GoPs (no skipping ahead). WebRTC avoids these unnecessary dependencies which is how it's able to rapidly respond to congestion to keep latency low. I've been working on a \"latency theory\" document if any of these ideas are resonating with people. I feel like it's too early though, and we should be focusing on the use-cases and requirements instead.\nNAME - please let me know when you think it's not too early to focus on \"latency theory\". I'd love to know more (in particular, I'd love to know if I'd be able to contribute effectively to that important topic). Just filing this away for now, as you suggest."} +{"_id":"q-en-moq-requirements-435722959d10845013753edeef59bdd97e793790854e7cf8a9fa84917f2e5b46","text":"I propose defining latencies less than 50 ms as near real-time latencies. These latencies can be used by musicians to play together in sync. And I guess the human ear or eye would mostly not notice any delay below 50 ms. Might be worth doing a bit of research around the topic to refine the ranges though. For instance, in this paper (URL) they conclude: In another source (URL) the following is defined: 150ms target performance 13ms > lower detectable limit\n< 200 ms - comfortable latencies for conversation.\nApologies for not looking at this earlier, I've had to prioritise other work recently. Given that our draft and these values are an extension of , could you please also include the references and elaboration like you have in the PR as I think it will help quell any further bikeshedding on the subject.\nThe MOPS document now mentions near-realtime latencies (see URL), but does not define the exact values. Likely this PR still makes sense, but now as an attempt to define those type of latencies."} +{"_id":"q-en-moq-requirements-3737a18cd714fe44c5e24230a2a8b6fa589e3d05d0e85fc58bc95c651e9879e4","text":"Move \"considerations\" to a separate draft, and do various related cleanup work (for example, removing unneeded references that supported text that has been deleted.\n(but document focus is on all media categories, not just live media) (moving the considerations section to its own draft) (moving this diagram to the considerations draft)\nOur diagram shows \"media transport protocol\" as one layer, which can be true, but the text points out that anything running over HTTP\/3, or even WebTransport over HTTP\/3, will have multiple media transport protocols. It would be helpful for readers if we showed this.\nSports fans of this draft may remember that this section was called Requirements, and true sports fans may remember that the contents had changed very little from when James and I thought this draft would be focused on Interactive use cases - and now it's focused on live media use cases. Rather than go into the BOF with a \"Requirements\" section that was a natural attraction for protocol designers, and almost completely mistargeted, I provided text for some of the considerations that I could type off the top of my head, before the I-D deadline. The existing list of considerations is not a complete list, and some of the considerations that are described in this section are \"To Do\", where the first thing To Do is to decide whether that consideration is a consideration for Live Media. 
To mention one significant example, the discussion about WebTransport doesn't mention one of the biggest advantages - the use of existing CDN infrastructure for live media streaming. But there are other examples - and I may think about this issue, and let other people also provide thoughts here, before I try to resolve the issue with PRs. (tagging the group, so you can be thinking about this as well) NAME NAME NAME NAME NAME\nNAME and NAME - It is occurring to me that all the use cases we're recommending in this draft, and all the use cases that are named in , are pointing towards LIVE media. This document is even explaining why we think that's what matters first. If we renamed the document as \"Live Media Over QUIC\", it would be more accurate, and the BOF proponents could point to this draft and ask people with other use cases if their use cases overlap with the live media use cases, perhaps allowing the BOF to recommend doing the live media use cases in this draft, and inviting people to provide comments on this draft explaining their use cases, so the IESG would know what to do with them. Your thoughts?\nI don't object to the name change but I don't feel strongly.\nThat name works for me too.\nWe should revisit after the BOF, when we know what the scope of the document will be. :sunglasses:\nSo, this is slightly more complicated than I had hoped. The use cases in the draft span \"Media Over QUIC\". The considerations (and, if we get that far, requirements) in the draft only cover \"LIVE Media Over QUIC\". So, let's leave this one open, until we find out the right thing to do, and then we can Do The Right Thing. :innocent:\nI would propose we rename (again) to \"Media over QUIC Use Cases and Requirements\", for a few reasons: Scope of use cases that the WG will work on isn't yet settled, and it's likely that we may need to cover some (but not all) interactive use cases as well as hybrid interactive\/live. I still think we should set the protocol requirements in the document - this frees up the charter to describe scope and milestone, and our draft (if WG adopted) to set the use cases and what we will require of the deliverables. If WG adopted, the details of what is in and out can be made via WG consensus. This follows a similar title format to and other documents like it. NAME thoughts?"} +{"_id":"q-en-moq-requirements-398730da48bece10d7a8a5b873f01f137f396d0f8cc7abf074dfa4ad50be1e98","text":"NAME - I think this version is ready for submission as -03, when the submission window reopens during IETF 115. Please let me know what you think.\nIt would help people contribute to this document if there are blank sections in outline form, and we will be less likely to miss critical stuff.\nThe IETF 115 proposal to the working group is to use the approved charter as the high level outline for the Requirements section."} +{"_id":"q-en-moq-requirements-7128a485250ffe7c4e29e24ac1ae249a7afaa251ff9d545047625b0cfa44a09d","text":"NAME could you take a look at this? I know everyone's thinking has changed on details, but we can capture those changes as the working group converges.\nThe subsections should be Publishing Media Naming or Identification Packaging Media Media Consumption Relays\/Caches End to End Security This is only a starting point, of course, but , and it's easier to keep track of them if the document subsections match, at least for now.\nNAME - ISTM that the contain a lot of material that might better be put in another section. 
I'll work through that before we talk (next week, right?), but please keep this note in mind, so we can fix what I get wrong. Also - ISTM that we really do need a \"specific protocol considerations\" requirement section, to capture things like streams\/datagrams, WebTransport\/H3\/QUIC, one MOQ protocol, etc. I'll add that in the PR for this issue as well.\nNAME most of the sections had some beginning text (and I can't remember who wrote it - was it you?). I'm copying the bullets from NAME slides into each section as a starting point. I'm not sure whether deleting the beginning text is the right thing to do, because it might or might not match the resulting subsections' contents, and of course the new subsections don't have beginning text because they didn't exist before, but let's talk about that, next week."} +{"_id":"q-en-moq-requirements-b1ed9a2fc23d23d1e6c4e772a5e09fbbd10a393ed452dae360ce3a578ed6d6fe","text":"This is short, but we should say something about why we're working on this section. Comments are, of course, encouraged.\nNAME is hoping that discussion of some requirements at a level that is not protocol-specific would be helpful. It's not helpful to include protocol-specific requirements in this document. This document should explain that."} +{"_id":"q-en-moq-requirements-7bc035bf1064256ccb19597105292e494fbaa954a5c75c10f7387d2770364cdc","text":"NAME I could also include some other non-requirements into the section if you have some to mind.\nA similar section seemed helpful during discussion of URL My first nominee (and I'll put that in the corresponding PR) is that we're assuming that we're working with existing media transport protocols like RTP and SRP, rather than inventing a new media transport protocol in the IETF, but we can talk when I have text.\nWe have this in place, and I think we have enough non-requirements to start with, let's keep adding as appropriate."} +{"_id":"q-en-moq-requirements-eba4ff10d237705e007eeceaf70446baa175e764be40504ad874c4e2e22e877c","text":"During the , NAME asked I answered (even though I'm not a MOQ chair), , but it's worth noting that this question has come up more than once during MOQ meetings, so it seems useful to add a short summary to the draft, in order to make sure we all have the same understanding. If I get this wrong, NAME and NAME can correct me, of course. :innocent:\nTa Hi, Lucas, I'm not a MOQ chair, but I can talk about history on one point. intentional, and (IIRC) derived from URL, which says, While writing down such things as requirements and use cases help to get a common understanding (and often common language) between participants in the working group, support documentation doesn’t always have a high archival value. Under most circumstances, the IESG encourages the community to consider alternate mechanisms for publishing this content, such as on a working group wiki, in an informational appendix of a solution document, or simply as an expired draft. My personal opinion is that we haven't spent enough time discussing use cases and requirements to have a good sense of the archival value of this documentation. If we did a good job on them, and kept them roughly up to date with solution drafts, the documentation may be worth publishing, but my understanding of the IESG statement is that no AD will ever ask the working group when these drafts will be submitted for publication as IETF-stream RFCs. And now, I'll let Alan and Ted give us the real answer. 
Best, Spencer >\nHi MoQ, I think this document is a suitable candidate for adoption. It's not perfect and needs further development but that is sort of the point of adopting it. I note that the WG has a milestone of \"WG adoption of Use Cases and Requirements document for Media Delivery over QUIC\". If we don't adopt this document now, when and how would we expect that milestone to be achieved? As a point of administration, I don't see a corresponding milestone for submission of a document to the IESG. Can I ask the chairs to clarify if the intention is to adopt and develop a draft but not seek taking it through to an RFC? Cheers Lucas This isn't as odd as it might sound. The MASQUE WG did something similar to help make progress on the protocol spec for IP tunnelling. So I would appreciate clarification here."} +{"_id":"q-en-moq-requirements-732c9854fca8f6582ed3acdd94f92eda52fed6716099c01842368e29b70bb4b1","text":"NAME - I am looking at , and wondering if I understand our point in our draft. Is it accurate to describe what WARP-04 is doing now, as ? If I understand the point, it's that we have to rev the specification, and implementations, in order to add a new package (because implementations that only support Container Format 0x0 for fMP4 don't know what to do with Container Format 0x1), so the distinction we are making seems (to me) to be about whether the protocol specification uses a generic method to tell a receiver how to decode media, rather than whether the protocol specification describes methods itself. Please let me know what you think, because I'm pretty sure I know what I want to say, unless I'm confused.\nStarting thesis: The specification should support CMAF, in addition to CMAF, support non-contained (\"RAW Elementary bitstream\") media, and have agility to accommodate future container formats\nA bunch of good comments get made in room but from requirement point of view, I think we need to have the ability to use something that has less overhead than CMAF particularly for audio. I don't see the need to use multiple types of containers in the same session and have them synchronized.\nThis issue was discussed at the MOQ interim meeting on 2023-01-31 - notes from that meeting are , and included here by reference. Anyone working on a PR for this issue should take a look at the discussion notes.\nHi, NAME would you be able to do a PR for this issue? This isn't my area of clue.\nOn today's authors' call, we think the starting point is now have agility to accommodate a variety of container formats support CMAF, support media formats that are not processed by relays (and relay-related MOQ capabilities) This document should also describe the requirements for additional container formats.\nNAME - do you think what's in the document now (which isn't quite this ^^^^) is sufficient for an adoption call, or should we do a quick PR just retitling and rearranging the subsections?\nWe want agility, and we'll say that as a requirement, but we don't care what a MTI packaging would be (CMAF, fMP4, don't care), and we can defer to the protocol specification(s) (ex. WARP) to state those details, because those specifications will need to handle that anyway. 
It is likely that we need to handle codecs the same way (agility is the requirement, but we don't care about that, and the protocol specifications will need to handle that anyway).\nWhat the document says now:"} +{"_id":"q-en-moq-transport-914424cac309e024c57813eff721b6787faf1764de1755e7ec160cc8d67416b6","text":"Matches the terminology used by W3C is generally much better. This is the order at which objects are sent, but does not guarantee delivery in this order. Doesn't preclude further conversations about prioritization.\nThis is just a rename so maybe merge it after fixing conflicts (you'll need another approval), or abandon? Based on the discussion today we'll be updating the priority section to indicate other possibilities for expressing priority.\nLooks good to me. I would like to see this merged soon, so we can make the proper references to \"send order\" in the additional PRs.I do think that send order make more sense than delivery order but the parts of this that have to do with the prioritization scheme should probably be moved to a new section of the draft where we talk about the algorithms to decide what to send next."} +{"_id":"q-en-moq-transport-99d3eddaeb21ff26d8a377100f7c413ec9dad620f1f7e5926b7cbc61e1b41874","text":"NAME does the latest commit address your feedback? thanks\nlgtm modulo addressing Luke's comment."} +{"_id":"q-en-moq-transport-061097ad0f44728783e3830f8e621612b7c3f691e3bef7d37ea8e04bd4e63b91","text":"lgtm with the small tweaks to H3 and section heading requested by Luke\/SuhasI'm biased but I thought the old motivation section was pretty good."} +{"_id":"q-en-moq-transport-7a5e1263e7b0afef19c732aea00e9336080d350dccaa8f2f4d0faced6cbfec3e","text":"NAME does the updated text with OPEN ISSUE looks about right to you. I personally feel we should revisit all the terms and get to a common understaning across the documents. For -05 an OPEN ISSUE seems like OK. thoughts ?\nLGTM"} +{"_id":"q-en-moq-transport-817d0ea2119bfd4b96c8d6acd89a9ef23c211287d93235d720f9c9f952ab84b3","text":"NAME please check the latest commit . thanks\nSomething is wrong, this PR appears blank\neditorial nit if it can be addressed, otherwise LGTM"} +{"_id":"q-en-moq-transport-07055d7eb637d454294687c435e3768b7fe024e861f38151f223ebc8edf0029c","text":"This PR by itself doesn't make a lot of sense but I'm ok rolling with it if it's part of the grand plan"} +{"_id":"q-en-moq-transport-788788cb5579f4ed7515a476ac4a96b55a4b9259ecd3d02275866d1986246d68","text":"NAME I think you added this TODO but it got removed in the editorial pass, are you good with that?\nYes, we can remove this. We will certainly revisit the issue later, because this is a thorny subject. For example, we assume that \"objects MUST not depend from objects in the group that have a higher priority code or a higher object ID\" -- object at P2 does not depend on another object at P3, object number 1000 cannot depend on object number 2001. 
That much has consensus, but it is a negative statement."} +{"_id":"q-en-moq-transport-87680cd045b9aa3c45f714e7d19e366f497b88bb1dce4c595d77760c71bd75f9","text":"LGTMLGTM (side note for much much later, RFC editor often objects to acronyms in the title - but this is a future problem)"} +{"_id":"q-en-moq-transport-0806e9c5e9dfcf25109105c1f17a2bc60f7d23b037559b423e5f13c2821d1ac6","text":"No real changes here (other than adding a missing period) but changes newlines for checker tools\nLGTM, as it seems to just whitespaces only change."} +{"_id":"q-en-moq-transport-85f1b9f89af294a57531908944531039678e64d31a272aabaf94528cd0574858","text":"The stuff we had to italicize the stuff in html is inserting underscores in the .txt file so I just want to remove this until for now until we figure this out. Did not change anything other than making the words not italicized\nThanks for catching this , the rendered TXT file looks different for these things. Merging it for now and we can work on figuring out the right markdown syntax."} +{"_id":"q-en-moq-transport-d489a6e632daa8b606b5e6dc40a2ff10e29f24f284c7a1d2c4960e533945e49e","text":"Based on QUIC: URL\nBased on call today, sounds like when running over QUIC we should also use an ALPN of \"moq-13\" for draft 13\nWhen running over web transport, we need to talk to the web transport people to talk about what to do but the ALPN would be H3 (or whatever) so the answer is not ALPN\nThe conclusion was that we should move version negotiation out of the SETUP message and into an ALPN for native QUIC and an unspecified mechanism for WebTransport. However, I can't think of any replacement for WebTransport that would work today. Query parameters would work for passing the client supported versions but there's no mechanism for replying with the server selected version. The javascript API provides no information about connection establishment (ex. any returned headers), only that it completed. If we moved version negotiation out of SETUP, then it would not be possible to interop until new WebTransport functionality was deployed, assuming W3C agrees in the first place. Could we go ahead with this PR until there's an available alternative? I like the direction of utilizing more of the QUIC\/WebTransport handshake, but I don't think we should block interop awaiting this marginally better design.\nCan we add in the ALPN text as part of this PR and for now say something like \"When QUIC is used, the negotiated alpn MUST match the value in the SETUP message.\"?\nIt's time to specify some version numbers so we can attempt interop. We should do something like QUIC, where draft versions are prefixed with 0xff (or something?).\n+1 The QUIC guys way over engineered this so I would be fine with just doing the same thing so we don't need to bike shed on it.\nFor now, will go with lower bits are the draft number and upper bits are some fixed set of bits defined in the draft. Do roughly what QUIC did.\nHere's what I propose we copy: URL We don't need semvar yet because each version WILL have breaking changes. I'd be totally fine with semvar for the final RFC, like one byte for the major\/minor\/revision. Although I also think that negotiating extensions is more flexible for a transport protocol, while semvar seems more useful for something intended for fanout like the catalog.\nPer conversation, we've decided to move forward with this and do ALPN as a separate PR.I would want to consider using alpn at some point in time. Want to keep it open for URL version at some point too. 
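To make the "fixed upper bits, draft number in the lower bits" idea concrete, here is a minimal sketch; the 0xff000000 prefix, the "moq-NN" ALPN label, and the helper names are assumptions for illustration only, not values agreed in this thread.

```python
# Sketch of QUIC-style draft versioning discussed above. The prefix and the
# ALPN label format are hypothetical placeholders, not settled values.

DRAFT_PREFIX = 0xFF000000  # assumed fixed upper bits for draft versions

def draft_version(n: int) -> int:
    """Encode draft number n (e.g. 13) as a wire version."""
    return DRAFT_PREFIX | n

def alpn_for(version: int) -> str:
    """Hypothetical ALPN label derived from a draft version."""
    return f"moq-{version & 0x00FFFFFF}"

def select_version(offered, supported):
    """Server picks the highest version both sides support."""
    common = set(offered) & set(supported)
    if not common:
        raise ValueError("no common MoQ version")
    return max(common)

# Example: client offers drafts 12 and 13, server only supports 13.
chosen = select_version([draft_version(12), draft_version(13)], [draft_version(13)])
assert alpn_for(chosen) == "moq-13"
```

Whatever the final bit layout, the useful property is that a negotiated ALPN and the version list carried in SETUP can be cross-checked mechanically, as the quoted "MUST match" text suggests.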
This PR is fine to continue the experimentation.Looks fine to me. Sure we might do more complicated stuff later but this seems like good enough for now."} +{"_id":"q-en-moq-transport-531dac387727a01b11ea5bc46cd82e0a068a93913c5505f06b7d492d05ef166c","text":"This PR moves track request params to its own section. This was messed up during huge PR merge as part of preparing for IETF117 work. Also it add track namespace length field to Announce_OK message which was omitted.\nNAME good catch. updated the commit to address your comment. Thanks\nIt's an improvement so we should merge, but we do need to clean up how parameters are encoded.You may also want to move the actual definition of Track Request Parameter, it's currently in the SUBSCRIBE REQUEST section."} +{"_id":"q-en-moq-transport-f66e0c134499ed436c5d979e4349482591570f8ccd1b857d5038a84cd2b71bae","text":"This PR adds support for subscribers to issue unsubscribe requests to express its interest in not receiving media for a given track. This reflects discussions from The high level call flow is\nThinking about it some more, an error code doesn't seem necessary. publisher -> subscriber should have an error code. subscriber -> publisher doesn't need an error code. Even with an error code, it would get lost in the deduplication process.\nIndividual reviewIt might be useful to have an error code, but I'm not sure. LGTM.We'll need to revisit using track name vs track ID for unsubscribe once we have an answer to , but for now, this LGTM."} +{"_id":"q-en-moq-transport-c65939c63369e355b9297f905ea1ea8597bfc2006bb2f058a202f89c10cedd97","text":"Following from discussions in , this PR allows a given publishers to un-announce a track namespace to indicate that it is not longer accepting new subscribe requests. The high-level call flow for happy path is as below\nIndividual Review"} +{"_id":"q-en-moq-transport-443d6ae4f0b029c3e35bedc6a59e93c94233ce8f8ae3691a97ca01abd5063733","text":"and . if messages must be known, explicit message lengths are redundanct except for OBJECT and messages with parameters. This also eliminates a whole class of potential errors where the message length does not match the actual parsing of the message. Not checking this -- and not have any message type be potentially zero-length -- greatly simplifies the parser.\nAnd trying to use messages without negotiating an extension\/version just won't work for MoQ. We're designing a protocol explicitly for relays so you do need relay cooperation, or at the very least some information on how to forward unknown messages.\nAlso I love that you're using \"parameter count\" instead of \"parameter size (bytes)\". It makes it much easier to build a streaming encoder\/decoder without allocating temporary buffers.\nI thought we agreed to allow extensibility by allowing new message types to be created.\nLike to see us agree on a extensibility strategy before deciding what to do on this. We also need to decide if messages can have optional fields\nWe have version negotiation, and presumably we'll have extension negotiation too. What's the use-case for adding new messages without negotiating them? The problem is that we can't relay unknown messages, so all it takes is a single uncooperative relay on the chain to shallow them. I think we need explicit negotiation as a result.\nIf we have extension negotiation, I agree. But if our extension negotiation mechanism relies on not getting an error on unknown messages .... 
PR is wrong place to discuss this but I think what I would like to see is a good way to do extension negotiation, and then like you have here, error on any message that is not part of a negotiated extension. I think I would lean that way if we had an extension negotiation mechanism. Note this is the opposite direction of HTTP which is often our model for things. HTTP just skips past unknown headers.\nFail, or pass over?\nIMO fail. We have version negotiation and should add extension negotiation so it should never happen. The problem with ignoring messages is that MoqTransport is designed for relays. If an intermediate relay doesn't support a message, it then gets silently dropped. At the very least we need enough information on where to forward unknown messages.\nIf it's going to Fail, then most Message types don't need a length field at all (OBJECT and ANNOUNCE_OK would be the exceptions)\nIndividual Comment: Is there any reasonable action a relay can take on an unknown message, besides discarding it? See how HTTP recently added a \"Capsule\" concept which is an envelop with a default relay action of \"forward\", which allows for end-to-end extensions without upgrading all intermediaries. Given that MoQ handles forwarding on a per-track level, I think there probably isn't any way to generically forward a message that isn't associated with an active subscription. That said, it makes me wonder if we want to add a capsule-like extension message, or does the existing OBJECT definition give enough flexibility?\nYeah exactly. It's impossible to know the intended destination of an unknown message with the current ANNOUNCE\/SUBSCRIBE mechanisms. I'm using the (as the track namespace) and instead of . This actually does let you forward unknown messages from publisher -> subscribers and subscribers -> publisher since each session is associated with only a single track namespace. Session pooling is always the issue...\nI think we need extensibility. We want the protocol to be extensible so a given server might support extension A and C and another server supports extension A and B. It is very hard to do that using just a version number.\nRereading all this - II think this whole issue just need to be uplevel to what is the extensibility strategy then this detailed part of it will be easy.\nThere was support for not requiring a length and failing on unknown message types at the Interim, so the intent is to land PR after review.\nLG. The agreement at the interim was to follow up with separate frame types for object with and without length, but do that in a separate PR.I love it. It also means you can implement streaming encoders. Prefixing each message with the length required a temporary buffer or 2nd pass to compute the length.LGTM"} +{"_id":"q-en-moq-transport-d49b762089a0b6ee8490573a9fd9b92dfb62b4363f857f6a1fc5567148f7249b","text":"Can we PLEASE number editor's drafts as draft-ietf-moq-transport-latest? Currently, it says draft-00, which can result is significant confustion with respect to the last published draft and creates an unstable interop target.\n+1 this shouldn't have been checked in."} +{"_id":"q-en-moq-transport-55b487046ffab7ed84d903d47945cc856ca5b0afcdf36369cb3866225e66ae8b","text":"and . There was consensus in Cambridge that removing the special meaning of zero length gets rid of difficult code, removes potental foot guns, saves a byte, and enables actual zero length payloads, which supposedly has use cases.\nSince message (6. 
Messages) will be the main piece of info of MOQ I think we should try to minimize its size. Assuming most of the messages send to the wire will be objects inside a quic stream, the length should be NOT needed. Could we use 1 bit in \"type\" to indicate if length is present, we could save some bytes without overcomplicating the protocol\nThe length of a QUIC stream is only known after the FIN bit is received(ie the end of QUIC stream). Putting the length in the message header can ensure the size of stream is known at the beginning of the stream.\nThe size of an OBJECT is a varint so that's a single byte when 0. Even in the situation where you're sending an OBJECT per frame, that's what like 100 OBJECTs per second?. That's 100B\/s while the media might average 375,000B\/s (3Mb\/s). But yeah, we can optimize when the draft is closer to completion and things are more concrete. I like how QUIC STREAM frames are encoded, where the frame type indicates if fields are present (ex. length).\nThis issue is related to the question of what types of lengths we'll support, and whether we'll support those that go to the end of a quic stream (aka infinite length). \"unknown\" payloads where they're a delimiter of some sort had no support, because you have to do extra parsing. This issue is about two types: Object and Object + Length, which has support from the WG. Martin to roll into PR\nFrom Cambridge: Martin will put this in a separate PR once lands"} +{"_id":"q-en-moq-transport-da39cca710ef87eb0a376615dabe42e7cf61f2f5c770658ea8d0892dba578e38","text":"one question is if they come on different streams can they get out order\nMy inclination is to send a SubscriptionEnd {} message with appropriate Reason Code set. This will help to support the case that we discussed on publisher wanting to indicate wanting to stop publishing\nSUBSCRIPTIONEND and SUBSCRIPTIONEND_ERROR could be split for the graceful vs non-graceful cases.\nDiscussion in room seems to be rather not have an error after OK but instead use new messages or in subscribe_end message\nI'd like to consolidate messages at some point, but we can revisit later."} +{"_id":"q-en-moq-transport-577066254ee5d1f67e2a6ce6ea19c8ebc26a2f7a95629cc3323755559383939a","text":"by explicitly specifying the namespace. Also consistently changes to the (b) notation to indicate a variable length sequence of bytes.\nThis PR got lost when merging 289.\nAssume a setup with two publishers, a single relay and one subscriber. P1 ANNOUNCE's track namespace fooba P2 ANNOUNCE's track namespace foo S1 sends a SUBSCRIBE REQUEST for Full Track Name How does the relay know whether is Track Namespace: and Track Name: , or Track Namespace: and Track Name , and hence which publisher to route the subscribe to? I see a few possibilities: 1) The relay should not allow this scenario (via namespace construction and\/or authorization). If so the draft should include text explaining that this is disallowed (normatively?) or \"bad things(tm)\" happen. 2) Change SUBSCRIBE REQUEST to be able to disambiguate the Track Namespace from the Track Name. This could be done by sending name and namespace in separate message fields, or having stricter rules for the structure of Full Track Name such that a relay can extract the constituent parts. 3) Explicitly state that Full Track Names are matched to namespaces via \"longest prefix match\".\nOption 3 looks interesting. P1 can use a longer track namespace to steer the sub request from P2. Like a traffic engeering for content.\nI like option 2. 
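A small sketch of the ambiguity under discussion, using the placeholder namespaces "foo" and "fooba" and the flat name "foobar"; it contrasts option 3 (longest-prefix match on a flat Full Track Name) with option 2 (namespace and name sent as separate fields and matched exactly). The publisher labels are placeholders.

```python
# Illustration of the SUBSCRIBE routing ambiguity described above.
announced = {"fooba": "publisher1", "foo": "publisher2"}

def match_by_prefix(full_track_name: str):
    """Option 3: longest-prefix match on a flat Full Track Name."""
    candidates = [ns for ns in announced if full_track_name.startswith(ns)]
    if not candidates:
        return None
    return announced[max(candidates, key=len)]

def match_exact(track_namespace: str, track_name: str):
    # Option 2: namespace and name travel as separate fields; the relay
    # matches the namespace exactly and only then resolves the name.
    return announced.get(track_namespace)

# A flat name is ambiguous: both "foo"+"bar" and "fooba"+"r" parse out of it.
assert match_by_prefix("foobar") == "publisher1"   # longest prefix wins
assert match_exact("foo", "bar") == "publisher2"   # exact namespace lookup
```

Whatever matching rule is chosen, carrying the two fields separately at least lets a relay implement exact matching; a relay that wants prefix rules can always concatenate them itself.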
I don't agree with treating track namespace as merely a string prefix. Even if you go with option 3, there's still a race condition where a late ANNOUNCE will cause a SUBSCRIBE to an unintended track. I want to give the track namespace more properties so it's treated more like a broadcast identifier. A few folks are headed down this path without realizing it, such as proposing that there's a single catalog per track namespace. I don't see how that would even work if ANNOUNCE is used purely for routing like a BGP announcement.\nNAME : are you saying you believe that the relay resolves the described ambiguity by using 'longest prefix' matched (eg: option 3)? Another option I hadn't considered based on this comment is that the relay would send OBJECTs from BOTH publishers since they both match the Full Track Name. This will break in the current design, since each publisher has their own Object Sequence and Group Sequence and when merged the consumer will be left with garbage. It also violates the logic described that says a Full Track Name can be used as a cache key.\nYeah that seems quite problematic NAME Let's say a relay have two connected publishers who send IN ORDER: A subscriber connects and issues: What happens next? The relay forwards the SUBSCRIBE to publisher 1. The relay returns SUBSCRIBE ERROR notfound The relay forwards the SUBSCRIBE to publisher 2. The relay forwards the SUBSCRIBE to publisher 1&2. These are in the order of Alan's options. I added option 4 based on Suhas' comment. Would occur if the relay rejects publisher2's ANNOUNCE because of thr shared prefix. Would occur because the track namespace is an ID, not a prefix, and must fully match an ANNOUNCE. Would occur because publisher2 has the longer path prefix. Would occur because both publishers could_ produce the requested track. And to throw a wrench in things, a third publisher joins.\nThere's also the case where the relay replies to the subscribe requests on behalf of the publisher(s), so it doesn't have to make the routing decision at subscription time. But it does need to decide where to forward OBJECTs when they arrive.\nImplementing this now so I have some more thoughts. We should explicitly support UNANNOUNCE and UNSUBSCRIBE, which implicitly occur when the connection is dropped. On ANNOUNCE, the relay MUST ensure that the namespace not a subset\/superset of an existing ANNOUNCE. On SUBSCRIBE, the relay looks up the track based on the prefix and appends itself to a list of subscribers. If not found, it looks up the namespace and creates a track entry if valid. On UNSUBSCRIBE, the relay looks up the track based on the full name and removes itself from a list of subscribers. On UNANNOUNCE, the relay deletes any namespaces and tracks matching the prefix. A radix tree is used for both so the relay can iterate through any values that match the given prefix. This is a global radix tree, similar to how QUIC connections are identified when variable length connection IDs are supported. On ANNOUNCE, the relay inserts the upstream into the upstreams hash map and errors on a duplicate. On SUBSCRIBE, the relay looks up the namespace and then the track, creating an entry if not found. On UNSUBSCRIBE, the relay looks up the namespace and then the track, deleting an entry if found. On UNANNOUNCE, the relay deletes from the namespace and track hash map. This is a strictly better than option 1. It provides the same functionality but faster, and without the restriction on namespace prefixes. Runs into issues with ANNOUNCE. 
example: and are subscribed to different publishers, despite using THE SAME NAME. This invalidates the claim that the track name is the cache key. We can't migrate to the new longer prefix from unless there's a guarantee they are producing the same content. A similar issue occurs when UNANNOUNCE is fired, when happens implicitly when the connection is dropped: Each subscription needs to keep reference to the track when the SUBSCRIBE was received, instead of doing a lookup based on full track name when each OBJECT is received. This will cause havok when resubscriptions occur, for example with ABR, as switching away from and back to a track MAY result in different content. It's not viable. The subscription would intermingle objects from separate publishers that happen share a prefix, all while using the same . I really don't want to support subscribing based on wildcards or prefixes anyway. This is also the slowest option, as there needs to be a global RadixTree that maps tracks to subscriptions on receipt of each object. All other options listed above would keep an array of subscribers per track, resolved using a (shardable) HashMap.\nI think this seems like the right approach. Having 2 producers with conflicting namespaces is a bad idea, unless the authz policy and application is somehow OK with the consequences. I have added to clarify the thinking further\nI'm of the view of two things 1) apps should not do this if it is a problem for that app usage of relay 2) relay should use the latest \"announce\" to override any earlier announces that are more specific ( or the same )\nDiscussed at IETF 117: Support in the room for changing \"Full Track Name\" from a single string to a tuple of Namespace and Track Name, and matching on Namespace exactly.\nIIRC, my recollection was not the same, also I don't think we concluded that the namespace match exactly. The discussion went over what the authz model defines and how that impacts the flow. I wonder how much the transport draft can mandate how a given relay application should implement the business rules. The transport draft must just indicate relay behavior and the course of actions to take for what happens when \" if match succeeds vs match fails \", which is what is in the draft today already\nUh, my understanding of what we talking about what not at all removing Full Track Name - it was about we did not need to define a separator for concatenating things in a tuple. I don't agree there was support for chaning Full Track Name to Namesapce and Track Name tuple and only doing full matches. That is not going to work for many of the use cases and not what we are discussing.\nI thought what came out of this discussion is 1) we do not know how duplicate ANNOUNCE work even when the names are identical ( do we keep first, last , error etc ) 2) once we figure that out, we can start to deal with the more complicated issue of partial match or overlapping Perhaps I have this recollection wrong but if so point me at right place in recording for that.\nMy motivation for ANNOUNCE was for the contribution case. A client like OBS will connect to the server, ANNOUNCE a new catalog track, and then the server will issue a corresponding SUBSCRIBE to get the data flowing. This is modeled after the QUICR PUBLISHINTENT message, except that data doesn't flow until the other side issues a SUBSCRIBE. However, announcing a namespace prefix subverts this behavior: The relay doesn't know if an ANNOUNCE is a prefix for routing, or a single broadcast being published. 
The relay effectively needs a business agreement with each client to determine. I think we should split this ANNOUNCE functionality into separate messages if just for the sake of discussion:\nIn the case of , a tuple makes perfect sense as it mirrors the arguments to SUBSCRIBE. This message would be primarily used for contribution to \"push\" a new broadcast. It also makes sense as a reconnect signal where the last message wins. In the case of , matching the longest prefix might make sense instead. I know very little about BGP but maybe there's some inspiration there. This message would primarily be used for inter-relay routing.\nI assumed that full track name was a tuple already, and \"full track name as a concatenation\" was a notational convenience. Now that I read the draft more carefully, it does send full track name in messages, rather than a tuple of track name and track NS. As far as I'm concerned, this is a mistake -- I don't think this actually works, and I have no idea how one is supposed to implement that. I believe \"error\" is actually equivalent to \"keep first\" here. I think it's a safety vs practicality trade-off. On one hand, if two different programs announce the same track, it will not work, since they will publish conflicting objects under the same name; if we disallow re-announcing, we will prevent that. On the other hand, re-announcing makes reconnects more practical.\nNAME Sorry if I misread the room and\/or the author's meeting. Let's continue discussing. That is being discussed in \/ , let's keep this issue focused on the SUBSCRIBE REQUEST ambiguity?\nIndividual comment: Just to illustrate the problem further here, I wrote a short sketch of a generic relay (python syntax): So the concrete proposal I have is to change to have two fields, tracknamespace and trackname. Then the first bit would change to: This is not to say the every relay has to do matches in this way. By squishing the namespace and name together it's not possible to implement a relay that does exact matching. If we put the tuple in , a relay that wants non-exact matching rules based on full track name can always construct it by concatenating namespace and track_name together. NAME Can you say more about the use cases that would break with a change like this, or suggest an alternate way to implement an exact matching relay?\nIn a recent editor's call there was some discussion, which split into three separate issues - capturing it here: 1) SUBSCRIBE REQUEST should have a way to denote which part of the Full Track Name is the namespace. Either a Track Namespace Length field or just transmitting Track Namespace and Track Name in two fields. There was consensus to add this. 2) Does relay matching behavior (eg: exact match vs longest prefix vs ??) need to be negotiated in band? () 3) Is there a way to subscribe to more than one track at a time (eg: wildcard subscribe, ). I'd like to keep this issue focused on 1."} +{"_id":"q-en-moq-transport-59c646767a24da8732d63783b618855fff7d888af7ffae04ef5cc4591f51b600","text":"Also, can someone explain what the following paragraph is saying, because I'm having a difficult time parsing it.\nThe next paragraph is trying to say that if you receive HoL blocked stream data at a relay you should try to forward what you have. The goal is to mitigate accumulating latency at every relay hop. I think this makes sense in two cases: You receive data that you know to be within an object you have already seen the header for. 
Eg: the object length is 100 and you miss bytes 0-49 but get 50-99, you can forward 50-99, as long as your QUIC library supports out of order reads and writes to a stream (not many presently do). This also applies when the object has no length and occupies the entire stream. You are missing some object data for object N on a stream, but you receive the header for object N + 1. This is (potentially) less problematic since the relay might be able to open a new stream to forward it, so out-of-order stream write APIs are not required. In any case, the text probably needs clarification, and perhaps moving from SHOULD (but we know you won't) to MAY is warranted?\nShould we mention that a relay should start sending object as soon as the object can be identified (after the header)? Since the draft points out latency as a big deal, it seems this is a must have if we want to cover ULL cases\n+1.\nIt used to be in but was removed. The text was pretty simple: I think and the agreement was to put it into some non-existent latency section.\n+1 We should add a recommendation text for Relays to address the request from NAME\nsooner is better I do think this might be interpreted by some people in an over zealous sort of way ( like turning off nagle type things ). I do wonder about just using MAY. But happy either way. SHOULD and MAY mean the same thing."} +{"_id":"q-en-moq-transport-13e66a76204a1fd482a28aac601c3e5053a1009d6692267b62a7624300f37719","text":"Section 3.3 currently says: \"3.3. Cancellation A QUIC stream MAY be canceled at any point with an error code. The producer does this via a RESETSTREAM frame while the consumer requests cancellation with a STOPSENDING frame. When using order, lower priority streams will be starved during congestion, perhaps indefinitely. These streams will consume resources and flow control until they are canceled. When nearing resource limits, an endpoint SHOULD cancel the lowest priority stream with error code 0. The sender MAY cancel streams in response to congestion. This can be useful when the sender does not support stream prioritization. TODO: this section actually describes stream cancellation, not session cancellation. Is this section required, or can it be deleted, or added to a new \"workflow\" section.\" [BA] While the meaning of STOPSENDING is clear for frame\/stream transport (e.g. stop sending the frame whose transmission is currently in progress), it is less clear when multiple frames are transported on the same QUIC stream. In the case where multiple frames are being transmitted on a stream, the problem is that STOPSENDING does not indicate which frame the receiver has given up on. So how is the sender to interpret it? For example, after resetting the stream, does the sender open a new stream? If so, what frame(s) are to be transmitted on that new stream? To address this ambiguity, an application-layer message(s) could be defined, indicating exactly what the receiver wants the sender to do: Stop (re-)transmission of a particular (named) frame. The existing stream is reset and a new stream is opened for transmission of subsequent frames. Stop transmission of layer(s). The receiver wishes to the sender to adjust the operating point, by stopping the transmission of one or more temporal or spatial layers. Only the layers that remain will be transmitted on new stream(s). Mailing list thread: URL\nI think subscribe hints solves the problem of what to start on by saying which object to start the new subscription on. 
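A minimal sketch of the subscribe-hint idea mentioned above; the message shape and the start_group/start_object field names are hypothetical. The point is only that, after abandoning a stream, the subscriber states where delivery should resume instead of the sender inferring it from the reset.

```python
# Sketch of resuming via a hinted subscription after giving up on a stream.
# Field names (start_group, start_object) are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Subscribe:
    track_namespace: str
    track_name: str
    start_group: int     # first group the subscriber still wants
    start_object: int    # first object within that group

def resume_after_skip(track_namespace, track_name, last_group_wanted):
    # e.g. the receiver fell behind and no longer wants group 41; it asks
    # for delivery to resume at the start of group 42.
    return Subscribe(track_namespace, track_name,
                     start_group=last_group_wanted + 1, start_object=0)

msg = resume_after_skip("example.com/live", "video-hd", last_group_wanted=41)
assert (msg.start_group, msg.start_object) == (42, 0)
```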
It seems like the solutions to layers would be to put different layers on differenent streams. I would expect different layers to have different priorities which will then mean different streams.\nAttendees think there may be valid use cases of STOPSENDING and RESET and it should not be disallowed. But typically UNSUBSCRIBE could be used to accomplish the desired behavior. STOPSENDING and RESET do not change the state of subscriptions.\nYeah the problem is when there's significant congestion, a player can fall behind on the active subscription. It needs a way of indicating that it is not waiting for group X any longer so the server should not send it, otherwise it's just a waste of bandwidth. My proposal is to use STOPSENDING to indicate you don't want something. This does mean that you can't put unrelated objects on the same stream, otherwise you might accidentally reset the wrong objects. ex. If I no longer want to receive group X, I STOPSENDING any QUIC streams containing group X. But that does prohibit sending group Y on the same QUIC stream, so it would prohibit sending the entire track on a single stream. But like you forfeited the ability to skip any content if you decided to do that. Alternatively, we could add a STOPGROUP message or UNSUBSCRIBERANGE. Possibly we could RESUBSCRIBE with a new start group.\nResubscription works fine here. Since a new subscription is supposed to override an existing subscription from the client, Relay can start sending from the new group. Also I don't think we need to forfeit track per stream use-case. If someone is using track per stream, they exactly know what properties they are signing up for. My proposal would be to Subscribe with new start_point\nResubscribe only works if the content we don't want is at the head\/tail, unless SUBSCRIBE supports ranges instead of a single start\/end. We don't need to forfeit the use-case; the receiver would have to know that it can't cancel streams in this mode. Franky the application has already made a huge mistake if you're encountering congestion while sending all groups over the same stream. A UNSUBSCRIBE\/SUBSCRIBE message doesn't work because it will cause redelivery of any duplicates. We would have to add RESUBSCRIBE or allow augmenting an existing SUBSCRIBE.\nNAME Agree that STOPSENDING and RESETSTREAM do not change the state of subscriptions. I'd say the same for CLOSE_STREAM (see: URL).\nNot with Subscription hints though\nYes , its application decision to prefer such a mode. Yes in such cases cancel will cancel the entire stream. But resubscription with a new start_point will help alleviate some.\nThe group spent time discussing ideas around RESUBSCRIBE\/SUBSCRIBE_UPDATE and multiple subscriptions. NAME brought up the fact that a relay may have to issue overlapping subscribes because it doesn't know whether a relative or absolute subscription is earlier. This needs a new issue for multiple subscriptions and updating subscriptions.\nLike the change. Section 3.3 currently says: \"3.3. Cancellation A QUIC stream MAY be canceled at any point with an error code. The producer does this via a RESETSTREAM frame while the consumer requests cancellation with a STOPSENDING frame. When using order, lower priority streams will be starved during congestion, perhaps indefinitely. These streams will consume resources and flow control until they are canceled. When nearing resource limits, an endpoint SHOULD cancel the lowest priority stream with error code 0. The sender MAY cancel streams in response to congestion. 
This can be useful when the sender does not support stream prioritization. TODO: this section actually describes stream cancellation, not session cancellation. Is this section required, or can it be deleted, or added to a new \"workflow\" section.\" [BA] While the meaning of STOPSENDING is clear for frame\/stream transport (e.g. stop sending the frame whose transmission is currently in progress), it is less clear when multiple frames are transported on the same QUIC stream. In the case where multiple frames are being transmitted on a stream, the problem is that STOPSENDING does not indicate which frame the receiver has given up on. So how is the sender to interpret it? For example, after resetting the stream, does the sender open a new stream? If so, what frame(s) are to be transmitted on that new stream? To address this ambiguity, an application-layer message(s) could be defined, indicating exactly what the receiver wants the sender to do: Stop (re-)transmission of a particular (named) frame. The existing stream is reset and a new stream is opened for transmission of subsequent frames. Stop transmission of layer(s). The receiver wishes to the sender to adjust the operating point, by stopping the transmission of one or more temporal or spatial layers. Only the layers that remain will be transmitted on new stream(s)."} +{"_id":"q-en-moq-transport-3ccb10cfbd1914f431b98d3800c85fb6e6c737b4ebf0e30118a332a86ee512d8","text":"There's an assumption in the current draft that some messages are partially reliable (OBJECT messages) while some are fully reliable (everything else, aka control messages). The ability to drop a message depend on the ability to reset the QUIC stream. However, reseting a stream containing control messages wrecks havok on the subscription state machine. There's no way within QUIC determine if a messages was received or processed prior to a reset. For example, if I send a SUBSCRIBE on stream and then a receive a STOPSENDING for that stream, then the subscription enters a limbo state: The SUBSCRIBE was processed and a SUBSCRIBEOK or SUBSCRIBEERROR is forthcoming. The SUBSCRIBE was never processed and a SUBSCRIBEOK or SUBSCRIBEERROR will never be received. The only option currently is to set a timer after a reset is received. After a few RTTs or so, we assume the SUBSCRIBE was lost and issue a new SUBSCRIBE on a new stream, introducing latency in the process. Though things get extra weird if we receive a response after this timeout expires... I think we either need to state: Control streams MUST NOT be reset (like HTTP\/3) Reseting a control stream also resets all prior messages (ie. all prior SUBSCRIBEs). Reseting a control stream resets all unacknowledged messages (ie. SUBSCRIBEs without SUBSCRIBEOK). Option 3 is problematic, since the subscriber knows that the OK was received prior to the reset but the publisher doesn't. Personally, I think option 1 is the best is there's a single control stream, while option 2 is the best if there's multiple (scoped) control streams. Regardless of the approach, data messages and control messages should be split () because they have different reliability. A receiver should know if it can issue STOP_SENDING for an OBJECT without impacting control messages, especially if we take approach 1. My application uses a bidirectional stream for control messages and unidirectional streams for data messages. 
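A minimal sketch of that split, assuming one bidirectional control stream and unidirectional data streams; the Stream type and the unidirectional flag stand in for whatever the underlying QUIC or WebTransport API actually exposes.

```python
# Sketch of the control/data split described above: a bidirectional control
# stream for control messages, unidirectional streams carrying only OBJECTs.
from dataclasses import dataclass

class ProtocolError(Exception):
    pass

@dataclass
class Stream:
    stream_id: int
    unidirectional: bool

def may_reset(stream: Stream) -> bool:
    # Only data streams may be reset or STOP_SENDING'd; resetting the
    # control stream would leave subscription state in limbo.
    return stream.unidirectional

def check(stream: Stream, message_type: str) -> None:
    if message_type == "OBJECT" and not stream.unidirectional:
        raise ProtocolError("OBJECT on the control stream")
    if message_type != "OBJECT" and stream.unidirectional:
        raise ProtocolError("control message on a data stream")

control = Stream(stream_id=0, unidirectional=False)
assert not may_reset(control)
check(control, "SUBSCRIBE")  # control messages stay on the control stream
```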
It makes it very easy for the application to know if a stream is resettable just based on the ID.\nFrom , (s\/channels\/streams) -- snip Control and Data Channels When a endpoint client or relay begins a transaction for media delivery, a new bilateral QUIC Stream is opened. This stream will act as the \"Control Channel\" for the exchange of data, carrying a series of control messages in both directions. There is one \"Control Channel\" setup per track. Control Channels are \"one way\" and are setup hop-by-hop between the participating MoQ entities. If a peer both sends and receive media, there will be different control channels for sending and receiving. For exchanging media, OBJECT messages ({{message-object}}) belonging to a group are delivered over one or more unidirectional streams, called data channels, based on application's preferred mapping of the OBJECT and its group to underlying QUIC Stream(s). The control channel will remain open as long as the peers are still sending or receiving the media. If either peer closes the control stream, the other peer will close its end of the stream and discard the state associated with the media transfer. The latter resulting in termination or clean up of stream states associated with the data channels. -- snip The meta level thinking there was , OBJECT messages sent over data streams doesn't impact control stream. However closing a control stream, say for a track namespace, will end up closing all the data streams corresponding to the media transfer for the associated track namespace\nResetting the control stream should be an error, given there is only a single control stream.\nThe client creates a bidirectional stream and sends the SETUP message. The server replies with its own SETUP message on the same stream. Afterwards, both endpoints can create any number of unidirectional streams containing any non-SETUP messages. Messages with different priorities should not be mixed in the same stream, which means control messages should never be mixed with data messages. Messages can't be sent or have to be buffered until the receipt of the SETUP message. Control messages over separate streams arrive in an unknown order.The client creates a single bidirectional control stream. The control stream MUST start with a SETUP message, followed by any number of control messages (ex. SUBSCRIBE). The sender creates unidirectional data streams. The data streams MUST only contain OBJECT messages. Pros: Partially Control messages are no longer intermingled with with data messages, and the control stream MUST always be the highest priority. Endpoints no longer have to buffer messages, as QUIC will automatically make sure SETUP arrives before any other control messages. Data streams MUST NOT be created until receipt of SETUP. Control messages arrive in a specified, deterministic order. Cons: Head-of-line blocking between control messages, which sometimes may not be desired.\nWhile I was reading the requirements draft, I found some use cases mentioning bidirectionality. Is it expected\/supported that there can be data flowing in both directions (eg: a p2p call)?\nYeah, my intent is that there's a \"sender\" and a \"receiver\" per track. The ROLE param in the SETUP message indicates if an endpoint (client\/server) intends to send tracks, receive tracks, or both. In the bidirectional case, both endpoints would say ROLE=BOTH. Then they could independently PUBLISH or SUBSCRIBE to tracks. Alternatively, we could say that tracks may be bidirectional... 
but I don't think that makes any sense.\nWe can simply say that data objects MUST NOT be sent over control streams, and remove any reference to whether data streams should be unidirectional or bidirectional. It doesn't matter and we can leave it to the end-points discretion.\nThe proposal above aligns pretty well with the idea in , more specifically this I like the separation of control stream and data streams, the latter being unidirectional streams.\nI like the proposal for this. I view the HOL blocking of control messages a feature not a bug. More specifically, I think that some of our messages need to be delivered in order. Specifically if a client is trying to stop and high res video subscription and start a low res subscription, it would be good to ensure that the subscribe to the low res track does not happen until after the end the end subscription to the high res track has happened. Sending the un-subscribe then subscribe on one quic stream allows these to be sent and same time to server ( we don't need a round trip between them ) but still be delivered in order. I think it needs to be clear that multiple control messages can be outstanding at once. For example can send multiple subscribes before getting the response to the first subscribe.\nI would like to write up a PR based on the above ( basically resurrecting ) that pertains to the control channels.\nI like the idea of seperation of control stream and data stream.\nWG is ok with a single stream, with no critical use cases for multiple. A single stream makes it easier to change bitrates 'atomically'. People don't want Object messages allowed on the same stream as control messages. People noted that this means the only bidi stream is used for the control stream in practice, and all unidirectional streams are used for objects, but people don't want to bake that into the spec.\nThe current draft specifies that control messages may be sent over separate streams. This means that control messages will arrive in an unknown order. This can create divergent behavior if messages depend on any form of ordering. The simplest example if sending a SUBSCRIBE message followed by a later UNSUBSCRIBE message on a separate stream. If the UNSUBSCRIBE arrives first for whatever reason (packet loss or poor prioritization), then it has a completely different meaning. The current draft adds some advisory text that control messages should be on the same stream if ordering is required. I think we should put all control messages on the same stream, so ordering is enforced and the behavior is deterministic.\nI think life will be way easier if all control messages are reliable and in order. So +1 on single control stream. One of my arguments for this is the difficulty in even testing our of order control messages.\nWill be fixed by the PR that\nAs mentioned in , there's currently no distinction between control streams and media streams. Should we have dedicated control streams? When should these be cached or forwarded by relays? For example, GOAWAY must NOT be forwarded by a relay, but something like TRACK (does not exist yet) should be forwarded.\nThis sounds like something worth thinking on and capturing more about in the spec.\nQuicR supports control streams and media streams. Control Stream is for setup and other control features Media Streams for delivery media - today it supports QUIC Streams or QUIC Datagrams. 
More details can be seen here : URL This keeps protocol design cleanly separated and also extensible\nDiscussed today, there was interest in separating today's control messages from media\/object streams, but not yet interest in constraining a MoQ session to a single control stream. Per track and per namespace were both mentioned as reasonable scopes, as well as 1 per session. There were follow-up questions about Setup messages and version negotiation and how you know which version is in use.\nOne conclusion from the chat is that QUIC streams MUST NOT contain messages with different priorities. Otherwise, a QUIC library would need the ability to reprioritize a stream at a given byte offset. If control messages are always higher priority than data messages (debated), then separate streams are required. I also mentioned that in my implementation, separate threads process control and data messages separately. Data streams are immediately sent to a media decoder thread (WebWorker) while control streams stay in the main thread. This architecture is nice but breaks if control messages and data messages can be interleaved. I think we were in pretty good alignment that there should be control streams. The primary discussion surrounds the scope and number of control streams. control stream per track. control stream per track namespace. control stream per session. I don't think option 1 is viable for ABR, as you want to coalesce the UNSUBSCRIBE \/ SUBSCRIBE for separate tracks. For example, (near) simultaneously unsubscribe to 480p and subscribe to 720p. If the tracks are truly independent and can not tolerate head-of-line blocking, perhaps they should use a separate namespace instead? I think option 2 is acceptable. The goal is to avoid introducing head-of-line blocking for thousands of unrelated track namespaces, but I think this bleeds into the infamous pooling discussion. If pooling is accomplished via multiple sessions like WebTransport, then option 3 also accomplishes that goal. I'm currently using option 3 since it makes the SETUP marginally handshake easier. Control messages are written immediately after the SETUP message, avoiding the need to buffer messages pending the handshake. It also means you can coordinate subscriptions when tracks are NOT within the same track namespace (ex. ABR across different producers).\nOkay so I can't help but start a tiny bit of the POOLING discussion. My application uses Warp for media and Alan's protocol for chat. Let's assume that I want to use the same QUIC connection for both for efficiency reasons. I would like to design my application into two halves, so one part of the code deals with chat and one part deals with media. I don't want a situation where I have to pass control and data messages between threads. It would be really nice if there were two control streams: one to control the chat protocol and one to control the media protocol. This would also be nice for the CDN, as each control streams could be routed to a separate backend\/origin. A fatal error (RESETSTREAM) on the chat control stream would not impact the media control stream and vice-versa. However, this doesn't quite work right now because the can be collisions. The chat and media system would need to coordinate to avoid collisions, otherwise IDs need to be rewritten. Any incoming data streams need to be forwarded to the correct component based on this rewrite lookup table. One way to fix this is to extend all messages to include a , where 0=media and 1=chat in this example. 
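A rough sketch of the demultiplexing this would allow, assuming a hypothetical session_id field carried on each OBJECT and bound to the control stream that created the subscription; the handler names and the 0=media, 1=chat mapping are placeholders taken from the example above.

```python
# Sketch of routing pooled data streams by a hypothetical per-control-stream
# session id, so media and chat objects reach the right component without
# rewriting track ids.
handlers = {
    0: "media handler",  # e.g. the Warp/media half of the application
    1: "chat handler",   # e.g. the chat half
}

def dispatch(object_header: dict) -> str:
    # "session_id" is the assumed field added to each OBJECT header.
    session_id = object_header.get("session_id")
    if session_id not in handlers:
        raise ValueError(f"unknown session id {session_id}")
    return handlers[session_id]

assert dispatch({"session_id": 1, "track_id": 7}) == "chat handler"
```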
Each control stream gets its own ID and OBJECTs would include the ID, avoiding collisions and making it very easy to route within the application and CDN. (this is basically WebTransport session pooling) In fact, now each session could even use a different version of MoQ. This would be super useful when the chat and media backends are running different versions of the protocol. It's why I suggested at the end of the call (NAME that each control stream contains it's own SETUP handshake.\nIndividual comment: Do you mean the \"sendOrder\" field in OBJECT (section 6.2), or something else? I'm not sure normative language is required here, especially if it's based on what current QUIC implementations supports as a prioritization API. Someday a QUIC implementation could support changing priorities at different offsets for a stream. As for pooling - WebTransport has taught us that pooling is a huge headache. I suggest we defer to WebTransport or open multiple QUIC connections rather than trying to define yet another pooling layer in moqt.\nI have a draft in the works to support doing this for HTTP\/3, so it's not far off in my world\nWell, I kinda think the normative language is justified when priority schemes are written on the wire. You're totally right that an implementation could prioritize at byte offsets... but that doesn't mean a relay can support it. Ugh, I guess we need to discuss if prioritization schemes are negotiated or not. My goal was to require a base scheme (send order) in the transport draft so we could just assume that every MoQ relay will support it, and thus normative language like this is fine. But you're right, new prioritization schemes will absolutely come along and maybe send order should be a guinea pig. So maybe: if the is negotiated, then messages on the same stream MUST use the same value. I believe the same is true for NAME prioritization scheme. Either way, point taken and it was a bad argument on my part, and barely even related to the topic of control streams. Gross but point taken. I assume the priority can only decrease?\nThe design doesn't currently implement that constraint, the reason being that an application using HTTP might have a need to increase the priority of some bytes relative to other streams.\nGoing back to the topic of control streams, we need to support both the following Control stream per track namespace One Control stream for the session - (for setup, termination) Also i feel the transport draft should have the text for it as well to explain what happens when a control stream for a track namespace is created or reset and how are they related to the data streams carrying the objects.\nNAME mentioned my priority scheme. On the prioritization scheme, my personal thinking evolved as I was studying implementations and consequences. For a while, I did believe that layered encodings would be better implemented by having a single track for the whole media, and having for each object priority fields of some kind indicating the relative urgency of the object. I think now that I was wrong, because it leads to a complex mapping that makes relaying harder, and also because the exact semantic of the \"priority field\" is too complex to specify in a base protocol. The transport mappings get much simpler if all objects in a track are treated with the same precedence -- at least within a group. We may debate whether the next group should ever take precedence over the previous one, or never, or only some times. 
This simplification does force layered encodings to use separate tracks for each layer, and to explain the relation between the tracks as part of the application data. Which amounts to pushing complexity out of the transport and back into the application, which I think is a better tradeoff. The tradeoff also simplifies the tasks of the control stream, because it only has to manage precedence between streams, not precedence between objects.\nShould be fixed with the PR that\nI would add a note that data streams are sent on unidirectional streams, but looks gud.LGTM"} +{"_id":"q-en-moq-transport-fff4a8bee8a529751ac1b2d9cb60418e9909191b4573f63440b50448e8b1eb64","text":"Delayed LGTM.\nSection 5.1 of the draft says: For the second case above, I think it would be useful to have an explicit signal from the publisher to signal the end of the track. For the third case, the text seems to hint (via a section reference) that if the subscription abruptly terminates with an error, the publisher can send SUBSCRIBE ERROR, but that section says: The draft should clarify if this is allowed following a SUBSCRIBE OK. Also +1 to adding UNSUBSCRIBE ().\n+1 we should allow sending a SUBSCRIBE ERROR after a SUBSCRIBE OK.\nI was assuming that the catalog change would be the way a track end was indicated then the client would unsubscribe.\nAs an individual: This seems like a bit of a subtle signal and something more explicit using the Track ID would be preferable for me.\nI tend to incline towards Catalog indicating the status the more i think of it. Catalog is source of truth for tracks. Having a catalog update that indicates the following Full Track Name lastobjectid field Also if the catalog is ending, since the producer is going away, then include another optional field finished or something These would be sufficient and a clear way to identify the status end to end.\nI think putting the FIN group\/sequence in the catalog is clean. It does prohibit relays from cleanly terminating a broadcast early or rewriting group\/object sequences, but I think that's fine? However, the wrinkle is that a sender MAY choose to not transmit a group\/object. While I don't think this is explicitly stated in the draft, it's a requirement given that relays can only cache data for a finite amount of time. The viewer could get stuck waiting for the FIN group\/sequence as signaled in the catalog, even though it is not forthcoming as the relay did not receive the object or already evicted it from the cache.\nAs an individual: Having an explicit signal outside the catalog will make it easier for relays to clean up track related state. If a publisher is done with a track, they can update the catalog, but it will need to propagate to all subscribers, who in turn need to issue an unsubscribe. Only when all of them unsubscribe (which we still need) can a relay discard the entry in the forwarding table. If the publisher also sends an explicit signal closing off the track, the relay can immediately clean up the state.\nWe need few things. Cease the announcement This is an explicit signal from the publisher that it is no longer interested in sending media for the announced tracks. This is sent to the relay to trigger subscription cancellation downstream and clean up local relay state. Terminate the Subscription On subscribe end signal, each receiving entity will forward downstream and clean up local state. If it is the consumer, it stop media pipeline and so on. 
Unsubscribe For subscribers to explicitly indicate that they are no longer interested in a given track , they will send I am happy to write up a small PR as proposal if we think the general direction is fine.\nWhat is the difference between Cancel the subscription and Unsubscribe?\nUnsubscribe is an explicit action from the subscriber and Cancel is the signal from the publisher. May be we should call it ` instead ? I see cancel can be misleading\nGot it. Terminate is better than Cancel.\nWe already have for the publisher to immediately terminate a subscription with a code\/reason, so we could reuse that. However, the problem is when the termination occurs. The end of a broadcast would look like: There's a race, as the can arrive before the last objects due to retransmits or congestion control, severing the subscription early. Ideally we wait for all outstanding objects to get delivered and then terminate the subscription. In QUIC terminology, we need a in addition to our existing . The difficult part is determining which objects are still outstanding.\nDo we want to distinguish a normal end of subscription (due to EOF), and an unexpected end (due to, e.g., a downstream server crashing)? We do that normally in QUIC (fin vs reset), so it might make sense to do that here.\nImage the case where we have A sending to R sending to B. B subscribes to R. A sends ANNOUNCE to R. R subscribes to A. At this point media is flowing from A to R to B. A temporary looses WIFI connectivity and reconnects over 5G with new IP. A sends an ANNOUNCE to R, R subscribes to the new A and perhaps tries to send an error to original ANNOUNCE and end subscribe but A will not get that as that QUIC connection is gone. Once again data is flowing from A to R to B. I think it would be bad if the new ANNOUNCE, or an error on the QUIC connection from A to R caused a termination or error of the subscribe between B and R. I'm think it is better for B to unsubscribe if it is not receiving the media it wold expect.\nHaving explicit messages reflecting the actions would be my inclination as well. I thought it might be beneficial to explain the role of the above proposed messages with few flow diagrams Here is the happy path flow (it ignores Subscribe_OK and response flows to keep the flow focussed)\nSummary thoughts: Want to support disconnection without propagation to all subscribers (ie: If there were two publishers) Need a way to cleanly unsubscribe from a track (UNSUBSCRIBE) - Like STOPSENDING? Need a way to indicate that a relay is no longer the place to subscribe to for a track (ie: UNANNOUNCE) But UNANNOUNCE does not indicate an unsubscription. Need a way to indicate the track is FIN'd and ending, could use the catalog or put in the protocol (or both) Likely want a way to drain a relay, such as an HTTP GOAWAY style functionality. (Could be SUBSCRIBETERMINATE) Timeouts do need to be handled, even if they're not the ideal\/clean case. Want to know whether the relay wants to stop publishing or the origin did, so a client can decide whether to re-sub. NOTE: Objects can be dropped, hard to know if they'll ever arrive. Cullen will summarize a proposal. Note that the relay doesn't really need to know about tracks or what its subscribing to. An open issue is how to do FIN, whether it as part of Objects, only the catalog, a separate message, etc.\nRough notes on proposal in the call: The catalog updates and says what the last message in a track is. Catalogs are a way that endpoints can say there will never be more on a track. 
The annouce has an equivalent un-Annouce message. It changes routing of future subscribes but does not change state of existing subscribes. The entity that send an subscribe and send an un-subscribe The entity that receives a subscribe can send a go-away which will let the entity that sent the subscribe go do something new ( like subscribe to something else ). We might need to figure out what data goes in the go away to given info that the clients enough info to do the right thing. Timeout can happen, and connections disappearing, and we need to sort out what impact that has but that is not the primary way to cause flow of media to change. Update: Key issue here is when the relays see the go away, do they drop the related forwarding state for the subscription, or do they just forward the update to the subscribers and wait for the subscribers to remove the subscription.\nNAME NAME thanks for capturing the notes. I will work on set of small PRs to capture some of the items from the summary.\nThe point I was trying to clarify on the last call was that catalog already has a mechanism for a publisher to communicate to all subscribers that the track it has advertised as being available is no longer available. A clean FIN if you like. We can, as has been pointed out, create a control message which says the same thing. But then we have a weird asymmetry in which a publisher notifies clients of track availability through a catalog update but then uses control messages to do the reverse. For good design with clean encapsulation, we should seek symmetry in operation. We should either use control messages to advertise the availability\/unavailability of tracks , or use catalogs, but not a hybrid of both. In my understanding of moq-transport, control messages have utility that is scoped to the current hop. They negotiate the format in which data will be exchanged on that hop and issue commands to subscribe\/unsubscribe\/announce etc. Those commands have no applicability to any other hops on the distribution chain. A catalog however is a contract between a publishers and all subscribers. Unlike control messages, it holds data that is not interpreted hop-by-hop and is intended for targets that can be many hops away. I think relays should be as simple as possible. In my conception, they should not understand whether tracks are being published or not. Their job is to move objects. Their view of the world is around objects and they do not need to understand what tracks are. While of course we can build all sorts of track intelligence in to relays, I think if we can build a base protocol which is agnostic to tracks, then it will be scalable and robust, and cleverness with tracks can evolve in the streaming formats without affecting deployed networks of relays.\nDiscussed in Boston: Agree we should add a SUBSCRIBE FIN message with the \"final offset\""} +{"_id":"q-en-moq-transport-17deab8e62b6940799d50ed1f2c09d6feb21883c90cbac7b7d7f7c53ad31f767","text":"Issue raised at interim on Feb 1, 2023: Does the protocol need to define a failover if QUIC is not available on a specific path. If so, is it the same failover in all cases?\nNote that WebTransport does provide a failover to HTTP\/2.\nWe need a answer for Raw QUIC too\nWe discussed this in chartering and came down on a hard no. It just makes things too complicated. Yep, the webtransport version of it might work fine on TCP and we are not out to break that but it is not a design goal. This is about media over QUIC and QUIC is over UDP. 
The CDNs may internally do whatever they do but that is a black box.\nI agree that this is out of scope, based on my reading of the charter, which says \"can be used over raw QUIC or WebTransport.\" I would like to either close with no action or we can add editorial text that notes that all paths may not support QUIC.\nIt was decided that a sentence or two about this, including a note that the appropriate fallback may be something entirely different from MoQ (ie: DASH\/HLS\/etc) depending upon the application and deployment."} +{"_id":"q-en-moq-transport-b5e55e9a53ec13bb6c29addf1b67cc98d055096a201c05b0ca6c1604117a85c4","text":"Given this also runs over WebTransport, it's best to avoid QUIC unless it's specifically referring to the transport.\nThough MOQT is optimized for QUIC, it technically can go over WebTransport over HTTP\/1 or HTTP\/2. There are a number of cases when QUIC is used when it could be something else and in most cases, specifying QUIC is not critical to the design. For example \"Consumer: A QUIC endpoint receiving media over the network.\" Also, the definitions in 1.2 frequently reference terms defined later in the list.\nAbsolutely. Stream prioritization and congestion control is better with QUIC, and thus can support lower latencies, but MoQ over TCP should still work well enough.\nFor what its worth, we did did say during chartering that were were not going to focus on solving any of the problems that happen with fallback to TCP. Clearly we are not going to go and brak whatever fallback WebTransport does, but they key thing is this work when then WebTransport is over QUIC, not other cases\nI completely agree. This is an editorial point that there are times when 'QUIC' is used when it may not actually be QUIC. In many cases, just removing that word makes the sentence both slightly more accurate and shorter."} +{"_id":"q-en-moq-transport-524a2a2cb6871dcb763d4e02c84401080efe7c296e5e96532e3493b8918b1cef","text":"This is a initial PR to get some basic definition of subscription hints. I would like to get us to an agreement for the basic proposal and then discuss further on error codes, edge cases and impacts\/interactions with other messages (Sub Fin, RST, UPD)\nCan you mark this as so it shows as having a PR attached (also possibly 111 and 260)? Also recommend closing as this replaces it.\nI feel proposal is more ambiguous though. I agree we can make the proposed ending in this PR to make it easier to encode\/decode. I have to get higher level idea agreed upon There are subscription hints. 2 hints are defined in this PR a) StartPoint b) Interval There may be hints we might want to add in future and hence a Hint structure identifies a type and value StartPoint hint can support 4 modes. These modes are only applicable when the HintType is StartPoint Interval hint is totally separate hintType that defines ranges of groups\/objects\nHaving a registry is the idea and I was thinking once we get to agreement of Hint strucutres , then we can follow with defining error codes, registries for types and cleaning up track request params\nPayload(b) is defined to have len and value . Here is the definition as defined in the draft today x (b): : Indicates that x consists of a variable length integer, followed by that many bytes of binary data.\nSo pushed few more commits cleaning up somethings and considering suggestion from Alan on TrackOffset and removing text on error. Thanks everyone for the review feedback Would love to hear feedback on the whole text.\nGiven the length of 245. 
Could you please summarize the use-cases to help focus the discussion. In the order of small to big, my suggestion of the order would be \"3, 4, 5, 2, 1 \"\nThere are 4 examples right at the end of 245, I copied them in this review comment: URL We can discuss tomorrow.\nAlan proposed we use 6 parameters. Start Object Start Group Start relative \/ absolute flag End Object End Group End relative \/ absolute flag Preference for not two ways to say same thing. Use 0 to indicate current. Offset encoding. You can subscribe to the track twice with two different hints if that is what you want. Seems good consensus on call for this. Further conversation got to the point that this fails to be able to do both the first object of current group and current object in current group. So we need to do that too. Update on this luke porposed make rel\/abs flag for both object and group which solves this\nMy proposal is 4 parameters, combining the flag with the value: Start Group + Relative\/Absolute Flag Start Object + Relative\/Absolute Flag End Group + Relative\/Absolute Flag End Object + Relative\/Absolute Flag I'm using the lower bits of the VarInt to save some bytes and make parsing easier. Otherwise you have to deal with missing the corresponding sequence\/mode parameter. I think it's even easier if these 4 VarInts are just encoded as part of SUBSCRIBE instead of being parameters. There would be a None flag if you don't want an end group and\/or object.\n+1 on making it part of Subscribe.\nRegardless of the approach, I think these are the use-cases we MUST support: I think these are use-cases worth considering: And not mentioned in this PR, but I do think we need UNSUBSCRIBE hints.\nNew proposal: Range requests would be accomplished with a SUBSCRIBE followed by an immediate UNSUBSCRIBE. This is a race though, and a reason to not take this approach. Keep-alive would be kinda funny if only the latest UNSUBSCRIBE is active.\nIn addition to what is summarized above, we had a discussion about the way to express relative ranges. In the case of the start hint, the well understood scenarios that I heard are: start at a specific point (specific group, specific object) start at the current point (current group, current object) start at the beginning of the current group (e.g., because the receiver needs all the frames from the beginning of the current group to synchronize its decoder) start at the beginning of the next group (e.g., because the receiver does not want to process old frames, and would rather start clean on a group start) We can make it more complicated, and we have proposals such as: start N (groups, objects) before the current one (in the past) start N (groups, objects) after the current one (in the future). I think that these relative offsets bring a lot of complexity, and a lot of corner cases. They are somewhat undefined, because the subscriber does not know what the current group or object is. If it knew, it could just ask for a specific start point. The combination of groups and objects is a bit weird, because the subscriber does not know how many objects there are in the past groups. The syntax allows for silly requests, like \"start 1,000,000 groups in the past\", or requests that are latent DOS attack like \"start 1,000,000 groups in the future\", which means \"keep state in memory forever\". There is a potential simplification there...\nI want to keep the scope simple and focussed. 
When i presented on Subscription hints, there were few real use-cases that had a need for hints User joining a mid call and want to start from the beginning of the group to catchup or wait until next group User rejoins mid call and want wherever the current state of group\/object is User disconnected and joining at a known point and want to catch up from there ( say to 2x catchup render) User wants to buffer a few seconds from past to keep the player go smoothly User asking for range of frames and be done with it as catch up Can we define a system that works for all possible combinations ever - Oh YEAH .. But do we really need it - I DOUBT IT I am waiting on Alan's proposal for the promised simplification. NAME no pressure :-)\nI was pretty busy today but have been written up what we discussed during the call - using 6 track request parameters offering the full range of options that are available in this PR as is. By using defaults when a param is omitted, there was only one invalid combination, which I like. The logic to determine what group and object to actually start serving is about 20 lines of pseudo code, which I added as an appendix. I'm not sure what to make of the discussion on this PR. I'll write a second PR that is simpler and just includes the 4 basic modes, and maybe something for range. Stay tuned. Would it be better to submit both as separate PR's, and leave this proposal as is, or should I push what we discussed this morning here?\nIts ok to push here .. how easy would it be merge both into a single commit ?\nCommenting on Luke's table of use cases: Love this way of enumerating and sorting the use cases. Thanks. All the use case I think are critical are covered in the first table and nothing that I did not care about seemed like it was going to complicated implementations so my first reaction was YES. But looking at it, we were getting close to covering 100% of all the cases and since relays are likely to upgrade slower than clients, I just came down to we should just support all the possible combinations even if it is not clear there is s use case for it today - that will actually make implementations simpler. I think the proposed solutions are starting to look like they do all the cases anyways. By all the cases, I mean a subscribe can Start at absolute or relative for both group and object. Relative can have delta to current. Start at absolute or relative or fin for both group and object. Relative can have delta to current.\nOn the topic of do we need unsubscribe hints .... How I think of it is subscribe installs some state on the relay or publisher telling it what to do. It can say what the first and last object is that should be forwarded. However, I think as unsubscribe as the message that removes all that state. If we start making unsubscribe be \"forward until you see object X\" then remove all the state, we get into questions like what happens if object X-1 arrives after X. That said, I do agree with the use case that Luke is getting at of things like. Client was subscribed to something forever, but now I only want to be subscribe up to the end of the current group so that the client can do a clean switch over to some different subscription at the group boundary\nI was thinking about use case for things like text media some of the stuff with the game moves draft that are not audio and video. It is pretty common to want to be able to get the last N things in a group first then later go get the earlier things in a group. 
The usual rest style pagination type thing where. Imagine you had a group where each object was a message in a chat, you might only want the most recent 10 messages but then when user scrolled back, to get the rest. The more I thought about this, I think we have a requirement for start group notes -- -- absolute Get last N object in a given group\nYeah, I'm thinking we should use RESUBSCRIBE instead of adding hints to UNSUBSCRIBE. UNSUBSCRIBE would always be an immediate and cleanup any state, while RESUBSCRIBE would be used to update the start\/end bounds.\nI took the liberty of adding a 3rd proposal to the PR but did not change anything else in the PR.\n+1 - that does seem like it will be a very good way to deal with it\nNAME I wrote this up on my phone before noticing your commit. It's basically your proposal 3, but not using optional parameters. It would be encoded as: I really like that Alan wrote some pseudo-code, although I agree that it shouldn't be in the final draft. The devil is in the details with this encoding, and I think this scheme is both the simplest and the most powerful. It's also the smallest on the wire if that's any consolation prize.\nAnd I thought it would be a bug to specify startobject=none with my encoding, but it actually seems like a valid use-case. start group notes -- -- none Check if a track exists You would get a SUBSCRIBEOK if the track exists, but the subscription won't actually do anything. It kinda seems useful for specifying intent too. start group notes -- -- N Start warning the cache for group N\nMy proposal in one of the authors call was Subscribe Update message.\nSo NAME proposal in the thread above that is similar to mine just seems better than what I proposed. It covers all the cases, it is easy to describe and validate work and does not have a bunch of uses cases. I would put in a very weak argument for instead of taking the bottom two bits to encode the (absolute, relative, future, none ) , instead just put that into it's own VarInt. I would not argue very much for that because this is all going to get put in 5 lines of code I will never look at again and it's does not turn up on the wire in something that has to be small. Probably the worst part of is is someone looking at network dump might be confused but that just means the wireshark dissector needs to know about how to the >>2 works. I'd be totally fine with proposal as is. Some strong points I see for this: 1) we could merge it by Monday 2) It covers all the uses case we can imagine now plus has a high chance of covering the future cases we can not imagine now 3) It is simple to understand, think about, and test\nA 0-16 range would work for the majority of relative\/future hints, so it would definitely save that byte. But I'm fine keeping it verbose for now; the important part is the reusable encoding. would be useful for checking track existence without receiving OBJECTs. But let's rule it out for now because it would mean a subscription that will never match objects, which would need some extra text. would be same as but that's not obvious. We can also rule it out. is required to specify the FIN of a group. ex. end at the last object of group 69. is invalid though. I think it's a hint right now, since the publisher can choose to drop objects\/groups on a whim. Even if you ask to start at group 3, you might never get it for congestion control reasons. But the bigger issue is that a subscriber doesn't know the upstream's expiration time. 
A publisher may only cache the last 3 groups, so a subscriber can't just ask for the last 8 groups and expect to receive them all. Throwing an error in that situation seems futile. But I think the publisher should signal the (absolute) start\/end range in SUBSCRIBEOK, which may be a subset of the requested range. Then it would be a start\/end range request, resulting in a start\/end range response.\nI like the direction that Alan has taken Luke's stuff. To bikeshed the Hint: I think \"Location\" would be a better name than \"Hint\". I think this is not really a hint but more a filter: the relay will not send me objects on this subscription before the Start Location or after the End Location. So if you want my bikeshedy 0.000002 cents, I would define Location { GroupMode (i), GroupValue (i), ObjectMode (i), ObjectValue (i) } ObjectModes = None, Absolute, Past, Future GroupModes = None, Absolute, Past, Future The subscribe would have a Start and End Location. Location becomes a clean way to specify a specific object in a track and may turn out to be useful in other parts of the spec.\nI'd like to take a moment to reflect that the notation we are using for is both shitty and bad. Take Hint { Mode (i), [Value (i)] } Modes = None, Relative, Next, Absolute By shitty, I mean super non-intuitive. What does (i) mean? Is [X] optional or an array? But no, that would be ... Can I have more than one X? If I had Mode(2) followed by [Value(i)], would it be bit packed or would the Mode be padded out to a byte? And what type is Mode? Or if Mode is a type, what then is the encoding of all that crap? Most schema languages are fairly intuitive to read and guess what they might mean. This is not. Then there is the bad. Say I want to define a type called Mode and then have two fields in a record called ObjectMode and GroupMode, both of type Mode. What do I do? Location { GroupMode (i), ObjectMode (i) } ObjectModes = None, Absolute, Past, Future GroupModes = None, Absolute, Past, Future This cannot possibly be the right way to do it. I mean now Mode is not reusable and repeats itself. If only we had a well defined language for expressing this. Oh wait, we do: URL\nI went ahead and enumerated the permutations group notes -- absolute range request absolute tail of group absolute absolute full group relative head of live group relative tail of live group relative next audio sample(s) relative full group future head of future group future future future future group(s) none tail of track none The ones labeled with audio are only when you have partially independent samples. For example, each audio GROUP is 2s long and each audio OBJECT is 21ms. This would get you the last 105ms of audio (target initial buffer) most of the time. However, you could get unlucky and a new group could have started with less than that available, but it doesn't really matter if the target buffer is small because the player will just wait. Personally, I want to remove this use-case. If objects can be independent, then they should be separate groups IMO. So each audio frame would be a group of size 21ms. You could then combine the group\/object mode since object boundaries can either be absolute or none.\nI pushed a new commit with PROPOSAL 4, which we have been discussing, pulling the \"best of\" text and examples from the other proposals. I deleted the PROPOSAL 1 sample code and didn't add code for 4. 
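As a rough illustration (not draft text; the mode names and the resolve() helper below are assumptions), the Location shape proposed above keeps the relay logic short, because each mode reduces to a one-line mapping onto an absolute sequence number:

    # Location modes as named above; the numeric values are invented for this sketch.
    NONE, ABSOLUTE, PAST, FUTURE = 0, 1, 2, 3

    def resolve(mode, value, latest, default_if_none):
        # Map one half of a Location onto an absolute group or object number.
        if mode == NONE:
            return default_if_none          # e.g. latest group for Start, open-ended for End
        if mode == ABSOLUTE:
            return value
        if mode == PAST:
            return max(latest - value, 0)   # value groups or objects before the newest known
        if mode == FUTURE:
            return latest + value           # value groups or objects after the newest known
        raise ValueError('unknown mode')

A Start Location of {GroupMode=PAST, GroupValue=2, ObjectMode=ABSOLUTE, ObjectValue=0} would then resolve to two groups back, starting at the first object of that group.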
There's one DISCUSS: whether we should allow Start\/EndObject to be None and have defaults, or require them to be specified.\nCurrently, we have GROUP SEQUENCE and OBJECT SEQUENCE parameters that allow subscribing to a track at a specific offset. As far as I can tell, we do not actually describe what happens when those are not present. I imagine what most subscribers would want is to receive the most recent N groups when they subscribe (for some subscriber-specified N that is determined by how large they want their buffer to be), and we should add a parameter for said N.\nThis is a special case of the Subscription hint issue IIUC () Wondering if we should discuss them together\nI hit this in moq-chat when traversing a non-caching relay - a client's subscription to a catalog track came after the most recent group\/object was published, so the client didn't get a catalog. Do all relays need to implement caching of some kind in order to satisfy late-comer requests? Or something else?\nI think the default behavior should be to cache\/serve the latest group. It's basically required for any sort of catalog. But I do think we should support cache-less senders. They can basically only support from and will ignore requests to start subscriptions further back.\nSubscribers need a way to hint to publishers where, within the requested track, they want delivery of objects to begin. This is needed because different media types can be handled differently by the application (audio can be delivered to the application immediately, video needs some form of key-frame to have the best experience, or something more specific that the application wants). To that effect, I want to propose an experimental extension to the Track Request Parameters called \"SubscriptionHint\" with the following variants. latest (0): The \"latest\" subscription hint identifies delivery of the objects for the requested track to begin from the most recent group and object sequence. catchup (1): The \"catchup\" subscription hint identifies delivery of the objects for the requested track from the beginning of the group provided in the GroupSequence track request param. waitup (2): The \"waitup\" subscription hint identifies delivery of the objects for the requested track from the beginning of the next new group.\nOne of the reasons this is key is that often a client is subscribing to a track with a different bandwidth because there is not enough bandwidth for the current track (the case where a client is receiving a high bandwidth track and wants to move to a low bandwidth track of the same stuff). In this case, there is already congestion happening and the client wants to move to the new track quickly without making the congestion worse. Hints along these lines really help in this critical case.\nOof yeah, this is a difficult problem that we need to solve. A new player can start X seconds behind the live playhead, where X is the target size of the jitter buffer. Real-time will set this to 0. An existing player can switch between tracks at timestamp Y. (NAME example) Neither the player nor the relay knows the mapping from group to timestamp. Tracks might not have aligned groups; different offsets, frequencies, boundaries, etc.\nMy issue with this proposal is that (0) latest and (2) waitup are based on the relay's cache and are extremely racy. The assumption is that the relay always has the latest group\/object in cache. 
The reality is that it often won't, nor does it even know what the \"latest\" group is supposed to be. For example: A player is subscribed to 720p and encounters congestion after . It wants to subscribe to 360p and it has three choices: If the latest object in cache is , then we receive some undecodable objects until starts. It's wasteful and exacerbate congestions, but hopefully for not too long. If the latest object in cache is , then is undecodable and and playback stalls until is generated in a few seconds (oof). If there's no cache, we could forward the SUBSCRIBE upstream with the and run into either of the two above scenarios. Note: the upstream may have it's own cache; it won't necessarily be the origin. If there's a partial cache, the \"latest\" object might actually be in the past. For example, another subscriber issued a SUBSCRIBE for 100ms ago, so our latest in cache might be . We incorrectly start delivering at and exacerbate congestion. (oof) Note that there's no way for a relay to know when it's transitioned from a partial cache (not caught up to live playhead) to a full cache (caught up to live playhead). Issuing two separate subscriptions, one for each mode, would be the only option, but that would temporarily double bandwidth usage (bad for first hop in particular). This works the best because it doesn't depend on what is in the relay's cache. It knows to always start at . However this requires groups to be aligned across different tracks (ex. HLS). If 720p does not correspond to 360p , then this isn't an option. It's also not an option for startup since the player doesn't know the latest group number. The player needs to somehow know which timestamps correspond to which group for each track. If the latest object in cache is , then we correctly wait until starts. If the latest object in cache is , then playback stalls until is generated in a few seconds (oof). If there's no cache, we could forward the SUBSCRIBE to the upstream and run into either of the two above scenarios. Note: the upstream may have it's own cache; it won't necessarily be the origin. If there's a partial cache, the \"latest\" object might actually be in the past. For example, another subscriber issued a SUBSCRIBE for 100ms ago, so our latest in cache might be . We incorrectly start delivering at and exacerbate congestion. (oof) Basically the same problems as (0), but on the other side of the coin flip.\nThat was a wall of text; hopefully I explained it well enough. Like I said initially, the fundamental issue is that neither the player nor the relay know the mapping from group timestamp. What if we resolved that? contains a timestamp field contains a relative\/absolute timestamp. contains an optional timestamp for graceful termination. This suffers from some rounding\/alignment issues but it's fine. Relative timestamp requests are also problematic with partial caches. track contains group\/timestamp pairs. contains start group and optional end group. contains optional end group for graceful termination. This is absolutely the best option for relays. It knows what groups are missing in the cache and can fill them on request. It also means the relay can have multiple SUBSCRIBE requests upstream for the same track without fear of duplication. The problem is that the media protocol needs to produce a live group listing kinda like HLS. 
This incurs an extra RTT on startup, although it's super useful for seeking backwards and DVR.\nI think there are scalability and flexibility benefits to keeping relays agnostic to time and in fact to any internal characteristics of the objects they are serving. Keeping them decoupled allows the content to change (and the complete timing mechanism if so desired), without having to change the relay networks. The introduction of relay’s requiring timestamp knowledge stems from the original assertion that I think this need not be true. We can teach players the relationship between group and timestamp without too much difficulty. We do this all the time with HLS and DASH. In both cases the HTTP servers delivering these formats know nothing about the relationship between segments and time, yet we are able to build players which seek to precise points in a stream. We can do something similar with our streaming formats and the ideas already presented in this thread by Suhas and Luke. One difference is that I would make would be to allow the group offset described by Suhas to be either an absolute or relative one. As a proposal, what if the WARP catalog listed two things: The GROUP duration in milliseconds. The name of a track which described GROUP number\/timestamp pairs (exactly the track suggested by Luke above). We assume that this timeline track allows delta updates, perhaps describing the complete DVR window every minute at a group boundary, with incremental updates in between. Use cases: Start as close to the live edge as possible a. Since the last group represents the last independently accessible point in the stream, the player would issue a plain SUBSCRIBE b. The relay would serve content starting with the last group it has cached (its default behavior). c. If the relay does not have this content cached, and there is no pending subscription, then it would issue its own SUBSCRIBE upstream. Start with a 5s buffer a. The player would calculate how many group’s it would need to guarantee a 5s buffer. Assuming the advertised group duration was 2000, then it would need 3 groups. Using zeroth based relative group addressing, it would issue SUBSCRIBE (-2). b. The relay would serve from the 3rd last group that it had cached. c. If the relay does not have the content cached, it would send its own SUSBCRIBE(-2) upstream. If it has an open subscription, but has only received 2 groups, then it should wait for the 3rd before responding. Seek to a wall clock time of T seconds a. The player would subscribe to the \/timelines track, using SUBSCRIBE so that it receives the last full DVR description. b. Based on the data in the timeline track, the player calculates that it needs a group number of 5321. It issues a SUBSCRIBE(5321). c. The relay recognizes that this is an absolute reference to a GROUP (as all relative ones are negative) and returns content starting at GROUP . Player encounters congestion at group 53 on 720p and wants to switch to 360p track a. WARP places the packaging restriction that group numbers across tracks must carry the same media time offset (it actually has this requirement already) b. The player knows it was playing group 53 when it encountered slowness. Its next chance for a switch will be at a group boundary. Since it knows the group duration from the catalog, it can decide whether it is better off waiting for the next group, or reloading from the same group. In this case it is near the end of the group, so it decides to fetch the next group. c. 
It issues a SUBSCRIBE (54) for the 360p track and a UNSUBSCRIBE(53) for the 720p. Cheers Will\nAbsolutely NAME The main difficulty with relative requests is deduplication: a relay has trouble merging subscriptions that use absolute and relative groups when the cache is empty. ex. SUBSCRIBE 480p -3 (50ms delay) SUBSCRIBE 480p 351 The relay doesn't know if the existing upstream subscription (-3) will fulfill both. In fact the relay doesn't know if 351 is even remotely close; it could be a group from minutes\/hours ago. We don't want to download the same objects twice, but maybe that's okay when the cache is empty? I think we either: only use only absolute groups or add a way to OR an existing SUBSCRIBE\nThinking about it some more. If the player only knows the duration of each group (fixed), then it could use relative timestamps, but it's still a massive headache for the relay. Suppose two SUBSCRIBES from separate viewers: A relay with an empty cache issues the upstream. It starts getting some OBJECTs: . And then it gets a from a viewer. It would be an error to serve those three objects from cache. Why? Who knows if they were the latest three or from an hour ago. The relay could hypothetically augment the existing upstream subscription: . However, it still doesn't know which groups correspond to -3. No additional objects would be sent if 6330 actually was the live playhead, and the relay wouldn't know where to start the -3 subscribe. IMO the relay needs a way of converting relative groups to absolute groups for this to work. Maybe SUBSCRIBE_OK contains that translation? ex. . It's still complicated though.\nYou're indeed right. This the fatal weakness of the relative request approach. If the relay has the latest group cached and an active subscription, it works fine, but the player can never be sure of that and the relay can never disambiguate fresh from stale content. If the relay can do that translation, then why couldn't the player do the same and just request an absolute group to begin with? What are some solutions to this problem? Players always make absolute requests. Here are two ways to do this: a. The catalog instructs the player in how to calculate the group number for any given time, for example, via a start time and then modulus the group duration. To avoid players having to have an accurate clock, the SETUP response from the server could echo back the current time. This is essentially a DASH approach to timing. b. The player subscribes to the \/timeline track which describes the wallclock time and media time associated with the start of each group. This is similar to an HLS approach and has the downside of requiring 1xRTT before starting to discover the latest group. It has an advantage that it can describe groups of varying duration and Each group carries a timestamp in its header indicating the UTC time at which it was produced. This is not violating any decoupling of relays and payload because the time of production is core characteristic of any payload and is independent of the internal media time. With these timestamps in place, the relay can judge freshness. A player can issue SUBSCRIBE (-5) which means gives me this track starting with any group whose timestamp T > (now -5). We could also consider SUBSCRIBE (from: 1695571105212), which would require the player to have an accurate clock (although SETUP could be used to address that). Note that this scheme also works for DVR seeking if the player knows how to translate group number to UTC time. 
Same as [2] except that we add both a media timestamp and a UTC timestamp to each group. This gives the flexibility to ask relays to begin subscriptions at both media time offset and absolute and relative UTC offsets. The downsides are that it binds moq-transport to internal properties of payloads. One way round this is in core moq-transport we can specify two \"index fields\" in the header of each group. These fields must be numeric and monotonically increasing. Relays don't know that these fields represent, however they can service subscription requests that reference them. The streaming format would then bind index A to UTC time and index B to media time. Other formats may bind different attributes, but all could still work with generic relays. After writing these out , I'm in favor of [3] with generic indexes.\nYeah exactly. UTC timestamps get annoying because of clock synchronization. It's also annoying because the player has to know the latency to the origin; requesting media that was generated 200ms ago results in a very different experience if the origin is 10 miles away versus 10k miles away. I like the idea of delegating the group timestamp mapping to the application. The catalog could contain a formula (like DASH) or it could maintain a \/timeline track. That being said, we still run into the same issue; how are you supposed to request the latest group in the \/timeline track? Do you always request ? And is the latest, or do you need to wait until you receive the full \/timeline? I think there's a missing piece. If I were to implement this in HTTP, I would make a GET request with an ETAG to get the latest group\/object sequence. There's no equivalent in MoQ which is problematic. I think we still need relative group requests, but the relays need an authoritative source somehow; it can't decide what is \"latest\" from cache.\nI think that's not what it meant. By \"latest\", relays starts delivering whatever it has in the cache as the most recent group\/object and it doesn't need to remember anything about it other than they happen to be in cache. \"waitup\" is saying given everything once you get objects from current group +1.\n+1 on keeping the relays not aware of media characteristics. Since its complex and may not apply the same way to all the types of media getting transported (live vs dvr vs chat vs catalog vs something new). +1 to keep players aware of mapping from media time to group number. I would be interested in knowing where this will not work though. I agree with Will that the warp catalog can specify the needed relationship here Also the susbcribe message today has \"Group\/Object Id\" in its track request parameters. I would proposes merging Will's usecases (2) and (3) into given a Time T, find the group number you want to start fetching the media Issue a subscribe with that group number Relay can decide if the requsted group number is behind or ahead of the current group and if it is in cache serve them accordingly Basically i think we don't need offset. But would love to hear from NAME where we can't do away with absolute groups\nWe need to be clear on the term latest. In my proposal of Subscription Hints \"Latest\" is in terms of latest in the cache. So if the cache is current running group-45, object-32, then that's the latest. And waitup is , wait until you see group number change before giving me objects. 
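A tiny sketch of that cache-based reading of the three hints (the numbering and helper below are illustrative assumptions, not the draft):

    LATEST, CATCHUP, WAITUP = 0, 1, 2   # SubscriptionHint values as proposed above

    def first_delivery_point(hint, cache_latest_group, cache_latest_object, requested_group=None):
        # Return the (group, object) at which delivery should start for this subscriber.
        if hint == LATEST:
            return cache_latest_group, cache_latest_object   # newest thing currently in cache
        if hint == CATCHUP:
            return requested_group, 0                        # start of the group named in GroupSequence
        if hint == WAITUP:
            return cache_latest_group + 1, 0                 # wait for the next group to begin
        raise ValueError('unknown hint')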
I think with the hints listed, groupid & objectid being in the track request parameters and player providing the mapping function from timeline to group, we can address most of the use-cases being raised here. I would interested to learn what use-cases cannot be addressed with the proposed scheme\nCan you give an example using the \"latest\" mode? Would you use it for playback start or switching between tracks or both? As I understand, groups are independent but objects are not. would be a P-frame or catalog delta. I don't understand why you would ever want to start your subscription at an undecodable object. Starting at or would work, but never .\nFor the video delivery case, I would use \"waitup\" if i have no idea what the current group is. Since it will start delivery the objects from the beginning of the next group. I would use latest for audio use-case where the is fine for me to either render or conceal. Again we can enumerate the use-cases we want to address and I am happy to share my thinking around how the hints help. As part of exercise , we may come out deciding if we need more granular things than what is being proposed here. NAME could you please share one liner for the use-cases you have been considering from a subscribe perspective ?\nThe key point we're forgetting in the freshness quandary (or perhaps it is implicit) is that the relay knows whether it has an open subscription for a given track. This is a key bit of information. If it does have an open subscription, then the objects it has in cache are by definition the latest, and hence any 'relative' or 'latest' queries can be satisfied. If it does not have an open subscription, then it cannot make any determination about the freshness of any objects, even if it has some in cache and it must renew its subscription before responding. To illustrate this, consider a relay which has groups 637,638,639 in cache. Five minutes later another client joins issues a SUBSCRIBE (latest). The response of the relay should be different in the two cases below: It has an open subscription for that track - in this case it knows that those are still the latest objects available for that track and so it would return 639 It does not have an open subscription for that track (because the prior client that subscribed went away) - in this case the relay must make a SUBSCRIBE(latest) request upstream and return the first group it receives (perhaps 763) to the client. The advantage of relative requests is that the player does not need any understanding of the relationship between group number and time. This makes for simple players, as long as they don't need to seek behind live with any accuracy. The relative datum, in this case the \"live edge\", is constantly changing, and for players that need to seek back accurately this can be a problem. Depending on when you make your request and how long it is queued by intermediaries, you may get two different answers. Absolute groups make precise group selection for more deterministic and are robust. For live-only players (such as the web meeting use case), we probably could get by with relative addressing alone. 
However since our charter includes intermediate and higher levels of latency, we should design moq-transport such that it is flexible enough to handle both relative and absolute group addressing.\nif there are no upstream subscriptions active and hence no published data is being received , then there are couple of things to note What's in the cache is the current source of truth and it needs to be served yes Relay may forward the subscription upstream and feed the data that arrives towards the subscriber ( and also add it to the cache) This will keep \"latest\" property preserved. Players need to handle gaps that might be caused for several reasons including relay cache purges, publisher going away\/reconnecting, subscribers loosing state, relay reboots. I see these as player level details and not specific to moqt. Isn't it ?\nIt's not a source of truth at all. It may be 5 minutes old or 50 minutes old. Without the active subscription, there is no guarantee about its status at all other than it was previously served to some client and hasn't yet fallen out of cache. Why would a client wanting \"latest\" want that and if it was sent, how would the client know that it wasn't the latest. It would start rendering and then jump ahead when the true latest content arrived . If you need a subscription-hint for give-me-anything-in-your-cache-not-matter-how-old-it-is, we can create that, but thats different from \"latest\" and I'm not clear on its utility.\nIn particular, I would like to see MoQ also support some improvements for DVR or even VoD playback, for example: faster frame-accurate seeking. We don't necessarily need the protocol to provide a complete implementation, but having the ability to absolutely address groups would be especially useful in that context. One concern I have about adding timestamps into the mix is in how they would apply to content being played back at arbitrary points in time relative to the original recording time.\nHere's my current thought. SUBSCRIBE can contain a relative or absolute start group. A bit flag is used to specify if it's an absolute or relative request. N means serve group N and onwards +1 means serve the next latest group (waitup) -0 means serve the latest group (catchup) -N means serve N groups prior to latest SUBSCRIBEOK contains the latest and . Useful for players: it knows it should start rendering after decoding object=K Useful for relays: enough information to serve relative requests from cache. Relays can compute \"latest\" if it has an active subscription: if SUBSCRIBE active: if SUBSCRIBE inactive: You can serve relative subscriptions from cache only once you have an active SUBSCRIBE and the corresponding SUBSCRIBEOK. If you don't have an active subscription, then you need to SUBSCRIBE again. The SUBSCRIBEOK will tell the relay how stale the cache is and what can still be served.\nSince URL can only be equal to but never greater than URL, this simplifies to Another another topic, we should also allow the use case of subscribing to a stream before any groups have been produced. In this case we need some convention to convey that no groups are yet available, for example group=null and object=null\nNah, is updated on receipt of a new object. It's used for any new relative subscriptions. Here's a full example from the relay <- relay POV. 
The edge receives a SUBSCRIBE -3 from a viewer or downstream and has an empty cache: The subscription hasn't caught up to the live edge yet, but now the relay has a snapshot of how to map relative requests to absolute requests. If the relay receives a relative SUBSCRIBE during this state, it will know what can be served from cache. For example, a new SUBSCRIBE asking for 0 will not receive 6743 thanks to this hint. It's the only object in cache at the moment, so it would have been served with or but not with this approach. The SUBSCRIBEOK says that 6745 is the latest so we just have to wait for the data to finish transfering. This snapshot of latest will go stale, but the active subscribe will catch up soon. The snapshot exists mainly to support the first ~100ms where the subscribe is backfilling. In the meantime, the relay keeps receiving groups, possibly out of order: Upon receiving 6746, the relay updates the value since it's now larger than the snapshot from SUBSCRIBEOK. A new downstream SUBSCRIBE asking for -1 now will get 6745 and 6746 from cache. Note that there's no new upstream SUBSCRIBE; we can continue to serve relative requests indefinitely from cache unless they are too far in the past. Just to complete the example, let's suppose the relay UNSUBSCRIBEs right now for whatever reason. 6742-6746 are still in cache but we've wiped our \"latest\" value since the subscription is no longer active. The relay could get either a new absolute or relative downstream subscriber: : the relay serves 6745-6746 from cache while SUBSCRIBE 6747 is sent upstream. : the relay doesn't know if it can serve from cache since the active subscription has ended. The is sent upstream and the relay behavior depends on the resulting : : both 6745 and 6746 are served from cache : only 6746 can be served from cache : the cache is stale and can't be used In the first two cases, the relay would receive a duplicate copy of the objects it already has cached. That's probably fine, but a way to avoid this would be to subscribe based on both absolute and relative groups:\nPlease do not use these tricks of encoding meaning as the sign of an integer. It looks like a premature optimization, and in fact it is not: there is no syntax defined to encode negative integer, you will end up with a complement to 2 64 bit integer encoded on 8 bytes. It will be much cleaner and also more efficient to encode the hint as a tuple: meaning: an enum value describing what the subscriber wants, such as start at the current group, the next group, the first group, a specific group, or a specific location. additional parameters if needed, such as actual group ID if not \"current\", \"next\" or \"first\", and actual object ID if needs to be specified.\nConsensus at the interim is that we definitely want this and we both want to relative (n groups back) and range requests (ie: group n to group m), which could be open-ended. Joining from the currently ongoing group (startpoint=-1 in slides), and the next group (startpoint=1) had strong interest. start_point=0 starts from the current object within a group and there is contention about whether it's necessary given groups are intended as start points, but there was enough interest in the room that it was felt the correct way forward was to support it and then we can revisit the issue in the future if needed. Additionally, there is interest in object granularity, particularly for the range requests. A solution should and .\n+1 to christian and happy to help with the PR . 
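A minimal sketch of the bookkeeping the relay example above implies, assuming SUBSCRIBE_OK carries the publisher's latest group (the class and method names are invented for illustration):

    class TrackState:
        def __init__(self):
            self.active = False        # is there an active upstream subscription?
            self.latest_group = None   # trustworthy only while active

        def on_subscribe_ok(self, latest_group):
            self.active = True
            self.latest_group = latest_group   # snapshot used while backfilling

        def on_object(self, group_id):
            if self.latest_group is None or group_id > self.latest_group:
                self.latest_group = group_id   # advance as new objects arrive

        def on_unsubscribe(self):
            self.active = False
            self.latest_group = None           # cached 'latest' is no longer trustworthy

        def resolve_relative_start(self, groups_back):
            if not self.active or self.latest_group is None:
                return None                    # must (re)subscribe upstream first
            return max(self.latest_group - groups_back, 0)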
thanks\nHere's what I was thinking: These fields could be optional via a parameter, message type flag, or a . My preference would be the mode because it's easier to decode and encode. It would take 4 bytes to specify all of the start\/end boundaries versus up to 12 bytes if they're all parameters. Here's how the modes from would map: I think this is simpler and more powerful. A publisher just needs to write a function to convert a relative boundary into an absolute one based on the latest value in the cache. This might be an asynchronous process for a relay if the cache is empty or new. Here are some things you can't do with the proposed encoding in : Start 4 groups in the past and end 2 in the future: Start at group 69 and end at latest: Start at 69 and end 4 groups in the past (might be a noop): Keep refreshing a relative subscription while it's needed:\nSome use-cases, like catchup, going back to a given object for a layer refresh, or serving the game state from a given object, and similar scenarios, would need a name to ask for the object. It would be good for the protocol to be able to subscribe to not just a track and group, but also an object.\nAs an individual: +1. An application should be able to request a single object. Maybe even a byte range of an object, but I would punt that until an actual use case materializes.\nCan you elaborate on the use-case? Resumption after a connection is severed makes sense. We need to solve that for both the contribution and distribution use-case. NAME has some strong opinions here. I don't quite understand the second use-case. Why would a consumer need to refetch a single object, but none of the objects it could depend on? It only makes sense when there's a hole in the group, but why did that happen and how does the subscriber know about it?\nAnd to be clear, I also think we should have a GET equivalent, but I want to make sure I understand the ask.\nThe encoder might have skipped some frames due to input loss or another issue, but maybe they are available from a redundant encoder and you would like to \"fill the gaps\"? The subscriber only sees the gap; it does not need to know the dependency structure.\nTwo use cases I am thinking of, both on the distribution side: Use case A: a client joins a session that is in progress and wants to join Track A. The client wants to start playing before the next GOP starts, so it subscribes to Track A with the most recent group and wants to get all the objects in the group. Use case B: a client is watching Track A and has received all the objects up to group 123 object 45, then loses WiFi and switches to 5G. At this point it forms a new subscription to Track A, but it wants to start at group 123 object 46 because it already has the stuff before that. If it had to wait to receive all the stuff in group 123 over again, it would add delay as it switched networks. There are some more uses from scalable codecs, but if we can solve the above two cases, we will probably solve the other use cases as well.\nI think the solution to this is simply that the Subscribe can have some sort of flag that indicates if the client wants just future objects, or wants to catch up on all the objects in the current group, or wants all the objects after some particular objectID\nThere isn't a way to GET a single object - you can SUBSCRIBE to the track starting from that object, and perhaps issue an UNSUBSCRIBE after that. 
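As a rough illustration of that pattern (the session API and field names below are assumptions, not the draft), fetching a single object is just a fully bounded subscription:

    def fetch_single_object(session, track, group_id, object_id):
        # Hypothetical one-object fetch: a subscription bounded to exactly one object.
        sub = session.subscribe(
            track,
            start_group=group_id, start_object=object_id,   # absolute start
            end_group=group_id, end_object=object_id,       # absolute end (inclusive)
        )
        obj = sub.next_object()   # a single delivery is expected
        sub.unsubscribe()         # clean up once the object has arrived
        return obj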
I'm going to leave this open for now.\nWill suggested solving this as part of and have both a start group and end group.\nMerge it :-)"} +{"_id":"q-en-moq-transport-2c903675fc527f243f2243c87e537a326e376c86edadd586556d18307512c9a2","text":"Feel free to change the numbers to any ones you prefer.\nThere's a single SETUP message, but the contents change based on if the endpoint is a server or a client. . This is difficult to implement in type safe languages. I would implement and , sharing the same ID. However, it's then difficult to implement a generic method as it needs to know the expected role to distinguish between the two. I propose a separate and message akin to how TLS has a separate and . Alternatively, implement to reclassify SETUP and OBJECT as control and data stream headers respectively.\nHmm, so this isn't a problem in my code any longer since made SETUP a special message type. We should still figure out if both the client and server SETUP use the same ID, or separate IDs like TLS.\nI still like the idea of, if these message are different, they should have different names as it will just be confusing if they do not.\nAgreement these should be different message types, just like TLS.\nConveniently, message id=2 is currently not used. I propose ClientSetup = 1 and ServerSetup = 2.\nThinking about this a bit more, do we want to bake in the assumption that the client always speaks first into the protocol? For raw QUIC, typically, the server would have 1-RTT keys first.\nI would love to see something like HTTP\/3 SETTINGS where both endpoints can send the message in parallel. That requires version negotiation at a lower level in something like ALPN though. The SERVER SETUP would be sent first in native QUIC while the CLIENT SETUP could be sent first with WebTransport (in parallel with CONNECT).\nLGTM"} +{"_id":"q-en-moq-transport-074df3e4c00b18def7854bc18e498b40596e751c5048f3d1cdab0995de4bd999","text":"This were replaced in Subscribe Locations (hints)\nThese parameters have been replaced with explicit fields in the SUBSCRIBE message. All that's left (outside of SETUP) is AUTHORIZATIONINFO in SUBSCRIBE and ANNOUNCE. I would just as soon get rid of non-setup parameters entirely and have an \"authinfolength\" field in the messages which can be zero. At a minimum, we need to eliminate GROUP and OBJECT_SEQUENCE.\n+1 I would go even further and make a dedicated AUTH message.\nWhoops, those were deleted but must have been re-added in a merge conflict."} +{"_id":"q-en-moq-transport-765a757ad824845d3bda7474a9aff0b93b17dd670be886f40aa46a828bf0bbed","text":"When we added (b) we didn't update this.\nSUBSCRIBE_RST as well\nand ANNOUNCE_ERROR"} +{"_id":"q-en-moq-transport-923edc5f7eef6a23bda9f3fe1c5b6bbf9fd66ea3261a7fb2690af31dc0b2ea07","text":"This PR is expected to be base PR to support multi-subscribe like flows that will follow up. It adds support for SubscribeID and also renames the trackId to to avoid the confusion and make the scope clear.\nNAME NAME NAME wondering if the latest commits address the feedback. thanks\nThe normative bits look fine. PR needs some cleanup. 
Recommend splitting ID -> Alias to another PR, so this subscribe ID doesn't get caught up with bikeshedding the field name.LGTM but I would like Subscribe ID to be the first field in every message for consistency.Still looks fine after the split Having reviewed some code that does subscription matching with the current draft, I strongly endorse splitting those."} +{"_id":"q-en-moq-transport-63cf0830b8b66186a027eb19b3c800665b351c6d10f4b978de24fc204b968d9a","text":"I also wonder if Connection URL should be renamed to Session URL?\nPart of PR has text \"MUST close the connection if there are duplicates\" It seems like we should be closing the session, and we should specify an error code.\nI think this is editorial, but unsure because of \"specify an error code\"\nI would prefer: instead of:"} +{"_id":"q-en-moq-transport-76b7596b7ceb92dedd3a52d26fa4696dcdee55650df55cf1356e9222315ee553","text":"I believe some of this is covered by the recently updated GOAWAY section and I'm not sure what else to add here.\nI don't think these TODOs have been covered yet. That being said, I would rather we file issues for these so there can be a conversation, rather than litter the draft with TODOs.\nI'm unsure what's unresolved, but if you can file new issues as appropriate and link them to this PR, that would be helpful."} +{"_id":"q-en-moq-transport-5f948af8ec1466b2c4aa0061f6985d56fb7d5fbc14f79b96fd43e2fe46025d1d","text":"I think the intent of the draft was clear but this makes it explicit not to open a uni stream before setup exchange is done.\nIt's unclear what messages are allowed to be sent or received during the SETUP handshake. Here's my intuition: Until receiving the SETUP, the client: MUST NOT send messages MUST buffer any received messages. Until receiving the SETUP, the server: MUST NOT send messages. MUST NOT receive messages. The client needs to buffer messages because the server is unaware of when the client receives the SETUP. There's no equivalent to HANDSHAKE_DONE, and adding something like that would add an RTT anyway.\nA potential optimization exists if the message encoding if the same for all supported versions (invariants). For example, if the client only advertises support for versions 1,2,3 and those versions share the same SUBSCRIBE encoding and meaning. This seems unlikely, given a separate version was created in the first place, and seems very brittle. If this was the case, then the client could process any invariant messages immediately and buffer the others. However, the client still couldn't send messages unless the server buffered any received messages.\nThe most flexible optimization would be the one where you could encode the SUBSCRIBE_REQUEST for every version you offer as a field in SETUP parameters. TLS does not do exactly that, but there are some things in ClinetHello that are only applicable for certain versions."} +{"_id":"q-en-moq-transport-fa8541860650082af188d41db48b36469a4aa390617a06d6ede8ca0e97396001","text":"Thanks NAME - all the proposed changes look like definite improvements. Thanks I need more coffee today.\nIn HTTP, there are a number of restrictions on what characters are allowed in a path, headers, etc. I don't see any text restricting track names and track namespaces, but the examples are all ASCII, implying the namespace and name are not intended to be completely opaque and are instead intended to be human readable. 
There's nothing in the spec that prohibits them being treated as opaque identifiers, but I'm unclear if that's the intent and I think it'd be good to explicitly say what we intend.\nMost of the restrictions in the HTTP world are derivative of the restrictions in URIs: see RFC 3986, sections 2 and 3. I'd suggest we adopt those limitations and the percent encoding method from HTTP. That allows folks to use a well-known set of methods and aligns with the idea that we might have one or more URI schemes in the MoQ protocol space.\nI believe we currently don't have a way to put a track name or a track namespace into a URL, so I don't think those restrictions apply.\nI agree that they don't apply now, but if we adopt them it will simplify the set we have to deal with (since we will have the same set as other parts of the protocol). I suspect people will want the namespaces to be easy to generate from FQDNs (which are also the authority part of HTTP URIs), as an example.\nIMO a track name should be human readable. Otherwise we should use numeric IDs instead, like most containers. I also think the track namespace should be human readable since it will likely be exposed to the user. Currently you need a URL and track namespace to configure a player\/broadcaster. It will be quite common to combine both into a single URL (ex. like my relay) so it would be nice if track namespaces could be encoded into URLs.\nMaybe we should ask the flip side: is there a use-case that won't work if track names are utf-8?\nMoq transport must not define the interpretation. At the moqt transport layer track name and track namespace are set of octets and relay perform fast match where needed. How things from application domain gets mapped and gets setup with Relay is out of scope of MoQ Transport. Also how an application sets up namespaces with CDN\/Relays is out of scope of moq.\nWe should clarify in the spec to say these are opaque and examples are intended for human readability purposes only.\nI don't have any particular heartburn about where this gets defined, but I don't believe that the overall system will function well if the namespaces and track names are defined solely as \"octets\". We are talking about these as a tuple of namespace and name to create a globally unique identifier. If there isn't a common agreement on how those are composed and of what they are constructed, different systems doing different things will hinder interoperability and\/or generate collisions. I would personally prefer the HTTP syntax, but UTF-8 would likely be okay if we specify how to handle things like byte-order marks, joiners, bidi, arabic letter marks and so on. That's a lot of work, but if someone wants to do it, okay. I don't think accepting a bunch of different encodings is helpful and I don't think using non-human readable identifiers is going to fly without a mapping to them (which only shoves the problem up a layer of indirection and adds in something new to fail). If not moqt, where do you want all this defined?\nI disagree. We have to recognize that we are distributing the process of minting these identifiers very widely. Allowing people to re-use existing systems to create these is going to get us better scale and present fewer problems. You should expect folks to re-use DNS names to mint namespaces, to take an obvious example.\nI don't think I am forbidding any of these. 
I am just saying it is not something moqt should enforce or we should support all forms of these not just one class of things\nI am also strongly in favor of a utf-8 restriction. I'd like to inherit the usability achieved from a globally deployed and understood resource identifiers and to leverage the existing DNS and certificate system for authentication and routing. If we define names in terms of octets then we need to reinvent these conventions, or else have an unnecessarily bi-furcated system in which some addressing occurs via binary octets and others occurs via utf-8 resource locators.\nThe AUTH parameter is currently defined as an ASCII string. Whatever we use for track names, can the auth token share the same encoding?\nSummary of IETF 118 meeting is there are no restrictions on at the moq transport layer but there may be a layers above this. This is to say the compare function for track namespace and track name are the equivalent of posix memcmp.\nLooks fine to me. Agree with most of the nits suggested by Ian"} +{"_id":"q-en-moq-transport-ab61db0f91213ffd9bdf1f1779bb32a1d9ddc3b3b3d232e8129c8833b8189c41","text":"It's only used a few times and now that Namespace and Name are an opaque sequence of bytes, it's even less compelling.\nTake the line from your drafts that says There is no valid way to parse the right hand of this. We have things like \"Full Track Name Length\" and \"Full Track Name\". I do not think we should have ambiguous syntax for things that require precision. I think it would also be good to have things that can be directly mapped to common programming languages. This moves beyound simply editorial, or specification is underspecified and ambiguous until we fix this.\nOne possible solution would be use underscore or CamelCase\nOh, and the longer we leave this dumpster fire, the more painful it will be to fix.\nlove it"} +{"_id":"q-en-moq-transport-89ea46f2d7013173e587a1eeec83c6b10734e62fc4eb0ba6267b4b1683ae1d46","text":"This PR brings in currently active proposals for supporting explicit indicators of mapping object model to quic transport.\nPlease keep discussion of the concepts on the issue\nIf there are any other proposed solutions to this, would be great to have them documented as additional proposal in this PR.\nMy recommendation is to pause a bit on this PR. We need more normative language, explanatory prose, and examples to help readers understand and evaluate the proposals without breaking github with a long chain of questions and comments. The outcome from Monday's meeting was that the chairs were going to designate a small group to put the proposal forward. I'll circle with Ian and Ted.\nOK - will wait for that\nThis is all just so confused. The number of messages, modes, and permutations are exploding. Some ideas are in the right direction (ex. stream header) but the hodgepodge of modes\/messages is not. I'd like to explicitly add proposal 4: a relay forwards data streams. when a relay receives an upstream stream, it creates a downstream stream (for each subscriber) when a relay reads an OBJECT from an upstream, it writes it to the corresponding downstream if it matches the SUBSCRIBE criteria. when the upstream is closed, the corresponding downstreams are closed with the same error code or FIN. when a relay receives an upstream DATAGRAM, it sends it downstream if it matches the SUBSCRIBE criteria. No wire changes. I would like a stream header eventually but it's more for compression and to prevent foot guns (ex. prioritization). 
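A minimal sketch of the proposal-4 forwarding behaviour just described, using stand-in stream and object types (none of these names come from the draft): the relay copies objects from one upstream stream to one downstream stream per matching subscriber, in arrival order, and mirrors the FIN or reset downstream. The point of the sketch is that the relay needs no per-mode logic; the stream boundary itself carries the mapping.
```go
type Object struct {
	Group, ID uint64
	Payload   []byte
}

// Stand-in interfaces for a QUIC/MoQ stream API; illustrative only.
type objectReader interface{ ReadObject() (Object, error) }
type objectWriter interface {
	WriteObject(Object) error
	CloseWith(err error) // FIN on end-of-stream, reset code otherwise
}

// relayStream forwards one upstream stream to one downstream stream per
// subscriber, preserving object order, and propagates the close or reset.
func relayStream(up objectReader, downs []objectWriter) {
	for {
		obj, err := up.ReadObject()
		if err != nil {
			for _, d := range downs {
				d.CloseWith(err)
			}
			return
		}
		for _, d := range downs {
			d.WriteObject(obj) // same order as received upstream
		}
	}
}
```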
This unlocks experimentation as the application can perform any stream mapping. The relay doesn't care and remains simple. The important part is the guarantee that the relay (middlebox) won't rearrange or reorder shit if the application decided to use a stream. edit: oops was logged into my work account.\nThe consensus in Prague was to move forward with a proposal that uses explicit signals for now, get more implementation experience and data, and revisit. Proposal 4 only uses implicit signals from the QUIC layer rather than explicit signals from the MoQ layer. I think you can achieve similar outcomes with proposal 3 by adding the appropriate STREAM_HEADER at the beginning of each stream?\n+1 to NAME proposal. once we clean it up by removing the other proposals , we should be good for merge review. also +1 on dropping the priority for now\nOh and we should be some text: Otherwise we don't fix the core issue, which is that relays can arbitrarily change how OBJECTs are delivered. Stream mapping is useless if it's not enforced.\n3 is a nice refactor of approach in 1. I like it, expect for two things: I don't think we need to support track-per-stream, and I don't remember if we ever actually got consensus for supporting it (we might need to discuss this separately). Priority\/send order should be a stream-global, since that's the level at which those are determined in QUIC. The text is not entirely clear to me: is it still legal to have two open streams for the same group with STREAMHEADERGROUP (with understanding that they will get merged at the relay)? I think we could make approach 4 that does to 2 the same things that 3 does to 1. Canonical objects would have two fields: 'Object Placement Preference' that says that object is either (1) to be placed on a new stream, or (2) to be placed on the same stream as object N. 'Object Placement Close Bit' indicates whether there will be objects after this one on the stream or not (i.e. whether there is a FIN or not) There's one OBJECT_STREAM frame with Track Alias, Group Sequence, Priority \/ Send Order as headers. You can trivially figure out how to backfill those fields (you only need to remember the most recent object for each stream); the placement when retrieving from cache is similarly easy (you do need a map for \"most recent object -> stream\"). I think this fits in with the \"implicit-explicit\" approach that Luke has mentioned before. It also has a nice property that there are no cases in which what's on the wire can mismatch with what's in the canonical object (unless streams get randomly reset).\nTrack-per-stream would be the preferred delivery mode for VOD files. I'd very much like to ensure that we keep support for that use case open, even if most current implementations are focused on live.\nI think it should be illegal as it breaks . The broadcaster uses stream mapping A but wants downstream relays to use stream mapping B, assuming there even is a relay involved. I don't think there's a use-case for this special treatment of the first-hop. The only use-case that I've heard is a broadcast client that is smart enough to detect group\/object boundaries (ie. frames or key frames) to annotate each OBJECT with the correct stream mapping, but doesn't want to open multiple streams? That's a very poor reason, especially if the first hop may be congested, and the broadcast client should just use the indicated stream mapping. IMO a relay should not split or merge streams. It increases complexity and hurts performance. 
There's no need for your proposal 4 then.\nMaybe. The assumption is a track-per-stream would allow you to write a simple VOD-only client. It wouldn't need to reorder or reassemble objects because they're all delivered in order on a single stream. That's optimistic, as it will still need to support potentially unordered\/unreliable objects (like a live client) due to seeking and gaps in the recording. But I think we need to explore VOD and DVR more. I definitely don't want to require the server to modify the stream mapping at rest in order to serve VOD traffic, and it's not clear how that would even work for DVR. At the very least we should first support VOD with the original stream mapping. Much like how HLS can use the same segments for live, DVR, and VOD (segment size == stream mapping). And I think we should focus on DVR first.\nThis assumes a LIVE-TO-VOD workflow. When I speak of VOD, it's for pure VOD, which has no gaps, and was never live. The track-per-stream would mean the client would never have to deal with out-of-order or lost objects, as the single stream would deliver them reliably and in order. Like you, I really like the appeal of the simple client interface for this. All I need is more space in my cache and I have a new market segment to address.\nI'm confused. Proposal 4 does not allow merging or splitting streams. That is true, but VOD requires a different delivery mechanism from what live does (e.g. live cares about catching up to the live state, but for VOD, you usually just want to show everything in order).\nRecent discussions around the points highlighted in URL have made it clear that the core transport protocol needs to provide advice on how the moqt object model maps to QUIC. Given that different use-cases have slightly varying requirements, there is also a need for applications to specify their preference on how to use QUIC streams to transport objects and groups within a track from the publisher to the subscriber. Here is the proposal for an enumeration as part of the message, as an experimental addition to the protocol for publishers to express their interest. The following delivery modes are proposed: StreamPerObject(0): In this mode, the publisher intends to use one QUIC Stream per MOQT Object. Stream Per Group(1): In this mode, the publisher intends to use one QUIC Stream per MOQT Group. All the objects from a given Group share the same QUIC stream. Stream Per Priority(2): In this mode, the publisher intends to use one QUIC Stream per priority as determined by the MOQT object priority value. Stream Per Track(3): In this mode, the publisher intends to use one QUIC Stream for a given track. The core idea behind such an experimental extension is to help understand the application, deployment and interoperability requirements before the final path is chosen for the core transport\nIn moq-transport, an intermediary knows how to extract objects from incoming Streams (there may be 1..N Objects in a Stream), read their metadata (such as Group and priority assignments), cache them and potentially drop them depending on prioritization settings. They also need to know how to forward them, which I believe is the intent of this issue. Perhaps we should reframe these as ? They instruct intermediaries beyond the current hop in how to propagate the objects . The various forwarding types can be enumerated in the spec and should be a new field in the header of each Object. 
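For reference, the four modes listed above written out as constants; the numeric values are the ones given in the proposal, while the type name and the exact placement of the parameter are still open and only assumed here.
```go
// Proposed transport delivery modes (values as listed in the proposal above).
type DeliveryMode uint8

const (
	StreamPerObject   DeliveryMode = 0 // one QUIC stream per MOQT object
	StreamPerGroup    DeliveryMode = 1 // one QUIC stream per MOQT group
	StreamPerPriority DeliveryMode = 2 // one QUIC stream per priority value
	StreamPerTrack    DeliveryMode = 3 // one QUIC stream for the whole track
)
```
If such an enumeration lands, the draft would also need to name a reasonable default for applications that do not set it, as noted above.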
If the intermediary is not making a new Stream for every Object it forwards, then it needs to bind a new Object to some existing Stream. I term this the \"binding ID\". I have relisted the groups, adding in the binding ID that an intermediary would use. Send each object in a new stream (default?) Binding ID: None needed. Send all objects with the same Group in the same stream Binding ID: namespace + Group number Send all objects with the same Priority in the same stream Binding ID: namespace + priority value Send all Objects in the same track in the same stream Binding ID: namespace + track name\nI want something simpler. If a relay receives objects A, B on QUIC stream 1 and objects C, D on QUIC stream 2, then the relay continues that relationship. The relay MUST NOT reposition objects within a QUIC stream. Object per stream: A:1 B:2 C:3 D:4 Group per stream: AB:1 CD:2 Layer per stream: AC:1 BD:2 Track per stream ABCD:1 Bonus points if objects on the same stream share a header. ex. prioritization is applied at a stream granularity, not object granularity.\nHow would caching work under this scheme? Caching introduces a temporal discontinuity to the transmission. Should the relay continue to cache by Object, or should it instead cache by Stream?\nYou would cache by stream and byte offset, like HTTP chunked-transfer. If you evict A from the cache, you would also evict B from the cache. If the application chooses to put A+B on the same QUIC stream in that order, then it expects A+B to be delivered in that order. It the application wanted those objects to arrive in any order, then it would have put A and B on separate streams. Should we support a situation where a client transmits A+B on the same stream, but expects the relay to split them into separate A and B streams to downstream clients? I don't think so, because of congestion on the first mile. Once you introduce head-of-line blocking, you can't remove it. To oversimplify, it's like converting TCP -> UDP. If there's congestion on the TCP hop, then UDP packets will arrive in TCP order despite being split. This still has benefits for last-mile congestion versus TCP -> TCP, but to handle first-mile congestion what you really want is UDP -> UDP. Vice versa, should a client transmit A and B on separate streams, but the relay combines them into A+B for downstream clients? Absolutely not, as you actually suffer from both first-mile congestion and last-mile congestion. The analogy is UDP -> TCP. Any unreliable or out-of-order delivery at the first-mile needs to be fixed before it can be converted to reliable and ordered delivery. This is then exasperated by prioritization, as the relay wants object A to unblock B, but the sender may decide that C is higher priority. These same principals apply for RTMP -> WebRTC and WebRTC -> HLS. I spent years working on the former at Twitch and we gave up because the user experience was terrible. We offered WebRTC -> HLS but ironically it introduces more latency and is a worse user experience than RTMP -> HLS. I strongly believe that relays need to provide: X -> X If an application introduces head-of-line blocking by putting content on the same stream, then the relay must maintain that head-of-line blocking. The relay doesn't try to introduce more, nor does it try to introduce less. This is also amazingly simple for relay; they just proxy QUIC streams. The application still needs stream prioritization for inter-stream relationships, but now it can use stream ordering for intra-stream dependencies. 
For example, the base catalog and deltas would be on the same QUIC stream. There's no need for parent sequence number as the relay would deliver the base and deltas in order. There's no reason for the relay to cache them separately, as evicting the base but not the deltas from the cache would be a mistake.\nNAME mentioned something at the start of the call that I wanted to address. It sounded like the client was transmitting a single QUIC stream, and he wanted parts of that stream to be cached independently. If this is a single GoP, then it can't be cached independently. If a new subscriber joins then it needs the entire GoP. You have to make multiple GoPs if you want independent join\/cache points. If there are multiple GoPs sent over the same QUIC stream, then yeah caching is a problem. You would have to cache based on group+stream tuple. However I doubt you actually want this behavior, because you just reinvented RTMP. The whole point of groups is that they're independent, but this property is lost when they share the same QUIC stream. They wouldn't actually be independent for the first-mile delivery, and just like I mentioned in the last post, any first-mile congestion will reek havoc on last-mile delivery. I think you want to split each group into a separate QUIC stream: independent groups = independent delivery = independent streams This is why I want MoQ group == QUIC stream. However, I would be fine with MoQ object == QUIC stream, because like NAME brought up at the start, I can use a single object per group to get the desired behavior. But something needs to be specified, because a relay MUST NOT be able to move my objects to other streams.\nWhen a new subscriber joins mid GOP, there are only 2 options to have any meaningful experience Start from the beginning of the current GOP to get the IDR Wait for the next GOP This goes to issue\nI think we are mixing 2 different levels of mappings here. how an application maps its data to moqt object model One object as GOP or one object as encoded video frame of 33ms is entirely up to application. how aspects of moqt object model is mapped to the QUIC Once you have application data mapped to the moqt object model, then it comes down to question on how moqt objects\/groups are transported via QUIC mechanisims. This issue talks about this mapping in particular. Yes application needs to know how to do 1 and 2, but the moqt transport only needs to specify 2. Also caching needs to be done at object level since it allows retrieval from cache provided track, group and object information as keys. I don't think we should normatively specify that relay shouldn't map things differently from ingest to egress. But yes a given relay implementation can choose to map it differently and it does indicate it via the transport delivery mode.\nYou can achieve this via delivery mode of stream per group. If a group has a single object (say GOP), it just caches one object for that GOP.\nYeah, my point is that you shouldn't be sending multiple GoPs over the same QUIC stream, otherwise it's not possible to serve the latest GoP during congestion. Using a single stream for multiple GoPs is only optimal when there's zero congestion, in which case just use TCP lul. Yeah, the problem is that without 2 the application can't decide 1. The transport needs to provide properties that the application can use. Right now MoqTransport OBJECTs are closer to jumbo-datagrams because they're semi-reliable and semi-ordered. 
It's a huge red flag when the application can't reliably deliver deltas in order because of the object\/group abstraction. At a minimum, the application needs the ability to deliver a stream of bytes over MoqTransport. We should absolutely use QUIC streams for that instead of reinventing them via an increasingly convoluted object model with flags. Caching should be performed at the stream level via byte ranges. Even with objects, you want to cache at the byte range level, as waiting to receive an entire object before caching\/serving can only add latency. For example, an I-frame takes multiple round-trips to deliver (especially with packet loss), so you don't want to wait to receive the entire frame before caching\/serving it. Here's how a relay should work: Read chunk from upstream, (optionally) write chunk to cache, forward chunk to downstream. A chunk is a stream offset+length, aka a QUIC stream frame. I think the relay MUST maintain the stream mapping. Otherwise the application MUST assume the lowest common denominator, where some dumb relay decided to send every object over separate streams in an arbitrary order, or decided to drop arbitrary objects in the middle of a group. Imagine if an HTTP relay was allowed to deliver an HTTP response body in an arbitrary order or unreliably. It would break so many assumptions and force the application to anticipate this substandard delivery, even if 99% of relays delivered the body correctly. Or it would become a de facto standard that relays MUST deliver the body reliably and in order, in which case it should have been made an official standard.\nAdding to this tangent, caching based on byte offset instead of frame number is far more efficient. Let me explain why. Let's assume we send an entire 2s GoP via a QUIC stream. A subscriber lost their connection half-way through a GoP and wants to request it again: Ideal: Poor: In the ideal scenario, that's one lookup into the cache table. We advance the data pointer X bytes to the start and read until the requested Y byte. This memory is mostly on the same page, and can be written to the QUIC stream in one operation. In the poor scenario, the relay has to do N cache table lookups. These are likely all in separate memory pages and there are N writes to the QUIC stream. N depends on the frame rate if every frame is a separate object.\nThat's exactly what the TransportDeliveryMode in this issue is addressing.\nApplications themselves have a choice to send every object in its own stream. Relays may choose. I am not sure how the conclusion was reached here that the entirety of the object has to be received before serving downstream though. If an object is an I-Frame\/GOP, the bytes\/fragments come in and are sent to the caching layer as fragments for building the cache for that object and also forwarded to all the subscribers in parallel as fragments. This issue for delivery mode doesn't affect that behavior in any way.\nYeah, we both agree on that front and the disagreement is just over the implementation. Instead of N different modes toggling which messages constitute a QUIC stream, I literally just need a promise that the relay can't rearrange QUIC stream contents. Even if we do add , I still need a promise that the relay won't change it. ie. if an application creates a QUIC stream with only object 1 and object 2 on it, then the relay MUST also transmit a QUIC stream with only object 1 and object 2 on it, and in that order.
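As an aside, going back to the ideal-versus-poor cache comparison above, here is a sketch of the byte-offset ("ideal") shape; the structure and names are assumptions, not anything specified by the draft.
```go
// Illustrative byte-offset cache for one cached stream (e.g. a 2s GoP):
// bytes are appended in stream order, and a resume is a single range read
// rather than one lookup per frame/object.
type cachedStream struct {
	data []byte
}

func (c *cachedStream) Append(chunk []byte) {
	c.data = append(c.data, chunk...)
}

// Range returns bytes [from, to) of the cached stream, clamped to what exists.
func (c *cachedStream) Range(from, to int) []byte {
	if from > len(c.data) {
		from = len(c.data)
	}
	if to > len(c.data) {
		to = len(c.data)
	}
	if from > to {
		from = to
	}
	return c.data[from:to]
}
```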
If a relay is allowed to split those two objects into separate streams or insert object 3 in the middle, then it completely breaks many applications. With that property in place, I do think the stream header could deduplicate some OBJECT properties, but that's an optimization and not actually required. It just seems like a bug to send two different objects with different priorities or groups over the same QUIC stream. Relays should absolutely not be able to choose; it can only create problems. The application made the decision to deliver objects over separate QUIC streams for a very explicit reason. The relay lacks any information about the application by design, so if it changes the delivery mode, it's either making a completely uninformed and likely detrimental decision, or it's applying arbitrary business logic. That was mostly from our old conversations around whether objects are atomic. But I absolutely agree, relays should cache\/forward streams.\nI just wanted to clarify though: caching is done for objects, however a relay need not wait for the full object to arrive before forwarding to the subscribers\nThere's no distinction. If you can serve partial objects to existing subscribers (forwarding), then you can serve those partial objects to new subscribers too (caching). In theory you could differentiate between existing and new subscribers, but that gets very complicated to implement and is detrimental when objects are not atomic (ex. GoP). You might be talking about addressability, ie. you can't request a byte range within an object. But the object cache itself is absolutely broken into a dynamic list of byte chunks.\nAn object is the addressable entity from the moqt application perspective. How it gets delivered (as chunks or fragments or a small number of bytes) is a lower layer transport level decision which is many times driven by MTU and other factors. Moqt applications ask for an object and they get it delivered. We should focus on what the application model is here\nAt IETF118 there was support to add an explicit indicator for the object model to transport mapping. PR captures 2 proposals that were discussed during the Boston interim. Please review, and if there are suggestions on a totally different way to achieve the same results, please propose it here.\nOne thing missing from the discussion is ordering\/reliability. The entire point of sending OBJECTs over specific streams is to utilize those properties so the application can decode in order. It's pointless to have stream mapping if OBJECTs can be reordered\/dropped on those streams. Anyway, here are my proposals to round out the list: Proposal 3 (implicit): Send objects in the same manner as they were received. Proposal 4 (implicit==explicit): Same as proposal 3 but add a field to OBJECT. But like I alluded to at the start, you need more than the Stream ID to be fully explicit. You also need signals to reproduce the ordering, gaps, and stream end. And all of a sudden you've reimplemented the QUIC STREAM frame. I'm a massive fan of implicit stream mapping. I don't think there's any other option.\nNAME - I think it would be good to write down in the PR the details on implicit and cover how this works when a client switches networks or there is an error, and how it works with cached data. It may be that all of this is easy but I want to get my head fully around it in trying to sort this out.\nDo you want me to push directly to that PR? QUIC automatically handles network migration so you're talking about a hard disconnect.
Something like going through a tunnel and triggering the idle timeout (ex. 10s). The easy answer is that all streams are reset on connection loss. An application that uses long-lived streams will need to support multiple streams if they want to support reconnects. For example, if the catalog stream is reset due to connection loss, the new publisher can make a new stream with group += 1. You can still keep the ANNOUNCE\/SUBSCRIBE alive, but any streams from the old publisher are reset. Attempting to resume a stream on a new connection is difficult to impossible, regardless of implicit versus explicit. The problem like I mentioned is gaps; if you're using a stream then the decoder expects objects to arrive in a specific order. The old connection would have to use QUIC ACKs to guess which streams\/objects were received just in case it crashes. However, ACKs doen't actually mean that the OBJECTs were actually flushed to the application. You could guess but we would likely have to add application ACKs (ex. OBJECT_OK) primarily for this feature. You would also need the relay to throw out duplicate objects because it may have already received an object but wasn't able to ACK it. Otherwise sending the same OBJECT twice over a stream is likely to break a decoder. It's a whole mess.\nIndividual Comment: I have another proposal, which is is along the lines of NAME 's idea from . Combine the Proposal 2 Mode from the OBJECT wire format with the message type, and use a stream header to avoid repeating the same or potentially mismatched information. eg: StreamPerObject: Message = (same as OBJECT today -- eg no payload length) Datagram: The same as but with no serialized type field (or add one if that's too \"implicit\") Multi Object Streams start with a stream header message and the fields that are constant across all objects on that stream. StreamPerGroup: With similar constructs for StreamPerTrack and StreamPerPriority. The SHORTOBJECT* types are not serialized on the wire, it's inferred from the stream header. I will note this adds more flexibility than Proposal 2, eg a publisher could mix all 5 modes of object delivery in single track, though we could further restrict it if we wanted to. I wrote text for this proposal but wanted to pitch the idea here first.\nHmm, so there's some nuances to that proposal NAME Can be reordered and\/or dropped? Basically, what happens when sequence=3 has a higher sendOrder (lower priority) than sequence=4?\nI think we need to take a step back and really ask ourselves what properties do we want from the transport. I'm worried that we're throwing solutions at the wall without analyzing the problem. An application has live media that it needs to break into pieces. Mostly avoiding existing terminology for the moment, let's say we break a track into independent\/streamable fragments which are then broken into chunks. Each fragment can have different modes: unreliable: chunks MAY be dropped unordered: chunks MAY arrive out of order framed: chunks have boundaries As a thought experiment, let's consider how an application would want to send a JSON catalog with (JSON) delta updates: independent: New catalogs don't depend on old catalogs. reliable: Delta updates can't be skipped. ordered: Delta updates need to be applied in order. framed: You can't concat two JSON objects together (without NDJSON). Each catalog would be a reliable\/ordered\/framed fragment in this scheme. The first chunk is the base JSON and each subsequent chunk is a JSON delta. 
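A small sketch of that catalog example, assuming Go's standard encoding/json package: because the chunks of one fragment arrive reliably and in order, the subscriber can fold each delta into the running catalog as it reads. The shallow key merge below is only a placeholder; the real delta semantics belong to the catalog format, not to this sketch.
```go
// Illustrative only: fold a base catalog plus in-order deltas read from one
// reliable, ordered stream. A shallow key merge stands in for the real patch
// format; requires "encoding/json".
func applyCatalogChunks(chunks [][]byte) (map[string]any, error) {
	catalog := map[string]any{}
	for i, chunk := range chunks {
		var m map[string]any
		if err := json.Unmarshal(chunk, &m); err != nil {
			return nil, err
		}
		if i == 0 {
			catalog = m // first chunk is the full (base) catalog
			continue
		}
		for k, v := range m { // later chunks are deltas, applied in order
			catalog[k] = v
		}
	}
	return catalog, nil
}
```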
Here's some possible use-cases for each type of fragment: Note that sending a single chunk per fragment is the same for all modes. This is what RUSH currently does and it uses a reassembly buffer to reorder. However, it would be neat if MoqTransport could perform that reordering via unreliable\/ordered delivery, so the decoder could read in order by skip over any holes (ex. drop b-frames during congestion).\nOkay cool, but this concept is kind of useless unless we can actually implement it. Here's what is possible in draft-01: The rest are not guaranteed due to the lack of a stream mapping. A relay can decide to reorder\/drop\/move OBJECTs on a whim. The only way to achieve guaranteed behavior is to use a single, unsized object per stream so the relay can't coalesce them. I think we all agree that this is a problem. Let's start with the easy proposal (3). Implicit mapping means the relay is not allowed to change the contents of a QUIC stream and all OBJECTs are delivered in the same position. You can now send multiple OBJECTs on the same stream and have them reliably delivered (ex. catalog). But if you to do more complex stuff, like unreliable\/ordered (ex. lossy frames in decode order) then you still need to perform reassembly in the application. The problem with Alan's proposal is that it's unclear if chunks are reliable\/ordered. If they are, then you don't need or even per . If they're not, then it's ambiguous. But what about taking that proposal a step further and explicitly include the reliable\/ordered flags in the . There would always be a stream per fragment with at least one chunk. This would allow: The giant caveat here is that any dropping\/reordering within a stream can only be performed at the MoqTransport layer. All chunks are reliable\/ordered via QUIC streams, but the relay MAY drop chunks if there's sufficient back-pressure (like TCP low-watermark). However if latency is important, then you shouldn't perform reliable\/ordered delivery in QUIC and then pretend like it unreliable\/unordered in the MoQ layer. This is actually one of my complaints with SCTP; it gives the application the ability to specify unordered\/unreliable delivery on a per-message basis but it's a facade. I don't think we should include reliable\/unordered or unreliable\/ordered messages on a QUIC stream because it's a foot gun. QUIC just doesn't work like that. Despite our object model, OBJECTs are not actually the unreliable\/unordered unit, it's actually a QUIC stream and we have to remember that.\nIndividual Comment: I think it's interesting to consider where reorderable and droppable properties fit in the object model, but I think we can tackle the problems separately and make incremental progress. My proposal takes the draft-01 \"implicit\" signal and makes it explicit and more compact, without changing anything else.\nunfortunately draft-01 as-is implemented wasn't implicit or explicit as shown cleary during interop.\nSure, my point is that the implicit signal exists whether you read\/use it or not, and this explicit scheme conveys equivalent information.\nin this case, I wonder if flexibility will come bit us. I need to think a bit more though ..\nI feel we are over complicating the issue at hand by mixing different things. Let me share my thinking and see if I messed it up Every object carries an indicator for priority(sendOrder). 
The relay application is the one that uses this field to make forward or drop decisions, not the transport mapping, and hence it is not scoped to this issue IIUC. Proposal 2 (and even Proposal 1) carries a 3 bit value in every object that indicates how an object needs to be transported. When the same object is replayed from cache the same rule applies since the object header has the necessary indicator. This mapping is what this issue is all about. Applications know what treatment they need from the transport and they need a way to indicate their intention to the moq layer, and this issue is about helping build that abstraction. For applications that don't know what to do, the draft needs to specify the reasonable default.\nDiscussed offline with Suhas: In Proposal 3, every Object has the forwarding preference, but in some modes it is \"compressed\" on the wire by including a stream header. From an API perspective, publishers would publish objects with a mode and subscribers would receive objects with a mode, like Proposal 2.\nI've thought about this for a bit, and I think I'm also a fan of \"explicit-implicit\" design. What I mean by this is that we formally define a way to describe the object placement (that is used in the APIs and can be serialized into the cache), but we don't actually send most of it on the wire since it's redundant (a \"compression\" of sorts).\nOh, and to be clear, I think Christian and Alan's stream header is absolutely the correct direction. My rambling is only because the presence of implies that objects on a stream are not reliable\/ordered. Controversial take: For all of these stream mapping proposals, OBJECTs on a stream MUST be delivered reliably and in order. Why? Well the entire point of mapping objects to streams is to get these properties. If a relay is allowed to arbitrarily reorder or drop OBJECTs within a stream, then we're right back to where we started without stream mapping. For example, we want to produce a catalog with an OBJECT for each delta. These objects are placed on the same stream so the decoder doesn't need to implement a reassembly buffer or detect gaps. If we allow a relay to drop or reorder objects on the same stream for whatever reason, then this is possible: For anybody who implemented moq-clock this week, I'm sure this looks familiar. It's what you get with a stream per OBJECT; stream mapping actually served no purpose. I can live with a flag indicating that objects on a stream are potentially unreliable\/unordered, but I do caution that QUIC streams are reliable\/ordered by nature. If an application wants unreliable or unordered OBJECTs within a stream, it's possible but head-of-line blocking will limit the upside. These objects really should be sent over a dedicated stream or datagram instead.\nNAME if multiple objects are sent on a single stream, they are by definition delivered reliably and in order. Why do you think that they would not be?\nOBJECTs on a stream are reliable\/ordered over QUIC, but do we agree that they're reliable\/ordered at rest too? Basically a relay MUST transmit OBJECTs in the same manner as they arrived.\nIndividual Comment: The different send order on the same stream is covered in issue , and I don't think we need to resolve that before we address the basic transport mapping.\nSorry for the earlier outburst. Alan's proposal is definitely in the right direction and is starting to align with my vision.
I think I just need an outlet to write that vision down because I'm getting overwhelmed by leaving piecewise opinions on Github. I think we should: delete , unless somebody wants it. delete , moving priority to the stream header instead. add , but we should discuss datagrams more in general.Individual commentsOverall I like the proposal the explicitly map objects to quic streams. I left few comments"} +{"_id":"q-en-moq-transport-18891186af345d5a1000373c238d9dbbc2ece3b8cdb7abc109b9f54a7ce5c6aa","text":"Currently, VarInts have two lengths: In the parameter length. In the first two bits of the parameter value. The decision in was that the two MUST match, which is a good solution without a full reform. However: Ignoring seems like the wrong play. For example, we know the role parameter (0x0) is a VarInt in the current draft. However this text says I MUST ignore any errors when parsing this varint and pretend like the parameter was never sent, which is both unenforceable and a major foot-gun (silently ignoring parameters). I think we close the connection with an error if there's an error parsing a known parameter.\nIf a parameter is unknown, then the parser doesn't know if the value is a varint and therefore can't check the length match. I thought it was better to have consistent behavior than not, because it's easier to write and easier to reason about. But if this has actual downstream consequences, I could be convinced otherwise.\nI don't know what I was smoking when I came up with this title. My concern is that when a known parameter has the wrong length, the text says the decoder MUST ignore it. But it likely has the wrong length due to a bug, so this will cause unintended (non-fatal) behavior.\nI understand what you mean. My point was that an unknown parameter has an inevitable result, and trying to make the processing for known parameters the same had some advantages. We'll discuss today.\nConclusion was to make it an error instead of skipping over the field if there's a length mismatch.\nLG, though we can bikeshed about the error code name."} +{"_id":"q-en-moq-transport-bbc36e664f1fa57ed824c60b0a345c47a9248e4569a3e70362cd9ee9b16399de","text":"This was previously part of . It allows an object to be sent in a datagram f it fits, and put on a stream if it doesn't.\nLinking some feedback raised in the original PR that has not yet been addressed: URL URL\nLooks like a great starting point to me."} +{"_id":"q-en-moq-transport-7cfa225607978e09b5be6d60b4f220f71445e4d9d88dffe8c538b5d003f4a86f","text":"Fixes URL\nnit: can you wrap the lines at 80 chars?\nIn the call last Wednesday, we agreed this design was the correct approach, so merging even though there are some reviews outstanding requesting changes.\nConsider an edge relay, receiving and forwarding relative SUBSCRIBE requests for -5, -3, and -1 for the same track. The first object it receives back is group 37. It cannot tell where to place this object in the sequence and must therefore hold it until it knows more information. If the next object is Group 39, it still cannot forward it, because the head sate is undefined. 
We can solve this problem simply by having the SUBSCRIBEOK response indicate the latest group and object number: SUBSCRIBEOK { Track Namespace (b), Track Name (b), Track Alias (i), Expires (i), LatestGroup(i), LatestObject(i) } One consequence of adding these fields is that an edge, which has no active subscription for the track, cannot send the SUBSCRIBEOK response until it itself receives a SUBSCRIBEOK after forwarding the SUBSCRIBE. This makes subscription an async process in which the acknowledgment of the subscription may take, in some cases, an RTT back to the origin. I think this is necessary for stable state management.\n+1 this is required to build a relay and a non-realtime player. This was required anyway, otherwise the relay would swallow any application SUBSCRIBEERROR codes. My mental model is that SUBSCRIBEOK == HTTP response headers.\nMoqt is a pub\/sub protocol and is a hop-by-hop protocol. A subscribe ok means the subscribe request was validated at the relay. An OK isn't tied to getting an OK from the original publisher. Its sub\/sub-ok is an async operation (like any typical pub\/sub system out there). An ok for subscribe means a promise that if there is matching data (either in cache or coming in from a live publisher at some point), it will be forwarded.\nClarifying question, if the use-case is something like VOD or non real-time flows, I think the same can be done by\nI absolutely don't want this behavior. It means that a generic relay will always return SUBSCRIBEOK, even if it cannot route (or authenticate) a SUBSCRIBE. A typo or finished broadcast would be indistinguishable from a valid request unless OBJECTs start to flow. SUBSCRIBEOK would be equivalent to HTTP 100 CONTINUE, which is truly useless, and there would be no equivalent to 200 OK or 404 Not Found. Pub\/sub doesn't mean black hole simulator either. I've only used RabbitMQ a long time ago, but if you tried to bind a queue (ie. SUBSCRIBE) to an exchange (ie. ANNOUNCE) it would return an error if the exchange doesn't exist. Is the justification that pub\/sub protocols are supposed to black hole, or is this behavior that you actually want? Instead of hanging indefinitely if a relay can't route a SUBSCRIBE, wouldn't it just be better to receive a 404 Not Found? You're allowed to try again, but now there's no timers or guesswork as to whether the SUBSCRIBE could be routed to an origin.\nIt's for higher latency targets. HLS\/DASH clients don't start at the latest segment, but rather start further back based on the target latency. If I understand correctly, you're saying the relay: has an empty cache. receives a and . issues a live subscription upstream: waits for the first OBJECT to arrive for that subscribe, using it as the live playhead: issues a backfill subscribe upstream: forwards any objects based on the mapping from relative to absolute. It works for relays, but at the cost of an extra round-trip to origin. However, it won't work for clients that want to start playback N groups back (jitter buffer size), because the player doesn't know when\/what to render without first receiving N distinct groups. Here's what we would like: has an empty cache. receives a and . (re)issues a subscription upstream: waits for forwards any objects based on the mapping from relative to absolute.\nIndividual Comment: I think SUBSCRIBEOK needs to resolve all relative Subscribe Hints that were passed in the Subscribe. 
Otherwise, I'm really not sure how my library will know when an Object arrives if it belongs to a particular subscription or not -- the OBJECTs that arrive have only track name and absolute group\/object numbers. I see the use case for some relays wanting to optimistically SUBSCRIBEOK, but then there would need to be a follow up message that resolves the relative hints when the relay finds out where they map.\nA couple of observations: what use-cases would involve the same client sending multiple subscription hints at the same time? Hints are read access points on the same track, and if a client asks for multiple such points, it will get objects on the same track regardless. How relays put them together in such a case for a given client cannot be normatively specified. NAME can you give me a concrete use-case example of \"resolve all relative Subscribe Hints that were passed in the Subscribe\" from a single client so that it helps understand the issue better. I personally don't think the issue being discussed is directly related to this ...\nThe use case of an edge relay forwarding simultaneous requests from multiple connected clients. This is a common use-case for any edge or fan-out relay.\nIndividual Comment: This is definitely one use case. Another is a client making two simultaneous subscriptions: eg live edge and a range request starting some time in the past.\nOne thing that will not scale is every subscription needing to go back to the original publisher. Consider the case of a million users joining a session that starts at the top of the hour and that is published from a mobile device. I think that in any real system there is also going to be no single idea of the live edge. The live edge is relative to which relay you subscribed to. If the relay you are on is over a geo sync satellite, it is going to be behind ones that are not. So here is what I think we should do. We should not allow relative subscription to things in the past. I think the use cases I have heard for this can be dealt with as subscribe to live edge forward and, once the client finds out what object that is, do an absolute subscribe for any data in the past to fill in buffers etc. So to be clear, no relative group or object sequences that are negative, but still allow absolute. This seems to meet the use cases and remove a ton of edge case design problems. To be clear, I think we should be able to subscribe to the current group, but give me object 0 forward in that group. If it is a group per GOP, this allows one to get everything needed to decode the current frame.\nBut not every subscription has to go back to the publisher. It only has to go back to a node that is already subscribed and understands the live edge. Furthermore, concurrent requests would be coalesced at each node. Imagine in the worst case that 1 million requests are made within 100ms to a (really powerful) edge. That edge would only forward one request to the origin. All the rest would be held pending that response. This is exactly what happens when a few million people ask for a live HAS stream. Due to fan out and request coalescing, the CDN only sends a few requests back to the actual origin.\nYeah exactly. Only the first SUBSCRIBE will propagate upstream and the SUBSCRIBEOK is used to bootstrap the live edge. The relay can infer the live edge while the single upstream subscribe remains active. Because of the SUBSCRIBEOK, the relay knows that the received OBJECT only gets forwarded to the first downstream subscribe. 
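A sketch of that coalescing behaviour, assuming the Latest Group field proposed for SUBSCRIBEOK earlier in this thread; all function and parameter names are illustrative. The relay forwards one upstream SUBSCRIBE at the most negative pending offset and resolves every pending relative request against the returned live edge.
```go
// Illustrative: coalesce pending relative subscriptions (e.g. -5, -3, -1) into
// a single upstream SUBSCRIBE at the most negative offset. Returns false if
// there is nothing pending.
func upstreamOffset(pending []int64) (int64, bool) {
	if len(pending) == 0 {
		return 0, false
	}
	min := pending[0]
	for _, off := range pending {
		if off < min {
			min = off
		}
	}
	return min, true // -5 for pending offsets -5, -3, -1
}

// Resolve a relative offset against the live edge learned from SUBSCRIBE_OK
// (or from the first object); e.g. latest group 37 and offset -5 -> group 32.
func resolveAbsolute(latestGroup uint64, relative int64) uint64 {
	return uint64(int64(latestGroup) + relative)
}
```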
Without that SUBSCRIBEOK bootstrap, the relay would have to guess the live edge. The relay keeps track of while the upstream subscription remains active. The initial value is set by but all future updates are based on received messages. This way it can map from relative->absolute and reply with without issuing a new upstream subscription.\nFor the original use-case reported on this issue, if there are multiple subscribes at a relay with varying relative offsets, the relay has no choice but to send each combination of subscribe and a given -ve relative offset upstream, since that identifies each unique data point in the track. This has a couple of issues: it doesn't scale across relays, and when there is a very high rate of subscriptions across relays and clients it makes the lookup and state management complicated; latencies across relays and client locations end up producing varying results for the same query, since it depends on when the query was made and how long it would take; for conferencing cases where there are multiple publishers who can be active (have a live edge) or unjoined yet (so no live edge), the expectation for subscribes to get an OK back from original publishers will not scale in terms of latency of joins. The more I think of it, subscribing back into the past using relative offsets and expecting the answer to be vetted by the original publisher \"unnecessarily complicates the protocol\".\nThe group sequence numbers are consistent between subscriptions. If a relay receives relative subscribes of -5, -3 and -1, all it has to do is send a single subscribe upstream of -5. The response would give it the live edge group as well as all the content for the various subscriptions. If the subscriptions were -345, -2, -1, the relay could decide to make two subscriptions upstream - one for -345, the other for -2, because it didn't want to force the smaller relative subscriptions to wait for the larger one. The point is, this is optimization logic for the relay. It does have a choice in what subscriptions it sends upstream based on the timing and diversity of the input subscriptions. Why is the latency of the join affected? Consider a subscriber talking to an edge where the subscriber is 100ms away from the origin. We consider two scenarios: in the first case the subscriber receives a SUBSCRIBEOK immediately after subscribing but then waits 100ms for the audio and video to arrive from the origin. In the second case the subscriber waits 100ms to receive SUBSCRIBEOK but it is immediately followed by the audio and video. So in both cases the subscriber gets its audio and video after 100ms. The media latency is consistent under either scenario.\nIf the relay logic is to send only a single subscribe upstream, then you don't need anything to be said in Subscribe OK either. When the first object arrives it can compute the live edge by adding 5 to the object being reported and then compute the absolute values for the others (in this case for -3 and -1). That would definitely make things simpler for sure.\nFor cases where a client wants to go back 345 groups in the past, I will be very much inclined to do an absolute request after learning the latest known state of the group as an end application, OR something the catalog can describe with more information to make the right absolute request on the first go\nIt can't do this because the order of items it receives as Streams is not guaranteed. The origin may send the groups in order -5, -4, -3, -2, -1, but they could be received at the relay (or any hop) as -3, -5, -4, -1, -2. 
That's why you need an explicit notification from the origin as to where the live edge is. Only that plus the group sequence numbers allow the correct sequencing, not the order of arrival.\nIn this case there was just a single subscribe request with -5 was sent upstream, based on your explanation above. This avoids all the confusion which would happen when there are multiple inflight -ve offsets.\nThis seems like it will have many race conditions. It seems like it would be better to transfer this in the STREAMHEADERTRACK and equivalent. Thoughts on that ? I also worry about how long a subscribe would wait to find out this information - that seems like it could be problematic for the client knowing when to give up on a subscribe. Hopefully we can discuss in Denver"} +{"_id":"q-en-moq-transport-25ca84ebc2d99ca7ff1eb78a763da0e03d12353d830c9e9f88ee5c6b81501077","text":"Update main to reflect the tx-mode branch after was merged.\nAgreed, but I'm not sure there's any normative language there. Is this a case of writing a paragraph cautioning that such limits could cause issues?\nWhoa, I've missed some PRs, but LGTMIndividual Review: There was some feedback on previous iterations about this PR lacking guidance when encountering flow control limits (eg: send as datagram, relay converts to stream, but is out of streams or flow control)"} +{"_id":"q-en-moq-transport-5304f508ac5f30f22cd210803f1475e7b00d0a37c39595c5ca12d64e0096fa23","text":"This PR cleans up some nits to make all the places we use the RFC9000 encoding syntax valid, updates our extensions to it, and add a non normative summary of parts of the syntax we use. I don't think this is making any changes to anything on the wire or normative changes to spec other than the statement SHOULD use smallest number of bytes for variable length integers which RFC9000 only requires for some fields.\nI am not in love with the ABNF idea, but this is non-controversial."} +{"_id":"q-en-moq-transport-77e26bb34fbf7ada18f1eecd2f1a615fbcb9af57e56f7aedcd1b6e20cb991cea","text":"This PR\nTrack namespace and track name properties were removed from SUBSCRIBEOK message in draft 02. But they are still part of the text under \"6.5. SUBSCRIBEOK\" which is confusing."} +{"_id":"q-en-moq-transport-939407e1e884a194acdce005fe89a69005007b40f2f67a5eda3bf511ebf9038c","text":"Regardless of where they're retrieved from and over time.\nI put a two very small nit comments but don't care either way. They do not change the substance of the PR and +1 on the overall thing this is saying.\nThis came as part of interim at Denver when discussing multiple announce for same track. There needs to text that mandates, if 2 objects comes with same name, they need to be bit identical. Also needs to clarify on how long the name needs to be unique"} +{"_id":"q-en-moq-transport-70b250e8e5c1d169e784476e083ddb8e990504ec81bad9873c0ac1b59ab17055","text":"They won't be wasted until we have 64 message types valid on unidirectional streams. We currently only have 3 valid messages types. I think you mean that we could save the message type byte if OBJECT is the only message allowed on unidirectional streams, but that's not forward-compatible.\nDraft-02 strongly implies the forwarding preference is a per-object decision. 
thus the following sequences would be legal: [sequences are written as (group, object)] Stream 2: (0, 1), (0, 2), (0, 3), (0, 5) -- type is StreamHeaderGroup Stream 6: (0, 4) -- type is ObjectStream or Stream 2: (0, 0), (1, 0), (2, 0), (3, 0) -- type is StreamHeaderTrack Stream 6: (1, 1) -- type is ObjectStream etc. If someone can imagine a use case for this, that's great. If not, it would be easier to implement with different assumptions, e.g. (1) Forwarding preference is a track-level property. Thus the preference could be provided in the SUBSCRIBE_OK and we wouldn't even strictly need four different object codepoints (though you would want the distinct formats, and might want to retain the codepoints for that reason) (2) Forwarding preference is group-level -- some groups could be aggregated under StreamHeaderTrack, but others could be under StreamHeaderGroup. If there is a StreamHeaderGroup for a group, there MUST NOT be object-level streams for that group. Again, I have little sense of the use cases out there, but I think we should explicitly decide what we are doing and make it clear in the doc.\nA related question is whether forwarding preference applies across subscribes. That is, for per-group streams, a current subscriber gets all of group 12 and then the stream closes. A later subscribe in the same session asks for group 12. Obviously, this will need a new stream. So maybe the answer is no, it does not apply across subscribes\nI agree, this is very difficult for a decoder to support. The target use-case is non-reference frames. In your first example, object 4 is optional so it gets its own stream. However the receiver doesn't know that; instead it just sees a gap between 3 and 5. Does the receiver try to decode and catch any errors? How long does it wait before trying? If I want minimal latency, do I have to partially decode (on a per codec basis) to get the frame dependency graph? I really don't like this option. I instead want an explicit way to know that this object is a non-reference frame. Ideas: Make a separate track for non-reference frames and SVC layers. Receive a formula that every 3x+1 object is optional? Receive a live object catalog containing metadata about each object. My vote is a separate track and each track has a uniform forwarding preference.\nI like the idea of a uniform forwarding preference for a track. That's what I had in mind when we landed that PR. I can't think of any use cases this restriction prevents.\nI think the consensus in Denver was to have one preference per track\njust to be clear, the preference is set as a track level property and can have 4 values - track, object, group and datagram.\nYes, although for various reasons the PR I'm doing is going to make it a subscription-level property. I think it is hard to write it otherwise, and I can just about see a use case for it.\nWhen a few of us discussed it last week, the feeling was that if it came back in the SUBSCRIBEOK, and Objects started arriving before the SUBSCRIBEOK, the receiver wouldn't know how to process the incoming Objects. In the past, we've decided we don't like that, so the feeling was to land a PR to add the restriction now, and then we can talk more about wire format changes. NAME Is Object decoding not an issue for some reason I'm forgetting? Or were you thinking that buffering Objects was fine?\nI am arguing that buffering objects is fine. The usual case, I think, is that SUBSCRIBE_OK arrives first. 
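To illustrate what that buffering amounts to (a sketch with assumed names, not draft text): objects that arrive before the SUBSCRIBE_OK are parked per track alias and replayed once the subscription and its forwarding preference are known.
```go
// Illustrative buffering of early objects, keyed by track alias, until the
// matching SUBSCRIBE_OK arrives. Raw object payloads are held as byte slices.
type earlyObjects struct {
	parked map[uint64][][]byte // track alias -> objects held back
}

func newEarlyObjects() *earlyObjects {
	return &earlyObjects{parked: make(map[uint64][][]byte)}
}

func (e *earlyObjects) OnObject(alias uint64, raw []byte, ready bool, deliver func([]byte)) {
	if ready {
		deliver(raw) // subscription already confirmed; pass straight through
		return
	}
	e.parked[alias] = append(e.parked[alias], raw)
}

func (e *earlyObjects) OnSubscribeOK(alias uint64, deliver func([]byte)) {
	for _, raw := range e.parked[alias] {
		deliver(raw) // replay in arrival order
	}
	delete(e.parked, alias)
}
```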
It's irritating we're using a byte in the object encoding (and creating new error conditions to check for) just to avoid buffering. But I can live with keeping it for now.\nThis is an improvement, though I would still like to get rid of all the wasted bits"} +{"_id":"q-en-moq-transport-c08101924fe19b791f38430d6a04f741bcd7322d29f22be294e46c8f2b886219","text":"QUIC uses RESET, we should too.\nTeeechnically, QUIC uses STOP for receiver-initiated closes. But RESET is better than RST."} +{"_id":"q-en-moq-transport-8046f7fa8f6aab202634337295610b7efe2c2c13e7feaa78814f1731a68b0bf1","text":"Since they're on different streams, an object can arrive before its SUBSCRIBEOK. This is easy to process because the Subscribe ID and Track Alias must have been the ones that the subscriber chose. What if the publisher then sends a SUBSCRIBEERROR? Is this valid behavior or an error? (If it weren't for the Expires field, SUBSCRIBE_OK would be entirely redundant and we could get rid of it.)\nIndividual Comment: This is covered a bit in and Luke expressed a desire to document the full state machine in . My read is that you can't send objects before SUBSCRIBE OK (but they can arrive out of order). I don't think you can send SUBSCRIBEERROR after SUBSCRIBEOK (objects or no objects), instead, use SUBSCRIBE_RST?\nI agree that SUBSCRIBEERROR cannot follow SUBSCRIBEOK. I think you're saying that if Objects are in flight, then SUBSCRIBEOK must be as well, so a SUBSCRIBEERROR would in itself be an error. That sounds reasonable to me.\nWe're clear that only one of SUBSCRIBEOK or SUBSCRIBEERROR can be sent: the subscription, via SUBSCRIBEOK ({{message-subscribe-ok}}) or the SUBSCRIBEERROR {{message-subscribe-error}} control message. The entity receiving the SUBSCRIBE MUST send only a single response to a given SUBSCRIBE of either SUBSCRIBEOK or SUBSCRIBEERROR. I wrote to clarify that you can't send objects and a SUBSCRIBE_ERROR."} +{"_id":"q-en-moq-transport-7086c4411d780e1b516378a6a4d0591e664090f88aad7de6e2b99b3982738c31","text":"by adding an explicit bool indicating whether any Objects were published for a subscription, similar to the explicit bool added to SUBSCRIBE_OK in .\nand define: However, what happens if no objects were ever produced\/sent?\nIndividual Comment: Oops. Two possible ways to handle this: 1) Flag indicating presence of these fields (Use Location now that is defined elsewhere?) 2) Change the semantics of the fields to be exclusive (eg: it is the first object NOT sent), so 0\/0 means none.\nI'm inclined towards 0\/0, because optional fields are more complex to parse and I would expect the existing logic to work correctly with 0\/0.\nLet's say the last object delivered by the relay is . Does the relay send RESET or ? Is the subscriber able to infer the presence of future objects? What if either of those objects actually exist in the cache?\nIndividual Comment: If we use exclusive encoding then I'd say that the receiver shouldn't infer the existence of non-existence of the named object, and either 5\/30 or 6\/0 works, but 5\/30 is preferable?\nIf you define there will be no objects greater than or equal to these, then 0\/0 makes total sense. 
So 0\/0 for null case and say the last group ID of the last object publish was 10 then the value in SUB_FIN would be 11.\nIn we went with an explicit flag, so I'm inclined to do the same here for consistency."} +{"_id":"q-en-moq-transport-a556369639210a114168db4b6c5acac0a92a7ac165a2baa1c3620a061d497c07","text":"To allow subscribers to indicate that no new subscriptions will be routed to the publisher for a given namespace. Happy to bikeshed on the name and whether we really need another frame for this.\nThe problem is not possible for the subscriber to reject an ANNOUNCE afterwards. This would be very useful, for example to inform a publisher that no more subscriptions will be routed to them for whatever reason. Examples: The auth parameter in ANNOUNCE has expired. Another publisher has sent an overriding ANNOUNCE (reconnect?) The broadcast has been closed remotely, ex. some moderation tool. Without this message, the only recourse for the subscriber is to close the connection. This won't work if multiple ANNOUNCE messages are pooled over the same connection, nor is it clean.\nI propose two options: can be sent after . We add and to mirror . I personally hate 3 separate messages for a subscriber to close an announce, or a publisher to close a subscription. See\nvery trivial bikeshed comment ... I would prefer to call it something other RESET ( and also other places we use RESET ). I get the TCP analogy but I think that is the wrong way to think about this. I would prefer something like REVOKE.\nhow about ANNOUNCE_CANCEL to cancel an open announce ?\nConclusion is that we do need to add a new frame to end an ANNOUNCE. Additionally, there was a principle stated that we should not rely on closing the session as the typical or optimal way to close state and close either an ANNOUNCE or SUBSCRIBE.\nOn the call today, people tend to prefer ANNOUNCECLOSE over ANNOUNCECANCEL as a name. They also prefer merging SUBSCRIBERESET and SUBSCRIBEFIN into a single message, possibly SUBSCRIBE_CLOSE.\nOn an editorial note, it'd be nice to have subsections for publisher\/subscriber to make it clearer to the reader.\nIf we're going to combine and , then we should just combine and now. My desire is to have four messages: publisher create: ANNOUNCE publisher close: UNANNOUNCE subscriber accept: ANNOUNCEOK subscriber close: ANNOUNCESTOP The same goes for subscriptions: subscriber create: SUBSCRIBE subscriber close: UNSUBSCRIBE publisher accept: SUBSCRIBEOK publisher close: SUBSCRIBESTOP"} +{"_id":"q-en-moq-transport-e8b5e483af8a16ad6f028fdf01e143ace6a9c6be2a16b3458c3c1bf0edca9df6","text":"PR updated, PTAL\nThis came out of discussion on When a relay receives an SUBSCRIBE it may need to issue its own SUBSCRIBE to a publisher that has previously ANNOUNCE'd a namespace matching that track. The current draft doesn't specify how this match is done. A relay could implement an exact match on Track Namespace (requires a tweak to SUBSCRIBE), or could do a longest prefix match on track name vs all ANNOUNCE'd namespaces, or possibly something else. Is it ok to leave this up to implementations or should the draft specify this behavior more exactly? If more than one matching type is allowed, does it need to be negotiated, and by whom (publisher, subscriber, or both)?\nAn issue that came up in interop that is somewhat related - NAME relay was implemented to take the namespace from the connect URL, because in his use-case, ANNOUNCE can be made redundant with the connect URL (eg it contains a namespace and enough auth). 
However, this caused an interop problem because my chat application assumes a certain functionality from relays, and that includes supporting ANNOUNCE messages to determine routing to publishers. Do we need to be more prescriptive about relay behavior?\nANNOUNCE is what my implementation expects as well. A producer can announce track namespaces at any point (after being connected). Overloading connect URL with track info is limiting\nThis is going to be a slightly long comment but this is how I think this should work. The catalog has two fields for the track namespace , and track name. We also specify in the catalog a way to form a URL from these two fields. See Teds proposal and that is catalog specific. The things we put in these track namespaces and track name fields are already canonicalized in a way defined by the catalog and URL. When the application using the moq transport puts these two fields into message, they are treated as just a bunch of bits and any comparison of them is done with bitwise compare. A relay that needs to check if a namespace matches another does not need to worry about string prep or escape encoding of the bits, the will all have been done when the catalog was formed. This allows the relays to work faster by just doing bitwise compare yet at the same time allows us to use URI at the application level. The transport draft just needs to deal with a bag of bits. The catalog draft pushes the work to the URL definition, and the URL definition does the heavy lifting but that is where this type of stuff should be\nNAME Just to clarify that when you say \"bitwise compare\" you also mean exact matching the entire namespace field from announce to the one in subscribe?\nI would like to decouple the media layer from the transport layer. A tool like ffmpeg should be able to generate a MoQ catalog without knowing the namespace it will be served from, much like a tool like zip should be able to generate a file without knowing the path it will be stored. HLS\/DASH provides this functionality with absolute or relative paths. If the playlist contains relative paths, then it can be served from any HTTP host\/path provided the segments are served from the same host\/path. If you want to decouple the two (ex. application serves playlist, CDN serves segments), then you use absolute URLs. To accomplish the same thing, the namespace should be optional in a MoQ catalog. This would mean media tracks are served from the same namespace as the catalog: whatever that might be.\nTo expand further, I think that MoQ broadcasts should be namespace agnostic: a broadcast ingested with namespace XXX is NOT required to be delivered as XXX. I understand where Cullen is coming from: an application like WebEx can use the same namespace for contribution\/distribution because it controls the full pipeline. The application has all of the information required for all future routing\/identification so it can set it on the first hop and expect any relays to blindly proxy. However, for a more generic or distributed system, this is not true. A client like OBS knows nothing about how the broadcast will eventually be distributed, fanned out, or the requirements of potentially multiple CDNs that might be used, nor will those requirements be the same between vendors (present or future). So I don't think we should require that input namespace == output namespace. In the HTTP world, this would be like requiring that all HTTP paths match the origin. 
It's quite restrictive and just means a CDN needs to find another way to perform routing via a custom header\/parameter. I propose we emulate the HTTP model where there's no in-band notification that a path\/namespace was modified. An origin might serve and a CDN might decide to serve it as . Any namespace remapping from XXX -> YYY is business logic and not signaled over the wire. If you don't want any remapping, then that's a business agreement between you and the CDN, wherein you may need to encode namespaces in a vendor-specific way.\nI think we have agreement on that from past discussions. Can the catalog draft have a mechanism to inherit the namespace for tracks from somewhere else, like a relative URL in an HTML page? I'm not sure this is relevant for how relays perform matching between SUBSCRIBE and ANNOUNCE though. The only available information in announce is the namespace.\nYeah I'm probably rambling about something unrelated. But in order to support namespace remapping, the downstream ANNOUNCE\/SUBSCRIBE won't necessarily match the upstream ANNOUNCE\/SUBSCRIBE.\nThis is a useful property. I opened to address this issue and PR URL to fix it.\nSo if the input full track name does not equal the output full track name ( for some version of matching the two ), what is it that tells the relays there is a mapping between the two? I'm not objecting to some other way to do this, I just don't understand what it is that tells the relays to do that. Keep in mind relays do not need to read the catalog.\nTo clarify the \"bitwise compare\" ... yes I mean exact matching the entire namespace field from announce to the one in subscribe. If the namespace was utf-8 or even a url restricted utf-8, that would match. I am just arguing that a relay should not have to verify every string is a valid utf-8 string, it should just match the bits.\nHTTP proxies will not modify the path by default. A downstream request for arrives, there are some rules to match it to a backend, and then an upstream request for is sent. A simple MoQ relay would work the same way. However, you can configure an HTTP proxy to perform special behavior, including rewriting the path. Note that the HTTP server does rewrite the HLS\/DASH playlist provided they are using relative URLs since both the playlist and segment would match the same prefix rules. For example, here's a simple nginx configuration: I would allow the same exact thing for MoQ, but of course use our terminology instead. I think a key improvement with MoQ over HTTP is actually the namespace\/name tuple. I propose: A relay SHOULD NOT change the track name. That means a producer can make a track called \"video_480p\" and reference it in a catalog, inheriting the current namespace. A relay MAY change the track namespace when negotiated out-of-band. It's RECOMMENDED that an application support being served from arbitrary namespaces, including using relative namespaces when tracks reference each other. Basically, the track name is controlled by the application (encoder\/decoder) while the track namespace is controlled by the relay (hop-by-hop). This makes a ton of sense if the namespace is primarily meant for routing (critical for relays), and it allows relays to do stuff like encode the route or a unique ID into the namespace itself.\nGoing back to the original issue Alan posted (if I understand it correctly), I think we need to support out-of-band negotiation given the nature of fanout. Maybe we can provide something in-band but it won't always be possible. 
For example, an OBS client sends: In theory, the server could reply with the information required to watch the broadcast (final hop), including any remapping along the way: But this gets really messy when you factor in auth, multiple CDNs, multiple edge hosts, and just the nature of ingest being on the other side of the pipe from distribution. In this example, I think the application has to negotiate with Akamai on how they need to ingest and how they need to egress. I'm not opposed to using extensions to make this mapping explicit though. It would be cool to have a scheme that defines routing\/authentication using globally unique namespace, kind of like a decentralized CDN. But it shouldn't be required as part of the base protocol and there's a LOT of sharp edges to figure out.\nIndividual Comment: My opinion is that anything that does this kind of rewriting is out-of-band, at least for now? For a completely generic relay input=output.\nYes. I opened issue URL to address this and PR URL to fix it. I really like the flexibility that relative names bring. We know this works at scale for current web and media delivery across disparate systems. I'd hope that relative track names are the normal mode of operation. We should allow absolute namespaces in catalogs to allow mixing between sources, however I think this will be an edge case. By default, a relay SHOULD NOT modify the track namespace or track name. However, what a distribution network does within the bounds of its own network is up to it. As it restores the original namespace and name at its boundaries (to match the catalog), then external systems cannot tell and distribution intent is preserved. Additionally, there will be business logic and distribution agreements between CDNs, coordinated by the content distributor, to rewrite namespaces between networks. Such namespace rewriting is common between networks and should not be prohibited. Here's how I envisage a scalable solution working. When a twitch user wants to use OBS to publish, they would log in to URL and authenticate against the Twitch CMS. It would vend them two URLS: a connect and publish URL a playback URL to share with their followers The OBS client would use this to The client then publishes twitch\/kixelated\/catalog, a track which describes two relative tracks - audio and video. The playback client connects to URL and subscribes to premiergaming\/gamer34\/catalog. After parsing the catalog it subscribes to The Fastly relays have been configured by Twitch, out-of--band, so that when they go forward to origin (which is Akamai), they do a namespace rewrite to The subscriptions make their way back to the OBS instance. The encoder prepare Moqt Objects with in their headers. As these are received by Fastly, business logic rewrites them to , caches them as such and they are delivered to the client.\nAs per the discussions in person, it is more nuanced than renaming few bits in the header. It might break e2e encryption \/auth for example or reencryption in some cases. As this is very application specific, one needs to be take into consideration the costs as it may not be just rewrite few bits in and out of network edges. 
The more appropriate or clearer framing may be - whenever the name changes, it is ,in a way, republishing the content with a new name ..\njust to note, the network wouldn't have access to the catalog\nConclusion is that we'll use exact matching like specified in\nThanks for the update I'm not sure how to wordsmith this but +1 to what it is saying."} +{"_id":"q-en-moq-transport-4ed3493f56ac91b9caa50f5e0331f74c921d3585863799b8c67be8690041e8ad","text":"Add some normative text and requires both client and server to send their role. I can also see the argument for removing this entirely, but it's in the current draft, so if we're going to keep it, I'd like to define it better.\nHaving a flag in SETUP doesn't stop from someone sending unsupported messages. A faulty implementation or intentionally malicious endpoint can send unsupported messages. A receiver should be prepared to handle unsupported messages regardless.\nRight, an implementation is still going to crash if it receives an unsupported message, but there's a difference between signaling that upfront and yolo. A one-sided implementation will look something like this: The purpose of the ROLE parameter is to avoid accidentally triggering that default, as it will close the connection. For example, I absolutely want to have a way to get per-viewer feedback: a track produced by each viewer. Without this ROLE field, a viewer might cause a crash if it sends an to the server, or the server might cause a crash if it sends a to the viewer. It will only work if both endpoints explicitly flag . The only alternative to the ROLE field is to implement the entire protocol, but that's unnecessarily restrictive. Take HTTP for example, it's almost taken for granted that an implementation is for a client or a server, but rarely both. I imagine that will also be the case with MoQ with the sole exception of relays. OBS is not going to implement subscribing, and VLC is not going to implement publishing. Even in my URL library that does support both modes, I ran into a deadlock because of ROLE=both. My publisher was not consuming ANNOUNCE messages because it was not expecting them. Explicitly signaling ROLE=publisher would have prevented this, making it a protocol violation to send half of the messages in the draft.\nA resilient implementation state machine will be something like Since for unsupported messages, you don't want a protocol action as it will be DOS surface, for example. Also, Many of the conferencing examples will mostly always be send and receive by default. Our client for example does pub and subscribe always. I personally feel the implementation constraints are resolvable and I feel still unconvinced for a ROLE parameter.\nHeh, please don't do that. This is exactly why we have version\/extension negotiation; not supporting a message after negotiating it is a protocol violation. Otherwise, my endpoint has to assume that your endpoint could ignore messages, and the MoQ draft in general, on a whim. Just tell me that you don't support ANNOUNCE via ROLE. Otherwise I would have to send an ANNOUNCE, start a timer, and guess that no OK\/ERROR means no support? Probing for support is miserable. You close the connection on an unsupported message. Just send ROLE=both then. Not every conferencing is bidirectional though, for example clubhouse-style. At Discord we also have a dedicated connection for screen sharing (don't ask) and a very explicit role=publisher vs role=subscriber flag.\nI don't think this is necessarily true. 
Without ROLE, you can still choose to only implement half the protocol and reject the messages you don't support. Every client has to be built to be resilient to arbitrary incoming messages. It can't break because the other side said their role was subscriber but then they send you an ANNOUNCE. Therefore sending an upfront role seems like a promise to behave in a certain way. A system that relies on promises to not break will be brittle. I agree that ROLE seems superfluous.\nI want publisher and subscriber support to be optional. That way it's only necessary to implement half of the draft in many cases. A CDN distribution edge would be one such situation. In fact, it's a hardening measure to ensure the edge cannot receive an ANNOUNCE which might otherwise cause a vulnerability. And generally speaking, if something is optional, then it should be negotiated. Otherwise you end up with Schrödinger's extension. Silently ignoring unknown\/unsupported messages is just terrible protocol design and it's far better to close the connection with a protocol violation instead. That's the point of the ROLE parameter. But yeah let's chat about it on Wed.\nIndividual Comment: Every implementation has to have that switch statement and some not-crash action when it receives a message it doesn't support. Some logical options are: ignore the message; send the corresponding error message (eg SUBSCRIBEERROR, ANNOUNCEERROR) with an error code like \"not-implemented\"; or close the entire connection. I think the value of ROLE is that it gives a consistent behavior early in the life of the session. If I want to send subscribes and the peer says they are subscriber only, I close the connection. But I still have to have the switch statement because of broken or malicious peers. The MUST in this PR can be improved by explaining what happens when you don't like the value you see, or the peer sends messages inconsistent with its advertised role.\nIn that case the peer will close the connection since it knows that it is a subscribe-only implementation and the role in the setup is superfluous\nThis is the type of thing where I don't think we need it but it does not break anything, is not a big deal to implement, and does not hit performance. Given other people feel strongly this is needed, I can easily live with this. If this helps us get to consensus, I'm fine with adding it. I think the rewrite makes things clearer than they were before.\nDiscussion concluded that supporting this was not that difficult and it's a clear signal of what's supported, so the action item is to fix up the PR to clarify behavior.\nSec 6.1.1.1 specifies client sending or ROLE, and says nothing of server-sent. I guess server-sent ROLE is not needed, but maybe we should fail if it is sent?\nI filed about the role parameter. The client doesn't actually know what the server intends to do. The server should be able to reply with its functionality, which might be a subset of what the client can support.\nThe SETUP message contains a ROLE parameter. In the current draft, the client specifies the value, indicating if the client will send data (0x1), if the server will send data (0x2), or if both endpoints will send data (0x3). A conferencing client supports the ability to send and receive media, and would always send role=0x3 because it supports both functionalities. However, it's unaware of the server functionality, and the server has no way of informing the client if it intends to send media. 
This matters because a generic client needs to know if it shows a black screen and\/or should send a SUBSCRIBE message (ex. for \/catalog). I propose that both endpoints send the ROLE parameter: 0x1 = publisher, 0x2 = consumer, 0x3 = both. This gives the server the ability to specify its own functionality.\nWondering if we even need Role in the first place .. can it be changed mid-session ? Say the use-case Participant in media conference joins as subscriber only since he is driving and want to listen in Then at some point mid session , he wants to contribute and he now becomes as publisher and subscriber what is the expected flow in this case ?\nCurrent draft doesn't really explain what ROLE parameter is for, only that client needs to send it - what was original idea? Why not just always allow everyone to send and receive data?\nIt seems a better way to control if you can send and receive might just be if the client has the appropriate auth tokens for each role. I lean towards remove this if we don't need it, but if we need it, I have would be fine with a design where the client says which roles it is capable of , and the server response contains which roles it is allowed to do.\nI think ROLE is good for negotiating half-functionality. If the remote endpoint cannot send media (ex. player), then sending a SUBSCRIBE is invalid. If the remote endpoint cannot receive media (ex. OBS), then sending a ANNOUNCE\/OBJECT is invalid. This removes a burden on implementations, as most will only care about specific role. NAME brings up a good point. If an endpoint could switch roles, then it should advertise full functionality. That would include publishing an empty catalog which it can append it later when it wishes to publish tracks.\nI don't think I seem much use for the Role - it seems like something the application would know at a higher layer. But I have no objection to including it if others do see a need for it. It's not like it makes life harder for applications that don't use it.\nIf the ROLE is used to enforce the half-functionality, then it needs to be clear what should be done when invalid message is received. Sending a GOAWAY? Sending a generic error message? Does the connection need to be terminated? I think most of the time, application will just choose the full functionality in case of role switching on the fly and avoid the invalid response code(just ignore the invalid message).\nOh hey, I found a good use-case for ROLE. If my client connects with ROLE=publisher, then the server knows that it should not send ANNOUNCE messages. The messages are useless and actually triggered a deadlock in my broadcaster, since the application was not reading them and overflowed the receive channel. The same would be true with the reverse; a subscriber will not be able to process any SUBSCRIBE messages. I like the ability to specify half-functionality. At the very least the library should support half-functionality (ex. automatically reply with ANNOUNCE_ERROR) but it's even better if it's enforced by the protocol.\nLGTM me. Ship it!I wonder if it helps to discuss the role of role parameter during authors call for a short bit NAME"} +{"_id":"q-en-moq-transport-815e4f7e97e3b907ab84e61c589b5ef6c2a635b59650974518990ee0ca7b8ff3","text":"Internal server error has been useful in HTTP, so it makes sense to include something similar for MoQ."} +{"_id":"q-en-moq-transport-d01949fae001250fa9f225dfde8704cd2fd4b9e23ffb2429c351a2aa1914e365","text":"Into SUBSCRIBEDONE. 
I didn't remove the SUBSCRIBEERROR message in favor of SUBSCRIBE_CLOSED as NAME suggested in this PR. Addresses some comments in , but possibly not all\nSo my notes from this discussion. CLOSE is confusing with the QUIC layer. People seemed to prefer UNSUBSCRIBE over SUBSCRIBE_CLOSE. I love the move away from RST and FIN as naming.\nPropose names from my understanding of the discussion call: Announce (sent by publisher), AnnounceOK, AnnounceError; UnAnnounce (sent by publisher), UnAnnounceOK, UnAnnounceError; AnnounceEnd (sent by thing that received Announce), AnnounceEndOK, AnnounceEndError; Subscribe (sent by subscriber), SubscribeOK, SubscribeError; UnSubscribe (sent by subscriber), UnSubscribeOK, UnSubscribeError; SubscribeEnd (sent by thing that received Subscribe), SubscribeEndOK, SubscribeEndError. Mostly trying to get rid of FIN \/ RST and have different names for the messages that flow in opposite directions.\nI don't like the name of the different messages used to terminate. I'd rather have one message that transitions to the terminal state, with an optional error code and final group\/object. IMO the subscription flow: And the ANNOUNCE flow mirrors: The brackets mean a message is optional. The sender is responsible for creating and the receiver is responsible for closing, possibly at the behest of the sender (like QUIC's STOP_SENDING).\nhas been merged, adding yet another term (CANCEL), but I figured we'd clean that up with this issue.\nSince we are bikeshedding, I prefer ANNOUNCE_RESCIND instead of CANCEL, since an announce is more of a declarative statement than an active request from the sender.\nHeh and UNANNOUNCE and UNSUBSCRIBE stick out because they don't share the same prefix.\nI like the names being proposed as they help make it easier to understand. I do want to separate out one issue. The STOP \/ CLOSE type thing happens way after the initial SUBSCRIBE, thus it can end up in a situation where auth has expired or something else has gone wrong, so I like the current design that allows an OK or ERROR to be returned for those. I think it is also key for the sender of the thing stopping the subscribe to know that the other side has acted on that, so we need the OK.\nI'm leaning towards CLOSED instead of CLOSE because CLOSE implies it's a requested action, so to me SUBSCRIBE_CLOSE sounds like something the subscriber would send, not the publisher. For a similar reason, I like CANCEL slightly better than STOP, but both work. UNANNOUNCE and UNSUBSCRIBE are good words, but I think it's better to follow a pattern in this case and I like NAME sequence of messages.\nNAME CANCEL and CLOSED are slightly better, sure. NAME there MUST still be an OK before objects can be sent. That way the receiver can tell upon receipt of the CLOSED if the subscription was ever active.\nSo to add an OK, we need to add another state. The question for me is what is the use case where we need to deliver that information at the end of the subscribe.\nThis discussion is probably easier looking at state machine diagrams\nSections 6.8 and 6.9 strongly imply that SUBSCRIBEFIN\/RST are sent at the true end of the track. So are they not sent at the conclusion of a SUBSCRIBE that does not cover the end of the track? For example, if the last object of a track is (10, 12), and there's one subscribe for (4,0) to (5, 100), is there a SUBSCRIBEFIN? If not, how does the subscriber know that in fact (5, 60) is the last object in that subscription? 
If so, does Final Group and Final Object refer to the last object in the subscription?\nIndividual Comment: Putting my HTTP hat on for a second, when you ask for something with unknown length, you infer from a graceful termination that the last thing you got was the true end of that resource. If you send a range request, the graceful termination at the end does not convey that the last thing was the true end. Once the object expires, you can issue another request to the same name and potentially get a totally different thing. Does this model work for moq, or do we need something different?\nThere are three different things you might want to communicate to a receiver at the end of the subscribe: 1) You got all of the objects in the range; 2) You got almost all of the objects, but some were dropped for latency\/bandwidth reasons 3) The track was truncated for some reason and possibly, 4? 4) You got everything I have but i'm not sure if that's everything\nis somewhat relevant to this issue.\nAs mentioned in , there are 3 different ways for a publisher to close a subscriber, each with minor differences. : I never sent you an object. : There are no more objects to send after group\/object X\/Y. : There are more objects after group\/object X\/Y... but I won't send them. A few things: I think it should be renamed since it's not the only ERROR message. Maybe ? The only difference is that one has an error code. Otherwise they both include a final group\/object and it's up to the application\/library to decide if they wait for the final object(s). I'd like to only implement this once but right now I have to copy\/paste any code to handle these almost identical messages. What if we merged them and use error code 0 to signal a FIN. The reason phrase should be removed as part of . There really needs to be a state machine like with this many ways to terminate a subscription. I think these are all protocol violations: then then then My interpretation of the draft is that a followed by a is invalid, which is not obvious and unlike QUIC. I don't think so. The library\/relay behavior doesn't change based on . The only thing that miiiight differentiate between the two is the application, which can use errors codes for this purpose. At the very least I would merge these two messages. But I would also like to explore a single way for the publisher to close a subscription. The application can signal the nuanced differences between errors (ex. error code 404 vs 410). The library\/relay doesn't care, it just forwards the error code.\nIndividual Comment: The inspiration for FIN vs RST in moq came from HTTP caching. If a resource has no content-length, and the caching proxy has received N bytes, does it have the entire resource, or does it need to refetch the tail? If that response was terminated with a FIN (H2 or H3), then it has the whole resource. If it was terminated by a RST, it can't assume that it does. So the value I see in these two messages is for a moq relay is similar. A client connected to the relay and subscribed to a track, which fowarded the subscribe to a publisher. The track was published to the relay and ended with a FIN. Another client subscribes to this track from the beginning (Absolute\/0,0) and no end. Does the relay need to resubscribe upstream? Now suppose that the connection from the publisher to the relay closed abruptly, or the track was terminated with SUBSCRIBE_RST. This is a signal to the relay that it's worth another subscribe to find out where the end of the track is. 
I can live with it if we consolidate the framing to a single message, and define a special error code to mean FIN, but I do think the transport needs to see the difference.\nI like it. I would also like to merge ERROR with CLOSED, but we'll chat about that."} +{"_id":"q-en-moq-transport-89c1c6ce2c2b0f341857afb51ca3f3d0471e1c1f11006cd26f34ea5db73ce756","text":"Note from call. Fix subscribeDone at the same time. Define a new bool type. Pick a new letter other than b.\nAlso make sure to fix SUBSCRIBE_DONE at the same time.\nNAME - I updated, have another read and see if it looks OK. Thanks\nand introduced a field to SUBSCRIBEOK, SUBSCRIBEFIN, and SUBSCRIBE_RESET. If I read the notation correctly, it has a size of 1 bit, which makes it hard to parse the rest of the message. Could we encode that information in the message type by using two types per message instead of a 1-bit field?\nI think the intent is that it is one byte not one bit.\nMaybe it should have been:\nor just a byte field with only 2 possible values allowed (1) and (0) ..\nI agree we need to keep this byte aligned however we do it.\nIMO just make it a VarInt (or byte), with only values of 0 and 1 currently valid. This encoding is temporary until we nail down all of the fields\/flags so we shouldn't be making life difficult or efficient.\nIndividual Comment: Even if it's temporary, my preference is for a one-bit field and unused\/reserved bits to align, because there are no error conditions.\nHave a look at PR"} +{"_id":"q-en-moq-transport-9743246b8f63ba71869a3f9bbaa70f4b7d9db70f26b6da46cdde91b2ec837745","text":"And add a new SUBSCRIBE_DONE status code. Fixes part of , but it's still unclear how one might refresh a subscription without interruption.\nSubscriptions always end with either SUBSCRIBEFIN or SUBSCRIBERESET, except when they Expire and then no control message is sent. This complicates the state machine and adds another case to handle. The subscriber needs to start a timer or something similar to know when the subscription will end, and it doesn't get to know the final group or object ID.\nI'd like to get rid of , but my thought was that it's a signal to SUBSCRIBE again before the timeout to avoid interruption. Otherwise, there would be a SUBSCRIBE_RESET with an \"expired\" error code containing the final group\/object.\nI agree that a subscriber can deal with this field, but I don't know what purpose it serves for the publisher\/relay? It's different from an idle connection timeout, because assuming there is data flowing for the subscription, it would never go idle. Unlike QUIC, I don't see value in having a 'silent close' for subscriptions, because typically we expect subscriptions to have data flowing at a somewhat regular interval. Also, there's no guarantee that the subscription will last until the expiry time, it can always end early with SUBSCRIBEFIN or SUBSCRIBERESET.\nBased on last week's conversation, the Expires field doesn't change the state of the subscription by itself, a publisher would still have to send a FIN or RESET to close it. It's still unclear as to what a subscriber should do with it. There was a fair amount of discussion around refreshing one's authorization. It's unclear if that would be done with a SUBSCRIBE_UPDATE or something similar or via UNSUBSCRIBE\/SUBSCRIBE?\nWould the solution to this be that when the relay expires a subscription, it sends a control message? 
This might all become clearer once we figure out re-authentication but at a high level, I want some way that the relays can clean up state on a timeline that is under relay control, particularly when poorly implemented clients make subscriptions.\nI think this was resolved by -\nLGTM"} +{"_id":"q-en-moq-transport-5a75723e325c04d53462b749ab82b7f6c1c96d77a4011849e8837c8dc795e97e","text":"There is a bit of angst about using SUBSCRIBE to obtain the current Group ID. This PR is a straw man to provide an explicit means of obtaining the group ID. I would be surprised if this ended up in the final RFC, but it might be useful to keep in the draft as a primitive while the other SUBSCRIBE\/FETCH stuff moves around.\nSummary from meeting. We want to make it more extensible for other things. Scope to things the relay knows, not a substitute for Catalog. Rename to make it more generic.\nWhat about returning latest object in addition to latest group?\nOK, changes: I painted my bikeshed TRACKSTATUS, on the grounds that TRACKINFO sounds like more permanent data that maybe ought to be in the Catalog. I added last Object ID because Will asked for it and it seemed easy enough. I did not fulfill Suhas's suggestion to make it optional because the parameterized idea got a lot of pushback in the call, and this seemed adjacent. Please have a look, and if Ian's happy with it he can merge.\nI'll also note that we could put a track alias hint in here, but I didn't care enough to add it without more positive feedback.\nThis is a good start. We can massage the status codes later. They should correspond to some sort of state machine that we define for moqt in general. I'm sure we will have future changes to this, but I think we should merge it."} +{"_id":"q-en-moq-transport-0349b03792c253dade14cf8edc2de5e72a049e4f438922f869b051d5151dbd53","text":"This PR adds a status field to the object that allows a relay to indicate an object or group was lost or dropped. It also allows a producer to indicate an object or group will not be produced, including the cases for end of group and end of track. The PR probably needs a bit more text on what producers and relays do but I think this is close enough to get some initial discussion on this solution. Fixes most of\nI think I'm happy to use a byte for this. We can always optimize it later in a number of ways if we want.\nWe can actually save a byte by omitting Object ID except for the first Object in the stream, because this allows the Objects to be sequential. Similarly, we can omit GroupID if the delivery preference is track per stream.\nThis is a good point. Conceptually we need two enums: One to indicate if it's a Group Fin or Track Fin and one to indicate why there's no payload. There are a lot of ways to encode that, but it's a bit nice, because the first enum is an immutable property of the Object and the latter is sort of a payload alternative.\nI've only had a chance to glance, but how does this interact with RESETSTREAM? Putting drop notifications within the OBJECT itself means that an object can only be dropped prior to being written to a stream. This is okay when used in conjunction with stream priorities, but it's not great. Ideally we can renege after committing data to a stream by resetting it. IMO there should be a message indicating which objects were dropped upon a RESETSTREAM.\nNotes to myself: Need text on reset. 
could send object group drop on new stream open an issue if we need this it Way to deal with sending zero length object add an enum for this object is empty Way to indicate end track and I dropped it. perhaps better way than sending same object twice\nNAME i did leave the following comment URL\nAfter reviewing fully, yeah I don't think we can decouple drops from resets. When any publisher (including a relay) decides to drop an object, it will also reset the associated QUIC stream. Otherwise, the publisher has to delay writing the object to streams (like TCP low watermark) so it retains the ability to drop it. And the problem with TCP low watermark in MoQ is of course... priorities. Newer content is usually more important, so even if we already have N bytes queued in the QUIC layer, our new content needs to skip the queue. The only way to do that is to actually write to the QUIC stream marked at a higher priority... or reset streams currently queued. The PR doesn't allow either of these as I understand.\nI like what Cullen suggested in the editor's meeting. When we want to drop an object, we RESETSTREAM and write a dropped message on a new stream (that won't be reset). Relays will forward this message too. The caveat is that the object may have been successfully delivered despite the so the subscriber has to be prepared to ignore some drop messages.\nSorry I left a PR review a little too fast between meetings and taking care of the newborn. I had a chance to think things through a bit more instead of leaving vague and passive-agressive messages like: I'm warming up to the status in OBJECT, however it needs to be a bitmask. An object can be both dropped, the end of a group, and the end of a track. It's a little annoying to have to process duplicate OBJECTs with different statuses but it's fine I guess. An alternative would be a reliable SUBSCRIBE control stream that contains reliable messages GROUPEND, TRACKEND, OBJECTDROPPED messages. There would be no need to retransmit anything on a RESETSTREAM but the overhead would be marginally higher. ex. GROUPEND would consist of the message type, the groupid VarInt, and a object_id VarInt only. The track would be inferred from the subscribe control stream. The added benefit is that a subscriber would know the size of the group BEFORE receiving the final object; it would know how much is still in flight.\nI think Cullen's update addressed this by making \"end of group\" and \"end of track\" have different object IDs than the last actual object. Should we further state that only objects with status 0 can be \"dropped by a relay\"? All the other statuses may be needed by the subscriber, and all should be very small. Would you be ok with moving this PR forward if we split out 0x5 and 0x6 for now?\nNote to cullen - remove the two codes. Clean up other comments. Ping Ian when done and ready to merge.\nNAME - Can you give this a read and merge if you think it is OK. I did make the change in the PR luke sent to this but it ended up in another PR so github did not see it as done.\nI did another read through and it looks good to me !! Thanks Cullen\nIndividual Comment: This means a relay cannot assume the existence of these marker objects. That's unfortunate, because I'd like the end-of-group marker to help my relay know when to close fanout streams. 
I don't want to rely on QUIC FIN for this - a publisher may use a QUIC FIN to terminate a stream cleanly (delivering all written objects) as part of a graceful shutdown, without wanting to signal the end of group. In this case, I explicitly don't want to close fanout streams, in case the publisher is reconnecting (make before break). Would making end markers optional for DATAGRAM tracks sufficiently address the audio only use case?\nI get your point and need to think about all of that a bit. Lets say the client did send them, would it be a problem if it just crashed without sending them. Would the relay also have a timer or some other way to clean up the state ? Or just wait for subscribe to end ? Just thinking through implications but you raise a good point.\nAs far as I understand, we currently allow group IDs to have gaps in them, with only requirement for them is that they are monotonically increasing. This presents a problem with caching. Assume I am a relay, and I receive a range request for ; if the origin has never produced group 15, then even if I have the entirety of that range in cache, I would not know that, since 15 would be just missing, and the only thing I can do with that is to request 15 from the origin every time. Possible solutions to this problem: Add some mechanism to indicate that 15 is not there (e.g. by putting the number of previous group in the beginning of every group). Require group numbers to be incrementing by one. I am fan of approach number 2 since it's conceptually simpler, but I think we can make some version of 1 work.\nI am not fan of approach 2 since it is limiting and applies to typical media applications only. I think we need to understand that, the application controls the groupIds and they are independent sync points. Catalog should provide that information for the players. Regarding if there is a gap due to object being dropped, that information is identified within the object itself. Let's take an application example where the group numbers increment by 2. So the groups would be 10,12,14,16,18,20 in the above example. Now consider that group 12 was dropped by the publisher , then the relay's cache will be in this form [10, 12 (not sent by the publisher), 14,16,18, 20] Keeping relay unaware of application semantics is something we should allow. Regardless, the end applications know exactly how the groups behave and if there are geps (either due to broadcaster not producing or dropped for other reasons), things will be identified over dataplane as such\nNAME can you give a specific use case for how an application could use non-sequential group IDs for some benefit?\nNow consider that group 12 was not sent by the publisher , then the relay's cache will be in this form [10, 12 (not sent by the publisher), 14,16,18, 20] The problem I have here is not with 12, the problem here is that the relay cannot distinguish 11,13,15,17,19 being in a state \"has not arrived yet\" vs state \"does not and will never exist\", so it will be always forced to fetch those.\nThe relay should not worry about the semantic structure . Relay needs to fetch only those that have been marked as dropped. If it hasn't arrived, then it will eventually arrive or the upstream will mark it as dropped. In this example, Relay fetches 12 and sends rest from its cache. 
If relay makes decision on behalf of the application, it will end up fetching things that are not even part of the application ( here the groups 11,13,15,17,19) My proposal is to act on the things you know explicitly and don't guess on the unknown.\nI'm confused. How is the relay supposed to know that when being asked for range , it does not need to fetch 15?\nSay relay cache is empty, it will ask upstream from range [10-20) .. The upstream publisher will send 10,12,16,18, 20 .. Relay responds with that. OTOH if the upstream provides 10, 12,, 16, 18(dropped due TTL), 20 and if the operation is fetch , the relay will fetch 18 to fill in the gap and once 18 arrives , it will be able to send 18 with others.\nSure, let's assume there are no drops, the first fetch is successful, and it puts 10, 12, 14, 16, 18 in cache. Now the second fetch arrives for the exact same range, . It looks at its cache; it has objects 10, 12, 14, 16, 18, that it can return immediately, but it doesn't know if that's all objects that exist in that range or not (we know that because we know that the previous request was , but it is entirely possible that those objects ended up in cache since they were results of five different fetches), so it has to repeat the fetch for those.\nI strongly think we should have sequential groups. Like Victor said, the group ID can be used to detect gaps at the moq-transport layer if it's sequential. I understand the desire to stuff application-specific metadata into fields, but it's a layer violation. Put application-specific stuff in the object payload (ex. timestamps) and put properties useful to relays in the moq-transport layer.\nGroupIds are just application sync points and are by definition independent of each other. Adding a correlation between them can be invalid\/incorrect\/misleading as a Relay is unaware of how application defines its sync points. OTOH objects have dependencies within a group, hence the app decided to send them part of a given group and having relay maintain a state to find out gaps may be useful.\nI am also a supporter of incrementally increasing Group IDs. Having such IDs does not break the contract that these points are independent of one-another. It simply means that they occur in a sequence. And being part of a sequence is a fundamental attribute of all MOQT groups, since their parent, a track, represents a temporal flow of data. This example from Victor illustrates the core problem. For anything other than a contiguous sequence of group numbers, a relay can never know if the Groups exist or not and it would always have to make a request upstream to check. It could keep track of prior subscriptions (for example [10..12], [14..16], [18..20]), however if any of those did not overlap then it would be forced to make a fetch for [12..14] and [16..18] in case groups 13 and 17 existed. That is a wasted and convoluted workflow. I tried to think of a use-case where I would want non-incremental group numbers. One example would be to use epoch time as group number. So if I'm making variable length segments, and doing segment-per-group, I might number my groups: A player wanting to seek back in time would need a timeline track to tell it the precise group number it needed. 
That timeline track could equally map group numbers to epoch time, which would allow me to number my groups incrementally: The meta-point here is Group IDs only define sequence and any other relationship between those sync points can be expressed via an application level timeline track and\/or catalog.\ntemporal flow of data doesn't imply sequential nature of groups either.\nSecond fetch gets the exact same answer (10,12,14,16,18 since it falls in the range). If there were gaps due to groups being dropped on path or a source deciding to skip a group , those should be marked for relay to see if it needs to satisfy fetch request. I feel it will help us to think on why gaps can occur and those can be explicitly signaled . Relays then in application agnostic way acts on the data in cache.\nNAME How does the subscriber know the SUBSCRIBE\/FETCH is done? We have a SUBSCRIBE_DONE message with end=20. The subscriber receives 10,12,14,16,18,20 but when does it stop waiting? Is group 19 still in transit? There's no drop notification for it since it doesn't exist.\nSubscriber application knows its catalog and deduce the group distribution and find out there will be no grpup 19.\nThe moq-transport library should tell the application that a subscription has terminated, and not the other way around. Especially since the moq-transport library can't release any state until this optional signal is provided by the application (which might not care about gaps). If the application needs to use a catalog\/timeline to detect gaps, then it can use that same catalog\/timeline to carry the timestamp that is being shoved into the . Or it can put it in the object payload.\nSo we're talking about two different fields and I'm going to use disambiguating names: : Increases by 1 each group. Provides ordering and gap detection. : Increases by an application specific amount for each group (no units). Provides ordering. There's a world where both are separate properties (and fields) of a group in the moq-transport layer. Maybe the relay could use the epoch (as a timestamp with units) to perform TTLs, or latency budgets, or fetches. But I think we should punt timestamps and metadata in general to the application for now. I'm also not sure how this would work in practice anyway without an equivalent , but are already sequential.\nWe define a track (our subscribable entity) as a \"sequence of Groups\". That is part of our object model. There is no part of a track which does not belong to a group. Therefore, a track consists of a sequence of groups. ! This diagram shows three tracks. The first two tracks are valid, one with temporally contiguous and the other with temporally non-contiguous groups. The third track is not valid, as we don't allow a publisher to produce objects at the same time in the same track which belong to different groups.\nthis looks like an example constructed incorrectly. None of my points above said overlapping groups Ids. All I am saying is group Ids are independent and don't have to increase in sequence. As you pointed out, both track 1 and track 2 are valid, which i 100% agree. I am proposing we shouldn't force an application to follow what looks like just the track 1 model, since track 2 is also a valid group distribution. MoQT should have necessary things to allow track 1 and track 2, if not, let's define it.\nI think you missed the point in my comments . No catalog is not used to define gaps. Gaps are signaled in the data plane\nExactly. I am not arguing either. 
All I am saying is, when a subscribe done is processed and reported to the application, the application ha enough information to know if there needs to be group 19 or not, from your example. MoQ transport library reports status of what it has seen and only the application knows more about, if group 19 was supposed to be even produced for a given track or not , since that info is in catalog. Please note , this is nothing to do with GAPs. Gaps happens when things are dropped for variety of reasons. Track properties\/characteristics (like number of groups, how groupIds are used, group duration) is catalog scoped and not up to for the moq transport to enforce. This is just the principle of separation of concerns across layers.\nIn the current state, since the moq-transport library cannot detect gaps, it has to immediately report any SUBSCRIBEDONE to the application. OBJECTs may still be received during this state so the library does not drop state associated with the session. The application has to tell the library when the subscription is actually done so it can remove the lookup table entry. With sequential groups and drop notifications, the moq-transport library can buffer the SUBSCRIBEDONE until it has received and flushed all OBJECTs to the application. The application only learns the subscription is done after receiving the last object. This is how transport APIs generally work and how QUIC works itself.\nAnd just to draw inspiration from QUIC itself, it does not allow gaps in the Stream ID or Stream Offset (roughly analogous to groupid and objectid). QUIC could have allowed the application to introduce gaps in either but it just complicates everything.\nNot sure how to interepret this in the context of this discussion. QUIC is transport layer protocol and MOQT is application protocol. Also StreamIds and GroupIds are semantically different.\nChair Comment: I think the WG can build the system to support either sequential or non-sequential groups. Several folks have put forward reasoning for why sequential groups provide benefit and\/or why non-sequential groups add complexity. NAME as the lead proponent for non-sequential groups, can you state the use case and application benefits. This will help us evaluate the tradeoffs.\nThe problem is how to resolve this on the cache. Right now, the cache, when it does not have any information about the state of object 15, cannot tell apart the scenario where 15 does not exist, vs the scenario where 15 is missing from the cache because the cache got filled by one client doing a request, and then another doing . It is possible to work this around by putting a \"does not exist\" kind of entry into cache on FETCH: if I fetch , and get , I can cache \" does not exist\" from my knowledge of the properties FETCH provides. Outside of being more complex, this also does not solve the entire problem, since our goal is to be able to cachefill from SUBSCRIBEs as well as FETCHes. So non-sequential group IDs inherently makes caching less efficient.\nLet's take the example of sequential groups for a similar use-case Fetch 1: 10-14, cache is filled with 10, 11,12,13,14 Fetch 2: 18-20: cache is now filled with 10, 11,12,13,14, 18.19,20 now fetch 3 arrives with 10-20, Cache doesn't know the state of 15,16,17 . So as it needs to do, it will have to ask upstream for . 
In the case where 15 was never produced for whatever reason, the publisher will need to mark it as such in the response regardless (like ), as this will help subsequent requests avoid repeating the same request. One observation on the cache fill: when the producer sends groups and objects, either as a response to subscribe or fetch, the cache fill happens regardless. Let's say that when the subscription was made, an object was marked as dropped due to congestion. An eventual fetch request can end up replacing that object with the proper object if it is still available upstream.
Are you saying that the publisher is allowed to skip group IDs, but it has to explicitly mark groups in the middle as missing? This de-facto makes them sequential.
Not exactly that. This is the gap use-case that had a requirement for players who want an expected group to be marked explicitly when the source drops it, thus allowing the player to not wait forever. There are a couple of things we are talking about here, IIUC: the publisher explicitly marking things when it decides to drop; the publisher responding to a request for non-existent things (never produced, permanently gone, or dropped); players knowing not to expect certain groupIds since they know from the catalog that the group distribution will not generate such groupIds; players knowing not to wait for missing things; cache-fill from either subscribe and\/or publish; the relay asking upstream for things that are not in its cache; clients and relays needing to be able to process groups out of order. All of these can be done or are needed regardless of how group numbers are distributed.
Want to clarify that one needs to request 15 only one time, if and only if the fetch request asks for it. Also, a client who is aware that group 15 will not be generated, per the catalog, will not ask for it either. Regardless of the group distribution, relays can be asked for things that are not in their cache. Asking the origin for something that evaluates to a non-existent thing (never produced, not produced since the origin decided to drop it, or permanently gone from the origin) needs to be updated in the cache. Since a fetch can happen for things that fall into any of these cases, a relay that doesn't have that information in the cache will need to go back and find out the answer, and it needs to do so only once.
I want to second this. NAME would allowing groups of size 0 work for you? The problem today is the ambiguity. A subscriber doesn't know if 15 is in flight or will never arrive. If the publisher explicitly tells the subscriber that 15 is empty (or dropped) then that solves things.
Yes, if there is a reason for a producer to drop entire groups, that should be explicitly signaled. \"The problem today is the ambiguity. A subscriber doesn't know if 15 is in flight or will never arrive. If the publisher explicitly tells the subscriber that 15 is empty (or dropped) then that solves things.\" We need things to be marked when they are dropped at the publisher. This is needed for reasons pointed out in URL and URL
I don't get what the problem is here. The thing the relays cache is the object. I am very concerned that any assumptions that groups are sequential or are being produced in order are not going to work with the way people want to use complex reference frames and layers in modern codecs like AV1. Of course a given application like youtube might specify, for its usage of moq, that everything is sequential.
But the moq relays don't need to know this or assume this, and keeping them flexible allows for all kinds of interesting use cases for media beyond video. I have helped write a relay, and I don't see what problem this is causing for relays.
Ah, my concern was that someone would try to have groups numbered 0, 1000, 2000, or use the group ID as a timestamp, in which case you'd have to send hundreds of gap indicators. If that is out of scope, a simple \"a group is missing\" would work.
+1. PR provides aspects of that for the cases when the publisher explicitly drops a group IIRC.
The current approach of allowing any OBJECTs to arrive on any stream makes it impossible for the receiver to know when a group ends. There are a few conditions where you need to know the end of a group: A relay being told to terminate the subscription after group N (subscription hints). A relay knowing when to close a QUIC stream (if it writes a group per stream). A decoder deciding when to start decoding\/rendering the next group (ex. GoP). Likewise there's a similar problem of knowing when there's a gap within a group. There's no way to signal that OBJECT X will never arrive, so the receiver should not wait for it.
I strongly advocate for a 1:1 mapping of group to stream. Objects within a group are in order, but there may be gaps. The stream FIN signals the end of a group. The sequence number (or FIN) can be used to determine gaps. An application that doesn't support gaps or skips a group can send STOP_SENDING; something that is not possible with an arbitrary stream mapping.
Alternatively, we could have , and control messages. I think this is a mess, and the receiver would need a corresponding in response to an unrecoverable gap caused by . I don't think we can annotate OBJECTs like NAME once proposed with a GROUP_FIN flag because they're lossy. You need some reliable mechanism to signal the end. For example, sending the OBJECT with a GROUP_FIN flag over a datagram would be illegal, as would sending a RESET_STREAM\/STOP_SENDING for any stream containing a GROUP_FIN object.
Maybe this is a duplicate of ?
The issue mentioning Last Object Previous Group is in
I think the best solution to this is a header on the last object in the group that indicates it is the last object in the group.
Individual Comment: I share Luke's concern above about OBJECT being somewhat unreliable. I think a GROUP_FIN message with stronger reliability guarantees would look similar on the wire but work better. eg: Instead of Have This works better than an implicit mapping from groups to streams. A relay might need to close a connection with an open stream, but wouldn't want to reset it and wouldn't want the stream FIN to be interpreted as a group FIN. I don't want to put GROUP_FIN on the control stream, but that would be the safest. I think the other messages Luke mentioned related to DROP are probably an orthogonal issue.
Agreed, I'm just bringing it up early because stream mapping has massive ramifications on what control messages we'll need in the future. If we can't utilize QUIC's STREAM FIN, RESET_STREAM, STOP_SENDING, MAX_STREAMS... then we'll need our own, analogous control messages.
You guys have convinced me.
As I mentioned during the interim meeting, it makes sense to put that information in the header of the next group. Something like: That information can be used by the receiver to decide when to start playing the new group, and when to wait for more information instead.
It is useful: when using \"stream per object\", to understand how many individual objects are expected for the previous group, when using \"stream per group\", if group numbers are skipped. The downside is that the relay needs to know the value of previous group and last object before sending the first object of the next group. This may or may not be a problem in practice -- relays may be receiving the information from the origin. If the relay sets an overoptimistic value of previous object, it will have to send some kind of replacement for the skipped objects, such as Which will inform the receiver that the specified object ID and the following ones will not be sent for this group.\nIndividual ReviewI removed my -1 so if others think it's fine, merge it.I think this fine. Like you, at this point I don't really care much how we spell it. Compared with my proposal in , this is conceptually way simpler (no special rules for each forwarding preference) but is less byte-efficient. I'm happy to go with whatever the group decides.This is a dramatic improvement over the status quo, I think we should merge this without bikeshedding too much about the encoding for now."} +{"_id":"q-en-moq-transport-e1dae45dbe1bbb7a2e721fabc9010b64117f1a540bf4726e4907366d1abbc465","text":"Based on\nConsider the following sequence of groups and objects, some already produced , some that will be produced in the future. The largest group ID is currently 23 and the largest object ID is 3 (it also happens to be the last object in the group). ! Our definitions for RelativePrevious and RelativeNext are shown here. Example 6.4.3.1 for 'now' shows In our example, largest group is 23, however Largest Object + 1 is 4, which does not exist. We need to adjust the definition of RelativeNext to allow for the fact that the next object may be in a different group. There may be some players who simply want the relay to send the next available Object, in this example Object 0 of group 24. There is no current way to reliably ask for this, because StartGroup mode cannot be 'none', and we don't know which group that object will belong to. It may be in the currently largest group, or it may be in the next group. A solution here would be to relax the constraint that StartGroup mode cannot be 'none'. If we then request it would instruct the relay \"Send me your next object, no matter which group it belongs to\". In 6.4.3 we have examples titled 'Now', 'Current', \"previous\" and 'Next\". 'Now', 'Current' are synonymous to most readers. We need to better titles to illustrate what is actually being requested here. Additionally, \"Previous' doesn't indicate whether we want the previous group, or previous object. Similarly for next.\nIndividual Comment: You are right that the example (and the text likely) don't show the full range of possibilities for what the next object will be. The intent was that Start = Prev\/0, Next\/0 will get you the next object, which could be any object in the current group greater than largest, or the first object in the next group. This is how my implementation handles it. I don't see a way to get only the next object in the current draft, however, because of the ambiguity you describe.\nIf I wanted to receive all objects in group N, I would issue: However the draft sneaks this restriction in, and it would actually be a protocol violation: It wasn't clear to me until chatting with Alan how you're supposed to do it: It's clarified in example 5, but I don't think this is obvious and it feels like an off-by-one. 
For example, if N was the last group in the broadcast, I can imagine some implementations incorrectly throwing an error or somehow mishandling it.\nIndividual Comment: This was discussed on the PR here: URL Given how subscribe locations work (end is exclusive), end=N+1\/0 means \"including through the end of group N\". The question is whether we should have more than one way to do things, and defaults when values are not present.\nSorry, bad choice of word there. I meant that it's not immediately obvious why the restriction exists, and what it impacts.\nIndividual Review: I like removing relative options from subscribe. I don't love the group ID + 1 spelling, but I guess I can live with it. An alternative is to add 4 code points for SUBSCRIBE to indicate the presence\/absence of start\/end (SUBSCRIBE, SUBSCRIBEFROM, SUBSCRIBETO, SUBSCRIBE_RANGE)."} +{"_id":"q-en-moq-transport-fdbe7f4c28a17ee41a610dcdc144583144854efd4197b2aeec764e66e19d78bc","text":"I took a stab at another rewrite of Subscribe message which attempts to not use special group and object values. Also add support for multiple subscription flows that have been discussed. As this is a control message, we should be Ok to be elaborate in describing the subscriber needs. This leaves 0 as a special value for the StartObject and EndObject.\nNAME updated the PR .. please let me know .. thanks\nI like that this would move us away from stealing special values from obejct id \/ group id space.\nThis looks good NAME .. thanks for the update\nAs currently defined, you can SUBSCRIBE starting from the following points: The latest group and latest object An absolute group and latest object An absolute group and absolute object You cannot SUBSCRIBE to a. The latest group with an absolute object b. The latest group from the beginning c. The next group after latest, from the beginning I'd like to remove 2) (absolute group, latest object). If this group is in the past, then likely the group is complete and no new objects will come in that group, so it's equivalent to (absolute group + 1, object 0). If this group is in the future, then it's equivalent to (absolute group, object 0). If the group is current, it's equivalent to latest\/latest. Does anyone have a use case that requires this mode? I'd also like to add b) (latest group, first object) and c) (next group, object 0). This can be done now with an extra round trip via TRACKSTATUSREQUEST, so perhaps this is an optimization we can tackle later.\nIt's currently ambiguous if or happens. When , then is not present but doesn't have a specified default. If then occurs, and if then occurs. The correct behavior is IMO since groups are intended to be sync points. +1 to removing .\nAnd I don't think we should support without a good use-case. It's too racey for ABR and too delayed for initial playback. It's also barely an optimization since the alternative is making the TRACKSTATUSREQUEST, which primarily eats into time that would have been spent waiting for the next group to start anyway.\nmeans you can only start at: b. The latest group from the beginning An absolute group and absolute object Everything else is accomplished with , forcing the subscriber to converts relative values to absolute. I think we should eventually bring back Locations so the publisher can do this conversion too without an additional RTT, but the desire was to simplify start\/end ranges.\nFWIW, I thought\/intended start_group=0 to correspond to b. Also +1 to removing case 2.\nThanks for summary. 
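To make the start-position options above concrete, here is a minimal sketch, in Go, of how a relay might resolve the surviving start modes (an absolute group\/object, the start of the latest group, or the latest object) against the largest location it has seen. The type and constant names are illustrative only, not the draft's wire encoding, and the behavior when the relay has seen nothing yet is an assumption; that empty case is exactly what is debated later in this thread.

```go
package main

import "fmt"

// FilterMode mirrors the start options discussed above; the names are illustrative.
type FilterMode int

const (
	LatestObject  FilterMode = iota // start at the largest object the relay has seen
	LatestGroup                     // start at object 0 of the largest group
	AbsoluteStart                   // start at an explicit group/object pair
)

type Location struct {
	Group, Object uint64
}

// resolveStart turns a requested filter into an absolute start location,
// given the largest location known to the relay (nil if nothing seen yet).
func resolveStart(mode FilterMode, abs Location, largest *Location) Location {
	switch mode {
	case AbsoluteStart:
		return abs
	case LatestGroup:
		if largest == nil {
			return Location{0, 0} // assumption: nothing seen yet, deliver from the first object that arrives
		}
		return Location{largest.Group, 0}
	default: // LatestObject
		if largest == nil {
			return Location{0, 0}
		}
		return Location{largest.Group, largest.Object}
	}
}

func main() {
	largest := &Location{Group: 23, Object: 3}
	fmt.Println(resolveStart(LatestGroup, Location{}, largest)) // {23 0}
	fmt.Println(resolveStart(LatestObject, Location{}, nil))    // {0 0}
}
```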
+1 to remove case 2 (abs\/latest). I also thought we had b. today but we need case b so lets update to make it clear we have b (latest\/0). I don't think we have a use case for a (latest\/abs), +1 on remove. I originally was in favor of case c (next\/0) but I have changed my mind for now. IF you need this, you just do case 1 (latest\/latest) and toss out object until you get to the next group. If there was not enough bandwidth to do this, then there is probably not enough bandwidth for the next group when it arrives. If we get a bunch of code running and can show that (latest\/latest) is not good enough and we really need (next\/0), then I will change my mind but for now, I think we can live without (next\/0). And +1 to point that TRACKSTATUSREQUEST solves most problems.\nA +1 to all these points Regarding starting at the latest group at the beginning, this seems like the default starting use case for many players. Therefore per Luke's comment -\nFixed by"} +{"_id":"q-en-moq-transport-dec6212bdb3bf4db4e27950054c7d57049b7e3ffe0f28130327bd00fc2ade6b1","text":"The other option is we can use a delta encoding to make it impossible to specify an earlier object, but I think that would make it more complex.\nGood point, I changed it to the same or later object\nAs mentioned in the sprawling , I think we should allow empty subscriptions, aka subscriptions that match no objects. It would be nice to have a way to check if a track exists without receiving any OBJECTs. The MoQ equivalent of a HTTP HEAD request. It could be useful to pre-warm the cache. An empty subscription could be the equivalent of \"I'm interested, but don't have the bandwidth to receive objects yet\". Again, similar to HTTP HEAD requests. For example, a client could issue a to accomplish two separate things: The track exists if it gets a SUBSCRIBE_OK. The encoder\/CDN starts producing the track. The client could resubscribe at any point to immediately start receiving the track.\nCan the same be accomplished with: or any other range request where start==end? Is it ok for the publisher sends SUBSCRIBEOK and SUBSCRIBEFIN, or does your use case require the publisher to hold the subscription open? Adding a new message is also an option.\nI think there's a few potential use-cases: Check if the track exists. Check if the track is active. Check if the track is active, and get notified when the track is closed. Check if the track exists and group\/object N exists. \"exists\" vs \"active\" is tricky. If you request absolute\/0, then it's totally valid for the relay to return OK so long as the objects haven't expired yet, even if the publisher sent a FIN a long time ago. But what should the relay do if absolute\/0 has expired? Should it return an error, even if the track is still actively producing objects? Requesting relative\/0 is better, although what if that object has expired too? For example, a muted audio track. IMO ? SUBSCRIBE start\/end=none ... UNSUBSCRIBE SUBSCRIBE start\/end=none ... SUBSCRIBEFIN SUBSCRIBE start\/end=absolute\/N ... SUBSCRIBEFIN\nAnd a more concrete use-case that NAME and NAME might like. Let's say I have 3 renditions: . The client starts watching a broadcast: This is a hint to the relay that the 720p track exists and it should start warming the cache. The relay is free to ignore the hint, but at the very least it will perform authentication. When the client decides that it has enough bandwidth for 720p, it will switch up at an aligned group: If the relay warmed the cache, then this switch is instantaneous. 
If the relay ignored the hint, then it takes at least an RTT to the origin to fetch the 720p track. The time it takes to switch renditions matters more when switching down in response to congestion. The buffer will deplete while the CDN fetches the rendition from the origin, which pre-warming subscriptions will avoid. And just to complete the example, you have to pre-warm the 480p SUBSCRIBE again. This could either be done with multiple subscriptions, or by waiting until the SUBSCRIBE_FIN arrives and then subscribing again. I'm not quite sure which approach is better.
I'm a little concerned about this use case, for the reasons below: Is the assumption that the catalog, which declares the existence of the track, is not trustable? How? If the relay went forward to the origin with the same start=none request then the origin should not actually send anything and therefore the cache would not fill. The only way this works is if the relay goes forward with a different request with some !none start value. This non-deterministic behavior (ask one component for one thing and it asks for something else) can lead to scaling and debugging problems. I appreciate the benefits of cache warming, for fast start especially when the edge relay is far from the origin. There are other ways to achieve this. One way can be by having a \"warming client\" make a conventional subscription request. This client may be an agent hosted on the edge relay for this purpose and its API would be offered by the CDN and is outside the scope of moqt. As a third point, empty subscriptions are also a DDoS attack vector. As a client, if I can make a hundred requests for streams that all then flow to a relay, but no content actually comes to me, it is a highly asymmetric (and therefore dangerous) DDoS tool.
I think it's useful for applications that don't use a catalog. ex. Alan's chat protocol could check if a chat room or participant exists by performing an empty subscribe. The relay doesn't have to forward ranges verbatim. Just like if a CDN receives a HEAD request, the relay MAY issue a GET request to the origin instead. I think this is actually pretty common? In the MoQ case, the relay could request instead of from the origin. It's completely up to the relay; it would only do this if there's sufficient bandwidth\/cache available. It's a little bit of business logic though; maybe the customer can choose to enable it. Just keep in mind that we don't want the relay to have access to the catalog. So you're asking applications to provide this functionality, requiring a custom service worker, a custom player, and hooks to notify the service worker on player start\/stop. And the service worker itself doesn't actually want to receive these objects, so yeah there would need to be a non-standard API to actually warm a track. Plus it gets more complicated when you throw in alternative renditions. Maybe the client has no intention of downloading AV1 tracks, or maybe it has no intention of downloading Japanese subtitles. This sort of selection criteria should be signaled to the service worker; pre-warming all tracks is a waste. I think this is the type of thing we build into the protocol instead of asking every application to make a unique implementation. Warming would be completely optional for both the publisher and subscriber. I don't quite understand. The CDN would deduplicate subscriptions so only one would ever reach the origin for a given track.
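As a rough sketch of the pre-warming idea being debated here (this is not a defined MoQT mechanism, and whether relays honor an 'empty' subscription at all is the open question), a relay could treat a subscription that matches no objects as an optional hint to open a single upstream subscription, subject to its own capacity and abuse policy. All names below are hypothetical.

```go
package main

import "fmt"

type track struct {
	upstreamActive bool // do we already have an upstream subscription for this track?
}

type relay struct {
	tracks   map[string]*track
	capacity int // crude stand-in for the relay's own resource policy
}

// onEmptySubscribe handles a subscription that requests no objects.
// The relay authenticates it like any other subscribe, but warming the
// cache is purely optional and is skipped when near capacity.
func (r *relay) onEmptySubscribe(fullTrackName string, authorized bool) {
	if !authorized {
		return // rejected; an empty subscribe still goes through normal auth
	}
	t, ok := r.tracks[fullTrackName]
	if !ok {
		t = &track{}
		r.tracks[fullTrackName] = t
	}
	if t.upstreamActive || len(r.tracks) > r.capacity {
		return // already warm, or the relay chooses not to honor the hint
	}
	t.upstreamActive = true // in a real relay: issue a normal SUBSCRIBE upstream
	fmt.Println("pre-warming", fullTrackName)
}

func main() {
	r := &relay{tracks: map[string]*track{}, capacity: 100}
	r.onEmptySubscribe("bbb/720p", true)
}
```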
If the relay already has an active subscription for the 480p track, then it wouldn't need to issue another subscribe upstream to serve empty subscriptions. But if you're talking about using random track names, a client can already SUBSCRIBE to random track names (just like random HTTP paths) to DoS the origin. Or maybe you're talking about a client potentially pre-warming a bunch of valid tracks. Like somebody wrote a bot to scrape all Twitch broadcasts with zero viewers and issued an empty subscribe to them all. The CDN is completely in charge of determining if it pre-warms empty subscribes, which it should not do if it's near capacity or detects abuse. But I don't think this is unique to empty subscribes. A client could perform a normal subscribe and then either immediately unsubscribe, close the connection, or set MAXSTREAMSUNI=0 avoid receiving any objects. These empty subscribes follow the normal authentication logic, so the mitigation is going to be the same for both.\nIndividual Comment: Let's retitle this issue to be more specific about what the ask is. Empty subscriptions are possible with draft-01. Their semantics are a bit undefined - eg: a relay or publisher has multiple options - SUBSCRIBEERROR, SUBSCRIBEOK + SUBSCRIBEFIN, SUBSCRIBEOK and no FIN, in case the subscriber wants to do something else? Do we need to mandate a particular behavior for better interoperability? Then there's a separate issue about supporting these use cases: If these are required, then there's other ways to spell that without changing Subscribe - and none of these cases have anything to do with subscribing except maybe 3? A QUERY_TRACK (eg: HEAD) message that doesn't alter the subscribe state seems like a more apt vehicle. How to pre-warm a cache seems out of scope. I think we should open a separate issue about this case, and what the relay requirements are (if any) around backpressure. The same thing can happen if a client runs out of streams and doesn't grant new credit in a timely way. Are there obligations to keep the subscription open and\/or cache at the relay for some period of time, or can a relay aggressively drop objects or reset subscriptions? A naive relay is going to get itself in big trouble very quickly.\nNow that we have TRACKSTATUSREQUEST, can we close this one? The one use case we haven't completely addressed explicitly is cache pre-warming, which is tracked by\nLGTMAfter or equals to? (in case one requests a single object)"} +{"_id":"q-en-moq-transport-8d0550bcf3aec6eccb08175e1cb8ef11cfeff19430d07745453d296328e95002","text":"I'm not totally sure what to do with the special value of 0 for StartObject and EndObject. If StartObject == 0, that's requesting the whole group. But if it's 1, that maps to a first object of zero, which is also consistent with requesting the whole group. For EndObject, the special value makes sense, because the subscriber might not know how many objects there are. There's also the error case where StartObject == 0 and EndObject != 0, or the reverse. IMO it would make more sense for EndObject to have the special value at zero, but not StartObject.\nYou're correct, this was an editorial error on my part. 
I did not intend to make StartObject have a special value."} +{"_id":"q-en-moq-transport-2c169eb7c27cfb34734c1bf3647a9ac538177ca5cda50265cabc51c76fe21ecb","text":"Shared between and\nMerging because I don't think this is controversial, and we have two PRs doing the same thing.\nThis sounds a bit like an extension that just gets forwarded when not understood by a relay? How does APP interact with caches, for example? Can a single QUIC stream have SEGMENT and APP data interleaved? Might be too vague to leave in\nThe intention was to cache\/forward it the same way as media. It's just meant for arbitrary metadata and as a replacement for the current approach of shoving metadata into the media bitstream. For example, in h.264 land it's common to put arbitrary content (non-standardized) in SEI messages. Even if we don't cache\/forward the stream, it would still be nice to let the application use unidirectional streams. This basically says prefix the stream with a 2 if it's used by the application instead of Warp. This wouldn't work with relays which is why I had the idea of reusing HEADERS to tell the relay which application streams should be forwarded\/cached and how.\nOh and it's missing because there's no encoding yet, but I was envisioning that SEGMENT and APP frames do not specify a length. They would consume the remainder of the stream so there's no way to use them in conjunction.\nIs there an MP4 box that we can just use as a part of the layer? That would provide some form of uniformity.\nThat was my thought too, although it means every container will need some way of specifying application data. MP4 atoms require a length up-front so they would be limited to messages instead of unbounded streams.\nthe RTCP APP message never turned out to be as useful in RTP as people thought it would be ... that said, fan of being able to send whatever data in the payload. If it happens to be a protobuf app message instead of media, seem like MOQ shoudl not care, it should just deliver the bytes with not much care if they are audio, vidie, haptics, or an app custom media\nWe need to define the difference of APP message and Media data in terms of what is inside and how it is interacted with caching and dropping. Otherwise there is a risk that the APP message becomes the container for “whatever does not fit in the current RFC“ which renders the standard useless."} +{"_id":"q-en-moq-transport-029b749d0ec617f8fa4f0f2c78be4aaa67c050a48af1d3f07c6f710782d7ac30","text":"This case seems easily avoidable by a subscriber, so the simplest option is to close the session.\nDraft 04 currently doesn't mention about sending any response when the SUBSCRIBE_UPDATE is successful or when it fails even though the draft hints on possible failures The above suggests that an error should be sent to inform the client that it is making invalid subscription updates.\ndocuments some other aspects of SUBSCRIBE_UPDATE that are underspecified.\nTable 4 is missing SUBSCRIBEUPDATE, so there's no code for this message type. Section 6.5 doesn't say what to do when the update is malformed, in some way. SUBSCRIBEERROR? Is a bogus SUBSCRIBE_UPDATE ignored, or does it nuke the original subscribe?\nOne nit, but LGTMLGTM"} +{"_id":"q-en-moq-transport-7056d30c99f8ed179969b824b769fc6583505eab9d2160912254ec293c773ea1","text":"Small editorial clarification on duplicate track alias for subscriptions that addresses aspects of\nAs I read the draft, is the distinguisher of a given subscription. 
is made unique for a subscription state by indicating \"If the Track Alias is already in use, the publisher MUST close the session with a Duplicate Track Alias error.\" The is mentioned as \"A session specific identifier for the track.\". It is initially set by the subscriber, where the publisher MAY reject... likely not if it wants to avoid long subscribe times. The draft does not indicate whether the track alias is a hash of track or not. If the client implementation uses a sequence ID for both subscribe ID and track alias, then it could result in duplicate subscribes that would result in the relay building state, propagating the subscribe toward the publisher to solicit a response of OK\/ERROR. In some implementations that support headless publishers, the relay would send a subscribe OK immediately. The edge relay that has the connection to the publisher performs state compression of subscribes, but that's only at the level per the draft. In this case, the track alias is different, resulting in a new subscribe (duplicate when looking at the ) being created to the publisher. The publisher receives the subscribe with a new and and starts to publish data objects using the and that match internally the track . If one or more clients do this repeatedly, it would result in all relays and the actual publisher itself sending duplicate data that matches . The diagram below illustrates the above. I'm sure there are many reasonable fixes for this... below are a few. This would change the verbiage of duplicate track detection to perform the duplication check on and instead of only on the track alias. The state machine for subscribe SHOULD change to move from TO . This simplifies the state machine by reducing the complexity in: detection and elimination of duplicate subscribes, and eliminating the challenges of having to deal with trial and error responses. A hash of namespace and name would allow for the duplicate track detection to work per the draft, but... The track alias has a max value of 62 bits. Collisions are the largest pitfall for using a hash of two values. A registry\/directory service could act as the gatekeeper of track aliases. While this would work, it introduces another microservice for a simple pub\/sub, which isn't great. It will also slow the subscribe process down. Many other issues will likely arise with this approach.
I don't understand the duplicate subscribes issue here? Can you please elaborate.
If a subscriber sends 2 subscriptions for the same track to a publisher (ignoring relays for a bit here and also ignoring any form of IDs), what should be the expected behavior of the publisher? There are a few options: the publisher sends 2 copies of the objects (wherever they overlap); the publisher rejects the second subscribe since it is in the same connection; the publisher sends only 1 copy of the overlapping objects. What is in the draft today allows for 1.
Clarification on relay vs publisher needs to be distinct here. When discussing this topic, we need to be distinct on whether the request is for a range of objects (aka a fetch operation, where there is a start and finish) or a subscribe to a live stream where there is no known end. Some of the challenges here are introduced with the MOQT decision to overload to support both the fetch operation and subscribing to a live stream, possibly starting at X. In the case of the fetch operation, subscribes do not get forwarded to the original publisher or to other relays; they are instead immediately fulfilled by the first hop relay, which may pull the data from some distributed\/localized cache.
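A small sketch of the de-duplication point made in this thread, with assumed data structures rather than normative relay behavior: if the relay keys its subscription state on the full track name (namespace plus name) instead of on the track alias, repeated subscribes collapse onto one upstream subscription no matter how many subscribe IDs or track aliases a client uses.

```go
package main

import "fmt"

type downstream struct {
	subscribeID uint64
	trackAlias  uint64
}

type relayTrack struct {
	upstream    bool         // one upstream subscription per full track name
	subscribers []downstream // all downstream subscribes fan out from it
}

type relay struct {
	byFullName map[string]*relayTrack // key: namespace + "/" + name
}

// subscribe de-duplicates on the full track name, so however many
// subscribe IDs or track aliases a client uses, only one subscription
// is propagated toward the publisher.
func (r *relay) subscribe(namespace, name string, d downstream) {
	key := namespace + "/" + name
	t, ok := r.byFullName[key]
	if !ok {
		t = &relayTrack{}
		r.byFullName[key] = t
	}
	t.subscribers = append(t.subscribers, d)
	if !t.upstream {
		t.upstream = true // in a real relay: send SUBSCRIBE upstream once
		fmt.Println("subscribing upstream for", key)
	}
}

func main() {
	r := &relay{byFullName: map[string]*relayTrack{}}
	r.subscribe("conf", "alice/video", downstream{subscribeID: 1, trackAlias: 7})
	r.subscribe("conf", "alice/video", downstream{subscribeID: 2, trackAlias: 8}) // no second upstream subscribe
}
```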
It is totally acceptable that fetch-based subscribes (aka filter type 0x4) can be in duplicate... although I suspect it is a mistake\/error on the client side to request overlapping\/exactly the same data at the same time. Regardless, it's not a big deal for the relay to satisfy this repeated request over the same data at the exact same time, but there needs to be some level of rate limits established to mitigate DoS. While MOQT doesn't need to define the limits, it does need to support error responses indicating that the limit has been exceeded. In the case of a live stream, IMO it's 100% a mistake for any client to ask for the same live stream twice via the same QUIC connection. While talking with NAME , he mentioned a possible use-case where there are two or more displays that you want to display the same content to. If the display has IP connectivity, it should be reasonable that the display would have connected and subscribed to the content, hence moot. In the case where the display does not have IP connectivity, a HUB device could be used that does have IP to act as the MOQT client to receive the data content. IMO, it does not make sense that anyone would want duplicate live feeds of the same data via the same QUIC connection. Instead, if the HUB device has multiple displays that are not IP connected and cannot make MOQT connections themselves, they likely have some other method to mirror content, such as an HDMI splitter. I'm interested in hearing the use-cases that require MOQT to directly support, and require relays to support, duplication of live streams to the same QUIC connection. Per discussion with NAME , the relay peering would never duplicate the live feeds. It would de-dup all these requests using the track , regardless of how many track aliases were used by a client subscribe. The original publisher would realize this aggregation\/de-duplication. In other words, each subscribe from the same QUIC connection or from multiple QUIC connections (in theory different clients) would be grouped together by the relay using the . Each of the and values together represent a feed. In this sense, only or is needed for this process, not both when considering live feeds. Again, fetch is different and is distinguished not only by , , but also by start and end ranges. I'm interested in hearing the use-cases for why MOQT requires the relay to support duplicate fetch requests for the exact same ranges at the same time when the other hasn't finished yet. Seems like it would make sense that if the same QUIC connection (aka client endpoint) requests range X - Y and, while X - Y is still being sent, requests it again, it would cancel the previous and start over. Add some text around how endpoint to relay can establish duplicate live streams by using different track aliases for the same namespace\/name. The relay or original publisher MAY implement rate limits on the number of active subscribes for the same namespace+name. The relay and\/or original publisher may implement different rate limits based on FILTER TYPE. For example, filter type 04 may allow a maximum of 10 active subscribes for the same namespace+name, while filter types less than 4 that involve live stream feeds are restricted to 1.
The status quo of having both needs to be fixed at some point, see
Does this mean we close the complete QUIC session?
Can we just reject the subscribe?"} +{"_id":"q-en-moq-transport-4e677e3160bdef647000a1991c582945bf4adac34226ca0058558477cc9d249e","text":"Pulls in some text from and attempts to convey the conclusions of the Seattle interim. Adds Subscribe Parameters to SUBSCRIBEOK to allow this parameter to be optional in SUBSCRIBEOK.\nLGTM"} +{"_id":"q-en-moq-transport-f3ebc87f1d57a3d4e38f64b1920e71d35797daea331b71af6183af72d4f8516e","text":"This had consensus in the past, but a PR wasn't written. Particularly now that we have Object Status, which is optional for all other mappings, it makes sense to add the length and make things consistent.\nIn our current encoding, you can stream objects without knowing their length in advance if your forwarding preference is Object or Datagram, but not if it's Track or Stream (since those require explicit lengths in the front). I feel like those two aspects should not be tied: we should either support or not support streaming for all preferences.\nIndividual Comment: In order to stream multiple objects with unknown lengths in a multi-object stream, you'd need additional framing. Alternately, we could create single-object variants of the TRACK and GROUP stream header.\nI agree it's weird to have a mix of unframed and framed messages. While I prefer unframed streams... I think we should state that MoQ objects are always framed, represent a single point in time, and have a known length. It's an optimization to withhold the length in the datagram and dedicated stream case but maybe we should include it.\nIn meeting we decided add a length to everything. We can add extension later to have no length if we need ( Thanks to Mike E for suggestion )\nLGTM"} +{"_id":"q-en-moq-transport-7c9d5feeb889623ea363bfc0288636b115c79a0fe985e03ddec37313514f0d80","text":"This is a bit different from HTTP, but having at least one SUBSCRIBE_OK succeed ensures the track exists in some sense, as well as allowing a relay to populate the Group Order field. This is intended as a clarification, but it adds normative language.\nLGTM"} +{"_id":"q-en-moq-transport-5eda609fb0406c499dd9794f69c7f5490551891cf36cc7aaf3dc2a24bcd9ce3b","text":"This came up during interop - A relay received a subscribe with start={g=RelativeNext\/0 o=Absolute\/0}. Since the relay had not received any objects, it defaulted largest to 0 and interpreted the subscribe as starting at {g=1, o=0}. When {g=0,o=0} arrived, it was not forwarded. But there's a reasonable interpretation that it should have forwarded treated this as the beginning of the next subscription. This is partially related to and maybe\nIf the subscriber wanted to get the largest group, it would have made request start = {g=RelativePrevious\/0, o=None} .. This would have the interpretation that subscribe starts at {g=0, o=0} when {g=0, o=0} arrives. This would also work when {g=0, o=22} arrives. If my understanding of locations is correct, then the original request of {g=RelativeNext\/0 o=Absolute\/0} is incorrect for the answer being expected ..\nI think it's reasonable to return 0,0 The subscriber uses RelativeNext if it doesn't want old data. If no data has been generated, then that requirement is still met.\nBut that's ambiguous. Since Group and Object IDs are zero-based, that response also describes the situation in which the relay has received the first object from the first group, which is clearly a different state. Can we solve this problem by specifying that both GroupID and Object ID are >= 1 of they exist, and 0 if they do not? 
This can be used to describe the situation in which the subscription is active but the publisher has not yet produced any objects.\nOh to clarify, the SUBSCRIBE_OK should return None\/None as the latest group\/object. The first OBJECT delivered would be 0\/0. And yeah we could switch object\/group to be 1-indexed instead of 0-indexed. None=0\nIndividual Comment: Agreed that in this particular case, the client was not really asking for what they wanted. The \"Largest group\/object\" language comes from the draft, though, and I think it's ambiguous what it means to a relay that hasn't seen any objects. My interpretation of the spirit of the draft is that 0\/0 would be forwarded in this case if the client asked to start Next\/0 (eg: there's a bug in my relay), but if so, we should make it clear in the draft how it's supposed to work.\nIs this\nI think I can see the intent with what's there, but the draft could use some editorial text clarifying what \"next\/0\" means when subscribe OK was \"no content available\".\nI thought we decided on call that we were not going to steal bits from the group id \/ object id to indicate meta data what is going on. After that call I was expecting to see some explicit flag for \"no content available\" if that is what needed to be sent.\nAfter merging , Subscribe OK has the flag you are referring to: https:\/\/moq-URL I left this issue open because I think there's still some ambiguity that can be resolved with editorial text.\nmakes sense - thanks\nNAME can you write some editorial text to clarify this if you have something in mind?\nI tried to write some text, but I got confused. Ignoring object locations for a second and just focusing on groups, I think this is what we want, but I'm unsure how to define \"current\" when there are no objects: I think the simplest way to resolve this is to treat the next seen group ID as the \"next\" one and anything attempting to match only \"current\" matches either a) nothing or b) . I recognize that b) may be controversial in that we don't require unitary increase in group IDs. Practically I think subscribers should usually ask for [RelativePrev\/0, None] (give me the current group if you know it or the next seen group if you don't). If we assume that the publisher only says \"ContentExists: false\" when it hasn't published anything, and relays only repeat this when they get it from publishers, then it's somewhat easier to assume that anything before \"next\" simply doesn't exist. I'm unsure if this covers publisher restart cases well enough though. There's also an asymmetry in this case in that there's no message to communicate how the relative requests were interpreted, as SUBSCRIBE_OK now does when the largest object is known. Further, the algorithm for how to interpret relative object locations in groups with an unknown \"largest object\" may be even murkier, but let's agree what the groups mean and then move on to objects.\nNow that is merged I think I can make a PR to address the concern. I think it would be something like:\nLGTM"} +{"_id":"q-en-moq-transport-66576d3dd174f20d24ce571e04b7e5ea14079f67fb9b9a6356582a045f73a08d","text":"Some diffs are due to me reflowing text to fit the line limits, so please ignore those. Once we have this and a limit on announce IDs, it seems likely that we'll be able to get rid of 'Role', but for now it can stay.\nThanks NAME . this looks like a good improvement. 
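Referring back to the ContentExists discussion above, a minimal sketch of how a publisher or relay might fill the SUBSCRIBE_OK fields once an explicit flag exists. The field names are illustrative rather than the exact wire layout, and treating the largest location as meaningful only when content exists is an assumption consistent with the discussion, not quoted draft text.

```go
package main

import "fmt"

type Location struct {
	Group, Object uint64
}

// SubscribeOK carries an explicit flag instead of overloading 0/0,
// so "nothing published yet" is distinguishable from "largest is 0/0".
type SubscribeOK struct {
	ContentExists bool
	Largest       Location // only meaningful when ContentExists is true
}

func makeSubscribeOK(largest *Location) SubscribeOK {
	if largest == nil {
		return SubscribeOK{ContentExists: false}
	}
	return SubscribeOK{ContentExists: true, Largest: *largest}
}

func main() {
	fmt.Printf("%+v\n", makeSubscribeOK(nil))                            // nothing published yet
	fmt.Printf("%+v\n", makeSubscribeOK(&Location{Group: 0, Object: 0})) // first object already seen
}
```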
I have few clarification questions and bit of extra work for you to add some informative text, if my comment seems ok to you.\nUnfortunately, that makes it much more difficult for the server to tell abusive clients from valid ones, as we discovered with the recent HTTP\/2 rapid reset attacks. In the end we did develop heuristics, but they're imperfect and in the process we broke a bunch of applications in an effort to defend against DDoS attacks. In contrast, QUIC had a system like this and experienced no issues and we didn't need to develop any new heuristics.\nCurrently, there's no way to limit the number of simultaneous subscriptions. QUIC stream limits cannot effectively be used to limit the number of subscriptions, so I'd suggest MoQ have a limit. The simplest idea would be to have a limit specified as a parameter in the server's SETUP message. As a related question, even with limits, what happens if a client subscribes to a number of streams that it doesn't have the bandwidth to support? ie: 10 4k video streams and they're on a slow link? Possibly the subscriptions could fail with an error code indicating that the server didn't have enough bandwidth to request them and\/or serve them to the client?\nrofl I just came here to post the same issue. See: URL I think we follow QUIC and have a MAXSUBSCRIBES frame. The initial value can be set in the SETUP message. The subscribe is considered active until the publisher sends or (TODO). I think we're okay with the subscriber\/publisher using arbitrary subscription IDs if there's a single control channel. It would be easier if we required incremental subscribe IDs and you couldn't reuse them though.\nAlso do we need ? That seems like a potential attack vector too, since it involves inserting into a potentially CDN-wide routing table.\nA few months ago, I would have said we should go with the HTTP\/2 style design of just having a limit on number of streams, but given the recent issues, I'd be inclined towards the QUIC style of specifying the max number. But I think that works better if we also have a subscription ID that's monotonically increasing for subscriptions, ie: Announces also likely need some limits, but I'm less clear on how that might work.\nI propose a monotonically increasing in the message. You can't reuse Subscribe IDs. We add a message containing the maximum allowed ID and possibly a SETUP parameter to set the initial value. The corresponding messages use that ID instead of echoing the full track name. This removes ambiguity and overhead. We would still have Track ID for compressing the full track name (maybe renamed?). You could have multiple SUBSCRIBE reference the same Track ID but contain different Subscribe IDs. For example: Same for .\nAdding a Subscribe ID used as you're describing SGTM and as you said remove any ambiguity about what subscription is being referenced. Unless we are absolutely never going to have multiple subscriptions, I believe we'll need a Subscribe ID.\nThis is related to\nmay be better to name it as transaction_id . 
since it is id tied to a given subscribe transaction ..\nAs a follow-up to adding a Subscribe ID, we'll add a SETUP field or param to indicate the initial value and a MAXSUBSCRIBEID frame to increase the limit, similar to QUIC flow control.\nThe consensus at the session today was to go with a QUIC style approach, so I wrote There was interest in a similar approach for announces, but announces don't yet have an ID, so that will have to wait.\nWill assign to Mike English\nThis seems like a really complicated way to limit the subscribes. I understand how it mirror streams but that had pretty different constrains. Why don't we just have an subscribe error of \"too many subscriptions\"? Discussed open issues in the 7 August 2024 virtual, not recorded here. Remaining issue (just do a SUBSCRIBE_ERROR?) went to the list and chairs declared rough consensus, although at least 2 participants would prefer not to have this."} +{"_id":"q-en-moq-transport-abf63e17f8bcea59182a258889b2d56607e370366270ef48067d6a56a6a7899c","text":"There are some obsolete references to Object Send Order, and part of the STREAMHEADERTRACK description seems to have slipped below the header for the following section.\nThanks for fixing these .. LGTM"} +{"_id":"q-en-moq-transport-294882286eea4bacbfee59824e4285a73befecc9a7b8c88c85e1d7a38ad5bcce","text":"I have foregone any premature optimization of the encodings here. Asking NAME to take a first look.\nHow does one close a peep? Currently, in our implementation, we rely in combination of delivery preference and object status for that.\nI think a Stream FIN would be sufficient.\nHow am I supposed to know when to send a FIN?\nThe publisher sends a FIN when the Peep is done. The relay knows a Peep is complete when it receives the FIN from upstream.\nIs there ever a case where a publisher needs to close a stream and ensure all data on it is delivered, but does not know that the peep is done, or knows it is not done. For example - in response to an unsubscribe or graceful closure.\nHow is this represented in the cache? Are FINs guaranteed to be always at the same position? One scenario where the sender might FIN a peep earlier is if the next object in the peep is past subscription end, though I'm not sure how we'd hit this in practice.\nGood point, I've added an end of Peep object.\nOK, I believe I've fully addressed all comments except for those I've left unresolved. NAME NAME If you are satisfied with my response, please resolve the comment.\nNo normative language, summarize what streams are.\nNAME wants to make a proposal, wait for that\nFor bikesheading on the name, I make these two proposals: \"Stream\" - this correlates directly with the mapping to QUIC. Simple to comprehend and explain. \"Block\" - this denotes contiguity and avoids overloaded use of the term \"stream\".\nI have made all changes I know to do. I am awaiting NAME making a concrete proposal on the question of whether or not two Peeps can have the same priority, and the wire image implications downstream from that.\nWhy have Peep ID be a tiebreaker if we also have priority? Just let the streams have the same priority!\nBlock and Stream work for me. 
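To illustrate the stream-mapping conclusion being converged on here, a minimal sketch of a publisher writing one subgroup (the entity previously called a Peep) per unidirectional stream and ending it with a clean FIN. The Stream interface and the header layout below are stand-ins, not the draft's exact encoding.

```go
package main

import (
	"bytes"
	"encoding/binary"
)

// Stream is a stand-in for a QUIC/WebTransport send stream:
// Close() delivers a FIN after all written data has been sent.
type Stream interface {
	Write(p []byte) (int, error)
	Close() error
}

// writeSubgroup sends every object of one subgroup on its own stream.
// The clean FIN (Close) is what tells the receiver the subgroup is
// complete, so no separate "end of subgroup" object is required.
func writeSubgroup(s Stream, group, subgroup uint64, objects [][]byte) error {
	hdr := binary.AppendUvarint(nil, group)
	hdr = binary.AppendUvarint(hdr, subgroup)
	if _, err := s.Write(hdr); err != nil {
		return err
	}
	for id, payload := range objects {
		obj := binary.AppendUvarint(nil, uint64(id))
		obj = binary.AppendUvarint(obj, uint64(len(payload)))
		obj = append(obj, payload...)
		if _, err := s.Write(obj); err != nil {
			return err
		}
	}
	return s.Close() // FIN: the subgroup ends here
}

// fakeStream lets the sketch run without a real transport.
type fakeStream struct{ bytes.Buffer }

func (f *fakeStream) Close() error { return nil }

func main() {
	_ = writeSubgroup(&fakeStream{}, 23, 0, [][]byte{[]byte("frame0"), []byte("frame1")})
}
```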
I was going to say that I prefer \"QUIC Stream\", but given this might be over WebTransport, technically it'd be a WebTransport stream, so defining the term \"Stream\" at the top of the document should make that clear.\nOne more name suggestion : SubGroup\nMerging with the understanding that we'll continue to discuss how prioritization should work.\nIndividual Comments"} +{"_id":"q-en-moq-transport-8078683a554eb03499ef100e66e80dcfd2b309166e426bfe3dc78580dee4ddaf","text":"I removed this sentence: Relays MUST NOT depend on OBJECT payload content for making forwarding decisions and MUST only depend on the fields, such as priority order and other metadata properties in the OBJECT message header. We already have text explaining that the object payload content can be encrypted (section 2.1). I think that makes it clear that a relay can't do anything with the object payload unless it has the encryption keys or a priori knowledge they are cleartext. The sentence itself was overly restrictive (relays could use other inputs, for example out of band information from configuration or business relationships). The preceding sentence explains how to find subscribers to this track, the subsequent one says \"MUST forward\", and we have an entire section explaining what to do with priority order now.\nLine 572. “[Relays] MUST only depend on the fields [for making forwarding decisions]”. I am not sure to understand why we would like to restrict here the behavior of Relays to forward the Objects to subscribers. We can imagine edge processing for the implementation of forwarding decisions based on third-party validation (possibly based on payload). Furthermore, this restriction is not necessary for the norm to be implemented.\nThe intent of the language here was that relays should only depend on being able to access the headers and not the body of the objects, as the body may be encrypted. However someone may want to build a relay that uses 3rd party logic , or that inspects the payload, in order to make forwarding decisions. I agree that this should not be prohibited by the protocol. We should relax the language to indicate that payloads MAY be encrypted and that relays MUST implement any prioritization instructions carried in the headers. The spec should make no normative statements about how relays make forwarding decisions.\nI agree that this is too restrictive and yet unenforceable. The intent is to say something like \"a router MUST only forward packets based on the IP header\". But what about the UDP\/TCP header? What about the TLS header? What about the HTTP header? It's perfectly valid to use those iff a router supports those layers (aka a load balancer). I mentioned in the other issue, but we should treat moq-transport as just a layer. There may be other layers stacked on top and which should not be restricted. It's L3\/L4\/L7 all over again.\nI am always confused when we use the term Relay to mean more than one thing. My thinking so far is, these are the middleboxes who have no role with payload of the MOQ objects. Yes, it cannot be enforced and a implementation can do whatever it wants. However, when we define security properties, we need to be clear on Relay's role. 
OTOH middleboxes like SFUs, transcoders are more flexible to access payload and may be payload headers based on the architecture and they fall into different category when defining the security of the system.\nThis is marked not-transport, but the issue was filed against text in moqt, so maybe that's not correct."} +{"_id":"q-en-moq-transport-4541720a82ebe196e482eb770b5cfb49a9539f60289f0b939f9b7782552f2e09","text":"This language exists in current draft and refers to an old scheme in which MOQT registered the streaming formats and had a scheme in which the leading varint of the catalog defined the format. This has been superseded by an externalized catalog format and this TODO should be removed. The IANA table for MOQ streaming formats should be managed instead by the Catalog draft, which references those entries."} +{"_id":"q-en-moq-transport-63c8549bcc208df8a495cc9d5c499814d32785b6bd7920e937394582a8fa6870","text":"As I observed in this comment: URL, the missing group status seems equivalent to \"End of Group\" with ObjectId of 0. Additionally, there is no obvious value for ObjectId in Objects with a status of \"Group Missing\".\nmakes sense to me"} +{"_id":"q-en-moq-transport-6a243bc164cc3429faecf925f2d89cea32cf3dd9f2134b70a3348b04ec3524d6","text":"It's meant to be sent by the subscriber, indicating that the announce is no longer accepted. The fields should mirror ANNOUNCE_ERROR so the subscriber can indicate stuff like authentication expired, another announce was received, etc.\nLG, but I noticed: \"If a publisher receives new subscriptions for that namespace after receiving an ANNOUNCECANCEL, it SHOULD close the session as a 'Protocol Violation'.\" I believe that's racy, because the SUBSCRIBE and ANNOUNCECANCEL can be inflight at the same time? Also, shouldn't it be \"after sending an ANNOUNCE_CANCEL\", or am I misunderstanding the text?lgtm"} +{"_id":"q-en-moq-transport-39205102f7f479affcb0e4d85946dc235473ceb439e7071b0b6777c3c47c29ad","text":"Many thanks on review and any help of fixing all the small stuff (point me too it, or PR on the PR, anything) much appreciated.\nNAME - thanks for review. I think I fixed all the stuff you raised. Can you take another look NAME - handing the baton to you on this one. Thanks you\nAt interim, to support make before break use-cases, we need to clarify relay behavior when a new announce arrives for existing published.\nThis came up during discussion of . The current draft is ambiguous whether multiple distinct publishers (not reconnect) can announce the same namespace to a relay. If that is allowed, should SUBSCRIBE messages be sent to all publishers that have announced that namespace? What happens if multiple SUBSCRIBE_OKs come back? Do we need normative text describing how a relay handles all these situations, or is it implementation specific? Some possible error conditions: Two publishers publisher the same track\/group\/object with different content One publisher skips an object (perhaps due to a delivery timeout) but another publisher sends it ?\nI initially added a recommended session migration algorithm in . However it's too prescriptive and non-normative and I removed it. However I do think we should talk about what it takes to gracefully migrate a MoqTransport session, if it's non-obvious and should be in the draft (maybe an appendix), and if we need more mechanisms in the protocol. This recommended algorithm requires servers to not error on a duplicate ANNOUNCE. 
It also results in an RTT where duplicate OBJECTs will arrive on both sessions, which is not necessarily ideal. Here's the original text before I removed it:\nI think it'd be useful to have a PR to look at if you're willing to write one up?"} +{"_id":"q-en-moq-transport-88be07f8c1672276259bb9938f47e5db3299cfa849ea37691156df306570cba0","text":"Sorry for the late response but I dont think this text is quite correct. If you're running MoQ over WebTransport, you're likely to be following the instructions in URL, which states My reading of the current text means that endpoints are likely to blend up treating things as a violation when there is no violation.\nSection 3.3 says: \"The control stream MUST NOT be abruptly closed at the underlying transport layer. Doing so results in the session being closed as a 'Protocol Violation'.\" I know we have multiple transports, but it would be good to clarify what an abrupt close is. Presumably any sort of RESET_STREAM, but what happens if there's a FIN? Is this a \"clean close\" of the session? Does it imply or force a reset of data streams?\nLGTM"} +{"_id":"q-en-moq-transport-cb13576820075b2b9870c707d59f4743113fd508735a41d6975479d390f70235","text":"My understanding is that MoQ is always hop-by-hop encrypted by virtue of working over QUIC or WebTransport, and therefore a \"relay\" as described in the draft can only inspect Track\/Group\/Object metadata if it terminates QUIC. The document should clarify the requirements.\nI agree with Alan. As a QUIC enthusiast, I guess the current draft implicitly suggests hop-by-hop QUIC termination at relay, e.g., which is necessary to enable cache at relay while having MOQT Message Type and OBJECT metadata as QUIC payload. Yet it is definitely more clarified for readers to explicitly see that information in an early section (e.g., sec 1.1.4).\nGiven MoQ is over either QUIC or WebTransport, I believe it's impractical to restrict it to only QUIC based transports.\nThe context of the question is whether one can build an MoQ relay that can view object properties without terminating the TLS session underpinning MoQ or WebTransport (eg an untrusted middlebox).\nLG, but given Webtransport is also a substrate MoQ runs over, I think we want to avoid using QUIC connection?LGTM"} +{"_id":"q-en-moq-transport-800f9efc396f169f9c4845c6953d5ecb76a54d652c5b4a5e82065a8d48f0d2b5","text":"This is a proposal to illustrate the changes needed to address one aspect of URL . In creating this, I explicitly didn't state copy the normative action on dealing with a 0 or 1 value. See URL for that.\n9\/25 interim: No objections to this direction, but make it MUST close if not 0 or 1. NAME can merge when he's satisfied the discussion has completed.\nI updated this to use MUST.\nhttps:\/\/moq-URL states This seems at odds with the QUIC design ethos of being very strict about receiving things that are obviously broken. This matter is wrapped up in URL and the proposed change (URL), so I wanted to track the question separately. I suspect we might want a bigger pass over the recomendations about when to detect a protocol error and how to handle it but I don't want to miss this specific issue in case it was done for a good reason.\nI don't think this is editorial, since it affects the expectations on what a receiver is expected to do\nTo clarify, the proposal is to change the SHOULD terminate to a MUST terminate? I expect that's what everyone will do anyway, so this SG\nYes that would be my proposal here. 
And in addition, a separate review of other fields for consistency.\nDiscussed in today's interim. MUST rolled into URL and merged. So closing issue as done.\nThis change seems small and sensible, especially since every time I see (f) I think it's a float, not a flag.Don't feel strongly one way or the other but this looks good to me."} +{"_id":"q-en-moq-transport-db949a39c55f9e8eb02b284424460b89e35026f345e7c378cb76861d3ded1073","text":"On the call today, people favored renaming to ANNOUNCES over ANNOUNCEMENTS, so I'll do that.\nI call it . Because the subscriber is interested in any messages that match the prefix.\nI agree with this. It is counterintuitive that SUBSCRIBE_NAMESPACE results in an ANNOUNCE request being sent to RELAY.\nSpeaking as an individual that came up with the name: It's terrible and anything the group comes up with is better. #sorry I'm fine with ANNOUNCEINTEREST. My suggestion is SUBSCRIBEANNOUNCEMENTS.\nI like both ANNOUNCEINTEREST and SUBSCRIBEANNOUNCEMENTS. They are both long by the time we add OK and error to them. I wonder about just calling it INTEREST. I don't feel strongly about any of these names but I do like the idea of improving the name.\n+1 This is clearer than ANNOUNCE-INTEREST, which is an ambiguous term for the uninitiated."} +{"_id":"q-en-moq-transport-38d3d90228a693a36e9138d5a21d1b98ba5bb86f8c2c5078bda033c606aeca07","text":"Changes SUBSCRIBE so you can't go back farther than the current Group with any of the filter modes. If a SUBSCRIBE tries to go back farther, it is closed with a SUBSCRIBE_OK. Alternatively, instead of failing the subscription, we could specify that the SUBSCRIBE will be truncated and start at the beginning of the current group.\nLGTM.\nLGTM .. I had couple of questions on error condition to be MUST instead of SHOULD."} +{"_id":"q-en-moq-transport-8b21e6a7272f9c4fb0ffbd688d9e66f3a6e88bcce8ca28bcbd41afcc85f2a0b9","text":"This is a dependency of Martin's Stream FIN\/RESET PR,\nBesides the fact that this simplifies the FIN logic, I really like that this frees the subscriber from either finding out the last object or sending object ID 2^62-1."} +{"_id":"q-en-moq-transport-98b34111a7f1c25c1bc15d78942360d6693950a565baae2a4274cbdaca6f4077","text":"We don't discuss how to use it anyway. This was suggested in\nDone\nThere are references to EndOfSubgroup in the FIN PR, so one of the two PRs is going to have to delete them."} +{"_id":"q-en-moq-transport-8ee90657646ce01866a75aae2bea9667830bf83dae7f8575c2644e98830de212","text":"The role somewhat limited what actions were valid, but it's not clear that was important. Also, some things can be limited by MAXSUBSCRIBEID now.\nI understand that a ClientSetup is sent that includes the role of if the client only wants to send objects, the role of if it only wants to receive objects, and the role of if it wants to both send and receive objects. And I believe that basically only the following relationships can be established: Client: vs Server: Client: vs Server: Client: vs Server: My questions are as follows: If a ClientSetup with or role parameters were to arrive at an ingestion-only server, how would the server handle it? Is it required that the server returns that the client does not expect and the client understands the server's intention? If a client that has already sent or receives a ServerSetup containing , is it acceptable to use that connection as is? There may be some security risks in this case. 
Thank you!\nAs a side note, if the conclusion in will be that Role Parameter is unnecessary or no consideration are required as endpoint behavior, then this issue would be unnecessary.\nNot clear what the use case is. Seems like a foot gun in that it can be set wrong for how the connection is used.\nI don't care if we leave it or keep it. Just adding this as I got questions about why have it.\nHaving the client voluntarily declare itself to abide by a certain role is not preventing the relay form having to do any work. A malicious client will declare one role but then act differently. Any relay open to the public has to protect itself against unauthorized ANNOUNCE and SUBSCRIBE control messages, irrespective of what role was declared. Therefore the upfront role declaration is superfluous.\nI tend to agree it's not doing anything helpful here, and sent out to remove it.\nOnly suggestion to adjust other setup parameter keys now that you have removed 0x00. Given we don't have a strong need for this, glad to see it go for now and we can add back later if we find a strong need for it."} +{"_id":"q-en-moq-transport-4b758a2ba78e8b4980a90805c98d6e742128d24af72d6ad3384435dc3d3f500f","text":"As I was reading through the draft, I found some slight typos in the publisher interactions segment. This fixes them!"} +{"_id":"q-en-moq-transport-45bf784460a01c1f213cee0c6bcd5012a9b6b89c6ba460d5a4c1118f942bf13c","text":"As of now, we use Subscribe ID for operations on a subscriber, and Track Alias for operations on objects. In , there was an effort to remove referencing subscribe id within object header types, but one slipped through. This fixes that."} +{"_id":"q-en-moq-transport-da45aea98f4b76b6d8a9cd053b2547e70ba960eff5f4ab5148ca263ae584ea7f","text":"This is an attempt at clarifying MAXSUBSCRIBEID, feel free to make suggestions.\nlgtm\nIn 6.16 of draft 6, the draft says that \"receipt of a MAXSUBSCRIBEID message with an equal or smaller Subscribe ID value is a 'Protocol Violation'.\" ; however, later the section mentions that \"If a Subscribe ID equal or larger than this is received in any message, including SUBSCRIBE, the publisher MUST close the session with an error of 'Too Many Subscribes'.\" This is can be ambiguous because technically MAXSUBSCRIBEID is any message. Should all sessions be closed if MAXSUBSCRIBEID message contains an invalid Subscribe ID value? Does any message exclude MAXSUBSCRIBEID if an invalid Subscribe ID value is provided? Clarification on this would be helpful. Thank you!\nlgtmAgree with Alan's comment. Otherwise LGTM"} +{"_id":"q-en-moq-transport-18a4929d6b0bf3bc8ee99a30c01bdb044667f43d70b7b1049326d424a55e4f03","text":"(recently merged) uses \"Peep\" where the rest of the draft now uses \"SubGroup\" Personally, I wouldn't mind going back to \"Peep\" since we seem to keep using that term conversationally, but either way, the draft should use one name or the other, not both.\nChairs do you want to reopen this ?\nChair Comment: The working group has aligned on subgroup as the nomenclature in the document. Of course mistakes happen but let's just correct them and move on.\nOops, that was an error.\nLGTM"} +{"_id":"q-en-moq-transport-9fa94fbcbcd58e376b04429a86c77776a877047db7cbdd0e2ad9c672324878a5","text":"How big can a group be? I know we (probably) won't be able to set a fixed limit, but I feel like we need to answer this in order to know what kind of design assumptions to make. Is a single group expected to be small enough to fit in the cache? 
Will the protocol work if a media track is just a single group that is being appended to for hours if not days? I was thinking about stream mapping and the resource-use-at-relays problem earlier today, and realized we might not currently have a conclusive answer for those.\nAs it stands today, the group ID is merely an annotation used by SUBSCRIBE hints. It's a sync point only by convention; in actuality the relay has no idea if objects within a group depend on each other. The relay is expected to cache\/serve\/evict objects independently so the group size is irrelevant. For example, a valid use-case is a single audio group with an object per frame. Each OBJECT would have an expires field of 30s, effectively dropping the head of the group as time progresses. The total number of objects per group is unbounded but the cache size is bounded. I strongly disagree with this. I want groups to actually be sync points, in which a relay knows that it should not evict OBJECT N without also evicting OBJECT N+1. Then the maximum size of the group actually matters.\nIMO the size of a group (in bytes, or object count, or wallclock duration) should not be defined by the moq-transport spec. Limits such as these are driven by the physical characteristics of the caches and therefore should be provided by the relay (cache) provider. A streaming format (such as WARP) may choose to define a minimum object size to be supported for interop reasons between relays, but this should probably stay outside of moq-transport.\n+1 to will's point. Group Size vary depending on the application - video, audio, chat, something new If any application needs to be aware of this, we can do the following Have catalog specify the group size per track Have a MOQT message updated where the Publisher can specify it per track in Subscribe OK. This might help as hint to caches if need be.\nIndividual Comment: There may be something interesting for moqt here in expressing more information that's useful to a cache. Maybe an explicit expiry time would be more useful than a TTL, or a TTL at the group rather than the object level? Caches will evict stuff and can go fetch it if someone asks for it later. Maybe what needs to be specified is what happens when a cache receives Objects 0-10, evicts 0-3, then gets a SUBSCRIBE for Absolute\/0-10?\nYeah, the cache needs some dependency information if it has any hope of evicting objects before the maximum TTL. That should be the purpose of group in my opinion.\nI think we should allow it to be pretty much any size and as we use a VarInt for the object id so it does not matter. I guese I am not seeing a problem here from what we have today with pretty much no limit on number of of objects or number of groups. So I guess I am saying whatever max of VarInt is which I assume will be at least 2^60.\nNAME are you fine with NAME point? If so, please close the issue.\nI think we should add a note that groups can be in practice arbitrarily big."} +{"_id":"q-en-moq-transport-ac4495d9cf5dfc61893de578b32f7186e5b1b1b734b4defca3839069b9c226e8","text":"Section 6.2.2.1 (Role) and section 6.2.2.2 (PATH) indicate and . Shouldn't key be ?\nThe \"parameters\" syntax is defined in section 6.1 as: We should be consistent, and refer to \"Parameter Type 0x1\", etc.\nMinor nit , otherwise Looks Good"} +{"_id":"q-en-moq-transport-268282a4705aab0decab4cbcfc03c97d08b6a09d6e16c43ad29bb6bd4b20d7a3","text":"It's difficult to populate and has fringe benefit at best. Let's revisit this in the future if it's actually useful. 
removed after discussion in\nTo explain a bit more in-depth: The goal of the dependency list was to give relays the information required to know which frames are droppable. This doesn't actually accomplish that goal since a frame could be a reference for a future frame. It's a lot of state to keep track of and it's unlikely to be used by even the smartest relay. It's much simpler and probably a better user experience for a relay to obey the delivery order. The decoder doesn't actually need this list since the references are stored in the codec itself. A mismatch between the Warp dependency list and the codec reference list sounds pretty awful. Accurately populating the Warp dependency list requires parsing the codec bitstream, which is more difficult and expensive than it needs to be. The only immediate use-case for the dependency list is to mark init segments. We could have a separate field for that, or put the init segment in the CATALOG itself. There's a possible use-case for SVC but again, a relay can just follow delivery order and the decoder should be able to piece together dependencies from the underlying codec.\nI went ahead and merged because I haven't heard a positive reaction to the dependency list. We can always make a PR to add it back and actually discuss the merits."} +{"_id":"q-en-moq-transport-b8fa9529fa19c7b1d99984c7c01468bccfbab33ea1610ee1364455bd5d5eb083","text":"This error code can be sent by a publisher to a subscriber that is not consuming data fast enough, causing a queue to form at the publisher that exceeds the publisher's limits.\nI added text here on when to send Too Far Behind that attempts to clarify the fidelity of subscribe. It ended up in the delivery timeout section, which may not be where it belongs, but we don't have a generic section that explains how subscriptions work, it's spread amongst the relay section, SUBSCRIBE message and Delivery Timeout.\nDuring the interim last week, there seemed to be consensus that SUBSCRIBE's default behavior is a high-fidelity mode (eg all new objects will get to all subscribers, with the exception of DATAGRAM tracks). Based on the FETCH discussion - the status of a cache on any relay in the chain has no impact on new object fidelity. The only parameter that can affect this is the subscriber's , which opt-in. The only other issue to address is - a subscriber falls too far behind live head to sustain the subscription based on sender resource constraints. I recommend we add an error code to such as , include the final queued object ID, and remove the subscriber from the forwarding table so no new data is queued for them. When those objects are delivered, the subscriber can FETCH later objects and\/or resubscribe as needed.\nI do not think this was the consensus. It has been very clear that one of the main use case for sub groups was temporal scaled video where there was enough bandwidth for one sub group but not enough bandwidth for all the subgroups.\nIndividual Comment: Here's a clip from the minutes: My understanding of that scenario is that prioritization can deprioritize some subgroups but they are queued indefinitely unless the objects within pass the DELIVERY_TIMEOUT. If the delivery timeout is infinity, how does a relay know which subgroups can be skipped over (or when) and which are critical? 
Do you have a different mechanism in mind?\nIndividual Comment: If a publisher doesn't have enough MAXSTREAMDATA, MAXDATA or MAXSTREAM_ID to send objects, it seems to me that it needs to queue until the peer grants these credits, up to the delivery timeout, right?\nIf the relay is sending data to a subscriber, and the link between the subscriber and the relay cannot sustain the bitrate of the track, eventually the relay will have to buffer more and more data. At some point, the relay would have to drop the data; we should specify how that happens (lowest priority stream gets dropped first?).\nIndividual Comment: +1 We need to specify what happens in this case. I think there's two parts: 1) selecting what gets dropped (this is a priority question) 2) the mechanism for dropping (stream resets, etc).\n+1 Expanding Alan's list, I think we should determine: (1a?) valid boundaries for dropping data (group? object? byte? bit?) (2a?) under what circumstances the subscriber needs to be informed of the dropped data (do we need different verbs with different semantics in this case, e.g. fetch\/subscribe? is dropping data always permitted and something all subscribers must be able to tolerate?) (2b?) the means of informing subscribers of dropped data (implied or explicit)\nThe AI is to add an error code for SUBSCRIBE_DONE to indicate a subscription was cancelled because the subscription was too far behind."} +{"_id":"q-en-moq-transport-0b8ec954d51afc51caf283f7b84d19f4512114d774626bf3752fc51713e72e13","text":"Explicitly state that these IDs are numbers that can be compared to determine order. This is already implicit in a number of places in the draft.\nNAME and NAME Said that group IDs may be sparsely used, particularly in cases when they were chosen in a way to avoid collisions. This sounds like an interesting set of use cases, but previously my understanding was that groups indicated some sort of temporal ordering, even if it's not strict. If Group IDs don't imply temporal ordering and join points, and instead they're a way to publish Objects in a way that don't collide with one another, it seems like a different thing? Top of head strawman: Possibly different users could have the ability to publish Objects as themself for a single track? Having a few specific use cases would be helpful here, because it feels like there are real use cases, and the existing mechanisms might be awkward at best? It would also be useful to understand the use cases so we understand if they're in scope for MoQ.\nPossibly relevant to\nI have advocated strongly for having Group IDs begin at 0 and increment by 1, with no gaps. This restriction (which is identical to what we have for Objects) brings us some valuable benefits: Arbitrary relays, without any other information, can know if Groups are missing Clients can tell if groups are missing, without having a catalog to define the numbering scheme. I spoke this over with Cullen and Suhas. They mentioned that they have a use-case in which clients will publish data under the same namespace and name, but using different ranges of groups IDs (correct me if I'm wrong there guys) and that therefore GroupIDs cannot increment by 1. My concern is that this one use-case is burdening the transport with a whole ton of complexity around knowing where there are gaps. Witness . 
I'd like to simplify moq-transport by setting restrictions on GroupsIDs and then help brainstorm alternate solutions to their multi-publisher use-case that are compliant with a simpler version of moqt.\nI completely disagree with the use-case NAME and NAME have suggested. A track must have a single publisher and I just can't envision how MoQ would work otherwise. If you have multiple publishers, then make multiple tracks\/namespaces. A subscriber can request them all. Or a muxer can request them all and produce a new merged track at incrementing group boundaries.\nI am also in favor of Group ID beginning at 0 and incrementing by 1.\nThere was this issue discussion on groupId and gaps and why relays need not do the book-keeping on gaps between groupIds or such .. URL\nI'm totally lost on what problem people think they are trying to solve with this issue. I really object to how we constantly reopen stuff for no reason. The protocol works fine with multiple publishers and out of order groups - we have running code that does that. We have discussed this is the past and the use cases. There is no new data that is like here is why this causes performance problems. We have multiple use cases that take advantage of this. ( some of the variations of group chat, logging, metrics ) GroupID are left to the applications using them to choose. Due to timing and priority issues, if a relay receives group 2, object 1, it can not assume it will never receive another object form group 2. Some applications use them to indicate ordering and dependency. The data model makes it clear they are a join point, and because of that things that have dependencies can use them for the dependent things. The proposal we had for make before break can results in out of oder groups.\nSpeaking as Chair: NAME The issue as filed is soliciting use cases. Will stated you have some, it would help if you could document them here in as much detail as possible. Other folks seem to be advocating for a particular design without seeing the full picture. It would probably help if we understood the use cases first. I'm surprised to hear about use cases where multiple publishers are publishing to the same track simultaneously in different groups. I don't ever recall talking about this pattern, apologies if I missed it. Victor pointed out here (URL) that arbitrary group or object numbering interact poorly with subscribe ranges\/fetch. Because a relay can never know the how the id spaces are populated for a track, the relay\/cache has to assume all ids exist until told otherwise by the original publisher. Suhas indicated that if a publisher drops (skips?) a group it will signal that. There are scalability challenges if the 62 bit space is sparse - perhaps people were talking past each other in the issue? I think the group is struggling because use cases and the protocol design elements they depend on are not well documented, in writing, in a central place. I've gone back and forth about resurrecting the use cases and requirements draft and adding new detailed sections that have this scope (eg: explaining make-before-break, and what moqt protocol elements it requires). Ideally, we would get consensus on the use cases we are trying to support. Would folks find it helpful? I'm wary of having folks invest in writing something if we're not going to read it.\nAs discussed at length in the other issue, \"no relay should not assume any structure between groups\". 
Groups are independent join points and we should not add more application semantics at the MOQT layer . it tends to more bookkeeping which is not really needed.\nLet me take a stab at going through the range of solutions that can exists when fetch requests are made for large absolute ranges. For this discussion, I am ignoring caching since it is easy to think through caches independently. When a relay receives a fetch request for large absolute range, one the following can be the response it gets from the upstream Relay receives a marker object for each group that is not included. Relay receives some kind of maker object with range of groups that doesn't exist. [ this is optimization on 1] Relays receives no indication on the groups not included. These options apply similarly regardless of group Ids being sequential or non-sequential and when things are missing or not produced or so. Does this makes sense ? Happy to add a PR to clarify some of these .\nNAME NAME and others, We're not looking for solutions to support non-sequential groups yet - we're still looking to get documentation of the application use case that drives the non-sequential group requirement.\nSo I know this sounds a bit snarky and I really don't mean it that way but have you read URL ? There also been a bunch of similar things. I sort of work on the assumption chairs are reading the drafts in the WG.\n:D Thanks for the bump. Well now I've skimmed it at least far enough to see this: I'll take a closer look. Let's use the metrics repo for any discussion about it's design rather than having it here. Are there other use cases you would like to surface here? Even if the chairs are reading every draft, not everyone else in the wg is.\nSo lets imagine the client requests the range 0-10,000, but the publisher only produces group ID's at odd numbers. Is the proposal that each relay in the distribution chain send along 5,000 marker objects to indicate the gaps? That doubles the number of objects sent in response to each subscription. Compare that to a scheme in which groups IDs are sequential and zero markers need to be sent. Surely a marker-less scheme is more efficient in caches and across the wire?\nSo the use case for fetch has generally been VOD and getting the most recent 15 seconds on live streaming video. Both of these case would make sense to use sequential groups so I don't see this problem. The use case people have brought up about sparse groups are not so much cases where getting the old data makes much sense and also some of them the applications know what the groups are if they wanted them. I'm of course not suggesting a classic VOD use case would use non sequential groups, however it might on scalable codecs, but what I don't see why the relays do anything different if groups have to be sequential or not. Consider a live streaming like use case where groups are sequential and it is stream per group. A relay gets 3 streams coming in to it and on stream A get gets an object in group 1, on stream B it just waiting and has not received anything yet. And on stream C it gets an object from group 3. I probably don't want to have the relays not send objects from group C because they are waiting for something that might arrived from group 2. I'm not understanding what people think they want to do in the relays that requires the groups to be sequential. Overall, we need a design for fetch (lacking a better name for subscribe to old data). 
If fetch ends up for priority reasons to be delivered over a single stream, then it upstream just puts in order stream. Of course other designs are possible but am not getting what people want and why\nMy intent was not to provide a solution. I was instead proposing non-normative clarification text on the solution space which applies similarly regardless of the groupId structure. However, I feel once we resolve fetch api, flow control and limits around handling large absolute ranges, we can revisit to see if groupId are still the issue or not.\nAre logging and metrics the motivating use cases for multiple publishers of the same track? Reading the drafts, it doesn't appear like that is the intent, or you could get collisions. If you have any use cases like this, can you share them?\nIndividual Comment: Based on how SUBSCRIBE is defined now, I think both group IDs and object IDs have to indicate an ordering. If I SUBSCRIBE at a given start group and object, only published objects with groups numerically greater than or equal to that will be delivered. This is similar with SUBSCRIBE_UPDATE specifying the end of a subscription. Object IDs are also numerically comparable for prioritization within their group and priority level. There's text all over the draft implying these comparisons are valid, for example in stream-per-track, group IDs cannot go backwards. If an application wants to pick IDs at random, use a hash scheme, or dole out ranges to multiple publishers where newer things have numerically smaller IDs, then some funny interactions and protocol violations may ensue. No one has publicly specified a use case yet for doing this."} +{"_id":"q-en-moq-transport-6a9309742dfa3773c76b6ba6111dbaaf5cc81c45889f0a7fec2759b1b00723c9","text":"The text we landed for to define priorities (URL) indicates that we should use a 8 bits ( where is ) to convey subscriber priority and similarly a byte to convey publisher priority. ( URL ) ( URL ) The text also indicates that lower values have higher priority which I read as meaning 1 () should be sent before 10 (), (After) is 133 (u8) Edit: We should probably recommend that implementations use a mid-range value like as a default to allow for prioritizing both above and below it. (The MoQT spec doesn't need to define a default since these values are currently always explicit on the wire, but I think it would be helpful to provide guidance here.)\nI have a slight preference for treating these priorities as signed values because it makes a good default for implementations to choose. That way it's possible to go above or below the default. The same is possible with unsigned values, too, but if we go that direction we may want to provide some guidance to that effect (recommending something like or as a default mid-range value).\nOh, nevermind, NAME noticed some text I missed in section 4: \"with 0 being the highest priority\" which would imply these are unsigned int values, . It may still be helpful to make that more explicit and recommend a default mid-range value.\nI'm fine with some editorial text here if that's how we want to resolve it? (PRs welcome NAME ) I'm also ok with it being signed, since it it nice for 0 to be a 'default' value, but we don't use signed integers anywhere else in MoQ, so I'm a bit concerned that using them in exactly one spot could be confusing or error prone?\nIn some cases, things would work better if we allocated some ranges of priorities to specific DSCP like \"less than best effort\" or \"best effort\". 
That would end up making the actually numbers here have absolute meaning too instead of just relative meaning."} +{"_id":"q-en-moq-transport-cb5f7d6aef43648ae76e7fad7a9c35af0fb2fd57da09897441bb60a0c3f72622","text":"Oh that makes more sense. Let's add this to the issue and see if others agree there.\nNAME NAME you both approved when this PR set the value to 0 but I changed it. Please take another look.\nif a Datagram object is delivered via FETCH (which doesn't really make sense, see and ), what Subgroup ID do I put in the FETCH object header? The only answers that make sense are zero, or the object ID.\nI think the answer is getting datagrams via FETCH doesn't make much sense?\nAgreed! The point of this issue is that there is an ambiguity in the spec unless we choose to close it via 588 or 599\nI think we need to be able to fetch object that were originally delivered over datagram. Common use case is audio is over datagram and client got disconnected from the meeting for 10 seconds, rejoined, wants to get the missing 10 seconds and replay and 1.5x speed till it catches up then joins live meeting. I was sort of assuming subgroup would be zero for datagrams but I had not thought about using object ID. I would be concerned when doing that that object id can be large but size sub groups are limited to be pretty small so we could make them work with priorities. My question would be, does it every matter in anyway what the subgroup is for fetch ? If not, then zero seems fine.\nIndividual Comment: +1 for using zero It does if data that is FETCHed can later be subscribed to. Today it's possible because SUBSCRIBE allows Current Group (). If adding JOIN removes that (I think it should) then the protocol wouldn't have a direct use for subgroup. I wonder if there's a case of FETCHing a static asset to a re-broadcast publisher that would recreated the broadcast and want the subgroups though. I'm inclined to keep them in the FETCH stream wire format, at least for now.\nThe easiest way to think about this probably is, datagrams for a track have just on subgroup and thus its value is 0. I feel subgroup concept is important since it allows a publisher to control granularity of HOL blocking (based on subgroup size) support different priorities within a group I am not too sure of the requirement of fetched data to serve subscription. Even in JOIN flows, it is internally fetch + subscribe. We have following cases: Case 1: Objects are sent as datagrams and cached with subgroup of 0 On Join, fetch will return all the objects until live edge on a single stream and then subscribe will get all the new objects as datagrams. Basically fetch flattens the priorities, original stream mapping (datagram here) and just returns objects from the cache. Case2: Objects are sent over streams and can have one or more subgroups. So when cached, objects can have different subgroupIds. On Join, fetch will return all the objects until live edge on a single stream and then subscribe will get all the new objects based on the subgroup mapping in one or more streams. Even here, fetch flattened the subgroups and priorities and just returns objects from the cache. What am i missing here ?\nMaybe you want this comment on ? I think for now a PR with subgroup=0 should resolve this.\nI don't have that strong a preference, but I would think having all Datagrams in a Group have the same subgroup ID would indicate I should be sending them over a single stream? 
Making the subgroup ID the same as the Object ID is more in keeping with the SubGroup == Stream transmission model.\nIndividual Comment: I previously +1 using zero but I'm convinced by Ian's argument to set it to the object ID. That may result in ever-so-slightly larger FETCH responses when there are more than 64 objects in a group, but I think that's in the noise.\nI'm fine with this, since it resolves the issue, but I would think having all Datagrams in a Group have the same subgroup ID would indicate I should be sending them over a single stream? Making the subgroup ID the same as the Object ID is more in keeping with the SubGroup == Stream transmission model."} +{"_id":"q-en-moq-transport-47038ab852e3b8a37c1dd7f014fbebc598998a79ca42cef7443a00b3a9329f17","text":"One small normative change: I relaxed a SHOULD to a MAY when discussing relays writing data out of order. No one is implementing this right now.\nHere are some editorial notes on the document Line 72. “MOQT is a generic protocol is designed to work” Line 91-93. “The development of MOQT is driven by goals in a number of areas - specifically latency, the robustness of QUIC, workflow efficiency and relay support.” This sentence is intriguing. What does the robustness of QUIC mean? Latency, workflow efficiency, relay support are not “areas” or maybe area should be defined. (side note on the missing Oxford comma in this sentence) The Section 1.1.1. Latency as a whole is very confusing. HTTP-based streaming is not designed for low latency mostly due to the essential request-response paradigm. Queuing on the path has a insignificant impact on the end-to-end latency for HAS. The “speed at which a protocol can detect queuing” is not related to the relatively high latency issues of HAS. Overall, I see a mix of unrelated subjects here, which makes the paragraph overall meaningless. I would recommend starting the paragraph with a topic sentence (which may be the last one of the current paragraph). Line 120. “Applying QUIC to HAS …” (and the following sentences) is not necessary. The reasons for which QUIC does not bring advantages to HAS are still debatable on many points. This QUIC-on-HAS remark could raise unnecessary questions although the beginning of Section 1.1.2 is clear and sufficient to convey the message. Line 125. “Universal” has a too broad sense. Do you mean “Unicity of protocol in the whole pipeline” (or something like this)? Line 127. “Internet delivered media today has protocols optimized for ingest…” I don’t understand this sentence. Line 127. Some examples of protocols could be good to support these claims. Note that WHIP is an attempt to unify the protocol. Line 146. “to treat relays as first-class citizens of the protocol” is an excellent marketing message but it does not mean anything in a technical norm. Furthermore, see my two reserves above. I’d claim that the current draft is not making relay first-class, on the contrary. Line 179. “a join point” joint? Line 231. “The application is solely responsible for the content of the object payload.” What is the application? It is a bit confusing. It has not been defined. Line 269 .“within a single MOQT scope, subscribing to the same full track name would result in the subscriber receiving the data for the same track.” I am not sure to understand this sentence. Is a single MOQT scope a scope with only one server or do you mean “within a MOQT scope”? the two “same” adjectives are confusing too. Line 273. 
It would be better to stick to “servers” or to “connection URIs” to refer to the elements of the scope. Line 339. “The portion MUST NOT contain a non-empty portion.” Is it a double negation to say that the ‘authority’ MUST contain an empty ‘host’ portion? Line 357. This sentence may require a MUST. Line 359. “for exchanging …” and following sentences are not related to session initialization. They are rather remarks that would be a fit with the draft Section 4. +1 to the TODO remarks on line 381. This should be in Section 4. Line 654. “it allows the peers…” The term “peer” is used 7 times. It is supposed to be an “endpoint” (as defined Line 173), unless peer and endpoint have different meaning (in which case peer must be defined) Line 693. “A relay that reads from a stream and writes to stream in order will introduce head-of-line blocking.” I don’t understand this claim.\nThanks for the feedback. I'm going to mark this as Editorial, but there are a number of points here. If you believe any of them are not editorial (ie: \"Line 357. This sentence may require a MUST.\"), please split into a separate issue.\nWill assign to Mo.\nNAME Please go through these and fix any that still apply."} +{"_id":"q-en-moq-transport-51a8a1d9368a2eec8be0cdcbb0dfcc561f5a5905bcb7122dd4a5a7d8b185ecb7","text":"Current definition specifies namespace is an N-tuple, with 132 is it expected to return a protocol violation, or do we leave that undefined?\n+1 I think we should address this as part of refactoring the tuple type\nI put up a PR addressing the issue but didn't do a larger refactor of the tuple type, which seems editorial.\nEditorial suggestion, but LG"} +{"_id":"q-en-moq-transport-a893c130e321d7ddb0d5fb4fd5e69e81cfad04378da3256038e64cc3e2b5989a","text":"Allow clients to send GOAWAY to signal servers to UNSUBSCRIBE.\nPR has GOAWAY as only sent from server to client, but there may be use cases for the reverse as pointed out by NAME\nI'm not really clear, what would be the use case for client -> server GOAWAY? If the only use case is a client indicating end of subscription, then I think that would be better encoded in the SUBSCRIPTION_END that Alan mentioned, but if there's other use cases then I could see GOAWAY making more sense.\nWhat if the client is not the original source, but rather a relay. In that case, the relay may need to restart\/etc. One option is the relay sends a GOAWAY and the client starts publishing to a new relay. But does it need to do something about it's upstream subscriptions?\nFWIW ... relay to relay is out of scope - we are only doing client to relay - we would need a lot more to have a full relay to relay protocol\nA client-initiated GOAWAY is weird but kind of makes sense. I'm thinking of the dual ingest case when there are two connections from broadcaster to relay(s). If the client sends a GOAWAY message, it's an explicit signal to the relay that it needs to UNSUBSCRIBE to everything and find another route to the content. The party isn't over, you just can't stay here. But there's some overlap with UNANNOUNCE. I'm not sure if we actually need it.\nThat use case makes sense. \"The party isn't over, you just can't stay here.\" is awesome tag line for use case. 
For a graceful shutdown of a publisher, I think the way to do that would be the client publishing sends un-annouce and the relay then ends all the subscriptions, once all the subscriptions are gone, the client closes the connection.\nWith UNANNOUNCE and ANNOUNCE_CANCEL (), I'm inclined to think we don't need this.\nUNANNOUNCE tells the receiver not to send new SUBSCRIBEs, but doesn't currently mean \"please UNSUBSCRIBE and find somewhere else to SUBSCRIBE if you still want content\". Do we want a message to convey that semantic?\nIt definitely feels odd given that outside of SETUP and ROLE parameter, MoQT is a symmetric protocol.\nMy current thinking is that GOAWAY is useful for all publishers, even if they're 'clients' (ie: an original publisher). It also seems useful if an original publisher is publishing to an ingestion server that is about to restart and the server wants to cause them to connect to a different ingestion server. I don't have a clear use for the end subscriber sending a GOAWAY, but it also doesn't seem particularly harmful.\nI think you missed a few spots to update (suggestions made), but maybe you intentionally left those as is?"} +{"_id":"q-en-moq-transport-3ffc1c3ecc4bd27d726a3453b160cd7ad8d6add79035045348a4a40ec46a1203","text":"Individual Comment: lgtm\nI don't have a strong opinion, but why not MAXCACHEDURATION?\nSection 6.1.1 has not been updated to reflect the 2 new messages with parameters.\nI don't have a strong preference. Mind writing up a PR of whatever you think is a sensible proposal?\nI don't have any preference at all, but I can do a straw man.\nI can think of AuthInfo as the only parameter that makes sense for now. NAME please let me know if you think of anything else. I can add it to my PR. Thanks\nThanks, this should have been added originally."} +{"_id":"q-en-moq-transport-c021c30d663af5e4e62fa789b99dbe73d54bfb969d17fb2e1a6eddc944b2418d","text":"Merging based on in-person discussions.\nYou merged a litttttle early. The wire format is good, but the accompanying text could use some tweaks. For example, it's not specified when the object sequence number should be incremented. It must be done on every independently decodable OBJECT. I'll make another PR."} +{"_id":"q-en-moq-transport-1abaabc635d7d1eeaac24f6afbe16a9b3036488bf7ebd6381d41af3c73b8c255","text":"This is a PR to propose changes in the data model PR. Largely work in progress, but I wanted to first propose an organization.\nThe introductory text seems clear to me. I have questions about what it means for an object to be 'atomic' - since it can necessarily be larger than an MTU it might not get delivered. It can be repaired of course, but what are the consequences to other objects in the group or track if the object never completely arrives? Are there restrictions on receivers attempting to use partial objects? This reader is excited to read the next chapter :D\nNAME The main consequence of \"atomic\" is that it is way better to send one complete object than two incomplete objects. That should very much guide scheduling of transmissions at relays. The other consequence is that if publisher can split a big blob of data into several objects that can be processed independently, they should. For example, instead of sending \"the entire GOB as an object\", they would get better results sending the GOB as a series of objects, with each object boundary becoming a proper \"cut mark\". And yes, I have to develop the text. 
I checked in a partial PR to Suhas' PR, mostly to test my editing chain...\nthanks Christian for detailed text. I will merge it into my branch and do an updated version based on this."} +{"_id":"q-en-oauth-browser-based-apps-f3af6fc67feec2f9c372ef0e435cbb585119e922e6f0246e3b111d1409d645b9","text":"Refactoring some architectural patterns: Single-Domain Browser-Based Apps (not using OAuth) Backend For Frontend (BFF) Proxy"} +{"_id":"q-en-oauth-browser-based-apps-df4b772476e4d4f246bc64dec0ac5bfa895bd48b6c88dab73793bb87974b6868","text":"reworked some architectural patterns: Javascript applications accessing resource servers directly (js = oauth client)"} +{"_id":"q-en-oauth-browser-based-apps-6948a23761b3fb29fcd64a871e9fa59f9c9650709551a6285a3504dbc0193631","text":"This looks great. I'm going to merge this and make a few minor tweaks to it."} +{"_id":"q-en-oauth-browser-based-apps-0046dc86760ab0d59143d1d939f76ba0ef4f6177220fd5b740963088b3a38483","text":"I moved the considerations about XSS to a general section: all architectures are concerned. I added some words about bypassing the Service Worker: this would need a very broad successful XSS, with a much broader attack surface than what is typically the case. This can be mitigated by making sure that registering the service worker is the very first thing happening. There is also no API for unregistering a SW, so it can't be removed after the fact.\nI haven't dug too deep yet but isn't this an API to unregister a service worker? URL\nIndeed, this mitigation won't work. I oversaw the registration itself and focused on URL . I'll rework this and focus on what can be guaranteed by specs.\nI removed the part about service workers for now. I'll see if I can further improve it in another PR."} +{"_id":"q-en-oauth-browser-based-apps-da5bce876c3b6a7fd7948f87ee279071a0e07697887b2bcbebcd54695b17666a","text":"As discussed on the OSW, this PR adds a small section with security considerations on the use of (a.k.a. \"web messaging\") to this draft. It makes reference to , which already discusses the security implications of in-browser communication flows in detail. Since (silent) iframe flows and popup flows are especially used in browser-based apps, we think that it makes sense to include security considerations of their in-browser communication into this draft. Please let us know what you think about this. We appreciate any feedback.\nSounds good to me, thanks!"} +{"_id":"q-en-oauth-browser-based-apps-aa53f51e5dfa9b0992839d411f03ae29f802befc7ce036122a3f4e11711e883b","text":"fixes URL\nThanks NAME I like it, I updated the PR\nBecause the UI and the backend APIs can be hosted from the same backend application, Anti-forgery tokens can also be used to protect against CSRF.\nThere are indeed various CSRF defenses that could be used in this architecture. The two options discussed in the current document explicitly focus on defenses that are part of the core web platform, and require minimal effort from the developer to implement. The use of anti-forgery cookies requires code on both the frontend and the backend to guarantee their effectiveness. Are there benefits to using anti-forgery cookies over samesite cookies\/CORS that we're missing?\nI thought it would be good to add this as it is per default supported in some tech stacks and so easy to use. People reading the doc might read it as only the mentioned ways protect against CSRF. Adding this would make the recommendations more complete. 
“Are there benefits to using anti-forgery cookies over samesite cookies\/CORS that we're missing?” In my opinion, there are no benefits compared with the other two options, just another option. Samesite cookies\/ anti-forgery cookies are also a viable option. If the used tech stack has this already supported per default, then there is less to do as a developer. This would make it an option.\nNAME I am open to listing out this option as well for the scenario you mention. Do you have text you could suggest that captures this?\nHi NAME I'll write something Kind regards Damien\nNAME NAME I created a PR with an initial draft. I don't think we should explain how it works because this is not the scope of this doc. kind regards Damien\nThanks for the suggestion NAME I believe it is important to explain why this addition is there, so I've taken a stab at rewording (see below). I have also removed some of the implementation details that are not necessarily relevant (e.g., the use of HTTPS). I also removed the note on encryption, since the value in the cookie is typically random, so there would be no need to encrypt the cookie. Let me know what you think."} +{"_id":"q-en-oauth-browser-based-apps-fe38a55743c6874dbc9382521f24d95c8072d068966fa8d24b694dfceaa398fd","text":"The new section on the security of in-browser communication flows only applies to the browser-based OAuth client. It does not apply to the BFF or TM Backend. Therefore, I moved this section to the security considerations of the relevant pattern."} +{"_id":"q-en-oauth-browser-based-apps-0a20e60397440a685b645bf4ad3bd7bee39e72bee77ed87300faecae73cb5433","text":"This PR offers an alternative to . Compared to PR it makes two changes: I removed the newly added text to avoid creating confusion between the responsibilities of a BFF. While it is technically possible to deploy a BFF as part of an API gateway, I believe this suggestion may create confusion for someone trying to grasp the pattern. An API Gateway is closely linked to an API, while a BFF is (in theory) closely linked to a frontend. Reworded the benefits of the Token Mediating Backend to more accurately represent the advantages\/disadvantages of the pattern, as correctly suggested by this PR If this PR is merged, can be closed."} +{"_id":"q-en-oauth-browser-based-apps-bb31c73ae6fc460da5bb9d4c48f48627922763722a8ac11cfaa5b9b9353111f7","text":"Arrow C is referenced in the text \"[The code in the browser] obtains an access token via a POST request (C).\". The arrow should have an arrowhead on both ends as it symbolizes both the token request and token response. \"The JavaScript app is then responsible for storing the access token (and optional refresh token) securely using appropriate browser APIs.\" I think this sentence was misleading, because as of today there is no way for the app to store tokens securely in the browser. Arrow E was not referenced in the text."} +{"_id":"q-en-oauth-cross-device-security-743d1d1793430f173976ef6d1c4a43527212195c90d200cbce0be31abea709a3","text":"Refined the trusted device section to make it clear that restricting to a specific device type is another option to limit the impact of deploying cross-device flows (issue )\nLimit it to specific devices."} +{"_id":"q-en-oauth-cross-device-security-b70a91f7c159ad80cf2745578eb94421b5d0b19748a89326ba6b6bf0cc49597e","text":"Moved paragraphs to improve the flow of the section. Removed references to User Education (which is now a mitigation itself, so all the details are already in the ad hoc section). 
Fix case for keywords should and may."} +{"_id":"q-en-oauth-cross-device-security-b112287d39da620deac5de41332e9e0e39b55ac6f009aa5cb275a97a28eec32e","text":"Added an example described at OSW 2023 (see issue )\nThere are two new attacks, the consent bombing and fake helpdesk attacks, that may be added as examples of attacks, or as extensions of existing attacks.\nAfter discussing ith Daniel we agreeed: Add an example for the consent bombing attacks Add a sub-attack to example B.4 (split into two attacks).\nApproved with editorial changes."} +{"_id":"q-en-oauth-cross-device-security-7b3f91461c8fe07805a4fdc4350b011b2cfc6d8d3e5c824248414ccc7d20965a","text":"See issue\nThere are two new attacks, the consent bombing and fake helpdesk attacks, that may be added as examples of attacks, or as extensions of existing attacks.\nAfter discussing ith Daniel we agreeed: Add an example for the consent bombing attacks Add a sub-attack to example B.4 (split into two attacks)."} +{"_id":"q-en-oauth-cross-device-security-d9c19341c0eff783fcbc813fc13c822e66bb9d35502b143e9694f6db00d2126f","text":"Alternative name for authenticated flow (see )\nCan we find a better name for the \"Authenticated Flow\" mitigation?\nAdditional authentication on consumption device Initial authentication on consumption device Authenticate before initiating cross-device flow. Authenticate on consumption device before initiating cross-device flow. Cross-Device Flow with previous authentication Authenticate before initiating cross-device flow.\nThanks, approved with minor editorial changes"} +{"_id":"q-en-oauth-cross-device-security-00a0034c8ef83e07f7a2ebd5fc7b7a9ed0545852560730ad7117f265d89b9484","text":"See issue\nClosing issue\nAdd reference to VC presenation specs in section 4 and section 2.2.1\nOnly adding references to section 4. Adding this in section 2.2.1 will be confusing, even though a QR code is being scanned in these protocols, unlike device code flow, the session does not revert back to the consumption device.\nAddressed in PR\nLooks good to me, one small editorial change."} +{"_id":"q-en-oauth-cross-device-security-4c60398e9d4aaaa1ec998a79dec2b643f019dac7a194a902f5ea1544c2694955","text":"Adressing feedback in issue\nNAME\nFeedback from Dean Saxe on WGLC Section 4.3.9 reads, “… using an e-mail campaign etc.” Should this be rewritten, “using an e-mail campaign, for example.”? URL\nFixed in Rifaat, I have a few minor nits in the doc, nothing of significant concern for WGLC. When describing the visuals documenting the flows, there is a step that includes “The user authenticates to the authorization server”. In each case this should include verbiage to indicate that this is only necessary if the user is unauthenticated, e.g. “If unauthenticated, the user authenticates to the authorization server…”. Specific sections include 3.1.1, 3.1.2, 4.1.1, 4.1.2 Section 3.1.3 the final sentence notes the authorization data may be delivered as a text message or via a mobile app. This is inconsistent with the methods mentioned in the first paragraph, which includes email and text messages. I suggest being clear that these are example mechanisms and not a full list of mechanisms by which codes can be delivered. Section 3.3.1 the first sentence should note that the QR code is associated with the particular service (Netflix, AppleTV, Disney+). Readers could assume that the QR codes originate from the TV manufacturer’s service alone as written. 
Section 4.3.9 reads, “… using an e-mail campaign etc.” Should this be rewritten, “using an e-mail campaign, for example.”? Section 6.2.3 discusses FIDO CTAP 2.2. This document is still in review draft 01. We should note that the document is not final as of today. Section 6.2.3.5 could be softened a bit. The first sentence should include, “… and a suitable FIDO credential is not available on the consumption device.” In most patterns, this mechanism is used to bootstrap a new credential on the device, rather than using this mechanism for authN every time. Authors, if you have any questions please let me know. Thanks, -dhs Dean H. Saxe, CIDPRO (he\/him) Senior Security Engineer, AWS Identity Security Team | Amazon Web Services (AWS) E: EMAIL | M: 206-659-7293 From: OAuth on behalf of Rifaat Shekh-Yusef Date: Monday, April 22, 2024 at 7:57 AM To: oauth Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC for Cross-Device Flows BCP CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. We have not received any feedback on this document so far. This is a reminder to review and provide feedback on this document. If you reviewed the document, and you do not have any comments or concerns, it would be great if you can send an email to the list indicating that. Regards, Rifaat All, This is a WG Last Call for the Cross-Device Flows BCP document. URL Please, review this document and reply on the mailing list if you have any comments or concerns, by April 29th. Regards, Rifaat & Hannes"} +{"_id":"q-en-oauth-cross-device-security-8593d4a6ea301a3c147e02f4541f9bda32f24f69938616e37f573044f17f9e8d","text":"Addresses feedback in issue NAME\nFeedback from Dean Saxe on WGLC Section 3.3.1 the first sentence should note that the QR code is associated with the particular service (Netflix, AppleTV, Disney+). Readers could assume that the QR codes originate from the TV manufacturer’s service alone as written. URL\nAdressed in Rifaat, I have a few minor nits in the doc, nothing of significant concern for WGLC. When describing the visuals documenting the flows, there is a step that includes “The user authenticates to the authorization server”. In each case this should include verbiage to indicate that this is only necessary if the user is unauthenticated, e.g. “If unauthenticated, the user authenticates to the authorization server…”. Specific sections include 3.1.1, 3.1.2, 4.1.1, 4.1.2 Section 3.1.3 the final sentence notes the authorization data may be delivered as a text message or via a mobile app. This is inconsistent with the methods mentioned in the first paragraph, which includes email and text messages. I suggest being clear that these are example mechanisms and not a full list of mechanisms by which codes can be delivered. Section 3.3.1 the first sentence should note that the QR code is associated with the particular service (Netflix, AppleTV, Disney+). Readers could assume that the QR codes originate from the TV manufacturer’s service alone as written. Section 4.3.9 reads, “… using an e-mail campaign etc.” Should this be rewritten, “using an e-mail campaign, for example.”? Section 6.2.3 discusses FIDO CTAP 2.2. This document is still in review draft 01. We should note that the document is not final as of today. Section 6.2.3.5 could be softened a bit. 
The first sentence should include, “… and a suitable FIDO credential is not available on the consumption device.” In most patterns, this mechanism is used to bootstrap a new credential on the device, rather than using this mechanism for authN every time. Authors, if you have any questions please let me know. Thanks, -dhs Dean H. Saxe, CIDPRO (he\/him) Senior Security Engineer, AWS Identity Security Team | Amazon Web Services (AWS) E: EMAIL | M: 206-659-7293 From: OAuth on behalf of Rifaat Shekh-Yusef Date: Monday, April 22, 2024 at 7:57 AM To: oauth Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC for Cross-Device Flows BCP CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. We have not received any feedback on this document so far. This is a reminder to review and provide feedback on this document. If you reviewed the document, and you do not have any comments or concerns, it would be great if you can send an email to the list indicating that. Regards, Rifaat All, This is a WG Last Call for the Cross-Device Flows BCP document. URL Please, review this document and reply on the mailing list if you have any comments or concerns, by April 29th. Regards, Rifaat & Hannes"} +{"_id":"q-en-oauth-cross-device-security-b44d962dc3a7d08347ec2369881ea8f28b5dd7761311b61621efc713a8dd6fdf","text":"Addresses issue NAME\nWCLC Feedback from Dean Saxe Section 3.1.3 the final sentence notes the authorization data may be delivered as a text message or via a mobile app. This is inconsistent with the methods mentioned in the first paragraph, which includes email and text messages. I suggest being clear that these are example mechanisms and not a full list of mechanisms by which codes can be delivered. URL\naddressed in\nlgtm Rifaat, I have a few minor nits in the doc, nothing of significant concern for WGLC. When describing the visuals documenting the flows, there is a step that includes “The user authenticates to the authorization server”. In each case this should include verbiage to indicate that this is only necessary if the user is unauthenticated, e.g. “If unauthenticated, the user authenticates to the authorization server…”. Specific sections include 3.1.1, 3.1.2, 4.1.1, 4.1.2 Section 3.1.3 the final sentence notes the authorization data may be delivered as a text message or via a mobile app. This is inconsistent with the methods mentioned in the first paragraph, which includes email and text messages. I suggest being clear that these are example mechanisms and not a full list of mechanisms by which codes can be delivered. Section 3.3.1 the first sentence should note that the QR code is associated with the particular service (Netflix, AppleTV, Disney+). Readers could assume that the QR codes originate from the TV manufacturer’s service alone as written. Section 4.3.9 reads, “… using an e-mail campaign etc.” Should this be rewritten, “using an e-mail campaign, for example.”? Section 6.2.3 discusses FIDO CTAP 2.2. This document is still in review draft 01. We should note that the document is not final as of today. Section 6.2.3.5 could be softened a bit. The first sentence should include, “… and a suitable FIDO credential is not available on the consumption device.” In most patterns, this mechanism is used to bootstrap a new credential on the device, rather than using this mechanism for authN every time. Authors, if you have any questions please let me know. Thanks, -dhs Dean H. 
Saxe, CIDPRO (he\/him) Senior Security Engineer, AWS Identity Security Team | Amazon Web Services (AWS) E: EMAIL | M: 206-659-7293 From: OAuth on behalf of Rifaat Shekh-Yusef Date: Monday, April 22, 2024 at 7:57 AM To: oauth Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC for Cross-Device Flows BCP CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe. We have not received any feedback on this document so far. This is a reminder to review and provide feedback on this document. If you reviewed the document, and you do not have any comments or concerns, it would be great if you can send an email to the list indicating that. Regards, Rifaat All, This is a WG Last Call for the Cross-Device Flows BCP document. URL Please, review this document and reply on the mailing list if you have any comments or concerns, by April 29th. Regards, Rifaat & Hannes"} +{"_id":"q-en-oauth-cross-device-security-102592f77b6d2d024148f76665ee031140b3458c6e4c5a10e44b1bb04fd6010e","text":"Addressed issue NAME NAME\nFeedback from Dean Saxe during WGLC Section 6.2.3.5 could be softened a bit. The first sentence should include, “… and a suitable FIDO credential is not available on the consumption device.” In most patterns, this mechanism is used to bootstrap a new credential on the device, rather than using this mechanism for authN every time. URL\nAdressed in Rifaat, I have a few minor nits in the doc, nothing of significant concern for WGLC. When describing the visuals documenting the flows, there is a step that includes “The user authenticates to the authorization server”. In each case this should include verbiage to indicate that this is only necessary if the user is unauthenticated, e.g. “If unauthenticated, the user authenticates to the authorization server…”. Specific sections include 3.1.1, 3.1.2, 4.1.1, 4.1.2 Section 3.1.3 the final sentence notes the authorization data may be delivered as a text message or via a mobile app. This is inconsistent with the methods mentioned in the first paragraph, which includes email and text messages. I suggest being clear that these are example mechanisms and not a full list of mechanisms by which codes can be delivered. Section 3.3.1 the first sentence should note that the QR code is associated with the particular service (Netflix, AppleTV, Disney+). Readers could assume that the QR codes originate from the TV manufacturer’s service alone as written. Section 4.3.9 reads, “… using an e-mail campaign etc.” Should this be rewritten, “using an e-mail campaign, for example.”? Section 6.2.3 discusses FIDO CTAP 2.2. This document is still in review draft 01. We should note that the document is not final as of today. Section 6.2.3.5 could be softened a bit. The first sentence should include, “… and a suitable FIDO credential is not available on the consumption device.” In most patterns, this mechanism is used to bootstrap a new credential on the device, rather than using this mechanism for authN every time. Authors, if you have any questions please let me know. Thanks, -dhs Dean H. Saxe, CIDPRO (he\/him) Senior Security Engineer, AWS Identity Security Team | Amazon Web Services (AWS) E: EMAIL | M: 206-659-7293 From: OAuth on behalf of Rifaat Shekh-Yusef Date: Monday, April 22, 2024 at 7:57 AM To: oauth Subject: RE: [EXTERNAL] [OAUTH-WG] WGLC for Cross-Device Flows BCP CAUTION: This email originated from outside of the organization. 
Do not click links or open attachments unless you can confirm the sender and know the content is safe. We have not received any feedback on this document so far. This is a reminder to review and provide feedback on this document. If you reviewed the document, and you do not have any comments or concerns, it would be great if you can send an email to the list indicating that. Regards, Rifaat All, This is a WG Last Call for the Cross-Device Flows BCP document. URL Please, review this document and reply on the mailing list if you have any comments or concerns, by April 29th. Regards, Rifaat & Hannes"} +{"_id":"q-en-oauth-cross-device-security-19ead31669c2c82854ce8226606b6b08255beb919185a588d4c999c0a2a382a0","text":"Addresses issue NAME\nFrom WGLC Nit: \"SmartTV\" and \"Smart TV\" are used interchangeably throughout the doc. No preference on which one is used, but should be consistent. URL NAME Looks great! Some small proposed tweaks: Nit: \"SmartTV\" and \"Smart TV\" are used interchangeably throughout the doc. No preference on which one is used, but should be consistent. 6.2.3.1 Current text: \"supports a new cross-device authentication protocol, called \"hybrid\"\" Proposed text: \"supports a new cross-device transport protocol, called \"hybrid transports\" Propose adding the following at the end of the first paragraph: \"CTAP 2.2 hybrid transports is implemented by the client and authenticator platforms.\" Current text: \"The main device and authenticator\" Proposed text: \"The main device (CTAP client) and authenticator\" Current text: \"The user will receive a push notification on the authenticator.\" Proposed text: \"The user will typically receive a push notification on the device serving as the FIDO authenticator.\" 6.2.3.3 Current text: \"Both the Consumption Device and the authenticator require BLE support.\" Proposed text: \"Both the Consumption Device and the authenticator require BLE support and also need access to the internet\" s\/hybrid transport\/hybrid transports Current text: \"The mobile phone must support CTAP 2.2+ to be used as a cross-device authenticator.\" Proposed text: \"The device serving as the FIDO authenticator must support CTAP 2.2+ to be used as a cross-device authenticator.\" tim On Mon, Apr 22, 2024 at 10:57 AM Rifaat Shekh-Yusef wrote: >"} +{"_id":"q-en-oauth-cross-device-security-40cb32b2e95b4395c052ad73db3dc412762b76b257be78239c50b31787d984ef","text":"Addresses issue NAME\nFrom WGLC Current text: \"supports a new cross-device authentication protocol, called \"hybrid\"\" Proposed text: \"supports a new cross-device transport protocol, called \"hybrid transports\" Propose adding the following at the end of the first paragraph: \"CTAP 2.2 hybrid transports is implemented by the client and authenticator platforms.\" Current text: \"The main device and authenticator\" Proposed text: \"The main device (CTAP client) and authenticator\" Current text: \"The user will receive a push notification on the authenticator.\" Proposed text: \"The user will typically receive a push notification on the device serving as the FIDO authenticator.\" Current text: \"Both the Consumption Device and the authenticator require BLE support.\" Proposed text: \"Both the Consumption Device and the authenticator require BLE support and also need access to the internet\" s\/hybrid transport\/hybrid transports Current text: \"The mobile phone must support CTAP 2.2+ to be used as a cross-device authenticator.\" Proposed text: \"The device serving as the FIDO authenticator must support CTAP 
2.2+ to be used as a cross-device authenticator.\" see: URL NAME"} +{"_id":"q-en-oauth-cross-device-security-7f623705ae1c403fb4a1046a5102dc5d50106f9a64f68b258982a4570c9fa744","text":"Clarify the authorization server's role in detecting if a cross-device protocol is being used in a same-device scenario. See issue NAME\nGive guidance that enforcement of same device use of the protocol is something the authorization server should do, in addition to a design time constraint. From Roy Williams: 1) At the end of section (5) there is a paragraph that talks about limiting Cross-device protocols on the same device. It does not seem to be something that a client could\\would know about when let’s say YouTube TV requests auth and it ends up on Authenticator on the same device. In theory this would then be the Authenticator Service’s Job to determine this situation and respond with a well known pattern to drive the client to engage in a local oath call directly to authenticator. Full feedback here: URL I had promised at the 119 meeting that I would review this document and give feedback. I have completed that document and other than two potential clarification points, I found it to be helpful. The following two areas could be slightly improved: At the end of section (5) there is a paragraph that talks about limiting Cross-device protocols on the same device. It does not seem to be something that a client could\\would know about when let's say YouTube TV requests auth and it ends up on Authenticator on the same device. In theory this would then be the Authenticator Service's Job to determine this situation and respond with a well known pattern to drive the client to engage in a local oath call directly to authenticator. In the case of 6.1.1 establishing proximity, there is a boundary (pun not intended) case where a device will shift between two different cellular providers. The IETF's Drone effort were examining the same problem as the drone flies close to an international boundary and flips back and forth to roaming and not. How to deal with this case or whether it is dependable is a question. I know that Pieter is suggesting Fido2, but the way this section is written a Consumption device may be on a weak Wifi and the authentication device has shifted to Cellular. Roy."} +{"_id":"q-en-oauth-cross-device-security-cbf645a4904edfc63e9996c52e41aa27d82fcc638c354a866d03a7519006c254","text":"See issue\nNAME NAME if you have a moment, can you take a look and approve? I want to merge all PRs and publish an update.\nConsider removing the use of etc with either concrete examples or replace with text."} +{"_id":"q-en-oauth-cross-device-security-24c14435c5b354688d72daf2aa81bed2e06aace0228a618365c18e13e8e0b2b7","text":"Added reference to the mitigation when describing .\nNAME I made one editorial change. If your OK with it, I will merge.\nOk for me, thanks! NAME"} +{"_id":"q-en-oauth-cross-device-security-4cd0cfb6c01e3fe5bffed30cc371eeafb6481228180632072f443eec5593ee70","text":"Added additional text making it clear that cross device flows should not be used for same device scenarios as suggested by Joseph Heenan (see )\nFrom Joseph Heenan: URL I may have missed it, but it may be worth being move explicit that one solution is to avoid using cross-device flows for same-device scenarios? It’s sort of obvious, but questions like “well CIBA works for both cross-device and same-device, can’t I save myself effort by only implementing CIBA and not bothering with standard redirect-based OAuth flows?” are commonly asked. 
Hi Pieter \/ Daniel \/ Filip It’s great to see this document moving forward. I may have missed it, but it may be worth being move explicit that one solution is to avoid using cross-device flows for same-device scenarios? It’s sort of obvious, but questions like “well CIBA works for both cross-device and same-device, can’t I save myself effort by only implementing CIBA and not bothering with standard redirect-based OAuth flows?” are commonly asked. Also, in this text: \"If FIDO2\/WebAuthn support is not available, Channel Initiated Backchannel Authentication (CIBA) provides an alternative, provided that the underlying devices can receive push notifications.” It might be best to use a term other than ‘push notifications’ there or otherwise rewording this, as there are alternatives. e.g. I think there’s at least one CIBA implementation out there that can use email to notify the user of an authorization request. Thanks Joseph"} +{"_id":"q-en-oauth-cross-device-security-2ed5c90c6ef421f91524ced4258da50c6470d79f6a4b70299e9cb883f569078c","text":"Added clarification that CIBA is not a standard under development, but was completed in September 2021 and added references to the completed standard.\nRefer back to issue\nThanks for that bit of historic context Brian. I will remove those references.\nThanks NAME and NAME - I also added you to the acknowledgment section."} +{"_id":"q-en-oauth-cross-device-security-eec1cafe81dd1f39e8071ae0759248e8825ce8cdb3190ba978ead29676a8913a","text":"fixes URL\nMerging new mitigation proposal\nI want to use my E-ID on a verifier website to prove my identity. If I authenticated on the correct website using a phishing resistant authentication and only after this, can I scan the QR Code to present my verifiable credentials presentation. This works good, as a lot of the website where I require a digital identity check, I already have an account and can use a phishing resistant authentication first. I could add a text for this, if you think this is a good idea. Greetings Damien\nNAME I think of this as a type of \"authenticated flow\" where the user authenticated the channel before engaging in an additional cross device flow. I think it is a reasonable mitigation, would you like to take a stab at writing up a more detailed description of the attack?\nNAME Thanks for the feedback. 
I created a PR for this: URL Greetings Damien"} +{"_id":"q-en-oauth-cross-device-security-0a8fbf40667a6b13881a2bed216cd32a7e2a2d31f9513c278dec05863eb6b0ad","text":"Added examples to illustrate cases when the authenticated flow is not an effective as a mitigation."} +{"_id":"q-en-oauth-cross-device-security-c93654482e21360290d02ccbc0ed125a72f26578635d30f78d83a6f6eeb52471","text":"Accepted proposed text to clarify that this document does not cover cases where the user is colluding with an attacker."} +{"_id":"q-en-oauth-cross-device-security-f9b65782f77afd1a6a18afbaa7aa4a8d5283605ec8fd6fe63a986e5e1e8252c1","text":"Changed name of \"User transferred\" attack to User Transferred Session Data Pattern - see Issue\nIs this a duplicate with PR ?\nResolving issue\nResolving"} +{"_id":"q-en-oauth-cross-device-security-31fa843fefc0bae13816b8272ad5c249c0798082317fb956cfd3d7bb7d60931b","text":"Changes made in response to issue\nUser Transfer Pattern -> User Transferred Session Data Pattern Client Transfer Pattern -> Backchannel Transferred Session Patterns Hybrid Pattern -> User Transferred Authorization Data Pattern\nAttack naming conventions updated as reflected in merge 0988577895daf3878e3d3de59add470c35c4cc4a"} +{"_id":"q-en-oauth-cross-device-security-2f364df542be3fb41834ad0248274f56d490f56f586200b122b2d994ec96b7b8","text":"Included feedback based on\nSection 3.6 (the section about an attacker emulating an enterprise application) is the canonical example of an illicit consent grant attack leveraging the device authorization grant. One of the key things that makes this attack particularly scary is the fact that everything the targeted user sees (outside of perhaps the non-ordinary flow involving user_code entry) indicates to them that they're signing into a trusted enterprise application, since you can use a trusted app's client ID when initiating the flow. In reality, however, they're just completing the second leg of a flow initiated by some attacker's script. While you mention \"emulating an enterprise application\", I don't think that provides the clearest example of why this is particularly scary. I think you could go a bit further and emphasize that everything the user sees in this sort of attack is: a) trusted UX and b) indicates that they're signing into a trusted application."} +{"_id":"q-en-oauth-cross-device-security-19d17aced48a382b063ff5b6f7033842c0623e643b4f4eeef7b5d2e417d8f617","text":"Corrected typo (related to )\nIn the first sentence of section 3.3 you say \"authorizations\" instead of \"authorization\""} +{"_id":"q-en-oauth-cross-device-security-01d55513da2e797d13704f41425f11dc050e414c3887765c6f188956c2b50176","text":"Introduce the Cross-Device Consent Grant label (see )\ne.g. Malicious Context Change Authorization Request or something like that.\nIdeas from Pieter's slides: Illicit Consent Grant Attack? Describes outcome, not the mechanism Attacker-in-the-Middle Attack? Describe attacker capability, but both too broad and too narrow Authorization Context Manipulation Attack? Describes the mechanism Authorization Context Manipulation Exploit Describe mechanism, hints that protocol functions as expected. What about \"Phishing for Consent\"? 
Describes the mechanim which depends to some extend on social engineering (phishing) Describes the outcome (consent) Other names shared via e-mail by George Fletcher: Zishing Azishing \"Cross-Device Authorization Exploit\" \"Social Engineering Token Theft\" \"Authorization Flow Manipulation Exploit\" FlowJack AuthJack TokenJack Context Manipulation Authorization Exploit Permitphishing, Authishing\n\"Cross-Device Consent Phishing\"?\nSome references on Consent Phishing: URL URL URL\nCDCP - Cross Device Consent Phishing - the abbreviation works well.\nTwo editorial nits, the rest looks good to me! Thank you!"} +{"_id":"q-en-oauth-cross-device-security-bcd9c548e532d6c771f4e9e0262f4f4817cc8ae4a6b1810c6706c570dfe60aca","text":"See issue\nClosing issue\nHyphenate all uses of cross device (e.g. cross-device)."} +{"_id":"q-en-oauth-cross-device-security-487f2a14bb049e032c3570d7844680998c505a6a0d6880a8cb71623ee5428948","text":"Editorial clarifications based on feedback from Aaron Parecki (see )\nA couple of editorial suggestions, but otherwise this looks fine to me!"} +{"_id":"q-en-oauth-cross-device-security-e149d3773bbde4342e928e37337fa4460d7429d229a12b6fefe5dc064a79306d","text":"Instead of referencing spray attacks, describe how they work (see )\nInstead of referring to spray attacks, describe how an attacker might use the same QR or user code by sending it to the different recipients."} +{"_id":"q-en-oauth-cross-device-security-7528b9642f1ffffb2a17c16e28222ddcdbc890b81750495265e38e415c65db91","text":"Consistently hypheante and clarify section on sender constrained tokens\nconsistently hyphenate sender-constrained tokens. Clarify limitations.\nOne typo from earlier."} +{"_id":"q-en-oauth-cross-device-security-ea3153ceac4418e0e4d73938871ef688dbfef16390d31ed57660dd6c653b4dab","text":"See\nI have trouble explaining why scanning a QR code is dangerous to initialize authentication flows to gain access because sometimes it is less dangerous. When using a device which is close proximity to the second device and forcing this through a protocol, the danger is reduced. I think introducing a new wording context would help (especially if this was used in the corresponding protocols, for example FIDO2 passkeys) Near device A device which is in close proximity to the second device and uses a protocol which forces a close proximity communication channel for the authentication steps. Cross device A device which communicates to a separate device and uses a protocol which does not force proximity to the second device. Then it would be easy to explain attacks and mitigations: Using a near device communication, the attack X is not possible etc. Not sure this is the right place to add this, but it would help I think.\nThat's and interesting idea. To me it is more about cross-device protocol properties. Some of them have strong proximity assurance properties, and some don't. Perhaps terminology like \"proximity verified cross-device flows\" or \"proximity enforced cross device flows\" vs \"proximity-less cross device flows\" works better? 
It is also somewhat protocol neutral.\nI like these two: \"proximity enforced cross device flows\" vs \"proximity-less cross device flows\"\nLet me think about how to work that terminology into the text at the right places."} +{"_id":"q-en-oauth-cross-device-security-d1cd14443ae2b03de9022724cebc0266628377b237e8e688f3fb0077bd47fe4c","text":"The \"Authorization Device\" was sometimes referred to as \"Authorizing Device\".\nNAME Looks good to me, please merge if you approve as well.\nAgreed."} +{"_id":"q-en-oauth-cross-device-security-791438815723ba3a99472c3745ea00dc5bb50a56c852974ff3fb782304e550e3","text":"Added user education as an explicit mitigation ()\nBased on discussions with UX researchers at royal Holloway University and the latest requirements from the Payment Service Directive 3, user education does provide benefits in aggregate and should e considered as an additional mitigation (either as part of the UX section, or as its own section).\nSmall additions\/changes, otherwise looks good to me."} +{"_id":"q-en-oauth-cross-device-security-357816f4dde4a0f9d6be03930157a3f0df9df1b2cb419a9e8fb339955a7d20d6","text":"Incorporating feedback based on\nFollows . (CC NAME User Experience (extension) During potentially dangerous operations (e.g., reading the QR code on the Authorizing Device), advise users to verify the trustworthiness of the source, for instance by checking that the connection is protected through TLS or by verifying that the URL really belongs to the Authorization Server. In particular, when activating the camera to read the QR code on the Authorizing Device, the activity could also display a warning message advising users to only scan QR codes displayed on a specific website. Limitations: those already reported in the mitigation\nAfter discussing with Daniel, this is very close to the recommendations on user education. We will review that addition and incorporate the recommendation to add warning messages when scanning a QR code, if it is not sufficiently clear already."} +{"_id":"q-en-oauth-cross-device-security-30d2b012edd6eabb96ce67a342d6c84e289eae8e70ac2637afdb210408f41619","text":"Added \"Request Initiation Verification\" as an additional mitigation based on issue\nFollows . (CC NAME OTP Verification In the User-Transferred Session Data Pattern and Backchannel-Transferred Session Pattern, before authorizing the attempt, the Authorizing Device could display an OTP to be inserted back in the Initiating Device. In case the QR code or the push notification was generated without the users' consent, they would not have performed any action on the Initiating Device, and therefore they would not know where to insert the OTP they received. Limitations: Attackers could deceive users into revealing the OTP through other means (e.g., via email) and use it to finalize the authorization process; the additional step could reduce the usability level of the protocol. Effect on attacks (Table 1): .\nDiscussed with Daniel and proposal is to include as an additional mitigation and position this as part of the user authentication (i.e. not a protocol extension). It makes it harder to scale attacks, as the attacker has to find a way to recover the PIN as well.\nFix typo on \"Authorization Device\""} +{"_id":"q-en-oauth-first-party-apps-c523b72e50b7942e5cc2c7bc8830e164fbf8cd782dd73828b44759ef4509771e","text":"I'd like to discuss why we don't want to recommend supporting the spec for multiple first party apps? 
My thinking has been the protocol (whether in or out of scope) allows for a challenge\/response method that the AS can control allowing it to support multiple apps at the same time. On the App side, this can be managed via an SDK that is used by all the first party apps. The AS needs to track which app versions have which capabilities to ensure the correct native experience is supported and if it's not possible to support the native experience then the AS asks the client to fall back to the web redirect experience.\nKey point is that we don't want different user experiences implemented by the different apps. Add this topic in the Security Considerations section.\nAlso, if there are multiple 1st apps, can use OIDF Native SSO for Mobile Apps to create a better experience for the user\nalready stubbed some out in the draft, needs more expansion\nWould we need a second profile to use this with OpenID Connect? How do we expect an AS that supports both OAuth and OIDC to implement this endpoint. Aaron - make sure this is possible without breaking anything. George - provide some guidance on what behaviour we want. Get AS behaviour consistent to simplify native implementations (no AS specific code needed). One idea - use examples from OIDC in a non-normative way. OmniAuth - maintains a list of providers their library supports. Aaron- 2 classes of implementors. (1) proprietary (2) those using OIDC\nCopy language from PAR:\nWhat profiling would we need to make to ensure that the new endpoint accepts any parameter the authorization end-point. The goal is to allow layering and use of other OAuth capabilities. Do we borrow from PAR?\nLanguage from PAR: George: Some parameters have meaning in a web context but don't have meaning in a native mechanism (e.g. response_mode=query). It is out of scope as to what the AS does in the case that an extension defines a parameter that is invalid in this use case."} +{"_id":"q-en-oauth-first-party-apps-f7e686a01302aa1fb5c1e41759c665446c2b717c736ae80098a099366b7b44de","text":"Included use cases for OTP, e-mail, SMS confirmation, step-up authentication and registration. Also see and"} +{"_id":"q-en-oauth-first-party-apps-68919f5b3d4799aa3b868340ee87c159d107570f18501fdfb744874c73ae6327","text":"I added three sections I feel are important: ) Credential Stuffing\/Abuse callout in security considerations. Browser-based front ends have lots of security checks inherently built in to an IdP's flow. If we move the authNZ to be backend API based then it's important to call out to implementors that they should treat that API with the same security concerns as whatever API recieves request when a login form is submitted. ) New Grant Type needed. ) DCR Registration consideration and 4.) corrected minor typo Happy to discuss. I have other updates I'll add and submit PRs\/issues for later."} +{"_id":"q-en-oauth-first-party-apps-b8e0d93d1327c9c0822ed3f0ff2f78edd6451fee567f7bd3c85cee8f516b53e3","text":"Added option for redirecting to the web. Notes: Re-using the PAR response - feels strange to have the \"request_uri\" parameter as the redirect URI - is that OK? Added and example of redirecting to the web. Not entirely sure how control gets passed back to the \"native\" app\/client or how much detail we should provide.\nI see the problem with the current text. 
In PAR, the response is a which represents the PAR request itself: The client is expected to then start an authorization request and put the value in the query string parameter: So the text in this section should instead be:"} +{"_id":"q-en-oauth-first-party-apps-dccf6d937f06fe1b47a887061d31f34bae02d80f990ca235e54f25af124a5d8d","text":"Added text to address assigned issues: URL URL URL\ntbd: \"Due to the inability to securely attest to the first-partyness of a browser based application, it is NOT RECOMMENDED to use this application in a browser-based application.\"\nThinking out loud, would the hypothetic \"SPA attestation\" be possible in principle \/ make sense at all?\nYes, Chrome has a proposal for the \"Web Integrity API\", but it has received a lot of pushback: URL Safari already shipped Private Access Tokens which are similar: URL\nThere was some discussion at OSW about enforcing client auth first, before invoking the native flow to further build trust. It may be a good idea to add additional information, perhaps in the security considerations, about ways in which the server can have confidence that the first party app is a \"real\" first party app.\nYes, I agree. There are couple of ways the server can determine trust in the mobile app. We shouldn't be prescriptive but maybe putting some options in the security considerations would help make it more clear how to do this. Maybe there is best practice guidance that could be put in a doc like for single-page-apps.\nNAME to update First-Party Applications section\nin Section 9.1 URL\nAdd new normative requirement for authorization server to verify that the application is a first party application.\nOne of the participants in the OSW workshop suggested adding a content type parameter to allow them to specify for example a JSON structure or some other data format to control the native app experience. We have been deliberately avoiding defining a new language or re-inventing HTML, but allowing implementors to return complex content types of their own definition may be worth considering if this is a common enough use case.\nMake it more clear in 5.2 that this spec expects the response to always be either the authorization code or an error URL Section 5.2.2 needs to make it clear that the authorization server has to define its own error codes URL Section 5.2.2 should explicitly allow more specific JSON content types"} +{"_id":"q-en-oauth-first-party-apps-1be15aad8d00920c475f9213b5d8aca2fef2938b4ae689aa9b67ce2de7b72164","text":"Issue Add normative requirement for device binding of auth session and require the authorization server to enforce it.\nShould we prohibit the auth_session from moving off-device to avoid resumption of the session on another device (avoid risks of session theft or session take-over).\nauth_session is expected to be device bound\nDiscussion Add statement that authorization sessions are device bounds. Make it Normative in spec.\nResolved in"} +{"_id":"q-en-oauth-first-party-apps-8a97300133ebad7c9a04681f912d8e2557206b0974ccebd0031a92fb36bc4dfd","text":"Removed the need for the new ash claim by inverting the way in which binding works by putting responsibility for maintaining the binding on the authorization server (see issue ).\nI do love \"ash\" as a claim name but (as DPoP co-author and a DE on the JWT Claims registry) I'd strongly suggest that the binding be done the other way around. 
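To make the suggested direction concrete, here is a minimal Python sketch (not taken from any draft text) of how an authorization server could associate a newly issued auth_session with the RFC 7638 thumbprint of the public key carried in the client's DPoP proof, and re-check that association whenever the auth_session is presented again. The in-memory store, the helper names, and the omission of full RFC 9449 proof validation (signature, htm/htu, iat, jti replay) are all simplifications for illustration only.

```python
import base64, hashlib, json, secrets
import jwt  # PyJWT, used here only to read the DPoP proof header

# JWK members that feed an RFC 7638 thumbprint, per key type
THUMBPRINT_MEMBERS = {"EC": ("crv", "kty", "x", "y"),
                      "RSA": ("e", "kty", "n"),
                      "OKP": ("crv", "kty", "x")}

def jwk_thumbprint(jwk_dict: dict) -> str:
    """base64url(SHA-256) thumbprint of the public key from a DPoP proof."""
    members = THUMBPRINT_MEMBERS[jwk_dict["kty"]]
    canonical = json.dumps({m: jwk_dict[m] for m in members},
                           separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# hypothetical AS-side store: auth_session value -> bound key thumbprint
auth_session_bindings: dict[str, str] = {}

def issue_auth_session(dpop_proof: str) -> str:
    """On the first challenge response, bind the new auth_session to the
    key the client proved possession of (full proof validation omitted)."""
    header = jwt.get_unverified_header(dpop_proof)
    auth_session = secrets.token_urlsafe(32)
    auth_session_bindings[auth_session] = jwk_thumbprint(header["jwk"])
    return auth_session

def check_auth_session_binding(auth_session: str, dpop_proof: str) -> bool:
    """On every later request carrying auth_session, require the same key."""
    header = jwt.get_unverified_header(dpop_proof)
    expected = auth_session_bindings.get(auth_session)
    return expected is not None and expected == jwk_thumbprint(header["jwk"])
```

Because the association lives entirely inside the authorization server, no additional claim has to be carried in the auth_session value or in the DPoP proof itself, which is the point made in the thread above.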
Basically just say that the AS binds the authsession value to the the public key from the DPoP proof and checks the binding whenever the client sends the authsession. Binding that way is simpler (from the spec perspective anyway) and provides more protections. It's also how most of the binding in DPoP works (\"ath\" is kind of an outlier and provides somewhat questionable value). URL URL\nSuggestion: also require a unique DPoP key per session to avoid risks around session binding.\nCompleted in\nLooks good"} +{"_id":"q-en-oauth-first-party-apps-45458f471df539421c761a324da153923eb7ea0aa5db89a87b47ac2abbcd6d68","text":"Proposed update based on feedback in issue Open question for reviewers (and NAME whether we should allow both or only a single method, and if a single methods should we opt for the least common denominator (dpopjkt) or an authorization challenge endpoint specific one?\nCommented over in URL that I think the single method of the DPoP proof header is sufficient and preferred.\nI would tend to agree that we should stick with only the header option for DPoP.\nI'm good with that. Will create a PR to reflect this.\nNAME and NAME I made the changes. please review and feel free to merge if ready.\nBecause the client interacts directly with the Authorization Challenge Endpoint the authorization code binding can and probably should be done via the DPoP proof rather than the dpop_jkt parameter. This is similar to what but here you could probably get away with just using the DPoP proof.\nThat makes sense, we should do it the same way that's recommended for PAR.\nNAME I was reading the rationale for requiring both the dpopjkt parameter and adding the DPoP header to the PAR request: URL It looks like the goal was to keep client libraries simple so that they don't need to distinguish between PAR or front channel flows. If we remove dpopjkt, do we make it harder for implementors of client libraries? Given that this is a new end-point (Authorization Challenge Endpoint) and a native client, we may not need to keep support for both and we can be more prescriptive, but I wonder if we might cause confusion for implementors as we have established dpopjkt as the least common denominator (the reason for allowing it to be used with PAR end points). Perhaps we should have similar guidance to DPoP on supporting both dpopjkt and allowing the DPoP Header to be added to the Authorization Challenge Response? I guess the question is really if we settle dpop_jkt as a the least common denominator in all DPoP impementations?\nI think the in PAR was more about keeping the commonalty of authz request parameters and their treatment on both sides. Not so much about being a least common denominator. I think this Authorization Challenge Endpoint is different enough that just using the DPoP proof is the way to go. 
Also, doing DPoP key binding of the needs the DPoP proof - so a client doing DPoP in this context really just should be sending the DPoP proof header.\nClosing this as it was resolved in\nlooks okay to me (pending conflicts that need to be resolved)"} +{"_id":"q-en-oauth-first-party-apps-494be2920398a3152b6fabdff3226e37d32da733bb9a33cb130a2700ef1293ed","text":"Addressing Issue I did not add anything explicit to the Security Considerations section and instead mentioned the potential security exposure in the text on User Experience\nFirst item to add is guidance on limited input devices.\nNAME I recall we discussed this, but unable to recall the details of the concern on what guidance we had in mind.\nNotes from editor's call: \"Not recommended to use this with challenges that require significant user experience challenges, for example entering a password on a TV screen with an on-screen keyboard.\" Security consideration around the differences between exposure of credentials on a TV screen in a room vs the risk of a cross-device flow We don't want to discourage the possibility of using this flow with a passkey on a TV remote control \"If you're on a limited input device, and the authentication method the user must pass requires significant user interaction, e.g. entering a password, then you shouldn't use this method; use the device flow instead.\"\nCompleted in"} +{"_id":"q-en-oauth-first-party-apps-1f62ca1152d6e94ac948caa631d95d950ee7d92ae602fce4a250f345f5771ed2","text":"Clarifying the text to make it clear that there are two ways in which phishing attacks may be facilitated by this specification. (see issue ) cc NAME\nMaybe I misunderstand this section, but it sounds like only one way to do phishing is described, though the section starts by saying \"there are two ways.\"\nNAME Thank you for the new text."} +{"_id":"q-en-oauth-first-party-apps-310e6454cf41b1eb28c4738f5e15a24bab2a3db38b23d97f8dbb1b0da68302b6","text":"Sec. 6.1 (in particular the example), what is the meaning of a response with an access token, a refresh token as well as an authsession, what is the client expected to do? How should it use the authsession?\nThe auth_session should be cached by the client in case it is needed for the stepu-up authentication flow as described in the Appendix (section A.7)."} +{"_id":"q-en-oauth-first-party-apps-8373859d7e37a602f97e0ba313d72d3d48c6cd1d479f0b782578ae99a0439ba7","text":"Clarify that the binding is not done through cryptographic means but rather through association by the authorization server since the binding is only between the client and the authorizations server. See issue cc NAME , NAME\nSince RFC 9449 does not specify how \"additional\" parameters can be bound, please say explicitly that (presumably) this is a claim within the DPoP proof JWT named \"auth_session\".\nIt is not a new claim in the DPoP proof, because that would imply the binding happens from the client, not the AS. Brian's suggestion was to require the AS to do the binding, in which case it's internal and not part of the spec.\nSo can you please fine tune the language in 9.6.1. At least this reader understands the word \"binding\" when used in this context as cryptographic binding, and this is obviously not what you want. Maybe use \"associate\" instead.\nFixed by ."} +{"_id":"q-en-oauth-identity-chaining-5d49d37349b7be8c7ba46384ce189468ddc270a4bc10dcadd9af047cf0657feb","text":"Added use cases. 
We may want to refine these, or move them into a non-normative section later."} +{"_id":"q-en-oauth-identity-chaining-7d36942f322dc34aa768ca9e98a6983a22062bc7c4a19dc1d9c24654ff44792a","text":"parameter removed from token exchange request section as this is already specified in RFC8693 Resource\/audience section merged into request section. added more author details"} +{"_id":"q-en-oauth-identity-chaining-c6506dfca9cdbd40a1f64b8d9231569c2699e574ce73802208c3a1cf785694fa","text":"Added a section in the security considerations to address topic raised by\nComment from Kelly: A.2.(D) I think AS in domain A at least SHOULD authenticate to AS in domain B. We have it as a MUST.\nPerhaps this can be done in the security considerations."} +{"_id":"q-en-oauth-identity-chaining-54b652da61579085ec0b470ad2fe60f3ee206398ba7a7acac7f0447520b1262c","text":"Includes updates based on\nOriginal: URL Kelly's editorial changes: identity-chaining-draft-URL"} +{"_id":"q-en-oauth-identity-chaining-ce8daaad154ef9c642d0e311a187cb98429c0070356ad32a6509daaa46a401ae","text":"Removes a duplicated section from the introduction\nFeedback from Brian Campbell: Section 2 repeats itself with the text \"A client in trust domain A that needs to access a resource server in trust domain B requests an authorization grant from the authorization server for trust domain A via a token exchange. The client in trust domain A presents the received grant as an assertion to the authorization server in domain B in order to obtain an access token for the protected resource in domain B. The client in domain A may be a resource server, or it may be the authorization server itself.\" occurring twice."} +{"_id":"q-en-oauth-identity-chaining-c2a6a8d60bcfd07ff5d2d2d78b6aef1d60ac90fabcbf852e5e9f381de43c574d","text":"Replaces examples and add response examples\nFeedback from Brian Campbell Seeing curl commands in the examples is sort of unexpected. Typically these kinds of drafts try to show the \"on the wire\" HTTP message -something like this [URL] for example or this [URL] or [URL] etc"} +{"_id":"q-en-oauth-identity-chaining-e366f2f0a29da035e3adde81f393f53896f34da9d3c6c83fac7eff73115bef3a","text":"Fixes RFC7523 reference (was RFC7521)\nFeedback from Brian Campbell The reference in the first sentence of [URL] would be 7523 rather than 7521 (I got rather confused reading that paragraph :-) )."} +{"_id":"q-en-oauth-identity-chaining-2919949d0be604fd0acfa8609f94722035c719eea348a36c1a7e0c1ac479ebc7","text":"See\nFeedback from Brian Campbell This requirement for the audience [URL] is already a requirement of [URL] (3rd bullet) and also [URL] (also 3rd bullet). But the way it's listed here makes it sound like an additional thing. It might be worthwhile to use the bullet here to be more specific about the aud value (it's been a bit of an interop pain point w\/ JWT client auth fwiw) and say that it has to be the token endpoint or AS issuer identifier."} +{"_id":"q-en-oauth-identity-chaining-66b034612c421035fb0329e1cc5d720571166a16c3fa0ddd38686b4f117882e8","text":"and minor other little fixes to fix issues , , an editor's draft preview of these changes here: URL\nsays this: However, if the authorization grant is a JWT (which it always should be, but see for details), then defines the grant type of the request to be: The other implication of this is that the only way to distinguish the type of JWT assertion is in the JWT itself, which can be done using the field in the JWT header.\nPer is an \"identifier [...] 
for the representation of the issued security token\" and I think it's admittedly a bit odd but perfectly reasonable to use the URN of the grant type that the token is intended for as the value conveying the representation that token.\nWith that said however, I do kinda think that'd it be better to limit to JWT authz grant only (a la ) and not tie the issuedtokentype to the grant type.\nMy point was more about the grant type of the request as described by RFC7523. access token request as defined in of the OAuth Assertion Framework ] with the following specific parameter values and encodings. So if the request contains a JWT, then the grant type value has to be . Which means according to Section 2.4.3 of this spec the issuedtokentype \"SHOULD\" be , which is silly. So my proposal is to remove the \"and SHOULD be passed into the assertion request\" part, so that issuedtokentype can be a value that makes sense, and use as the grant type. As I said in the original comment, the implication is then that the JWT header itself would need to include something that indicates the type of JWT it is, otherwise all token exchange requests look the same.\nI might argue that it's not actually silly but I won't b\/c it'll be moot as PR addresses this with the removal of the \"issuedtokentype parameter in the response indicates the type and SHOULD be passed into the assertion request\" URL\nissuedtokentype (a required parameter of a token exchange response URL) is missing from the example response at URL\nNAME can you prepare a PR with an updated example?\nexample would be fixed by PR\nFeedback from Brian Campbell Conceptually having \"flexibility for authorization servers to change format and content\" sounds nice but I can't help but wonder if that flexibility is actually useful and if this document wouldn't be better served and more straightforward using just JWT authz grants from 7523 (rather than being abstract about using 7521 and the very brief mention of SAML w\/out a 7522 ref).\nI agree that limiting the authorization grant format to JWT is useful. In fact you've almost already done that by adding the validation to the processing rules here: URL\nAs discussed in the meeting with the editors and Aaron, we will make the use of JWT explicit.\nPR would limit the authorization grant format to RFC7523 JWT"} +{"_id":"q-en-oauth-identity-chaining-cee4a73c9b06503329a7f449c082d98c024cccbfb758eba1661b878b088ac32d","text":"when \"authorization grant\" refers to the actual JWT being passed around, it is clearer to call it \"JWT authorization grant\". RFC6749 uses \"authorization grant\" to refer to the general concept of any grant"} +{"_id":"q-en-oauth-sd-jwt-vc-d57a7138480c9ba3c5c36210adbf2eab9ba09a4ddd6791ceb47ce980e3afcce6","text":"Issue\nCan you give an example where this is relevant? With excluding private claims we make sure all claims are uniquely identifiable across all parties of the three-party-model.\nclaims we make sure all claims are uniquely identifiable across all parties of the three-party-model. 
I dont believe this PR discourages public claims it simply reflects that there are bunch of scenarios where VC ecosystems what to interoperate on claims that are perhaps not general enough to register as public claims.\nPlease merge"} +{"_id":"q-en-oauth-sd-jwt-vc-78168e9c13037d2308b2ca8d0459550f84bd4af0f78319a1902188779b906d7f","text":"Status Provider should be either 4th party or issuer (not verifier).\nMerging this since multiple approvals.\nlgtm, thank you!"} +{"_id":"q-en-oauth-sd-jwt-vc-aba01995baf8c865f7e59d53da52ea8318742b0d6932f9f3593a2e5af9b07c3b","text":"Merging this since this is an improvement and PR might not be merged as is.\nNAME can you pls resolve merge conflicts, then we can merge this PR.\nMerging PR since a lot of approvals.\nlgtm, since we have merged PR and might not merge PR , we should merge this."} +{"_id":"q-en-oauth-sd-jwt-vc-b36616d353f52136c966cdc8565cfc184b6d871e0f0ceda83ab52294eed60543","text":"alternative to based on NAME 's comment \"I can't help but wonder if it'd make sense to have the spec be only about SD-JWT VCs and treat the plain JWT VC as just a special case of an SD-JWT with no selectively disclosable claims.\" no changes to the basic structure. simple clarifications what to do when there are no selectively disclosable claims ie no claim and no Disclosures. Might need some more details but this is how it might look like. (cc NAME NAME linked to rendered preview\nGenerally, I really like this approach! Some examples could be useful for key binding with JWT (no disclosures)\nI'd expect the typ for both to have +jwt if that were really the case. There is another draft up for multiple suffixes. URL Is it simple enough to support now?\nshould this be titled ? I think this should be changed as well as the title before merging to make sure we are calling out that this covering SD-JWTs and JWTs\nYes. Will make a change once we have a decision whether PR or this one is the way to go…\nWhat is the position of this specification? From my understanding it defines a VC with JWT claims it defines a profile for the JWT claims it defines a profile for securing the VC There's an open discussion between the sd-jwt and JWS protection. Relevant discussion has been started URL - sd-jwt introduces an encoding that becomes incompatible with JWS in the limit where selective disclosure is not used. In other words, it introduces a first-order phase transition. Let's see how this topic resolves. IMO, the data model and securing mechanisms should support both sd-jwt and JWS.\nI do like this approach, but I am a bit concerned that even in the 'no disclosures' case, we'll have the trailing tilde in the issuance and presentation (when no KB is present). A JWT is not a special case of an SD-JWT for this reason.\nI agree with the concern.\nIMHO keeping the trailing tilde is preferable as it keeps the model consistent.\nfrom what I am sensing, there seems to be a lot of preference towards the approach in this PR over , partially because it is simpler and gives one credential format and not two (consolidation, yay!), with one major discussion point being a ~ after a JWT VC when it is issued. a bit of the context... this trailing ~ is the result of the decision being taken in SD-JWT spec to better align the issuance and presentation formats, (URL does it, and it has not been merged yet). One option is to keep issuance and presentation formats separate in SD-JWT (as-is) so that there is no ~ when (SD-)JWT VC is issued. 
but that is not that clean from a pure SD-JWT processing perspective is I think the opinion of its authors (NAME NAME so the question is really what to do with a trailing ~ when there is no SD in SD-JWT based VC.\nshould be good to merge once NAME takes a look and approves\nI'm ok with this direction. I personally don't like the trailing tilde but we can fix that in another PR. I'll open an issue in the repository.\nMerging this PR since a lot of approvals and no objections.\napproving but will create an issue to address trailing tilde.I approve this approach over as, the specification is much cleaner I expect implementations to be much cleaner I don't see confusion with trailing tilde, because typ is not jwt but sd-jwt most interop profiles will require SD support anywayI prefer URL since I find its description of what we're achieving more natural \/ less surprising. But I can live with this description as well, provided the distracting \"with JSON Claims\" language is removed."} +{"_id":"q-en-oauth-sd-jwt-vc-982143571eed81b1664a61e8223fcfab7cdffdf9cea768266aa81d2f072cc322","text":"Replace \"Three-Party-Model\" with \"Issuer-Holder-Verifier Model\"\nThere are lots of things that could be described as a three-party model (including classic OAuth where this work is aspiring for adoption). The use of \"Three-Party-Model\" in this document strikes me as somewhat presumptuous and potentially misleading or confusing. Could we instead use \"Issuer-Holder-Verifier Model\"?\nI agree, since there must be always a trusted third party in the model, like a trust anchor or its intermediate\nI typically refer to this model as \"decentralized identity\" or \"verifiable credentials applications\/solutions\/ecosystems\", I would be fine with \"digital credentials applications\/solutions\/ecosystems\".\nI'm suggesting only to replace the 5 or 6 usages of \"Three-Party-Model\" with \"Issuer-Holder-Verifier Model\".\nThat would work for me and I understand that Three-Party-Model is confusing since other architectures also involve 3 parties.\nI'm ok with using \"Issuer-Holder-Verifier Model\".\nIt seems that this issue is ready for a PR.\nis that PR\nthanks, lgtm"} +{"_id":"q-en-oauth-sd-jwt-vc-995f790ee45e7a81ca8b53d82135a40fef572355d461e1cc538147b01b7d5132","text":"As NAME has aptly pointed out, 'the \"with JSON payloads\" wording is clunky and redundant.'\nlgtmI see no reason to have the \"with JSON payloads\" in the title"} +{"_id":"q-en-oauth-sd-jwt-vc-b4c41c4dc0365c87269fccdae681ee87daa124e7acefb6b5b48cfe613333b3db","text":"I worry about collisions and confusion with (and with ) - those of us that work with this stuff will likely understand the difference, but there is probably a better descriptor we could use than for this reason. Maybe specifying \"schema\" or \"schemaName\" or \"sn\" (for nice short form) would be more useful here\nNAME there is a conversation on renaming type claim in Issue URL which should be a separate PR if we agree on an alternative claim name..\nlgtm after we applied NAME suggestion"} +{"_id":"q-en-oauth-sd-jwt-vc-942c9a7d14ca23e24a8e70b54c1b5a13be51cddeaeaeb1de9a1a59116337795f","text":"I removed the subsection numbers where appropriate to be less dependent on changes in the SD-JWT spec. I also removed references to the issuance\/presentation format, as this was unified in SD-JWT. 
This also meant that the section on the data formats for issuance and presentation in our spec can now be just one section, as there are no big differences any longer.\nThere are a number of references to specific section numbers in SD-JWT that are now out of sync and point to the wrong sections or non-existent sections. And some lingering use of terms like \"Combined Format for XXXXX\" that are no longer used in SD-JWT.\nAnd there may well be other changes coming to the section layout URL so it might be prudent to avoid specific section number references (for the time being anyway).\nlgtm"} +{"_id":"q-en-oauth-sd-jwt-vc-67c2426dbeb0e2a06591eb2201bbe31c84a0c60bcb043e3265953d2c0dab2c16","text":"We currently have two section describing verification by Verifiers. This commit consolidates both sections. Importantly, this commit fixes the description of the key binding check to ensure that removing a KB JWT from the presentation does not allow circumventing KB.\nthank you! great improvement."} +{"_id":"q-en-oauth-sd-jwt-vc-1b916beaf4a71a757410c3b517e73abbfb050c344b187db38e7b1f5cbf12ef6a","text":"URL\nI found several minor issues here: The spec has the following headlines: 4.2.2 Claims Comparing this to RFC7519 JWT these should be: JOSE Header JWT Claims Set Also, in 4.2.2.1 type claim is never explained. This should be added to the Terminology. Its also missing a link to the Annex A.1.\nAdditionally, headline 4.2.2.1 would better fit saying \"New JWT Claims\"?"} +{"_id":"q-en-oauth-sd-jwt-vc-185c70cf218375904faf6b5717ce44d4f2926ee4da819610a13130b7538220a4","text":"Add reference to Security Considerations in SD-JWT\nE.g.: SSRF with Issuer URI"} +{"_id":"q-en-oauth-sd-jwt-vc-4e72ad165b3423639a65f7e4d9b951e8da75ff324ece9d793fd3100df049b45e","text":"Added privacy considerations section from SD-JWT which provides a good foundation. ,\nCurrently, is not one of the selectively disclosable claims. This might be ok but it might have some privacy implications if a VC has multiple types. Let's assume the following: A VC with the types: driver's-license, age-over-18. If the holder discloses only the age, then based on the current specification, the verifier would also know that the age was disclosed from a DL which is probably unnecessary context info since all the verifier needs is to trust the issuer of the VC. Note, the issuer itself might also leak some contextual info as well, so the issue might not need to be mitigated or it might not be entirely possible anyways. Should we make selectively disclosable per array entry? Is this even supported by SD-JWT?\nClosing this issue since SD-JWT doesn't support selectively disclosing array elements.\nWe might want to add something to privacy considerations about that.\nFYI: There is a discussion about Selective Disclosure for array elements. URL\nsuggest to close as the type is a simple string now\nNAME raised an issue with me that \"if the type is always disclosed, what is the point of selective disclosure because the verifier will know what claims are in the credential\". I think we should consider making type selectively disclosable.\nI agree with Peter and I'm in favor of sd for type\nWhat is the value of a VC without type information? Or let me put it the other way around. 
If the Verifier is supposed to be able to process a VC without knowing the type (which also means it can request the VC presentation without specifying the type), why do we need the type at all?\nI was wondering about that point as well - what is revealed information worth without information about credential type (and corresponding trust framework).\nIt becomes valuable if all the verifier wants to know is a specific claim, e.g., , and trusts an issuer which could issue other credential types as well. The credential type can also make a difference, e.g., if somebody wants to know whether somebody has a driver's license but is not interested in the details.\nMy current assumption is that is the primary selection criteria for credentials in a presentation request (as identifier for a pre-defined bundle of claims associated with certain policies for enrollment and issuance). Based on that assumption, the Verifier always knows the type. So what would be the scenario to request a VC without knowing its type?\nHaving the type disclosable does not mean the user never shares it. If a verifier requests the type then the user can disclose it, no? The issues with both the and the is that they reveal what claims the user has, which is more than what is strictly required for a service. The privacy requirements from the Parliament, Council, and the Commission are strict. However, I cannot think of a scenario to justify type being disclosable. Arguments for: As NAME mentions, the verifier may require only a particular claim, e.g., age over 18, and should reasonable accept these from any trusted issuer without having to know what exact attestation type it came from. Some issuers may have multiple types that all contain the same attribute. And there are strong demands that the user should reveal only what is required for the service and absolutely nothing else (to the point where I am actively having to fight against people who want to put ZKP everywhere). Not revealing claim values is great; also hiding what claims you have is more privacy preserving. Arguments against: Beside from those already mentioned, you could question the privacy benefits. As long as the individual Issuer's signature is included, it is likely not that hard to guess what the particular attestation may contain, particularly if the Issuer only issues few attestations. In conclusion, given how there are no technical challenges to making type disclosable, and how the Verifier may just request a type and refuse to process a presented claim without type disclosed, I would argue for that the type MAY be disclosed.\nThe field is always disclosed, hence not revealing the only offers privacy if the same issuer with the same key offers multiple VCs that share the same claim name that is being requested. It seems like an unrealistic scenario that this is a privacy benefit. It seems to me the usecase would be merely a fillout help, where the verifier does not care about the origin of claims and handles those as self-attested\nLet's create a PR and see how the spec will change and continue the discussion there.\nCurrently, the spec gives the following function: Given that, I don't think it can be selectively disclosed. 
We should close this issue.\nWe should add something to the privacy considerations section that explains the risk of leaking more context info from the type in certain use cases."} +{"_id":"q-en-oauth-sd-jwt-vc-1b886c6568290e452f5ee35f905d3779674b5249f62dff420af90f744d0c9913","text":"NAME do you know how to fix the build errors?\nlet's fix references later and create a new issue for that"} +{"_id":"q-en-oauth-sd-jwt-vc-7242163b5777863e398d32c32c248824f9fd69dd9e910cd44c5aeeee14e62bfa","text":"Type identifiers MUST be collision-resistant names as defined in RFC7515 Changed the example to a URI"} +{"_id":"q-en-oauth-sd-jwt-vc-6f7ead04ffeffc4809cf2ff957662547e6019141427914166b6090d0dbc7296d","text":"This PR renames the claim to , as per the discussion in\nI oppose against this change. I suggest to use 'vct' instead.\nI'm good with . For the record, I think it's important to move away from , because URL uses the claim name, but with a different syntax. Keeping the naming conflict will inevitably lead to grief.\nNaming is hard. Regarding dct: isn't it obvious for an internet standard that we refer to digital credentials?\nNeither dct nor vct are intuitive for me, so I am pretty unsure. But between dct and vct, I would prefer dct. A term Verifiable credentials got enough feedback that it has strong too strong of an association to W3C VC-DATA-MODEL. And Digital credentials seems to be a term that is acceptable to generalize.\nWhy is \"Verifiable Credential Type\" (vct) not intuitive to you? This would match the current definition of . Isn't this what this claim is about? That is why I'm actually in favor of over , .\nI meant that abbreviations are not intuitive to me. And \"type\" is a pretty JSON-LD specific construct AFAIK so I was hoping we could have a deeper discussion on whether it should be called \"claimset\" or something. If we are to move away from \"type\" claim name. on dct and vct, as I explained, I prefer something that is not associated to W3C VCDM and digital credentials seems to be pretty well understood.\nIf the color of the bikeshed is still open for debate, how about just for credential type?\nI like 3-letter-names more than 2-letter-names for some reason. Also, is more accurate since the spec is about SD-JWT VCs => Verifiable Credentials. I'd be still in favor of for that reason.\nis fine for me. Updated the PR accordingly.\nis fine for me too. was just an off-hand idea\nMerging this PR since a lot of approvals. We will have a larger conversation on what is inferred by the value in a separate issue.\nI suggest to use 'vct'.LGTM. We should address NAME concern in a separate issue.This is an improvement."} +{"_id":"q-en-oauth-sd-jwt-vc-7971d832f254e65610d1a767b2ea9be67c72bf879c8606ffe291255c12764215","text":"This PR does the following: make verification that the public key of the Issuer-signed JWT belongs to the Issuer mandatory introduce methods to perform this verification based on JWT Issuer Metadata, X.509 Certificates and DIDs Potentially . Partially Out-of-scope: Renaming and moving JWT Issuer Metadata Section\nAddressed all suggestions, please review again NAME\nI had to rename the section. I think now it is more precise.\nShow possible options like JWT Issuer Metadata, being a DID URL etc.\nWe introduced a new section that talks about different key discovery methods during the verification process in PR"} +{"_id":"q-en-oauth-sd-jwt-vc-db5c6458a47f466ac2acb5f4dcdefd0e744f93b4fcf5d38ca1fc130dac709a26","text":"This fixes Issue . 
Note that this removed the \"exp\" claim which I think we don't need to mention explicitly.\nSD-JWT defines the KB-JWT header, SD-JWT-VC only refers to the claims. Is it necessary to reference to the SD-JWT spec for this?\nWe do refer to the Key Binding JWT: URL Is there anything else missing?\nClosing this issue since got merged.\nLGTM"} +{"_id":"q-en-oauth-sd-jwt-vc-e312be7ed752f79b4b9425d30cb0a2b1e90f2141082a2ad681554f77a2a1984c","text":"This PR introduces the following changes: - - Fixed SAN Extension rule Removed all options for X509 except\nNAME pls review again.\nFixed merge conflicts and reconciled with PR . Nothing new should have been introduced than what has been approved already. If NAME or NAME can approve again, then we can merge imo.\nThe section should be renamed to \"Issuer-signed JWT verification key validation\" since the section is about checking that the value corresponds to the key that signed the JWT. The process of obtaining it via different means is just the first step. We should also fix the language in the \"Verification and Processing\" section to say: instead of\nFixed by\nThe rules to obtain the verification key from X.509 should be changed to use the from the SAN extension instead of the (which does not exist). Furthermore, since can potentially start with a https scheme, the JWT Issuer Metadata rule should only be enforced if no x5* JWT header was set.\nWe also need to add JWT header.\nFixed by"} +{"_id":"q-en-oauth-sd-jwt-vc-c35d8c4f750d9616419aaf2958ca8d42cc09761ddbe661f95c94b9a8f25d424e","text":"This PR introduces the following changes: renamed JWT Issuer Metadata to JWT VC Issuer Metadata which - moved JWT VC Issuer Metadata to the end of the specification for editorial reasons\n3 approvals, resolved merge conflicts, merging\nEvery entity in the OAuth\/OpenID is potentially a JWT issuer, then I found the name of this endpoint misleading. Considering that this brand new specification defines a profile for SD JWT verifiable credentials, I'd propose to change the pathname to .well-known\/vc-jwt-issuer\nThe idea is that the JWT Issuer could be used with any JWT, not necessarily VCs only.\nNAME what do you think?\nNAME is there a reason the JWT Issuer should not be used with something else than VCs?\nThere are well known endpoints like openid-configuration or openid-federation that publish entities metadata, including jwks Since this specs Is related to VC based on sd-jwt, this specs should not be in conflict to pre existing Oauth2\/OpenID ecosystem SD-JWT-VC should define only the resources needed for the solution It defines\nI'm ok with the proposed change. I anyway assume we will add VC issuance specific metadata in the future (it is not limited to key distribution).\nI agree NAME Key attestation alone Is not enough, this endpoint deserves a proper identity with a specialized protocol metadata Also the verifiable attestations related to the entity may be available in it\nTo me this sounds reasonable. I will update the spec accordingly.\n(As a record, I repeat my question I made elsewhere in the past.) Why is it necessary to introduce a new path, especially if the expected content is just like below (excerpt from \"\")? Isn't the JWS header parameter (RFC 7515 ) like below sufficient?\n(For the record, I repeat my answer I gave on Slack and add more context) The jku header is not sufficient for the intended purpose. The purpose of the well-know endpoint is to establish a stable issuer identifier independent of the jwks location. 
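A sketch of the key resolution flow described here, assuming the well-known segment `jwt-vc-issuer` (the exact segment name was still being discussed in this thread) and assuming the fetched metadata echoes the issuer identifier in an `issuer` member — which is what lets the `iss` value act as a stable identifier independent of where the JWKS actually lives.

```python
from urllib.parse import urlsplit, urlunsplit

def issuer_metadata_url(iss: str) -> str:
    # Insert the well-known segment after the authority part of the issuer
    # identifier, keeping any path components that follow it.
    parts = urlsplit(iss)
    return urlunsplit((parts.scheme, parts.netloc,
                       "/.well-known/jwt-vc-issuer" + parts.path, "", ""))

def metadata_is_consistent(iss: str, metadata: dict) -> bool:
    # The fetched document must repeat the issuer identifier, so keys cannot be
    # attributed to a different issuer; keys may come inline or via jwks_uri.
    return metadata.get("issuer") == iss and ("jwks" in metadata or "jwks_uri" in metadata)

print(issuer_metadata_url("https://issuer.example.com/tenant/1234"))
# -> https://issuer.example.com/.well-known/jwt-vc-issuer/tenant/1234
```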
This issuer identifier can be used in other structures to establish trust, e.g. it can be added to trusted lists (verifier end or trusted 3rd party). Additionally, it fits nicely with the other well-known locations, like openid-credential-issuer and oauth-authorization-server. So the issuer can distribute the role specific metadata in different location residing under the same issuer identifier. And not to forget, this well-known location also could work well with openid-federation. Moreover, any additional VC issuer metadata we will come across can be added to this endpoint. BTW: I think a dedicated issue would be the better place for this discussion.\nI am actually in favor of . This mechanism is meant to be general enough to be used with not just a 'verifiable credential'. and I don't think anyone would think to use with ID Tokens and Access Tokens, UserInfo responses , etc. that are JWTs.\nI understand your point NAME below my thoughts Why defining jwt-issuer for general purpose in a spec that's specialized for VC? Requirement: definition of a specs in IETF for general purpose JWT Issuer metadata (enabling client metadata, that today is only available in openid-federation...) I agree that jwt-vc-issuer semantically collides with credential-issuer, and we should not do collisions why we can't use .well-known\/openid-federation? Federation entity discovery is required for the entities that builds trust chain according to the federation specs, nothing prevent to publish all the metadata of all thw \"JWT-role\" in a single metadata, that should not be signed if request (content-type) asks it in plain text I suggest to use to consolidate a unique way to publish all the metadata types and roles of an entity. coming back to the name change proposal. Why we use the term since the metadata contains also general capabilities not related to the issuance? in a more general purpose way, wouldn't it be a ? then the question: if that entity has more functionalities and roles, being at the same time a RP and a AS, how the metadata should be organized? We've resolved this issue in openid-federation, with the below schema, where a RP can be also a OP, AS, Client, credential-issuer and so on: URL\nI'm a little embarrassed, but to be honest, I still don't understand the reasoning. \"A stable issuer identifier\" is represented by the claim, not by the URL of the well-known endpoint, isn't it? A possible reason for necessity of a well-known path I guess is \"to make sure that the host identified by the issuer identifier actually exists and is accessible, by forcing JWT verifiers to access a path under the domain.\" An example with concrete values would be \"to make sure that the host identified by the issuer identifier actually exists and is accessible, by forcing JWT verifiers to access a path () under the domain ().\" It might be beneficial to define as the ultimate fallback general-purpose option for locating the public key required to verify the JWT's signature (not as the sole rule for every VC to follow), especially when it is challenging to reach a consensus on a single method within a limited timeframe among the following. starting with Others However, maybe I'm still missing something, sorry.\nThe well-known endpoint is indeed also a stable identifier. But it depends on a certain well-known location, i.e. different well-known locations are different identifiers. I'm looking for a stable identifier for the issuer that works across .well-known locations and is not bound to a certain key. 
That's pretty simple with well-known locations as the iss value can be extended by several different .well-known locations, but the identifier is still the same, right? So for example, the issuer could use the jwt-issuer mechanism share keys and (at the same time) be part of an OpenID Federation as shown in the following: \/.well-known\/jwt-issuer \/.well-known\/openid-federation The same issuer's URL could also at the same time used in an external data structure at the verifier or at a trusted 3rd party. Effect: I can establish trust in a certain issuer\/entity\/server that works across different roles\/perspectives. I hope that explains better.\nthis would allow an entity to use different keys to sign ID Tokens, VCs, access tokens, etc.\nmy question is how to specify different capabilities for different metadata protocols? this is an example in oidc federation URL here we're saying, this is the wallet provider metadata, where multiple metadata\/roles cohexists in a single .well-known how jwt-issuer endpoint achieves this?\nIf the JWS header indicates that the JWT is a VC (e.g. ), the following paths also can work. (defined in OID4VCI) (proposed by this issue) I'd like to understand why the generic name is chosen. The generic name is subject to interpretation where authorization servers, relying parties and any JWT-issuing systems should provide to expose their metadata (at least or ) as the last resort. What if one server behaves as both an authorization server and a credential issuer? Should the authorization server publish the same content at both and ? Should intermediate authorities and trust anchors of OIDC Federation also provide ? Such a conflict is what I'm (and ordinary server implementers will be) concerned about. The latest draft of \"\" defines and jwt-issuer metadata. If so, there is no reason to prevent adding the same metadata to OID4VCI as well. Basically, I believe the metadata should be added to OID4VCI regardless of the discussion about jwt-issuer.\nPotentially fixed by\nNAME raises an important question that we need to address, at least by giving guidance. I don't have a good answer to that at the moment. Besides that, I don't see a need to rename to .\nNAME correct me if I'm wrong. I understand you prefer over , right? And whether the key data will be added to the is a different topic (that's how I read your last statement). Please conform.\nWe will need more discussions on that topic. Potentially at IETF 117. This means, we won't be able to resolve this issue for our individual draft 03.\nWe had a small discussion on this and I created a new issue so we can separate between the 2 discussions overlapping here (name of the endpoint and intended usage).\nRegarding the path, I have been losing confidence in my own opinion. As an implementer who has implemented numerous OAuth\/OIDC specifications and been leading development of Authlete that holds the most OpenID certifications in the world, my intuition told me \"The path should be avoided due to possible conflicts with other specifications.\" However, I haven't received as much support for my opinion as I expected. For now, I will implement my OID4VCI implementation using the path. Please let me withdraw my opinion.\nI agree the position of NAME while in the italian solution we're implementing openid-credential-issuer as metadata within the .well-known\/openid-federation endpoint, according to OpenID Federation 1.0. 
This doesn't prevent to include also a jwt-issuer metadata type but there's the decision by our side to wait and the same time give evidence of our implementation to help the consolidation of the vc-jwt-issuer specs\nNAME An off-topic question out of curiosity. If an authorization server \/ OpenID provider and a credential issuer are hosted on the same server, the payload of the entity configuration published at of the server contains the following, right?\nHey NAME I'm sure that you meant The answer Is yes, even if we have used a snake case like here URL It's still an initial draft without an official release, several editorials and implementation compromises needs to be resolved but the most of the work Is there, feelfree to open issue for discussion here URL\nTo make progress with this. Are folks comfortable with renaming \"JWT Issuer Metadata\"? And if so, would folks be fine with \"SD-JWT VC Issuer Metadata\" or \"VC Issuer Metadata\"? I like \"VC Issuer Metadata\" better because it is more future looking. What do folks think? cc NAME NAME NAME NAME NAME After we agreed on a new name I'd ask NAME to update PR .\nAgree that \"jwt-issuer\" is too general and am in favor of a renaming."} +{"_id":"q-en-oauth-sd-jwt-vc-0350711b80ab4361f5da0df9acfd785edecc35250028497f717b01b4d2f4d2fc","text":"This PR introduces the following changes: - Made all public key validation rules conditional Rejecting the SD-JWT VC if public key validation cannot be done\nThis harms interoperability, since some implementations will use DIDs and others will use . A more interoperable approach would be to drop and to always use DIDs. Those who want to use domain names and web servers can then still use . DIDs have always been designed as an abstraction layer for different types of resolvable identifiers. To build yet another layer of optionality, where DIDs are one option out of others, does not make much sense. Now you will have to deal with different metadata formats (JWT Issuer Metadata, and DID document). Similar arguments apply to the vs discussion. But I already know the response.. \"Simple is a feature\", and \"DIDs have more boilerplate that is not needed\", right? :)\nNAME Let's have the discussion in . As long as we haven't resolved the discussion I keep the DO NOT MERGE label on it.\nLet me try to update the PR.\nNAME I made all rules now conditional from a verifier perspective. In this way a verifier is compliant if they don't support all rules but the SD-JWT VC would fail validation. Note, it is impossible for a verifier to support all rules that may be defined by ecosystems anyways. Are you ok with the updated language?\nAs a DID maximalist, I would personally prefer DIDs to be the only mechanism that is used whenever public keys are needed, instead of allowing more static public key mechanisms like \"jwt-issuer\" or \"cnf\". Unlike \"jwt-issuer\", DIDs support identifiers that are persistent, decentralized, and cryptographically verifiable. Unlike \"cnf\", DIDs support key rotation and other updates. And as I said above, with DIDs there would be only one syntax and one resolution mechanism and one metadata format, instead of multiple different ones that individual implementations may or may not support. But I also understand that for some communities, \"keeping it simple\" is more important than the above properties, so yes I'm fine with the language, thanks for the effort to update it!\nWhile I understand your points, I think some of them are not true. 
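A sketch of the conditional dispatch this PR describes, under the assumption that every mechanism is optional for a verifier and that an SD-JWT VC is rejected when none of the branches the verifier supports applies; the branch labels are illustrative only.

```python
def select_key_validation(iss: str, jose_header: dict) -> str:
    # Each branch is OPTIONAL for a verifier to support; credentials relying on
    # an unsupported branch simply fail validation at this verifier.
    if "x5c" in jose_header:
        return "x509"              # check the chain and that a SAN URI matches iss
    if iss.startswith("did:"):
        return "did"               # only for DID methods this verifier supports
    if iss.startswith("https://"):
        return "issuer-metadata"   # fetch keys from the issuer's well-known metadata
    raise ValueError("no supported key validation mechanism for this credential")

print(select_key_validation("https://issuer.example.com", {"alg": "ES256"}))
```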
DIDs may be decentralized, may support key rotations and may have other features. There are enough examples of DID methods that do not have those properties. Making DIDs mandatory would limit optionality. But also with DIDs the optionality in my view shifts just shifts a layer further down as there are many DID methods. Indeed allowing DIDs expands optionality by 200+ did methods, so this could indeed be a counterargument. \"Cnf\" isn't actually far away from DIDs, as the content can vary, there is a registry at IANA for supported methods, with different properties. In my view the cleanest approach for the keybinding would be to register a new cnf type \"did\" in this registry. But that's off topic in this PR. I'm fine to keep some optionality in the specification at the beginning and then see what is actually being used\nTrue. :) But with \"JWT Issuer Metadata\" discovered from domain names you will never have the kind of decentralization, persistence, and cryptographic verifiability that DIDs can offer. Sounds interesting! But I agree off-topic. You can still restrict the DID methods you want to use in a specific application such as eIDAS2. For example, if you want to support 1. domain names, 2. plain public keys, and 3. EBSI, then: With the current approach in this specification: For 1. you need to support JWT Issuer Metadata (= a new format introduced by this spec). For 2. you need to support \"cnf\" and JWK (= a format that has been around for a long time). For 3. you need to support DID documents (= a format that has been around for a few years). Compare that to a DID-only approach: For 1. you need to support did:web (= DID document format) For 2. you need to support did:jwk (= DID document format) For 3. you need to support did:ebsi (= DID document format) You see what I mean? Yes with a DID-only approach you would still have a layer of optionality with regard to DID methods, but you'd only have that ONE layer of optionality instead of TWO layers of optionality, and you'd have a harmonized identifiers syntax and harmonized metadata format. But I understand there are communities that simply don't want to use DIDs, so I think this current PR is a good middle ground.\nThe current text is basically a normative requirement to support all DID resolution. Which is too much, even to the extent we want to allow for or support some DID usage. We need to revisit how this is phrased in the document and even how much (if any) is said. created from discussion in PR : B: \"IMHO DIDs can be accounted\/allowed for via the \"Separate specifications or ecosystem regulations MAY define rules complementing the rules defined above\" criteria below and shouldn't receive 1st order treatment like this in the spec.\" O: \"We added DIDs to make the spec useful also for the credential market that uses DIDs. For those people it is required to demonstrate how that can be done. I'm wondering how other folks are thinking about dropping DIDs from getting 1st order treatment. You are right, one can create another draft that just explains how to use DIDs with this spec.\" Originally posted by NAME in URL\nIt was definitely not the intend to support all DID methods and all kind of DID resolutions. 
We need to make DID support optional for verifiers and if they don't have support they won't be able to verify DID-based SD-JWT VCs which is fine.\nI made a proposal here"} +{"_id":"q-en-oauth-sd-jwt-vc-38a45ff0774388f4c689b3bcf8bc1a4b447e97024662f22ef47da09cbb06b52c","text":"also fix a typo or two for Issue\nURL has \"Combined Format for Presentation\", which isn't used in SD-JWT anymore.\nI assigned you NAME since you are one of the SD-JWT editor's.\nGotta start somewhere... hopefully I can handle this one. PR has proposed changes."} +{"_id":"q-en-oauth-sd-jwt-vc-db689cb0d9e58ed18e857c5f08e9a6dff601936ea4d44b7aa6d05226c4602d6d","text":"also add associated IANA registration request and fix up formatting etc in other IANA bits see it live URL\nThere are 2 parts of the draft discussing the path to issuer metadata: and I do believe the latter is the currently intended \/.well-known mechanism, whereas the first section was probably not updated and would result in to resolve metadata. I guess this was an oversight in ?\nThanks for catching the inconsistency NAME and I believe you are correct that the latter is currently the intended mechanism for .well-known.\nWe might also want to use something other than \"user\" in the example path. It's not wrong exactly but maybe unnecessarily misleading or confusing.\nendeavors to fix this (and a few other little things)\nlgtm"} +{"_id":"q-en-oauth-sd-jwt-vc-cd5609948b7dd545973742745aa1e13752aa72d360d2ed5c2299693eecbd9a93","text":"Added terms and definitions section; made some changes data formats based on terms and definitions section -\nI think we need to define what the following \"things\" are. What is the Verifiable Credential based on SD-JWTs? I believe it is the JWT that contains the s. It is not any of the combined formats. This means that the Verifiable Credential MAY NOT contain any disclosable claims, e.g., . Would you agree NAME NAME Daniel? The VC based on SD-JWT will have the media type . What is the Verifiable Presentation based on SD-JWTs? It makes sense to define that since the HB JWT has some specific requirements. It also makes sense because protocol developers will need to refer to one term when they exchange presentations or verifiable presentations between issuer holder verifier. Can we call the Combined Format for Issuance AND Presentation the Presentation based on SD-JWTs, and if there is HB, then it is a Verifiable Presentation? The VP based SD-JWT will have the media type altough I'm not sure if this makes sense. Wouldn't it better to have ? Any thoughts on that?\nAlso it is important to note that for OIDC4VP we use Verifiable Presentations in vptoken and we use Verifiable Credentials in OIDC4VCI in the credential response. It probably has the implication that for OIDC4VCI, we will put the VC based on SD-JWT into the credential response, or we define a format identifier for something else. Keep in mind we need to send the disclosures to the client as well. In that regards is the SD-JWT itself the SD-JWT-VC without disclosures, or is the combined format including disclosures the SD-JWT-VC? Or we follow the approach above and we return a combined presentation (no HB) in the credential response and define a credential identifier for that. It probably also means that we would put the VP based on SD-JWT into the OIDC4VP vptoken.\nWe (IDunion) put the combined presentation (+hb jwt) in the vp_token. What else would you expect?\nThat makes sense, so the VP based on SD-JWT is the combined format for presentation including HB JWT. 
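A small sketch of handling the combined format for presentation referred to above: the tilde-separated concatenation of the Issuer-signed JWT, the selected disclosures, and — for a verifiable presentation in the sense used here — a trailing HB/KB-JWT. A trailing empty element means no binding JWT is attached.

```python
def split_presentation(compact: str):
    parts = compact.split("~")
    issuer_signed_jwt = parts[0]
    kb_jwt = parts[-1] or None          # empty last element -> no HB/KB-JWT
    disclosures = parts[1:-1]
    return issuer_signed_jwt, disclosures, kb_jwt

example = "eyJh.issuer.sig~WyJzYWx0IiwgImdpdmVuX25hbWUiLCAiRXJpa2EiXQ~eyJh.kb.sig"
jwt_part, disclosures, kb = split_presentation(example)
print(len(disclosures), "disclosure(s); binding JWT present:", kb is not None)
```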
Then, there is the question what would the OIDC4VCI credential response return? And what would be a format identifier? I know that this doesn't have to be answered in this spec but it helps with answering the terminology question.\nSuggest we not overcomplicate things and say: what is returned in a VP Token in OID4VP is a combined Format for presentation with HB-JWT. what is returned in Credential Response in Credential Response in VCI is combined format for issuance. credential format identifier is 'vc+sd-jwt' for both. Terminology-wise, SD-JWT in the IETF spec is defined as an issuer signed JWS without disclosures. For this profile, the payload of SD-JWT, can be mapped back to a VCDM2.0\nThis is potentially fixed by PR"} +{"_id":"q-en-oauth-sd-jwt-vc-e8432aedd0235d5efe268cb49b2f8efaa0b4bf61c1345fa76b63b0c9dd23f801","text":"This PR: adds relationship to other docs section - preview here: URL\nOne little nit on the text but otherwise looks okay (after the couple iterations by NAME and NAME thanks for those).\nNAME pls approve if you are happy with the latest language, then we will merge the PR.\nData model in this document resembles to the ID Token data model as defined in OIDC (with the addition of cnf). On the other hand, the data model is trying to follow the W3C VC data model. What is the benefit of this data model over the W3C VC data model (except that the claim names are different and iss can be a URI, and not an object and that JSON-LD is not supported by the data model, which basically excludes some significant use cases such as Europass that are leveraging JSON-LD for internationalization, for example)?\nIt is simple, leverages all predefined JWT claims (including eKYC), and can be used by people familiar with the JWT model to also implement VCs - with SD-JWT also with selective disclosure capabilities. So it clearly gives an easy migration path from the federated to the issuer-holder-verifier model. We tested the idea with some projects and communities and it seems a lot of implementers like that. Regarding Europass and other JSON-LD-based credential schemes. There could be another spec defining how SD-JWT could be used with JSON-LD.\neKCY defines its own data model same as for ID Token. eKYC can be easily mapped to VCs. The only real difference is the name of the metadata claims (iss vs issuer, iat vs validFrom, etc.) But SD-JWT can be used with W3C VCs (bot as SD-JWT is and W3C VCs as they are).\neKYC defines an extension that can be used in any JWTish environment. Doesn't work with W3C VCs due to the split between subject related data (credentialSubject) and metadata (evidence). Ask Mark Haine. not sure what you mean. can you please explain.\nWe did some mapping in the past (it was some time ago, and I don't remember how far we got): Evidence in VCs is optional and the model and use of evidence for eKYC doesn't make a lot of sense since eKYC already defines the full data model. eKYC can be nicely modelled within the credential subject. There were discussions around re-using the model of eKYC for expressing different levels of verifications. Essentially it was something like: verifiable-attestation-URL is the VC data model e-kyc-verifiedclaims-2022-URL - is the JSON schema of the eKYC\nJust had a chat with Alen about typing and want to share the conclusion. We think typing happens on the following levels: Payload of a (HTTPS) response - here, a processor needs to know how to process the object (blob). So on this level, the media types or should be used. 
JOSE header - on that level, the determines how the payload of the JWS shall be processed. fits on that level. It governs the format (SD-JWT, which implies JSON) and what claims MUST be present (e.g. and ). Those claims are related to security of and trust in the credential. They are not related to the further domain specific structure of the credential. In SD-JWT VC, we use the claim to determine the structure\/schema beyond the basic structure determined by the JOSE header. A university credential, for example, will have different rules for the schema of the payload than a drivers license. NAME NAME NAME NAME NAME NAME NAME Please let us know, whether we are on the same page. If so, it might be beneficial to add some text along those lines to the draft.\non point 1, I think that is ruled by the underlying protocol - OID4VP\/OID4VP define those ( and ) I do not think sd-jwt vc draft should say anything about that. on points 2 and 3, i think i agree\nto answer the original question raised in the issue, the way VCDM 2.0 is going, it requires data model to be JSON-LD and processed according to JSON-LD rules, so things like will not be conformant. the value of VCDM IMO is defining the concept of a credential in a three party model: issued credential must contain the issuer, the subject, claims about the subject, metadata about the issuance of that credential (iat, exp, etc.) and optionally a cryptographic key. that's it. How to map it to the JWT claims is not the expertise of w3c vc wg nor is well-defined in the documents produced by that group. VCDM itself is silent about how to sign\/process the I would absolutely not equate ID Token data model to a VC data model. VC data model needs one step to get to the ID Token usability which is defining how the concepts defined in VCDM are mapped to the JWT claims and how to sign\/process that JWT claimset, which is what this sd-jwt vc draft attempts to do.\nNAME I think we need to at least scope the media type definition in the spec as those definitions are related to the only, not request payload.\nwhy it can't be left to the protocol? why it can't be ? just for example: URL\nNAME the way VCDM 2.0 is going, it requires data model to be JSON-LD and processed according to JSON-LD rules, so things like NAME is in the payload, we put it to be conformant, but are not processing it will not be conformant. Please check the discussion in URL since the conclusion is different. NAME What do you understand under LD processing? (LD processing with VCs is used only by: data integrity proofs, selective disclosure that's been just defined, internationalisation, identity matching, some advanced querying) Since JWS is not using any of that, JWS protection mechanism is not affected by the -LD part. However, it enable use cases, that want to use the -LD (e.g., ELM v3) for internationalisation etc., to use it. When VCs are secured with JWS, LD processing is not performed. It MAY be performed, if use case requires it, in the post-processing (after the validation) or rendering phase. the value of VCDM IMO is defining the concept of a credential in a three party model: issued credential must contain the issuer, the subject, claims about the subject, metadata about the issuance of that credential (iat, exp, etc.) and optionally a cryptographic key. that's it. How to map it to the JWT claims is not the expertise of w3c vc wg nor is well-defined in the documents produced by that group. That's perfectly fine. W3C VCDM is focusing on the VCDM data model. 
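A short illustration of the two JWT-level typing layers described earlier in this thread — the `typ` JOSE header governing how the payload is processed, and the `vct` claim identifying the domain-specific structure. The concrete values are examples only; the media type itself was still being settled at this point.

```python
protected_header = {
    "alg": "ES256",
    "typ": "vc+sd-jwt",   # format level: how to process the JWS payload
    "kid": "issuer-key-1",
}
payload = {
    "iss": "https://university.example/issuers/565049",
    "vct": "https://credentials.example/degree",  # domain level: structure/schema of the claims
    "iat": 1683000000,
    # ... further, possibly selectively disclosable, claims
}
print(protected_header["typ"], "->", payload["vct"])
```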
Note that the current VCDM v2.0, without any JWT claim, is a valid JWT. VCDM itself is silent about how to sign\/process the data model I think this is a good approach of the specification, since it defines a data model. Securing mechanism should be defined by specifications like JAdES which defines a profile for JSON-based AdES.\nReason is simple and it is following the basic architecture\/design principles. Signature (e.g., JWS) defines the serialisation and with it the resulting type. JWS specification cannot and should not know anything about the protocol that will use it. When a protocol defines a profile or a set of supported elements, the protocol needs to ensure that the media type that's defined consistent with the elements it selected.\nNAME Perhaps I did not explain well. This is the media type definition from the spec. It does not describe the context where the media type shall be used. So people might think this is a media type that should be used as content type of an HTTP response. I think we agree this is not the case. All I'm suggesting is to document the intended use in the JOSE header (and as format identifier in credential exchange protocols). Would you agree?\nI don't agree that it is a good idea to take VCDM claims and use all of those to secure JWTs, because there are existing claims from JWT claims registry and existing implementations depend on those\nNAME thank you for your feedback and I fully understand your concern. We need to put things into a context: most of those cases operate within closed systems where everything is well defined - data models, verification rules, etc. Within this context, I fully agree with you. Securing JSON-LD can be done today by JAdES (pure JWS, no JWT). I believe it is worth to read and reflect on it URL From my experience, the NAME is really processed (in most cases) outside of the verification scope\/process. There are signatures that rely on the context, however, that's not relevant for JWS and this discussion. (In a brute-force scenario context can be embedded\/inline and problem solved - size is the price to pay - not in the scope of this discussion) Since JWT specifies that all claims are optional, libraries should be able to process JWT tokens without any JWT claims (I know how it sounds). From experience, JWS\/JWT libraries verify the signature, other verifications are performed manually. People could also argue that VCs are used. This direction won't lead to a solution. ID Token and eKYC define rules for the JWT claims when they are secured with JWS. I believe there are many other use cases that define profiles, rules, etc. In an open multi-use case system, there are additional challenges and we had a good discussion with Torsten on the topic where we identified several points that should be considered. It is worth reflecting on the designs of JWS JAdES linked data proofs VCDM v2 on their data model and serialisation. Back to my question: What is the real benefit of this data model over the W3C VC data model by looking at the situation objectively? (I'm also trying to find an answer to the question) With Torsten we had a very fruitful discussion. We need to reflect on several elements. Have a nice weekend!\nNAME , I missed one point. Can you please elaborate on \"take VCDM to secure JWT\"? If my understanding is correct (sometimes is, sometimes isn't), it is JWS the one secures the payload (and it can be any JSON). JWT claims define contextual information: who, validity, about who, for whom. 
We should reflect on sigTst from JAdES. I'm personally not proposing to use VCDM to protect a JWT. VCDM and JWT are both protected by JWS.\nAs an example, iss and cnf are registered claim names for JWTs that are used to secure the VC. Existing RFCs are using those. Therefore it doesn't make sense to overwrite those with the W3C data model, in my opinion.\nThank you NAME I understand your view. If VC claims were registered, what would be the benefit of the data model over VCDM? Since I’m involved in projects that use VCDM (in JSON and JSON-LD) and also JWT, I would really like to understand the objective benefits of the data model.\nTo resolve this issue, we will add a section on relationship to other documents and include a chapter on \"VCDM\".\nThis issue is ready for a PR.\nApproved with a minor proposed improvement."} +{"_id":"q-en-oauth-sd-jwt-vc-e66070d7e28e7d9d83e5890142d5117311c8bc48cf6080e965360c2e1b4c2f9c","text":"Clarify the optionality of the cnf claim (to\nplease see issue and note that the text in this PR has \"This [cnf] claim MUST be present when cryptographic Key Binding is to be supported.\" Also note this PR isn't for issue and it was probably a mistake on my part to have suggested text there that touches two issues. But they were both on cnf so I did what I did [no pun intended]. This PR only attempts to clarify the optionality of the cnf claim for issue . There doesn't yet seem to be consensus on .\nURL\nNAME you probably made the comment while I was updating my original comment based on re-reading the PR. I still think it should remain \"required when cryptographic binding is used\"\nwhat does this mean? you believe sd-jwt-vc should have more than one way to do cryptographic holder binding? or that many of the credentials will not use cryptographic holder binding?\nNo, I just mean SD-JWT VCs can be key bound, but they don't need to be. Therefore it is an optional feature.\nI see, the difference here is that if key binding shall be used, then cnf must be used, while the other formulation leaves more space?\nIf that's the intent, sounds more appropriate to me. Just saying would lead to confusion that it is optional even when cryptographic binding is used.\nI also like the proposed version more. The proposed language addresses which is based on feedback from developers who were confused by the current language that uses REQUIRED if ... . So I think that OPTIONAL seems to solve that issue. IMO, it is also appropriate since we have the additional requirement in the paragraph that says MUST be present if cryptographic binding is required.\nSorry, I don't understand why more space is needed here? The less optionality in key binding, the more interop. With the current PR, it would be ok to use a top-level sub claim and put a DID there (instead of using URL). I think it is harmful for the interop that SD-JWT VC is hoped to bring.\nI don't see that this optionality is allowed by the following?\nI see your point. The proposed text opens a door for key binding without using the cnf claim. However, the new text is more intuitive and we could add this constraint as an additional sentence or in the section on key binding.\nOkay, it is already present. Sorry I didn't recheck the changes. Lgtm then\nI'm struggling to understand the disagreement\/argument here. But I'm trying... The text for cnf in the PR has: which seems to me to clearly say that cnf is the required way to do key binding, if doing key binding. 
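A sketch of the reading discussed here: `cnf` is omitted entirely for credentials issued without cryptographic key binding, and when it is present, its `jwk` member is the only key a KB-JWT may be verified under. Key coordinates are placeholders.

```python
credential_payload = {
    "iss": "https://issuer.example.com",
    "vct": "https://credentials.example/identity_credential",
    "cnf": {  # omit this whole member when no key binding is intended
        "jwk": {"kty": "EC", "crv": "P-256", "x": "<x-coordinate>", "y": "<y-coordinate>"}
    },
}

def key_binding_key(payload: dict) -> dict:
    cnf = payload.get("cnf")
    if cnf is None:
        raise ValueError("credential was issued without key binding material")
    return cnf["jwk"]   # the KB-JWT signature must verify under this key

print(key_binding_key(credential_payload)["crv"])
```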
Especially the \"This claim MUST be present when cryptographic Key Binding is to be supported\" sentence. Would changing that sentence to \"This claim is REQUIRED when cryptographic Key Binding is to be supported\" help? Maybe I'm missing the point of contention but that and the current PR text and the current text in the draft all say the same thing - that cnf is the one and only way to do key binding in SD-JWT VC but b\/c key binding isn't itself required, the claim also isn't required.\nI thought that was better phrasing anyway so made the change w\/ 9edae9d4aa1dc2b647c77029b7d415d384cded49\nSeems great to me.\nYou're suggested pedantic wording is indeed better than what I'd had. Thanks. I've accepted it w\/ c6b138036e035e6df2a2f983aa4cf8b183d39fc2\nNot all VCs require key binding. For those, cnf should be made optional.\nI've read the text in URL that says \"REQUIRED when Cryptographic Key Binding is to be supported.\" as being optional or not required when key binding isn't needed. Perhaps we need to discuss and\/or make things more clear?\nDo you think we should also explain the OPTIONAL case or replace the REQUIRED with something else? CONDITIONAL is not a reserved word unfortunately.\nI would propose to clarify this. From a quick reading this is not obviously optional and it does not match the other claims that only state REQUIRED\/OPTIONAL without any conditions. As cryptographic binding is optional, I think this line should begin with \"OPTIONAL. [...]\"\nURL attempts to do just that\nthank you!lgtm, see my comment in the discussion.I like this version. CNF is more optional to me and having required as the first keyword seems less rightI'm generally fine with the change, but I think I want to be pedantic here and resolve the contradiction."} +{"_id":"q-en-oauth-sd-jwt-vc-5d637c50074209d455782d26a560b31fbd5f1c3ac7e3e0bcb5bd84f1ead9fb87","text":"Add general SD-JWT VC Type Metadata framework based on NAME work: URL See preview here: URL\nYep, good point. I18N is definitely needed although I'm not too sure about name\/description since those are intended for developers only, not for end-users. I'll create a ticket to consider I18N, and also discuss whether this is needed for dev-only fields. I18N is definitely something we should add to metadata but needed some PR to start with first.\nWe have 3 approvals and no new comments coming in after 2 weeks. I propose to merge this PR NAME NAME . PR was created 1 month ago.\nWe should define more clearly what a credential type (identified by the claim) actually means. The current description is a bit handwavy and that may be a sign that we have not yet defined the concept well. I think that there is a set of metadata usually associated with each SD-JWT VC type, for example Display information for the whole credential A schema for the JSON structure (may or may not be defined in JSON schema) Information about the claims, including Display information Type information Status information (self-attested vs. verified etc.) Maybe: What kind of binding is supported\/usually used for this credential type Maybe: What issuers are allowed to issue this credential type Such metadata would help Wallets (e.g., for displaying the credential to the user) and Verifiers (e.g., for determining the status of claims). It would also help developers of Verifiers, as they would get structured information about the credential they can expect. The need for such metadata has been discussed in Issue and similar approaches have shown up in . 
The metadata may or may not be embedded into the credential itself, or provided externally. Assuming we allow for registered values (e.g., ), this metadata could go into the registry. It could also be distributed manually. But it can also be made discoverable, e.g., by providing it at a .well-known URI derived from the URL in . Here is a completely made-up example of what this metadata could look like: Important: This mechanism will not be format-agnostic, but defined only for SD-JWT VC for now. There are overlaps with mechanisms defined in HAIP, for example, and part of this issue here is to figure out what data should go where and to avoid duplications by getting the concept right. I think that the element defined for the credential issuer metadata in could be replaced by just a identifier (the metadata could be discovered from there, or retrieved from the registry).\nInteresting point, I have the following approach to satisfy the requirement of having the credential metadata along with a credential: using openid federation a trust chain is needed to attests the cryptographic keys of the issuer to verify the credential. Within the trust chain the openidcredentialissuer metadata is provided. in the openidcredentialissuer metadata there are the display information for all the credential types. Then, using this approach, the credential doesn't need any metadata. Fed trust chain can be made available in a static form, within the credential jws\/MSO headers Let's see other points: I would not use , because its value seems more close to the concept of issuance type. I would say set(, ). Also the name may semantically collide with all about status lists It would be up to the trust model and the credential metadata or trust marks or attestations. We cannot allow an italian PID provider to issue a Polish PID, then the domestic italian credential identifier must be linked with one or more italian credential issuers. This information must be made available according to the infrastructure of trust, where the issued credentials are not involved yet. During a discovery phase the user\/holder knows which are the credential issuers, which credential types they offers to users. Generally this is made available in the openidcredential_metadata and must be granted by an accreditation body. That's why I mentioned trusted lists or federation trust marks or X.509 custom extentions (for strong stomachs) or federation metadata policy. Without openid federation I would have to find a solution and the credential metadata sounds resonable but I use federation and this requirement is already satisfied. Another point of interest is that the credential metadata should be issued by credential issuer and bound to the credential. I would change this sentence we have in the current text: because the metadata should be issued by credential issuers and be tamper proof, otherwise a bogus implementation would use a valid credential with a fake presentation of the claims that may swaps some value and change the interpretation of the data to a verifier. Let's imagine that Giuseppe Marco would be swapped to Marco Giuseppe and pass through a supervised verifier as Marco, while the credential is still valid and verified\nI like the idea to define schema and display information in the context of the credential definition (in contrast to issuance protocol) as I believe this information should be consistent across issuers and needs to be available for verifier developers. 
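A deliberately made-up sketch (all member names are illustrative, not a normative format) of the kind of type metadata document listed above — display information, a schema reference protected by a hash, and per-claim information — keyed by the `vct` value.

```python
type_metadata = {
    "vct": "https://credentials.example/identity_credential",
    "name": "Identity Credential",                 # developer-facing name
    "description": "Base identity attributes",
    "display": [
        {"lang": "en-US", "name": "Identity Credential", "background_color": "#12107c"}
    ],
    "schema_uri": "https://credentials.example/identity_credential/schema.json",
    "schema_uri#integrity": "sha256-<digest over the referenced schema document>",
    "claims": [
        {"path": ["given_name"],
         "display": [{"lang": "en-US", "label": "Given name"}],
         "verification": "verified"},              # e.g. verified vs. self-attested
    ],
}
print(type_metadata["vct"])
```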
I also believe some information could be useful at the level of individual credentials for visual customization in the wallet.\nwe may add: type of binding of the vc ,e.g. key-binding, claim-based-binding, none vct could be either URL linking to the resource directly or an identifier that links to e.g. an (ISO) specification to get the credential metdata IANA registry is probably not a good option versioning of the metadata protection against change e.g. put a hash of metadata in the vc It might be worth to consider reducing the credential format specific Issuer metadata defined in OpenID4VCI and define that in a credential format specific credential metadata resource: However, there will remain issuer-specific things e.g. cryptographic support that should stay in the Issuer Metadata. These would need to be defined in SD-JWT-VC.\nUsing the VC type metadata, one could also include an ecosystem-specific parameter that contains a that allows you to map to any data model one want. It would also allow automatic fetching and executing the transformation algorithm quite easily.\nI prepared a first draft for the SD-JWT VC metadata. I wrote this as a separate draft for working on it, but of course we can consider integrating this into SD-JWT VC — whatever works best. I'm looking forward to your feedback. URL\nNAME I like the document and think it's a great start. I think there's a lot more to do around JSON Schema, similar to the work we've put into . I'm happy to help here. Similarly, the display reminds me of , which could be a standalone thing. Perhaps the types document should be transformed to a 'hub' document?\nFor all these \"human readable strings\", please make sure to have an i18n story, and ideally reference PRECIS, or DisplayStrings, etc ."} +{"_id":"q-en-oauth-sd-jwt-vc-75c41679124d494dd441c2ba0b3d2d44dad08a77323c68ae4782b474bff99fee","text":"Updated terminology and other sections to clarify VCs and presentations can be repudiable or non-repudiable Minor typo fixed See preview here: URL\n3 approvals, merging\nThe specification currently uses terms like public key validation and potentially other things when it comes to the verification of the KB-JWT. We should make sure there is no language that would prohibit the use of MAC authentication for KB-JWT and Issuer-signed JWT.\nCurrently blocked by SD-JWT: URL URL\nNAME A few options we discussed on the editor's call today (cc NAME NAME ): not do anything in SD-JWT to enable this since SD-JWT just prohibits \"symmetric algorithms\". We could wait for an JOSE value specification to appear and to be registered in the IANA registry which states that the algorithm is not symmetric. One downside of this approach is that standard JOSE libs won't work today since the value is not supported. Note that technically one could implement this using standard if the key was agreed\/derived using an ECDH\/ConcatKDF schema. remove the normative statement completely from SD-JWT that prohibits \"symmetric algorithms\". change the normative statement to a SHOULD NOT use \"symmetric algorithms\" and explain in the SD-JWT security - considerations section where symmetric values are acceptable and when not. keep the MUST NOT statement in SD-JWT but add an \"... unless\" clause that explains where this could be useful, e.g., for hybrid schemes but this needs to be confirmed if this process is indeed hybrid and therefore the use case is covered.\nI think is the cleanest approach, but may take more time for implementations, we will try to get some experience there. 
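A non-normative sketch (assuming the third-party `cryptography` package) of the idea raised in this thread: the verifier supplies an ephemeral EC public key in its presentation request, the holder runs ECDH with its own private key and a KDF, and the result is a key usable with the existing HS256 machinery for the KB-JWT. Both sides can derive the same key, which is exactly what makes this form of key binding repudiable.

```python
import hashlib, hmac
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

holder_key = ec.generate_private_key(ec.SECP256R1())     # corresponds to the cnf key
verifier_key = ec.generate_private_key(ec.SECP256R1())   # sent with the presentation request

shared = holder_key.exchange(ec.ECDH(), verifier_key.public_key())
mac_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"kb-jwt hs256").derive(shared)

signing_input = b"<base64url(header)>.<base64url(payload)>"
tag = hmac.new(mac_key, signing_input, hashlib.sha256).digest()
# The verifier derives the same mac_key from its private key and the holder's
# public cnf key, recomputes the tag, and compares.
print(len(tag))
```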
also doesn't sound bad to me.\nI think I still lean towards preferring something like # 2 - just remove the explicit prohibition of symmetric algorithms from SD-JWT and not say much, if anything else about it in that spec. Let the layer above SD-JWT\/JOSE (probably the presentation protocol) define how an HMAC key is agreed upon and use existing JWS HMAC algs and implementations. I.e., the verifier sends a public key in it's presentation request, which the generator uses in a key agreement with their private key to get a key that is then KDFd to produce an appropriate key for existing\/established JWS HS256[384|512] algs. That layer is going to have to define much of that anyway so there's not much gain from pushing an ECDH-based MAC alg into JOSE\/JWS (also note that defining and registering a new JOSE alg is far from a trivial exercise). And the prohibition of symmetric algorithms in SD-JWT makes sense from a particular perspective but is arguably just unnecessarily restrictive at the building block layer.\nI would agree that removing the restriction is okay, as SD-JWT is an improvement of JWT and JWTs don't have this restriction. Do you think that ECDH-ES is an anti-pattern? It has been in JWA from the beginning and you could have the same arguments there?\nI wouldn't take it that far. My comments\/arguments here are really just in the context of the current state of JWS specs and existing library support.\nNAME NAME if option 2 is our favorite options. Should the SD-JWT VC spec then say something about that? IMO, it would be good to say something in the security considerations at least.\nI agree in general that since SD-JWT is a building block, being too restrictive there regarding symmetric algorithms is probably not necessary.\nI don't object removing the restriction, I checked RFC7519 and there is no similar statement.\nDefinitely not opposed to saying something. But it might be challenging to say something that's actually meaningful\/helpful.\nURL removes the explicit prohibition on MAC from SD-JWT (which will be in the -08 draft-ietf-oauth-selective-disclosure-jwt)\nThen, we need a PR that cleans up the language in this specification, so MAC is not going to be prohibited."} +{"_id":"q-en-oauth-sd-jwt-vc-a807362a975a015e79dcf6e24a58cdda11b4225f1b35d77e2047d3c1d24759ad","text":"This PR includes the following: [x] Add schema type metadata [x] Add full schema type metadata examples [x] Describe base document for schema validation [x] Minor editorial fixes [x] Moved type metadata retrieval section for readability purposes [x] ~IANA considerations for , and ~ See preview here: URL Fixes URL\nNAME NAME Do you think it makes sense to describe the base document for schema type metadata validation? For example, a verifier receiving an SD-JWT VC with Disclosures won't be able to validate the JSON schema against it before transforming the entire SD-JWT VC with Disclosures to expanded JSON document first.\nNAME NAME Should we restrict schema to specific JSON schema versions?\nI'm honestly not familiar with the intricacies of JSON schema. Is\/are there a stable standard version(s) that can be referenced from a prospective RFC? The current content of the PR seems to have some normative statements but no reference. I'd think that's kind of a prerequisite to discussing version restrictions\/requirements. A few minutes of looking around and I found this https:\/\/json-URL which at least suggests it's a bit messy. 
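A sketch (third-party `jsonschema` package, illustrative schema) of the validation step discussed in this PR; note that the instance being validated is the processed payload after the presented disclosures have been resolved back into plain claims, not the raw `_sd`-digest form.

```python
import jsonschema

schema = {
    "$schema": "https://json-schema.org/draft/2020-12/schema",
    "type": "object",
    "required": ["iss", "vct", "given_name"],
    "properties": {
        "iss": {"type": "string"},
        "vct": {"type": "string"},
        "given_name": {"type": "string"},
    },
}

processed_payload = {   # SD-JWT VC after disclosures have been applied
    "iss": "https://issuer.example.com",
    "vct": "https://credentials.example/identity_credential",
    "given_name": "Erika",
}
jsonschema.validate(processed_payload, schema)   # raises ValidationError on mismatch
```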
Any idea if\/how other actual standards documents utilize JSON schema?\nWell, statements like \"MUST validate the Verifiable Credential against the provided JSON Schema document.\" probably need some more clarity about what part of the VC and at what stage of transformation\/processing.\nI suggest that we proceed with merging this PR nonetheless and have a separate discussion on JSON Schema versions in an issue.\nUpdate: We don't need IANA registration for schema, schemauri and schemauri#integrity since type metadata is not a JWT.\nNAME NAME Can you please check the examples I added.\nI believe we will need to distinguish between presentation and issuance schemas -> see\nI don't really speak JSON schema but it looks ok\nThanks a lot. I updated the PR. Will merge later on the editor's call.\nMerging this since all comments were addressed.\nDefine how schemas can be included in type metadata. Requirement is that it should allow JSON schema.\nleft a couple of editorial remarks, but generally this looks good to me, thank you!"} +{"_id":"q-en-oauth-sd-jwt-vc-9d3499992b43bc12744299abb8b8cd296b0eb1c25de90080b1a78080079394ff","text":"The first example in Section 6.1 shows a credential, not a type metadata document, but is not labelled as such. The example in question:"} +{"_id":"q-en-oauth-sd-jwt-vc-0f8324d25117568d766ba246aea51cb79563306c8087c2416f96b11dd038c350","text":"Two approvals, merging now.\nThe text currently says: It must by the Issuer's choice to support either x5c, jwt-vc metadata or both, this is also how HAIP defines it. Extending this thought, the Verifier has no means of understanding what the issuer supports, given that may be an HTTPS URI but no support is avaiblable for jwt-vc metadata. This is because is the best indicator but in Section 5 only described as RECOMMENDED. Proposal is that the Issuer may chose this and that the presence of in SD-JWT VC indicates jwt-vc metadata.\nThis is from HAIP: By following the HAIP text, I don't understand how an issuer can support both options in an SD-JWT VC? Can you give an example? Furthermore, DNS name is not a URI and strictly speaking is this not allowed as per SD-JWT VC where has to be a URI. For that reason, SD-JWT VC says that the value needs to be encoded using the dns URI scheme. I'm not saying SD-JWT VC should not be changed in that regards because I do see some potential for improvement and I think as well that an issuer might want to support both and that needs to be addressed. We should discuss ways how this can be achieved. But I don't think is appropriate for this to control whether a verifier fetches well-known or not. I would assume that a verifier will likely know which issuer ( value) to trust and also know if the issuer is part of some trust framework that supports well-known. If not, the verifier will have to try in the same way as they try other well-knowns. I don't see the need to connect the presence of with the presence of well-known if this is what you suggested. Furthermore, I believe your specific use cases is already supported by: using an HTTPS URI for the value hosting a well-known for that URI including a x5c header with a URI SAN entry that matches the value\nI don't think using to indicate whether well-known is supported is a good solution. I don't think we need this indication for the reasons pointed out above. 
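A sketch (third-party `cryptography` package) of the matching rule favoured above: the leaf certificate from the `x5c` header must contain a SAN URI entry equal to the `iss` value, rather than a DNS name extracted from it.

```python
from cryptography import x509

def san_uri_matches_iss(leaf: x509.Certificate, iss: str) -> bool:
    try:
        san = leaf.extensions.get_extension_for_class(x509.SubjectAlternativeName)
    except x509.ExtensionNotFound:
        return False
    # Compare against URI entries, not DNS names, so iss stays a URI end to end.
    return iss in san.value.get_values_for_type(x509.UniformResourceIdentifier)
```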
(we can discuss whether URI scheme is a good solution in general but this is a separate topic and whether we should not limit to be a URI)\nNAME at least for using kid with openid federation is resonable, because that kid is resolved within a federation trust chain\nI think there is nothing wrong with: using an HTTPS URI for the value hosting a well-known for that URI including a header with a URI SAN entry that matches the iss value adding to the header In that case, using the to indicate whether well-known is supported or not would not work because like NAME mentioned, apparently OpenID federation uses as well and I believe in that case, well-known is not required. Please correct me if I'm wrong. NAME Can you explain how OpenID federation resolves the ?\nTo be clear, the above is valid. To be clear, the above just means that using to control how the verifier has to resolve the key does not work because people using OpenID federation might not host a well-known endpoint but will also need a .\nNAME sure, thank you for asking imagine the federation trust chain like an object, related to a subject (giuseppe, a trustable entity) linked to a trust anchor (a trusted party over all the others), wwhere in the middle we may have zero, one or more intermediaries between the subject (leaf) and trust anchor. the leaf's metadata can be overloaded by the statements within the chain, along the trust anchor. Therefore the jwks the leafs claims to use, and any other protocol specific metadata, can be dynamically overloaded within the trust chain, by applying the metadata policies. once the policies are applied, the resulting metadata is called final metadata and contains all the jwks related to an entity, for each protocol (since a federation entity configuration may have more than a single metadata creafted for a speicifc role, such as VCI, RP, OP and so on). using the final metadata, the kid is needed in the form of lookup parameter within the JWKs provided within the final metadata\nthe resolution of the kid is interesting we may have a discovery, using well known endpoints we may have a static trust chain, provided like a x.509 certificate chain\nNAME To be even more clear, they might not host a SD-JWT VC Issuer Metadata well-known endpoint but they might host an OpenID Federation well-known endpoint.\nThank you for this elaboration! It means that in this case, you won't need SD-JWT VC Issuer Metadata well-known, right?\nusing federation the metadata must be signed and evaluated within the trust chain, therefore, yes, there are no direct dependencies with the SD-JWT VC Issuer Metadata well-known endpoint, even if these two can cohexist within the same entity.\nI'm supportive of changing value requirements for the following reasons: using could be a bit misleading using to match against the SAN DNS value is error prone (some devs might forget to do and instead do ) and perhaps misleading if does not exist (although the domain exists). Perhaps an edge case but it might have security implications.\nto chage the requirement to be not a URI but a string for cases when is a DNS SAN value? the point of in HAIP was that the issuer can use the same value in sd-jwt vc to support various key resolution mechanisms - web-based key resolution (.well-known\/jwtvcissuer with or without openid federation) or x509. 
so I am supportive of keeping a URI in sd-jwt vc, but change the scheme to as opposed to regarding your points 2 and 3, for example, in HAIP, the idea was for the issuer to support both key resolution mechanisms, so the chance of devs forgetting doing is low, and the verifier supporting x509, would have to extract dns name from as part of validations steps, so I see your points, but still think the the benefits of the same with different key resolution mechanisms is strong.\nI support this, see my reasoning here: URL\nNAME and I chatted briefly and are leaning towards keeping the requirement on iss to be a URI while using an https scheme for both key resolution mechanisms.\nNAME said she might have some suggested text or a PR along those lines\nNAME I'm going to work on this tomorrow (hopefully) so if you have anything that might help in that effort, please comment or post it or whatever works."} +{"_id":"q-en-oauth-sd-jwt-vc-ba4b50aabe79de53c89bf54a8cd7a1c0cdd3f0c4042488c6c627fe34fcaa446e","text":"for and hopefully\/maybe too URL URL URL\nNAME this work and URL were largely undertaken as a result of your questions\/feedback\nSelect parts of a conversation from an encrypted messaging service copied here. Editorial liberties have been taken to extract the most relevant parts of the discussion and protect the identities of the participants. >- the first example is just user claims that are being secured >- the second example is user claims + metadata that is necessary to be added when signing the sd-jwt IMHO it would be worthwhile to expand the example in URL with some additions to URL to also show the content of the thing that needs a name.\nSeems related to\nI think the understanding of the spec could be further facilitated by adding an example of a SD-JWT VC after the verifier has processed the SD-JWT VC and (basically) has constructed a JWT representation for further processing in the application.\nThe new PID example in draft -02\/-03 shows the JSON data for \"further processing\" after validation: URL Does that seem sufficient? Or perhaps a similar addition to the example used in the main body of the draft ()? Or something different\/more?\nI personally don't understand why this applies to the verifier: \"has constructed a JWT representation for further processing in the application\". However, NAME has this issue been addressed by the examples NAME provided? Then we can close it."} +{"_id":"q-en-oauth-sd-jwt-vc-367033cb5ac692b7041c7523033978ef9a01bee5ed7ec9df50c698e9960b8beb","text":"Update the anticipated media type registration request from to per the Dublin Accord (last slide of URL)"} +{"_id":"q-en-oauth-sd-jwt-vc-0864c4f95198a4599c0abb96f783fef8627698f5d9ccd2286bdb32ac367d12fd","text":"Remove the requirement to insert a .well-known part for vct URLs , ,\nDr. Fett, PhD: >Gentlemen, Some feedback I received (on top of the comments we already got in the issuers) makes me second-guess the choice to use a .well-known URL for the type metadata documents. In fact, RFC 5785 tells us that .well-known is mainly to be used for site-wide information or information about a specific host. that it will be used to make site-wide policy information and other metadata available directly (if sufficiently concise), or provide references to other URIs that provide such metadata. Brian Campbell, B.A.: >That feedback seems not wrong to me. We've got probably a lot of somewhat questionable uses of .well-known but type metadata is maybe less well suited than most. And less needed. 
>It could just point to the json document. Or we make up some convention. Or something with uri templates.\nRelevant related issues: , ,\nI believe that lifting the restriction is not preventing the use of .well-known; IMO there are two approaches provide the full link signal that the metadata is under the well-known However, metadata owner must be aware of the responsibility to take care of the versioning. As mentioned in , should there be a requirement to either use hash links or protect the remote content via vct#digest? Dr. Horvat, PhD\nMaybe I'm missing something, but the change in PR should mostly solve this, I think. It now requires to provide the full URL to the metadata. (If someone wants to provide it under .well-known, that is their choice, but doesn't influence the requester at all). For versioning I filed a . Not sure about making #digest mandatory, but we should also discuss this separately.\ndigest mandatory, but we should also discuss this separately. -> if it's an external resource, it should be defined how it is protected\nURL Is defining that if vct is an https:\/\/ it should check the metadata under the well known (at least the 2nd part of the text reads like this: Many registries are, and will be accessible via URLs, hence the metadata type is expressed via an URL; Adding or maintaining a .well-known might not fit in the existing API designs. Also note that .well-known has well-known issues with multi-tenancy. Most use cases will delegate the hosting of the information to registries. Also Questions: if schema is https, should the full URL be provided? (no ambiguity with .well-known, you can host schema on github, ...) metadata retrieval category re-consideration: 1) Fetch vct from a remote source: a) URL: HTTPS schema -> full URL that points to a schema b) URN: domain-defined URN that MUST be understood by the wallet; The URN method defines how to map the URN to URL and retrieve the data 2) Fetch vct the metadata locally a) local cache b) Signature (signed or unsigned header); Whether or not metadata is shared in the (un)protected header is defined by the signature format, hence out of scope of this document. 2b: point to consider for the OID4VP: should there be a flag: \"archival mode\" or similar, that would flag that the wallet needs to provide all the referenced content in an unprotected JWS header?\nremoves the requirement to insert .well-known (discussion in )\nType Metadata introduced new well-known URI \/.well-known\/vct\/ in \"6.3.1. From a URL in the vct\" , but this is not reflected in appendix-A.3 Well-Known URI Registry. Additionally, using in sentence \"Metadata can be retrieved from the URL URL, i.e., by inserting \/.well-known\/vct after the authority part of the URL.\" seems to me a little bit confusing as type is considered to refer to whole URI. Maybe use or (to include also query and fragment URI components)?\nThank you and yes, the registration request is missing and needs to be added. And that sentence could also be improved.\nAlthough the use of .well-known for type metadata is being reconsidered\nCurrent text: Since well-known only applies to HTTPS URLs, we should limit the section to HTTPS URLs although other URLs would be still possible, e.g., . Additionally, also the in is probably not correct since the type refers to the value itself which would not make sense to the full value here which would also include the scheme for instance. 
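To make the retrieval options above concrete, a small sketch follows. It assumes the vct value is an HTTPS URL pointing directly at the metadata document (no .well-known insertion) and that an optional digest, in the spirit of the vct#digest idea mentioned above, may accompany it; the plain-base64 SHA-256 encoding is an illustrative assumption, not the draft's exact format.

```python
import base64
import hashlib
import urllib.request

def fetch_type_metadata(vct_url: str, expected_sha256_b64: str | None = None) -> bytes:
    # The vct value is used as-is: no ".well-known" segment is inserted, the URL is
    # expected to point directly at the metadata document.
    with urllib.request.urlopen(vct_url) as resp:
        body = resp.read()
    if expected_sha256_b64 is not None:
        # Optional integrity check against a digest carried alongside the vct claim
        # (the vct#digest / integrity idea discussed above).
        actual = base64.b64encode(hashlib.sha256(body).digest()).decode()
        if actual != expected_sha256_b64:
            raise ValueError("type metadata does not match the expected digest")
    return body
```

A digest pinned next to the vct claim also addresses the versioning concern: the requester can detect when the remote document changes underneath it.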
I suggest we update the title to \"From an HTTPS URL in the claim\" and update the language to something like this:"} +{"_id":"q-en-oauth-sd-jwt-vc-0012f960b6e72ecf05ee13ab7e8014eee676efef7942cd777e6909cd1393365a","text":"and a few others that had gotten out of sync\nSection 4.1. Key Binding JWT, says the following: But Section 5.3 does not exist in the referred spec. Instead, it should be Section 4.3\nThanks NAME A couple sections in SD-JWT were consolidated around draft 12 or 13 and other sections were renumbered as a result. We'll fix this reference and check for others that might need updating too.\nsee including URL"} +{"_id":"q-en-oauth-sd-jwt-vc-2a7b464f5b023ff56aa0a916f2460f4b5fbc43aed5d167a22cf3eff647844964","text":"Pros: selective disclosure of types possible to increase privacy and to reduce contextual info more granular types possible issuers don't have to issue multiple credentials Cons: polymorphism always is a challenge, we that with no support in SD-JWT yet to selectively disclose array elements\nAfter further discussions it seems that multiple types might cause difficulties with JSON schemas. If multiple types are required in the future (e.g. more implementer's feedback), we can update the specification. So, let's make a single string value."} +{"_id":"q-en-oauth-sd-jwt-vc-1e2a20d544b8ec731fcf6e4911d9c7ce51cda4526a02e82916f95a2c37ca50c6","text":"I left some references open that will be addressed by other pull requests changing the surrounding text in order to avoid merge conflicts.\nFix references to OAuth SD-JWT\nFix references to RFCs, standards etc.\nlgtm"} +{"_id":"q-en-oauth-sd-jwt-vc-b6f4249e5872270b6de312142d3415d40273dfe5a0eedd2031f836a7f3e8753d","text":"I believe I addressed all the things except the ones where I created issues for NAME . Please review again NAME .\nNAME approved, merging this.\nThe SD-JWT spec does not define how to construct the HB-JWT. It would be nice if the HB-JWT could bind the signature to the transaction, i.e., the rest of the SD-JWT combined format for presentation. I'd suggest, we define the following payload for the HB-JWT:\nBecause the HB-JWT does not sign any other disclosures, any non-repudiation attacks are not possible.\nNote that also SD-JWT will be more specific on the HB JWT: URL\nSection is empty atm\nThe distinction of VC-SD-JWT and VP-SD-JWT etc. is still not super clean also in terms of what the verifier receives.\nFix references to OAuth SD-JWT\nFix references to RFCs, standards etc."} +{"_id":"q-en-oauth-sd-jwt-vc-d2fe1e25c7628efcb54b22eca4cd8b2e677355e4642572772ade4ca7dd2a2f6a","text":"Fixed VCDM transformation algo,\nomitting claim in Presentation and treating it as a default value\nWe will do the following: Instead of pseudo code, we will use algorithmic description Add algorithms for VPs and VCs."} +{"_id":"q-en-oauth-sd-jwt-vc-724d4ee04b0e9254148ef757d7726fb5c034a1ccbb41b4d92b2df9d65af01c14","text":"In the rendered HTML, the entire algo appears in one paragraph. The mmark is probably broken and has to be fixed.\nlgtm, thank you!"} +{"_id":"q-en-oauth-sd-jwt-vc-378712c658aade9e32564bb0a94730580516c59a5f00ca33418942a397ad89bf","text":"As agreed on the editor's call (5\/23), the following ... should become ...\nI'm still confused why we need this text at all. 
What people will basically get is \"this spec deviates from the VCDM 2.0\"."} +{"_id":"q-en-oauth-sd-jwt-vc-a8c1b4c35b6a527497cca776f775db8c138d28051083af956b3b62b134012cc2","text":"fix:\nin the three-party model: An Issuer creates a Verifiable Credential for some End-User (Holder), who then presents this credential to multiple Verifiers. A Verifiable Credential might contain a large number of claims, but the Holder We should short this as suggested by NAME Originally posted by NAME in URL\nRelates to"} +{"_id":"q-en-oauth-sd-jwt-vc-b79da2737a90d8112cebd121ab3c3d4b2070f9f7fcf883787cb2d52431cbb1aa","text":"The rules for obtaining the issuer metadata needs to be extended to cope with paths as well. I suggest to use the text from URL\nThe following example is already supported ... NAME is this what you had in mind, or is there anything missing? In that case, we could add an example that makes it more visible that this use case is supported.\nURL needs to be URL have a look at URL\nOk, makes sense. Will create a PR to fix that."} +{"_id":"q-en-oauth-sd-jwt-vc-dd667d87e6ce8b0f722adad0e5bb8806fdf0bf84c65b3d0c10960c2de9838e1d","text":"NAME can we merge this now?\nTry to find a new term.\nIntroduce and use the term \"unsecured payload\" instead of payload or credential to refer to the input JSON document. We cannot use payload because it would lead to confusion since the payload of the SD-JWT might not even contain all the claims and only the hashes."} +{"_id":"q-en-oauth-sd-jwt-vc-bf5baca2d638ca83d7b15a83a76e52a5e9c09f7a7594ba7039377f616c578d96","text":"I'm not sure whether we really need a comprehensive description of the issuer-holder-verifier model. I think what is needed is a description why JWT is not suitable and why SD-JWT is to motivate VC-SD-JWT. Note: we also have a duplication of term definitions between intro and terminology section\nlgtm now"} +{"_id":"q-en-oauth-sd-jwt-vc-574194e85d246dcfd7007f882903ef14c2915023ed078da6064d0778233a3b7f","text":"This just fixes typos and references.\nThis just fixes typos and references. No review required at this stage."} +{"_id":"q-en-oauth-sd-jwt-vc-8199458bea3bb872a17b04a3c11d38d69eb817c4ef44698c2881f042ab3349ea","text":"Only minor editorial changes such as typos.\nOnly minor editorial changes such as typos -> no peer-review required at this stage."} +{"_id":"q-en-oauth-sd-jwt-vc-4f1dd217a63503e5564014de3a0152b373deb2f4ba8c9df8b88c15994bfba585","text":"Removes VP terms; changes language around VPs and presentation of VCs; -\nNAME gave verbal approval.\nDiscuss better term than VP-SD-JWTs. VP-SD-JWTs could be an acronym for the above.\nI think the term VP-SD-JWT and especially the media type vp+sd-jwt are not appropriate there is no distinct element in SD-JWT that is \"THE\" VP. I suggest to change the wording to \"presentation of a vc-sd-jwt\" and drop the media type.\nThe OpenID4VP spec refers to Verifiable Presentations in the vp_token. Wouldn't it make sense to stick to that terminology? This will allow us to use VP-SD-JWTs more seamlessly with other specs. Would you agree?\nThat's a good question. I just checked and we don't use the term Verifiable Presentation in conjunction with mdoc. So I think we don't need to use it here either.\nPerhaps we should create an issue in OpenID4VP to make it more clear what can be included in the vp_token?\nSo, this means we will not focus on verifiable presentations at all. We would only speak about presentations of VCs. 
Is this correct?"} +{"_id":"q-en-oauth-selective-disclosure-jwt-9758f36da717b96570b7088e029466bc2a0ef6a3520e241ba9a53e3b2d5342b3","text":"first pass, will cover other sections later\nI approve of the content in this PR. (I would use the GitHub \"Approve\" function, but it appears that I may need write or contributor access to the repository to be able to do so. Could that be added for me?)\nThese changes are all worthwhile improvements."} +{"_id":"q-en-oauth-selective-disclosure-jwt-c95682bf8276a7465057142aa0db5887ed0f0b0a2703aca51034d795feb36a3a","text":"Would like to go with salted hashes based approach, since there has been a lot of feedback that \"simplicity (of salted hases approach) is a feature\". However, acknowledging , adding a text to the security considerations explaining mechanisms that should be adopted to ensure that verifiers validate the claim values received in SD-JWT-R by calculating the hashes of those values and comparing them with the hashes in the SD-JWT."} +{"_id":"q-en-oauth-selective-disclosure-jwt-6868a3f93d4dcaf30c635137df1f3888e48d63193c3770d4c815363a407efe83","text":"In this PR, I generalized the examples: Right now, there is a lot of repetition in the code for each of the three examples. With more examples (W3C VC) and more example code (claims merging) to come, things will get more complicated. I therefore moved the examples into a separate directory as YAML files. I used YAML, because this means that we can store all data required for each example as separate JSON documents in one YAML file (JSON is a subset of YAML). The code can now be called with a YAML file as the input and will run the demo for the example defined in the file. This will make it easier for us to add more examples in the future. Also in this pull request: Cleaned up the code to produce just one example each time - there is no need to calculate all examples when only one is used in each run Cleaned up the code to produce both the CLI output and the spec examples from the same set of data to avoid repetitions This means that placeholders in URL are now strictly named as follows: , where refers to the identifier of the data produced in the example run (see file). Fix for example output: serialized SVC was not shown, but serialized SD-JWT twice Renamed to (not sure what was meant to say) Fix for norandomness: RNG was seeded to static value even when no_randomness was not used\nYes, it was in my backlog, thanx. This first proposal was to get all the examples in a single shot, we can add a CLI parameter, I'll have a look for that This means that placeholders in URL are now strictly named as follows: example-{exampleid}-{artifactid}, where artifactid refers to the identifier of the data produced in the example run (see sdjwt file). 
Thank you, I wanted to make a proposal for this, you did it My bad it was \"selected example\" but artifacts is better anyway OK!"} +{"_id":"q-en-oauth-selective-disclosure-jwt-9375ab29eb545bfc8206b239b98de0a3b10c81f670c595c25debb5e843c107c6","text":"addresses Issue\nNAME considering the last OOP refactor on the python code and the revision of Brian probably it would be easier to close this PR and starting a new one on top of the current master branch, the changes in the code and in the text are very trivial\nNAME ok, please do a PR directly to this branch :)"} +{"_id":"q-en-oauth-selective-disclosure-jwt-e9ea83a6abde5d0759f759dd25e81c15a96d754bf7883f177b9fb7eaa4c7c7b5","text":"Spec text updated, please review!\nGeorge during OAuth side mtg pointed out that it is important for the verifier not to change anything in the JSON object in the Release (ie SVC )that is being hashed so that the hash output is the same from the one in SD-JWT"} +{"_id":"q-en-oauth-selective-disclosure-jwt-fadbb3acc1d168ea71b23b128b0ef940e608c2e640136367dacec86ca8256135","text":"PR for URL\nDone, and also cleaned up the intro slightly (repeating 'document' is superfluous there)."} +{"_id":"q-en-oauth-selective-disclosure-jwt-d617e02f4500b2df6300c883653b01869ea0a54a6c6f3cea8ca4b041c1b66d7f","text":"added HMAC algorithm to supported identifiers replaced with to enable random value being a cryptographic key\nNAME ready from my side\nLooks good, I just had a few nitpicks here and there.\nresolve conflicts, change to sddigestderivation_alg check if renders correctly...\nVerbal approval from NAME\nNAME sorry for the late revision it seems that here we have merge conflicts exposed URL\nthanks! yes, fixed them in a PR that was merged"} +{"_id":"q-en-oauth-selective-disclosure-jwt-c648dc605392198ef9c042991524fb97fb5c9c9dcd6500e68f45446a52f18dc7","text":"reflecting conversation in Issuer why we are not hiding claim names\nI propose to move this to the privacy considerations.\nmakes sense.\nSome use cases may require hiding both claim names and claims. Some schemas are country\/jurisdiction-specific and revealing all claims might reveal some additional information. Open questions\/remarks since the issuer is known, information about the schema can be guessed hiding the claim names does not hide the structure do use cases really require hiding the claim name?\nThere have been some discussions about it on the signal group. Just recording them for the transparency purposes hide claim values only hide claim names and claim values Rationale: Some schemas are country\/jurisdiction specific and revealing all claims might reveal some additional information. (Something the use-cases still need to clarify is the fact that the issuer is known. Still waiting for some additional feedback.) That's what I thought. Another question would be: For structured claim objects (cf. eKYC & IDA Spec), do we need to hide the structure? The structure itself might be revealing some information. I can imagine the cases where an issuer would be issuing only one type of credentials with fixed structure and claimset. That means, if you know the issuer, you can pretty much find out the structure and claim names in it. So, hiding the structure and claim names would not add much. I would assume that is the default. I agree In ekyc, the structure alone can reveal the type of evidence used, which might leak info about e.g. nationality. But these cases might be rare and I'd also like to keep it simple for now. Right. 
At the same time, you typically want to know the nationality in the case of EKYC in financial institutions :-) Speaking of that, I was assuming that in a lot of cases, the receiver may need to know the type of credential especially when the issuer was issuing multiple types of credentials at different assurance levels. Is this assumption bogus? That's a valid assumption. Verifier needs to know the level of assurance of the VC and in most cases (I guess) it will need to know under what policies the holder keys are created\/stored\/... (or if it can trust the wallet where the Verifiable Presentation was created and is shared from)\nNAME are you arguing to have both options - to hide and not to hide claim names?\nWe are discussing whether there's really a case where we need to hide the names. In most cases, you'll need to reveal info about the issuer and the schema.\nYup. And once you reveal info about the issuer and the schema, there is no point in hiding the claim names in most cases. So, I am arguing that we do not need to hide them.\nI agree. We can close this issue. Thank you!\nI think we should add the rationale why we are not hiding claim names. leaving a note for self to do a PR on this."} +{"_id":"q-en-oauth-selective-disclosure-jwt-ef0247a3d6e80c22d6e68adf94bbf01a41049b1ec4cce166b0bca9c18e9cb2be","text":"Addresses Issue\nI also revisited the security recommendations for the salt and made some smaller changes there.\nuser has 2 SD-JWTs from the same issuer, same blinded claim names can be used to correlate the user. advice to have a fresh blinded claim name for each credential.\nInstead of opening a new issue from our conversation this morning - here is the excerpt & suggestion from the document we discussed: First off, maybe it's informational or maybe it's specified - issuers should not include \"secrets\" in claim labels. If a label for something is actually a secret, it should be treated as such. That said, I get that this scenario must be addressed. Sensitive predicates or graduated disclosure contexts can benefit from this capability. It seems to me that the most useful and easy algorithm to implement is to use the same hash algorithm + salt for the node value to blind the label where: SALTED LABEL = HASHALGO(LABEL + SALT) Of course, this assumes that specific guidance is included for best practice salt generation (fresh entropy gathering for each attribute block salt, sufficient bits of entropy, etc.) for strong cryptographic outcomes. In this representation, the salted hash of the label's value (\"name\") \"9-J2445ap91...\" can be addressed directly by the digest \"h\" value in compact form: Side-note: Informative statements that recommend large sd-jwt claim sets should be 'broken down' into much smaller sets, or individual claims, then wrapped and sealed as a JWP may be better than including too many claims in an sd-jwt.\nRight now, it is RECOMMENDED that a random string is chosen for the blinded claim name. HMAC(salt, claim-name) and HASHALGO(LABEL + SALT) would be fine as well, but there's more room for errors, especially as holders\/verifiers might expect a specific format and then reject an SD-JWT when, e.g., the issuer chose a wrong encoding or similar. I don't see a real advantage of using HMAC\/HASHALGO for the blinded claim name. I therefore propose to stick with the current recommendation. 
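A rough sketch of the two placeholder-name options weighed above; neither function is defined by the draft, and the concatenation rule in the second one is only an illustration of the extra care option 2 would require.

```python
import base64
import hashlib
import os

def random_placeholder() -> str:
    # Option 1 (the current recommendation): a fresh random value per claim per
    # credential, which by construction leaks nothing about the original claim name.
    return base64.urlsafe_b64encode(os.urandom(16)).rstrip(b"=").decode()

def derived_placeholder(claim_name: str, salt: bytes) -> str:
    # Option 2 (SALTED LABEL = HASHALGO(LABEL + SALT)): needs an unambiguous
    # concatenation rule (a length prefix here) to avoid prefix collisions, and tempts
    # Verifiers into a needless recomputation check.
    data = len(claim_name).to_bytes(4, "big") + claim_name.encode("utf-8") + salt
    return base64.urlsafe_b64encode(hashlib.sha256(data).digest()).rstrip(b"=").decode()
```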
We should stress that a fresh random value must be chosen for each claim in each issued credential.\nNAME it's not super clear to me how: would alleviate your concerns here: \"but there's more room for errors, especially as holders\/verifiers might expect a specific format and then reject an SD-JWT when, e.g., the issuer chose a wrong encoding or similar\" Maybe I'm missing something obvious but I don't see much difference in how holders & verifiers would treat these various approaches. They still have to reconcile with the encoding and\/or hashing algorithms expressed in the security envelope.\nMy point was about the selection of the blinded clame name (or 'placeholder claim name' as it is called in the current draft). We have two options on the table: Select a random string as the placeholder Select the placeholder as some function of the original claim name (the label in your proposal) and the salt What I wanted to express is that I think option 1 is better. From a security perspective, both are equivalent. There is no need to choose the placeholder claim name in a particular way, we just need to ensure that it does not leak information about the original claim name. Just numbering all placeholder claim names would be fine as well (as long as the numbering does not reveal anything, e.g., the position of the original claim in the list of claims.) But for option 2, I see two drawbacks: We need to explain more, in particular how to assemble the input for the function, similar to the discussion we had about the hidden claim values. For example, we need to define how the original claim name and the salt are concatenated such that there can be no prefix collisions. A verifier receiving an SD-JWT with blinded claim names according to option 2 may be tempted to (needlessly) check that the claim name was created in a particular way by recalculating the hash. If the issuer has made a mistake when calculating the hash or the verifier makes a mistake, this check will fail. Since such a check is not needed, it would be better if we don't set this trap in the first place. I see no advantages of option 2 over option 1, so I propose to go for option 1.\nI see, thanks for explaining that. I'm fine with option 1 if the goal is to reduce cognitive complexity for the application developers - it makes sense.\nPR and PR\nI agree with these changes, but it really made me think that we need to clarify various options we have introduced how claims in SD-JWT can be represented as it is becoming a bit hard to follow.. issue filed"} +{"_id":"q-en-oauth-selective-disclosure-jwt-33deb7736cd5485344c606256ef9b7508ee27b4933ec08adda57e5dea5159810","text":"addresses issue\nThe way it is written now does not look like terms and definitions. It would also be a good practice to make it possible to the definition to replace the terms that appear in the main text. With this respect, verbose explanations especially the mechanisms should be done in the \"Concepts\" section in the main text. In fact, much of the text in the current \"terminology\" mostly duplicates what is being explained in 3. Concept Also, examples etc. should be added as a note to the definition and not as part of the definition text. Proposes the following as amended text Terms and definitions 2.1 SD-JWT signed JWT ], that supports selective disclosure as defined in this document Note to entry: Signed JWT is JWS. 
2.2 release (SD-JWT-R) document that contains a subset of the claim values of an SD-JWT in a verifiable way Note to entry: Holder binding is also possible. 2.3 issuer entity that creates SD-JWTs (2.1) 2.4 holder entity that has control over SD-JWTs (2.1) Note to entry: If holder binding is desired, the holder also has the signing key for the verification key contained in the SD-JWT. 2.5 verifier entity that checks and extracts the claims from SD-JWT-R (2.2)\nclosing with PR\nCurrently, it has Conventions and Terminology Terminology Having two \"Terminology\" as consecutive headings is not good. I propose to change 2 as follows: Terms and Definitions\nclosing with PR"} +{"_id":"q-en-oauth-selective-disclosure-jwt-0d0ace00db7fd7427aaac9596799b0c937a0d1b64e213e32c1a922bdb56e248e","text":"I think we need a section summarizing what kind of claims SD-JWT can consist of... claims in SD-JWT can all be selectively disclosable claims, or can also include always-disclosed claims claim names in SD-JWT can all be blinded, or can include non-blinded claim names claim structure can be simple or complex and claims in the complex structure can be blinded\/not-blinded and disclosable\/not-disclosable same claim names can appear within multiple complex claim structures and can be blinded\/not-blinded and disclosable\/not-disclosable..."} +{"_id":"q-en-oauth-selective-disclosure-jwt-0acf202e3d3db644c42320620b8aeeb7aab1ced5f5a09c27aefa73f14e92a2da","text":"Added an option to do pairwise SD-JWTs to prevent Verifier\/Verifier linkability Original feedback. Please re-write section 8.2 in the light of the above suggestions."} +{"_id":"q-en-oauth-selective-disclosure-jwt-7c2dc226b4c82cd3dd87233fa2bb46da4c7d23e9753b69270f63e98ff3b86568","text":"Fixing the \"release\" in the sense of a defined term as \"SD-JWT Release\"\nWe have fixed the term and definitions but a few still appear in the main text. We need to fix them as well, hopefully before the first individual draft is uploaded. So, s\/signatures called releases\/signatures called SD-JWT Releases\/ s\/SD-JWTs and releases\/SD-JWTs and SD-JWT Releases\/ s\/Verifying a Release\/Verifying a SD-JWT Release\/\nFixed with PR"} +{"_id":"q-en-oauth-selective-disclosure-jwt-3a28a71cab6e46513b0df136ab9b8a4901e6240dc93a466dd7a6010817cf50fa","text":"Some minor edits: consistency of hyphenation in base64url-encode(d) use transaction instead of use case in one place because selective might be needed or not within one use case, depending on the interaction with RP explicitly called out period character to avoid misinterpreting as missing word some typos\nchanged six . separated to four per Christian's separate comment\nWhat did I miss? The presentation should be six dot separated elements. Three for the SD-JWT, three for the SD-JWT-R.\nWhy would you need to merge both? I don't understand the proposed flow between issuer, holder, and verifier; perhaps a data flow section would help. I would think the holder gets a 3+1 part JWS with SVC (which can be -concatenated). 
Then all you need to present to a verifier is the same data, along with some user proof key signature if the JWT is bound to a user key (for which a property is added in the SD-JWT-R, if I understood correctly; more details would be helpful here).\ncreating a separate issue on six vs four period separated elements to merge this editorial PR\nLGTM"} +{"_id":"q-en-oauth-selective-disclosure-jwt-4e71297294f0015bae9d49b25c66070e6bc8733f0ecc696fd9194f541bbb7874","text":"Add Justin to Acknowledgements (he suggested the array for _sd in sid…e meeting) and fix the commas + and in that list of names"} +{"_id":"q-en-oauth-selective-disclosure-jwt-c1c6043fe3eaf9820af3523c495af7af69bdc1d2cf56123fe1ccf1a7381952f7","text":"As discussed on our last call, I added a section saying that there is a potential privacy risk in storing signed End-User data inclusing a discussion and recommendations on potential mitigations. This does include a potentially controversial bit on burning private keys. My final stance on that is subject to the outcome of the that I have launched on Twitter.\nBesides what we discussed in our last call, I also made some other tweaks to the wording in the , please review.\nshould probably add something to the document history too just for posterity"} +{"_id":"q-en-oauth-selective-disclosure-jwt-7fbe745f49fb5604b999ab9c11c91fef5f12858932264fd0374e8eee8a90e81e","text":"for add a bit more context\/background to the intro about the indented general applicability of SD-JWT and the relation to JWT and the OAuth WG"} +{"_id":"q-en-oauth-selective-disclosure-jwt-dea476e13a1549e725a88af8c6f4f58cdc9c41195011a0f2c5d0c853de7f1a35","text":"More explicitly state that SD-JWTs have to be signed asymmetrically (no MAC and no ) One item out of John Mattsson's review: URL Thanks for the review John. I've tried to reply to the comments inline below. MUST be signed using the Issuer's private key\" in sec 5.2 .) but I think you're right that it should be explicit. I assume it would be a claim. Is there something that suggests otherwise to you? That \"HOLDER-PUBLIC-KEY\" only shows up in section 4.3 , which is supposed to be a somewhat abstract discussion about it. And I think it's separate from SD-CLAIMS and NON-SD-CLAIMS there to call attention to it. But not to suggest that it's not a claim. More generally (as sec 5.2.3. Holder Public Key Claim tries to describe) the specific way that holder public key binding is established by the issuer and represented in the JWT is out of scope. A \"cnf\" claim is used in examples for what it's worth. But the specific details of a holder public key binding claim are left to applications using or profiling SD-JWT. the hash input so as to make it infeasible to \"reverse\" the hash value by enumerating potential claim name\/values into the hash function. Correspondingly a salt value is included in the disclosure structure directly alongside a claim name and value. I think using the word secret to talk about salt values would be problematic and potentially confusing. The salt values of undisclosed claims need to be hidden from parties to whom the claim isn't being disclosed. But the salts are sent from the issuer to the holder. And salts of disclosed claims are sent from the holder to the verifier. That's different from a typical secret (as I think of a secret anyway). We can look to add text or clarify text about not revealing salt values to unintended parties. 
that the \"Issuer MUST ensure that a new salt value is chosen for each claim\" and a \"new salt MUST be chosen for each claim\" in section 8.4 . I think that's what you mean by salts being independent of each other? Sec 8.5 has a 128-bit minimum length as RECOMMENDED. As I recall, there was some desire to allow for shorter salts when used with a more computationally expensive digest function and the RECOMMENDED would allow for that situation by basically saying to use at least 128 unless you have a good reason to do otherwise and understand the implications. I think \"key\" or \"secret\" are definitely not the right name and would be more prone to mislead or confuse. It's a random value that's included in the hash input (that otherwise has a more predictable value space) to make it infeasible to guess\/confirm the claim name and value from the hash value. And ensure that the hash output will vary when hashing the same claim name\/value more than once. disclosure structure (containing the claim name & value as well as the salt) is hashed and the digest value is placed in the \"sd\" claim of the signed JWT. Each (signed via inclusion in the JWT) digest value is an integrity check on, and reference to, the corresponding disclosure. Length extension attacks aren't applicable to this kind of use of the hash value. Any change to the disclosure content breaks the integrity check and any change to the hash value breaks the signature. Cheers, CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you._"} +{"_id":"q-en-oauth-selective-disclosure-jwt-bb150de0a92b16e04ff42997985104b45eaa73d8756fa2cb210bf4bce7d4078e","text":"A bit more in security considerations for Choice of a Hash Algorithm (1st & 2nd preimage resistant and not majorly truncated) for issue\nWhile looking at this I noticed URL only allows hash algs from the registry. URL removed the JOSE HMAC stuff but also removed allowing the hash alg to be \"a value defined in another specification and\/or profile of this specification\" which I thought we wanted to have - for Christian's type use cases and extensibility in general. This isn't the right place for a discussion but wanted to note it somewhere before our call.\nI think we intentionally removed “a value defined in another specification and\/or profile of this specification” Because using that would require a separate profile and not just defining an identifier.\nfrom this PR URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-d9e8b631bba8da09db1ee73c189394b7520defa684ea95f9c6b872fabec1577e","text":"Based on the feedback from NAME\nThis review identifies three minor syntax issues. Once they are addressed, I'll approve."} +{"_id":"q-en-oauth-selective-disclosure-jwt-5be498fa51dce9c9af7bed52fdc6495a936870583b2759fc3bcc7cf83808970f","text":"This PR outlines security targets of SD-JWT. when merged it will replace PR and close issue ."} +{"_id":"q-en-oauth-selective-disclosure-jwt-68cc9af1fb6808cfb83a082962467c31e5ad6b1dcf19619f215ae19d77c0f32f","text":"When merged, this PR replaces PR and closes Issue . I am not sure what value reference to ISO 27551 brings (+1 to remove it). 
Unlinkability pairs defined in ISO27551 do not really apply to Issuer-Holder-Verifier model, since they are modelled after User Agent-RP model. and anonymity of the User is out of scope for SD-JWT IMO, but adding one subsection on \"User identifier should be chosen to guarantee user anonymity and not only pseudonymity\". but feels redundant since SD-JWT is not only for natural persons.. Some of the notions in PR 93 were outdated since claim names are now blinded by default. \"Issuer Issuing One Type of SD-JWT\" section might be valuable, also confirming with Smart health card community.\nthe \"build\" for this PR isn't working URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-c5f6311f6124727c64361f0f666f63cacfac89e6d8216836121c5035707bd93f","text":"Addresses issue . enveloping in a single signed JWT more than one Combined Format for Presentation without a Holder Binding JWT."} +{"_id":"q-en-oauth-selective-disclosure-jwt-a9ae8f16ae1f7cb1ead31ac671a3f8de240d8a7fc9bfb30acff8c409676a0656","text":"Fixes Issue\nThanks NAME I think the two folks should be acknowledged and the small change around the claim incorporated. Then I think this should be merged and we should publish a -04 as basically a bug fix. The issue NAME URL found kinda undermines the whole point of SD-JWT and we should have a published version out that fixes that.\nWhen decoys are used (_sd members not corresponding to a disclosure are added) the says to I imagine this is not intended and the holder should ignore missing disclosures instead? Same in\nActually the following in doesn't make sense to me at all since the whole point is to selectively not release disclosures.\nGood point\/catch Filip. I think the intent was to say that if a disclosure is found that isn't referenced by a _sd hash value, that should be a rejection. But the language got turned around somewhat. I think anyway. It needs to be fixed or clarified regardless though.\nProcessing by the Holder: This seems fine to me. In the context, \"the digests\" refers to \"the digests of the Disclosures\". If there is a Disclosure that hashes to a digest not found in any _sd array, the SD-JWT is invalid. Maybe we can clarify by modifying this to: Verification by the Verifier: This is wrong and should read \"If no such Disclosure can be found, the digests MUST be ignored.\" Thanks for bringing this up, Filip!\nDon't we then also need something at the end of Verification by the Verifier - that makes sure all of the provided disclosures match a digest in the payload and reject if that is not the case or am I missing something here?\nThat would force the Holder to act correctly, but would it provide any benefit beyond that?\nThe changes above are included in PR - please review NAME NAME\nPR merged"} +{"_id":"q-en-oauth-selective-disclosure-jwt-cb0238370c26d8bcd60595fdb39bb0dcfe37655a8f804e1a70acc1a9173a0ff5","text":"This fixes Issue . The main change is to better describe in which role the adversary acts.\ndoes this one need a doc history entry... ?"} +{"_id":"q-en-oauth-selective-disclosure-jwt-8c6289f570884ef1fa1324b43e325e57e494717199194a95b3561fed2c713271","text":"To fix Issue , I found that removing Example 2b is the best way to go. I tried to fixate the nonces used in the examples despite decoys being generated in one of the examples, but that got quite messy and introduces a lot of unsafe code paths into the python code. 
An alternative would be to manage the example manually, but that is a source for errors, so let's not do that.\nThe decoy concept is intuitive enough that a separate example isn't needed, I think. And per the separate example that has a completely different set of digests might cause confusion. So removing 2b seems like a good course of action. The change is larger than I'd have expected. Are all the other file changes due to regenerating examples? I think so but it caught my attention. And it seems like and this will be difficult to resolve. But maybe you've already considered that.\nWhy remove an entire example, if we can add a text clarifying that \"Example 2b uses the same data as 2a, but because even for the same claim values, different nonces are used, not just the decoys but all digests are different from those in 2a\"?\nThat was my initial approach, but with that change there's really not that much value to the example: It just shows a bunch of digests, where the number of digests is higher than it was previously. The fact that the digests are different can be confusing for readers at worst and the example is not very helpful at best.\nSo instead of showing the decoy digests in the examples you propose to list them seperately and not show a complete example with decoy digests?\nWe agreed on our call to generate a snippet of markdown code to explicitly list the decoy digests to improve the explanation.\nI updated the python script and the spec to show the decoy digests: URL\nOther than the little nit about the the text that says the decoys in the example are about the address, this looks pretty good.\nDo you want to mention something in the doc history?\nGood point, done!\nNAME Can you please review this?\nstates that it was created using data from , but adding decoy values. I'd expect its param to be a superset of example 2a's, but it's not the case. Am I missing something?\nExample 2b includes different nonces than Example 2a. We try to use deterministic random numbers to generate the examples, but in this case, due to the order in which the decoys and regular salts are created, this approach does not produce the same nonces for the same \"real\" disclosures.\nPR was merged"} +{"_id":"q-en-oauth-selective-disclosure-jwt-46bdc1cf89c08e656a28bda9f9dc3853ce12feef69e4811963cda227b88ae87a","text":"Discussions around defining SD-JWT-VC show that it would be very helpful for interoperability if HB JWT is normatively defined in SD-JWT specification itself. Using HB JWT is optional, but if used, it should be compliant to the definitions in SD-JWT. How to represent validity (iat, exp, nbf) will probably need the most discussion... also we should introduce an abbreviation .... cc NAME\nneed to re-generate example 4b, and 4a to update hb jwt header....\ncc NAME please take a look (can't ask review from you...)\nThis change is sufficient to warrant an entry in the doc history like just, \"Defined the structure of the Holder Binding JWT\"\ndone, thank you, Brian\nlgtm"} +{"_id":"q-en-oauth-selective-disclosure-jwt-5038022c2488c07c7d9cecf26653b1c6e8420cdf54a29ff32a438399afbd193b","text":"to\nI think the first paragraph mixes Issuer with Verifier. Old: New:\nThanks for catching that. We'll fix."} +{"_id":"q-en-oauth-selective-disclosure-jwt-d22b4cf991349885157e5a072200ca0edcc6cb5f553669528d8770bd195c944c","text":"Thanks, editorial comments taken.\nWhether this format is JWT or JWT-like, the considerations in the still apply. Please consider adding a claim. 
cc NAME\nExplicit typing via the header can and should be done by specific applications or types of SD-JWT. But this SD-JWT draft is defining the general JWT-like construct and explicit typing of the generic thing isn't particularly useful or meaningful.\nThe The next draft will have a media type and structured suffix that can help with specific applications or types of SD-JWT doing explicit typing FWIW URL\nI would be open to adding one sentence in the draft saying \"applications of SD-JWT SHOULD be explicitly typed\". I agree there cannot be only one type for all SD-JWTs.\nI'm not quite sure where to add that one sentence. And a bit more context might be needed with it. I'll see what I can figure out. Maybe a small section like 3.11 in rfc8725. But (helpful) suggestions\/ideas would be welcome in the meantime.\nAdd this sentence at the end of 5.1: The header parameter is RECOMMENDED, see Sec. 5.1.4 for more details. And as a new Sec. 5.1.4, \"Explicit Typing\": A commonly seen attack is for one kind of JWT to be confused for another. Both regular JWTs as well as SD-JWTs are vulnerable to confusion attacks. To prevent these attacks, it is RECOMMENDED to specify an explicit type by including the header parameter when the token is issued, and for Verifiers to check this value. The relevant token's content type can be used, e.g. , but often a more application-specific value is preferred. See Sec. 3.11 of RFC 8725 for more details.\nThanks NAME I took some inspiration from you words in creating PR .\nmerged PR to close this one\nThanks for preparing this! LGTM, just two small editorial comments."} +{"_id":"q-en-oauth-selective-disclosure-jwt-3b8d11757adbe0456450d3d12677bab50c2b1e76373331988444ad7323a54d01","text":"Honestly, I'm not sure about this one. But I said I'd put together a PR for with an update to the title and some qualifying text. So this is that PR as something to discuss anyway. At least these few sentences took me an embarrassingly long time to write :\/"} +{"_id":"q-en-oauth-selective-disclosure-jwt-d220d8a1bd4153aae36bafce1f981d666aff410d61ae6f2c32ccf805bf239f36","text":"RIPEMD-160 is usable, but not its predecessors. For this reason this PR removes RIPEMD-160 from the list of the weak algs by saying that only -160 predecessors must not be used.\nThank you!"} +{"_id":"q-en-oauth-selective-disclosure-jwt-fa1e3abee3ece76b0ecb6877b57b214be3783ea14f1ea35d3e1d456abfddddfc","text":"Text looks good but it's going to clash with the array spec work URL\nIt's a small change, we can sort that out when merging either.\nNow that is allowed at multiple locations in the JSON document, including recursively, I would expect Sec. 5.1.2 to define explicitly where the claim should appear. Once at the top of the document? Can it also appear alongside claims located deeper in the document? Also, please change the section heading from \"hash function\" to \"hash algorithm\", for consistency.\nThank you! The intention is that there is only one at the top level. We will clarify.\nConcur with NAME\n+1"} +{"_id":"q-en-oauth-selective-disclosure-jwt-f02127a83421a8f74aaa4034f642e417bcf5b92c36202c67052196b6cf7a1111","text":"changes. 
created a common sd-jwt verification section wallet processing section talks only about presentation creation verifier processing section talks only about key binding JWT processing and refers to a common sd-jwt verification section Note that the terminology is new - based on the PR that NAME is doing for issue (probably not touching these sections in your PR might be the best to minimize conflict, Brian, or I can push later to your branch - wanted to do this PR while I had the idea fresh in my head). I used as a placeholder.\nThe direction of this looks good. Thanks for putting together the PR! I made just a few cleanup type comments\/suggestions. Some do touch on the terminology, but otherwise think I can avoid this section in the in-progress PR\nI do think a document history bullet should be added - \"Consolidate processing rules for Holder and Verifier\"\nAs discussed on a recent call - merging this now and closing the associated issues. NAME intends to look at it in the main branch and propose additional changes in a new PR, if there are opportunities for improvement."} +{"_id":"q-en-oauth-selective-disclosure-jwt-f3a63df58bc6532b8fdc36eb9e134980db4b46c4d38f7647079a3ba04bcd9e57","text":"This solves Issue by moving away from an example that wasn't ideal anyway (it encoded a different bytestring for the value Möbius). The Disclosure JSON should be:\n\"it shows as is there any way to change that..? (this is not the change introduced in this PR)\" on the text in the doc at URL \"A different way to encode the umlaut (two dots &; placed over the letter):\" Originally posted by NAME in URL\nNot sure why this broke now. I created a PR because the example wasn't ideal anyway: URL\nI just noticed that the PR does not fully solve the problem. We have the same problem somewhere else as well: !\nit's seems to have been an issue at least back to -01 URL and\/or -02 URL (so basically since inception). I'm sure we could fight with the tool chain to fix it. But at what cost? I'm inclined to just reword that \"The two representations... \" part to make the issue go away. Which I did with this 48efde35a81ca3520eb30ceea6c43d62e8cd96f8 update to URL\nPR merged"} +{"_id":"q-en-oauth-selective-disclosure-jwt-fdfa9535cbc8546d35d44b29991fef49be6882cfa5244b1084513d4dbd2d833d","text":"I've seen the \"new\" title from 332ded8faf20d528c5c05e04cebed28aab1a0d33 a few times in context now and it is really awkward.\nyup, that updated intro and abstract text stays\nas long as the intro text itself stays as we updated it, this makes sense to me."} +{"_id":"q-en-oauth-selective-disclosure-jwt-38cc12320c3e5bde4079aa00735884fab457ff7c0f692e35782a6ca1be196eb6","text":"I noticed that the consolidation of SD-JWT terminology in c6a27373dee13cc7c3f70a8b7fd374e62306f5a0 introduced a mistake in the \"Enveloping\" section title by including the word \"Presentation\" while the section talks about enveloping either issued or presented SD-JWTs. While fixing that, I decided to remove the SHOULD recommending the use of an unregistered claim name . I also tried to align the example enveloping payload a bit with a move minimal set of claims similar to those used in key binding. I realize I probably should have just fixed the section title and not expanded the scope of the changes. But there I was and these few little changes were so inviting... See it here: URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-0dde26ac483543dcc9d25de57825d5127b3e6629165d40bde1bc713033e3e21f","text":"update: agreed to replace SD-JWT VC example with PID. 
agreed to have only one sd-jwt vc example, so this will also address and\nThe and the editor's draft didn't get generated - URL 404s\nfixed - thank you, Brian!\nNAME Binding design and the usage of biometrics data is based on your original PR 210. Shall we replace 4a example with this one? But if you don’t see the need for this example anymore, I am not going to die on this hill.\nThinking about this just a little more NAME does this warrant a doc history bullet? Something like: Replaced the general SD-JWT VC example with one based on Person Identification Data (PID) from the European Digital Identity Wallet Architecture and Reference Framework The history is nice to have and I often use it as a starting place when making \"what's changed from last time\" type slides for f2f meetings.\nI'm just gonna add the doc history to keep things moving here... and done w\/ 08d70d2fbfcb1afce5e2d088cb97934b91ce83a1\nrelevant section in the editor's copy preview: URL should anyone want to take a look before the coming soon merge NAME NAME @ etc\nI think this is ready to merge (for real this time). It's kinda awkward process and numbers wise with Dr. out. And peppelinux is still listed as requesting changes but I think his comments were addressed. But I think it's good to go. Approved with two pendings suggestioni\/change request and a general comment that's not relevant for the revision but Just informative"} +{"_id":"q-en-oauth-selective-disclosure-jwt-51c849d467a2fad9e92ef86d631953f35fc17e29217a7092fdc94c1f62bfd7fd","text":"as additional explanation in the draft for see it here URL\nin this PR might be replaced with depending on how PR goes.\nThanks, LGTM, just one editorial comment."} +{"_id":"q-en-oauth-selective-disclosure-jwt-229dac40e3ef78092c3202180f6173c6ddb81b86fe0b871fd70b39ad84d058ad","text":"replaces , addressing issue I also cleaned up example structure, so that it is the same and uses the same language across multiple examples."} +{"_id":"q-en-oauth-selective-disclosure-jwt-b4402a347e5dd7a44aaead1693a9a5457ff67653e6f11bb36e1d5c13b79eb1b9","text":"first pass at addressing Issue and .\nDoes this touch all the mentions of random in URL and convey what the feedback was looking for? I'm honestly not sure. But I also think it doesn't really need changes in that area so I'm not too worried about it. So why am I writing this note? Good question. Just trying to \"contribute\" I guess.\nApproved with the proposed changes."} +{"_id":"q-en-oauth-selective-disclosure-jwt-f4660781eb9e73b1644966a9ebc026fd607cdf3ca0ba525205eb3eabfc1310d9","text":"URL\nI was wondering if there might be a mistake in the encoding of the address given in example 4: The documentation says: I assume this should be ? It looks like the claims where added to the list on error and there is a missing hash. I would expect to see some additional disclosure elements for postal_code, ... (each with a nonce) similar to instead.\nThanks for catching that! There is definitely a mistake in Example 4a. But it looks like the problem has to do with decoy digests in that example. The decoys listed at URL got pulled from the wrong place and so don't match up with the other example content, which includes the address claim Disclosure you've pointed. That all makes parts of that Example 4a problematic and hard to follow. I think it'd be best addressed by not including decoys in Example 4a. This is related: URL FWIW\nPR aims to fix this. 
A preview from that PR of that example: URL and specifically the Disclosure for address: URL\nThanks for the quick response! That looks much better. I was expecting the address to be passed in a way that allows the selective disclosure of single items (e.g., only the post code). Maybe you can add an additional comment which of the three options in 5.7 every example uses (here 5.7.1). To me, having three options in 5.7 makes it unnecessarily complicated. and 5.7.3 are basically the same when you would allow having non-selectively disclosable items in this as well. If that would be the case, this could probably also handle all cases 5.7.2 would be used for.\nThanks for reporting this and NAME thanks for the quick fix! NAME The options in 5.7 are meant to show what is possible with SD-JWT and are not meant to restrict or define how can be encoded. It is up to the issuer to decide for each claim whether it should be SD or not. Options 1-3 just show some of the possible results. I'm not sure I'm following what you mean by \"having non-selectively disclosable items in this as well\". Could you give an example?\nI created this PR to ensure that our wording around the options is clear in that the options are non-normative and not exhaustive: URL\nI was wondering, if 5.7.1 would allow to have non-selectively disclosable items in the address claim: This single definition could then cover all use cases of 5.7.1,5.7.2, and 5.7.3. But i see what you are up to (thanks for ). You do want to leave a maximum flexibility. However, i am a bit concerned of \"creative\" uses of what is technically possible and how to make sure to be able to parse\/process all possible combinations in the openid4vp context.\nYes, this mixture is possible within the current specification. With the algorithm defined in Section 6.3 there is no need (for the Holder) to be aware of all possible combinations. (In other words, such assumptions should never be hard coded.) It is just the Issuer that has to make a decision on which pattern to use.\nSure, this should never be hard coded. However you need e.g., to come up with a user friendly way of displaying that kind of information in a wallet context. It is a challenge to find an intuitive\/understandable way to inform the user which information will or can be disclosed and so on. 
But this is becoming way to off-topic ;) Thanks for your quick response!\nURL merged\nI don't know why but I think we should have more examples, but not going to go against these changes."} +{"_id":"q-en-oauth-selective-disclosure-jwt-4422b062dad9bd4d19133eb77073c5b579823a8df1b8c37974ad28289ef7872f","text":"In response to Issue : Changed some words to ensure that the examples are not understood as an exhaustive list of possible structures Added note on non-normativity of examples Removed the word Option in the section titles and the numbers to avoid the notion of an exhaustive list of options"} +{"_id":"q-en-oauth-selective-disclosure-jwt-de38be03a124e9204ebfc43b3278c49addbd8749379f241d456b8a9abcbaa4dd","text":"-Update JSON Serialization to remove the kb_jwt member and allow for the disclosures to be conveyed elsewhere -Expand the Enveloping SD-JWTs section to also discuss enveloping JSON serialized SD-JWTs -Swap Enveloping SD-JWTs and JSON Serialization sections Editor's Copy from this PR: URL URL\nI think this also addresses"} +{"_id":"q-en-oauth-selective-disclosure-jwt-275fb20fdfe0838ce7eeb99bfae370894934e31ba735f68229f2202a446084d1","text":"Added JWT claims registration requests to IANA (to fix issue ) URL\nThis was meant as a placeholder\/reminder for JWT Claims Registration requests to IANA that would be needed eventually. The and claims should be registered. I'm not sure about to be honest, which is ironic b\/c I'm supposed to know about such things (see the so-called experts ). But it's somewhat different than the other claims in the registry. But, I dunno, requesting registration is probably the way to go w\/ .\nI share your unsureness, and agree that requesting registration is the way to go. btw in the spec, we have a text registered claim, so if we decide not to register it, that text needs to be updated.\nWait... really, where?\nPR has been merged."} +{"_id":"q-en-oauth-selective-disclosure-jwt-75e1fc6cae4140d13b222b80ccb13606ed76c7f9a3d1e566aacfa4dfb6df695f","text":"based roughly on feedback from Anders to draft-ietf-oauth-selective-disclosure-jwt at URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-711bc771e6af10ed1a19210aa840fd7c53d26d6595a63cc3d523b6d2805b3316","text":"for and based largely on text suggested on list by Neil Madden\nfrom this thread URL Neil Madden has suggested the following for \"Choice of a Hash Algorithm\" section. I think (with maybe some minor tweaks) the text works and it would also address issue .\nSaying something about matching the strength of hash function and signature algorithm would probably be worthwhile. resulting from this thread URL \/ URL etc \"... indicates that the security strength of the signature scheme is bounded by the collision resistance of the hash function - e.g. there’s little point using ES512 with SHA-256, for example. Probably the security considerations should suggest matching hash functions to signature algorithms.\"\nHow to actually write this in an appropriate way for a draft RFC feels kinda tricky though. JWE has some text about using that I was hopping to borrow from but the context is (unsurprisingly) different enough that using text straight from it doesn't quite work. Maybe adding a very general statement in would be sufficient.\nAt IETF 118 Tuesday meeting, Orie proposed locking the hash to the one committed to by the Issuer. Thanks for the comments and questions Neil. With the help of the draft co-authors, I've tried to reply (probably inadequately!) inline below. 
On Tue, Oct 24, 2023 at 3:48 AM Neil Madden wrote: Verifier has to trust the Issuer ultimately to act\/issue honestly. So colluding Holders and Issuers just isn't part of the threat model. I don't honestly know how that could be different. The Issuer, if malicious, can issue a token with whatever content they want. verification and processing rules (this one specifically ). While malicious Issuers and colluding Holders and Issuers are outside the threat model for the reasons explained above. So this situation is handled as much as it can be (as best I know anyway) in the draft. A more sophisticated version of this “attack” illustrates the need for prevents a malicious Holder from finding a different Disclosure value that results in a digest that's the same as one in the signed credential. Protecting against a malicious user effectively forging or modifying disclosed claim names\/values is the security objective. Second-preimage resistance is not as strong as collision resistance but I believe is correct and sufficient for the context of use. And a malicious Issuer isn't something that's in scope and could do all kinds of other bad things. This is the section of the security considerations with that: URL signature algorithm would probably be worthwhile. Furthermore, preimage resistance is not a sufficient property to ensure section-11.5 along with some text that says that it needs to be “infeasible to calculate the salt and claim value that result in a particular digest”. We are trying to say that the hash has to have the property that it can’t be reversed (or even partially reversed, to your point). There’s probably a better way to state that the hash function has to be not at all reversible. Can you perhaps suggest some text? Or could we just replace “preimage resistant” with “irreversible” in that text in 11.5? And maybe qualify that text a bit more too with something like “infeasible to calculate the salt and claim value (or any portion thereof) that results in a particular digest”. current venue. And as the end of the draft’s intro states, “this specification aims to be easy to implement and to leverage established and widely used data formats and cryptographic algorithms” which is why a regular old secure hash function is being used - SHA-256 specifically and as the default, which I believe is completely suitable for the purpose. It seems like some tightening up or changes around the approach provided for hash algorithm agility might be in order. But I think the overall salted hash based approach is okay\/appropriate\/sufficient. “inclusion in the [...] registry alone does not indicate a hash algorithm's suitability for use in SD-JWT” after attempting to describe what makes a hash algorithm suitable for use with SD-JWT. We wanted to allow for some hash algorithm agility but really didn’t want to create yet another hash algorithm registry. That \"Named Information Hash Algorithm Registry\" isn’t an ideal fit but was the best one we were able to find. Pointing to that registry and having the associated security considerations seemed like a reasonably reasonable approach. originally) and I’m very reluctant to make any such statements that venture into legal territory. model implies\/assumes that an Issuer issues a token with no idea of where it’ll be presented and having a shared symmetric key seemed really weird in that situation. Also had to explain why or how a shared symmetric key would work in that situation too. 
That said, there’s been some talk about loosening that restriction - largely around the same reasoning on deniability that you cite - I guess your comment here should be taken as supportive of that prospective change? hard-wired set of claims\/constraints enforced\/required by verifiers is appropriate and necessary even when selective disclosure isn’t at play. It’s how regular JWT works (or should work) in practice already. And don’t think it harms future evolution because any new security constraints would necessarily need updates all around because they won’t be understood otherwise. The 11.8 section could perhaps be stronger in saying that claims controlling the validity of the token shouldn’t be made selectively disclosable. But what exactly constitutes such a claim is actually rather murky and there are undoubtedly exceptions like you point out with “aud”. And I know some have expressed the need\/desire to have key\/holder binding type claims (such as “cnf” but not only that) be made selectively disclosable for privacy reasons. I also don’t think a new registry would really help the situation. At this level we're describing a generic mechanism and don’t necessarily even have a list of all claims that could be interpreted as constraints in a particular context or application. Note, for example though, that URL does say that certain claims must be included directly and not made selectively disclosable. Which leads back to ultimately needing that requirement that Verifiers ensure that all the claims they deem necessary are present (in the clear or disclosed) when checking the validity and accepting the token. It’s not meant to preclude that kind of thing (and reading it again, I don’t think it does preclude it). But rather to just say that whatever the Verifier’s requirements on key binding are have to be based on its policy and not on the content of the token (some of which could be manipulated by the holder). A Verifier’s policy could, as you describe, afford different levels of access based on different security characteristics of the presented token\/credential. Or the Verifier’s requirements could, for example, say that for credential type A, Key Binding is always required, while for credentials of type B, it is only required when the credential was issued by an issuer that is known to issue credentials supporting Key Binding. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you.\nHi Brian, Apologies for the late reply. I think we’re closing in on agreement here. Comments and some wording suggestions inline below. How about the following? — To ensure privacy of claims that are not being selectively disclosed in a given presentation, the hash function MUST ensure that it is infeasible to calculate the salt and claim name and value (or any portion thereof) that results in a particular digest. This implies the hash function MUST be preimage resistant, but should also not allow an observer to infer any partial information about the undisclosed content. In the terminology of cryptographic commitment schemes, the hash function MUST be computationally hiding. The hash function MUST be second-preimage resistant. 
For any salt and claim value pair, it is infeasible to find a different salt and claim value pair that result in the same digest. The hash function SHOULD also be collision resistant. Although not essential to the anticipated uses of SD-JWT, without collision resistance an Issuer may be able to find multiple disclosures that have the same hash value. The signature over the SD-JWT would not then commit the Issuer to the contents of the JWT, which is surprising. Where this is a concern, the collision resistance of the hash function SHOULD match the collision resistance of the hash function used by the signature scheme. For example, use of the ES512 signature algorithm would require a disclosure hash function with at least 256-bit collision resistance, such as SHA-512. — (I’d like to add an informational reference defining these terms, but I can’t find a good one - even the NIST\/FIPS standards seem to just take terms like “collision resistance” for granted, so maybe we can too?) The current wording of section 11.6 says: “Verifiers MUST NOT take into account […] whether Key Binding data is present in the SD-JWT or not[…]” While 11.8 says: \"Verifiers therefore MUST ensure that all claims they deem necessary for checking the validity of the SD-JWT are present (or disclosed, respectively) before checking the validity and accepting the SD-JWT\" I think these constraints do rule out the migration path that I described. I don’t see how a service could incrementally and securely roll-out support for Key Binding (for example) over time—starting from a base ecosystem that isn’t using Key Binding. Either they would have to suddenly require Key Binding, and therefore start rejecting clients that haven’t updated yet, or they may have the situation where a stolen key-bound SD-JWT can be treated as a simple bearer token by not selectively-disclosing the cnf claim. Concretely, I think these phrases and section 8.4 should be removed\/re-written. How about the following: — An Issuer MUST NOT allow any security-critical claim to be selectively disclosable. The exact list of “security-critical” claims will depend on the application, and SHOULD be listed by any application-specific profile of SD-JWT. The following is a list of standard claim names that SHOULD be considered as security-critical by any SD-JWT Issuer: “iss” (Issuer) “aud” (Audience), although issuers may want to allow individual entries in the array to be selectively-disclosable “exp” (Expiration Time) “nbf” (Not Before) “iat” (Issued At) “jti” (JWT ID) In addition, the “cnf” (Confirmation Key) claim MUST NOT be selectively disclosable. Best wishes, Neil\nI’ve had a look through this new draft and I have some comments and questions. Some of which are similar to comments I already raised [1], but haven’t been addressed. Are we concerned about Holders and Issuers colluding? For example, now that claim names are blinded an Issuer can add the same claim multiple times with different, potentially contradictory, values and then the Holder can selectively release one disclosure to one Verifier and a completely different one to another Verifier. This seems problematic at least, that the “same” credential could be interpreted differently by different parties. A more sophisticated version of this “attack” illustrates the need for collision resistance in the hash function, not just preimage resistance as stated in the draft (and already raised by me in [1]). 
If the hash is not CR then a malicious Issuer can find colliding [salt, key, value] triplets that have the same hash value, give one to the Holder and then later claim that they actually signed the other one. (This is not just theoretical, similar attacks have been used to create fraudulent SSL certificates [2]). This also indicates that the security strength of the signature scheme is bounded by the collision resistance of the hash function - e.g. there’s little point using ES512 with SHA-256, for example. Probably the security considerations should suggest matching hash functions to signature algorithms. Furthermore, preimage resistance is not a sufficient property to ensure confidentiality of withheld claims. Preimage resistance holds if the attacker cannot recover the exact input that produced a given hash value, but it doesn’t preclude the attacker being able to recover partial information about that value. For example, consider the hash function H(x) = SHA-256(x) || t(x) (where t(x) is the last 10 bytes of x and || is concatenation). This hash function when applied to SD-JWT is preimage resistant (assuming the salt is strong), but leaks the last 10 bytes of the claim value. The appropriate security definition for SD-JWT is some form of cryptographic commitment scheme [3], with the associated security notions of hiding and binding. Honestly, I still think that this WG is not an appropriate venue for this kind of novel (in the OAuth world) cryptographic scheme and external review by experts would be helpful. All of this suggests that simply referring to the IANA hash function registry is not sufficient, as probably most of the entries there are not actually suitable for use in SD-JWT for one reason or another. Some other comments: The original JWT RFC [4] has this to say about the order of encryption and signatures: signatures over encrypted text are not considered valid in many jurisdictions. Presumably the same argument holds about signatures over blinded values? That should perhaps be noted at least. The draft repeatedly says that a symmetric signature algorithm (MAC) must not be used. Perhaps I’m missing something here, but why not? It doesn’t seem like it compromises any of the intended security properties. Use of a symmetric MAC may also limit the privacy impacts on users as it provides some measure of deniability (similar to that mentioned in section 12.1), as any Verifier in possession of the secret key could also have produced any claims that are subsequently leaked, allowing the user to deny that they were produced by the supposed Issuer. (This deniability property holds even without subsequent leaking of old signing keys). The security considerations about salt entropy should probably reference RFC 4086 (BCP 106) or something more up to date (maybe RFC 8937 too). I think section 11.8 about selectively disclosing constraints on the use of an SD-JWT is completely inadequate and will almost certainly lead to attacks. Requiring Verifiers to hard-wire the set of constraints they enforce is likely to be damaging to future evolution of the standard as new security constraints are added. It would seem especially problematic to allow selective disclosure of a “cnf” claim for example, and yet nothing forbids it and the history of JWT usage suggests that anything not forbidden will at some point happen (and even some things that are forbidden). I suggest establishing a registry of claims that are really usage constraints and forbidding issuers from allowing anything in that list to be selectively-disclosed.
(There are perhaps some exceptions here, e.g. “aud” is a constraint in this sense but it probably does make sense for it to be selectively disclosed, but only on a per-array-item basis: that way a Holder cannot remove the constraint entirely but can still hide other “recipients”). On a related note, section 11.6 says that to avoid this kind of attack, Verifiers MUST NOT take into account whether a Key Binding is present or not to decide whether to require Key Binding. This seems to preclude Verifiers that can handle different levels of access with different security requirements and is the sort of requirement that makes it near impossible to incrementally evolve tougher security requirements over time. It effectively becomes an all-or-nothing switch that will have the likely effect of making the use of Key Binding less attractive. Security hardening should be the easy path if we want to see it adopted. Best wishes, Neil [1]: URL [2]: URL [3]: URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-3a511402b15433427c992a820040f0567dfff267b9edefc3d970b2ad6487e0c8","text":"for\n\"The security considerations about salt entropy should probably reference RFC 4086 (BCP 106) or something more up to date (maybe RFC 8937 too).\" resulting from this thread URL \/ URL Thanks for the comments and questions Neil. With the help of the draft co-authors, I've tried to reply (probably inadequately!) inline below. On Tue, Oct 24, 2023 at 3:48 AM Neil Madden wrote: Verifier has to trust the Issuer ultimately to act\/issue honestly. So colluding Holders and Issuers just isn't part of the threat model. I don't honestly know how that could be different. The Issuer, if malicious, can issue a token with whatever content they want. verification and processing rules (this one specifically ). While malicious Issuers and colluding Holders and Issuers are outside the threat model for the reasons explained above. So this situation is handled as much as it can be (as best I know anyway) in the draft. A more sophisticated version of this “attack” illustrates the need for prevents a malicious Holder from finding a different Disclosure value that results in a digest that's the same as one in the signed credential. Protecting against a malicious user effectively forging or modifying disclosed claim names\/values is the security objective. Second-preimage resistance is not as strong as collision resistance but I believe is correct and sufficient for the context of use. And a malicious Issuer isn't something that's in scope and could do all kinds of other bad things. This is the section of the security considerations with that: URL signature algorithm would probably be worthwhile. Furthermore, preimage resistance is not a sufficient property to ensure section-11.5 along with some text that says that it needs to be “infeasible to calculate the salt and claim value that result in a particular digest”. We are trying to say that the hash has to have the property that it can’t be reversed (or even partially reversed, to your point). There’s probably a better way to state that the hash function has to be not at all reversible. Can you perhaps suggest some text? Or could we just replace “preimage resistant” with “irreversible” in that text in 11.5? And maybe qualify that text a bit more too with something like “infeasible to calculate the salt and claim value (or any portion thereof) that results in a particular digest”. current venue. 
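For readers following the hashing details in this thread, here is a minimal, non-normative sketch of how an SD-JWT Disclosure and its digest are typically computed (a salted JSON array, hashed with SHA-256, both base64url-encoded). The helper names (b64url, make_disclosure, disclosure_digest) are illustrative only and not taken from the draft.

```python
import hashlib
import json
import os
from base64 import urlsafe_b64encode


def b64url(data: bytes) -> str:
    # base64url without padding, as used in JOSE and SD-JWT
    return urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def make_disclosure(claim_name: str, claim_value) -> str:
    # A Disclosure is the base64url encoding of a JSON array of
    # [salt, claim name, claim value]; the salt is fresh per Disclosure.
    salt = b64url(os.urandom(16))  # 128 bits of randomness
    return b64url(json.dumps([salt, claim_name, claim_value]).encode("utf-8"))


def disclosure_digest(disclosure: str) -> str:
    # The digest placed in the signed payload is the base64url-encoded
    # SHA-256 hash over the ASCII bytes of the Disclosure string.
    return b64url(hashlib.sha256(disclosure.encode("ascii")).digest())


d = make_disclosure("family_name", "Doe")
print(d, disclosure_digest(d))
```

With a high-entropy salt, guessing the claim name and value from the digest alone is infeasible, which is the property under discussion here.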
And as the end of the draft’s intro states, “this specification aims to be easy to implement and to leverage established and widely used data formats and cryptographic algorithms” which is why a regular old secure hash function is being used - SHA-256 specifically and as the default, which I believe is completely suitable for the purpose. It seems like some tightening up or changes around the approach provided for hash algorithm agility might be in order. But I think the overall salted hash based approach is okay\/appropriate\/sufficient. “inclusion in the [...] registry alone does not indicate a hash algorithm's suitability for use in SD-JWT” after attempting to describe what makes a hash algorithm suitable for use with SD-JWT. We wanted to allow for some hash algorithm agility but really didn’t want to create yet another hash algorithm registry. That \"Named Information Hash Algorithm Registry\" isn’t an ideal fit but was the best one we were able to find. Pointing to that registry and having the associated security considerations seemed like a reasonably reasonable approach. originally) and I’m very reluctant to make any such statements that venture into legal territory. model implies\/assumes that an Issuer issues a token with no idea of where it’ll be presented and having a shared symmetric key seemed really weird in that situation. Also had to explain why or how a shared symmetric key would work in that situation too. That said, there’s been some talk about loosening that restriction - largely around the same reasoning on deniability that you cite - I guess your comment here should be taken as supportive of that prospective change? hard-wired set of claims\/constraints enforced\/required by verifiers is appropriate and necessary even when selective disclosure isn’t at play. It’s how regular JWT works (or should work) in practice already. And don’t think it harms future evolution because any new security constraints would necessarily need updates all around because they won’t be understood otherwise. The 11.8 section could perhaps be stronger in saying that claims controlling the validity of the token shouldn’t be made selectively disclosable. But what exactly constitutes such a claim is actually rather murky and there are undoubtedly exceptions like you point out with “aud”. And I know some have expressed the need\/desire to have key\/holder binding type claims (such as “cnf” but not only that) be made selectively disclosable for privacy reasons. I also don’t think a new registry would really help the situation. At this level we're describing a generic mechanism and don’t necessarily even have a list of all claims that could be interpreted as constraints in a particular context or application. Note, for example though, that URL does say that certain claims must be included directly and not made selectively disclosable. Which leads back to ultimately needing that requirement that Verifiers ensure that all the claims they deem necessary are present (in the clear or disclosed) when checking the validity and accepting the token. It’s not meant to preclude that kind of thing (and reading it again, I don’t think it does preclude it). But rather to just say that whatever the Verifier’s requirements on key binding are have to be based on its policy and not on the content of the token (some of which could be manipulated by the holder). A Verifier’s policy could, as you describe, afford different levels of access based on different security characteristics of the presented token\/credential. 
Or the Verifier’s requirements could, for example, say that for credential type A, Key Binding is always required, while for credentials of type B, it is only required when the credential was issued by an issuer that is known to issue credentials supporting Key Binding. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you.\nI’ve had a look through this new draft and I have some comments and questions. Some of which are similar to comments I already raised [1], but haven’t been addressed. Are we concerned about Holders and Issuers colluding? For example, now that claim names are blinded an Issuer can add the same claim multiple times with different, potentially contradictory, values and then the Holder can selectively release one disclosure to one Verifier and a completely different one to another Verifier. This seems problematic at least, that the “same” credential could be interpreted differently by different parties. A more sophisticated version of this “attack” illustrates the need for collision resistance in the hash function, not just preimage resistance as stated in the draft (and already raised by me in [1]). If the hash is not CR then a malicious Issuer can find colliding [salt, key, value] triplets that have the same hash value, give one to the Holder and then later claim that they actually signed the other one. (This is not just theoretical, similar attacks have been used to create fraudulent SSL certificates [2]). This also indicates that the security strength of the signature scheme is bounded by the collision resistance of the hash function - e.g. there’s little point using ES512 with SHA-256, for example. Probably the security considerations should suggest matching hash functions to signature algorithms. Furthermore, preimage resistance is not a sufficient property to ensure confidentiality of withheld claims. Preimage resistance holds if the attacker cannot recover the exact input that produced a given hash value, but it doesn’t preclude the attacker being able to recover partial information about that value. For example, consider the hash function H(x) = SHA-256(x) is concatenation). This hash function when applied to SD-JWT is preimage resistant (assuming the salt is strong), but leaks the last 10 bytes of the claim value. The appropriate security definition for SD-JWT is some form of cryptographic commitment scheme [3], with the associated security notions of hiding and binding. Honestly, I still think that this WG is not an appropriate venue for this kind of novel (in the OAuth world) cryptographic scheme and external review by experts would be helpful. All of this suggests that simply referring the the IANA hash function registry is not sufficient, as probably most of the entries there are not actually suitable for use in SD-JWT for one reason or another. Some other comments: The original JWT RFC [4] has this to say about the order of encryption and signatures: signatures over encrypted text are not considered valid in many jurisdictions. Presumably the same argument holds about signatures over blinded values? That should perhaps be noted at least. The draft repeatedly says that a symmetric signature algorithm (MAC) must not be used. 
Perhaps I’m missing something here, but why not? It doesn’t seem like it compromises any of the intended security properties. Use of a symmetric MAC may also limit the privacy impacts on users as it provides some measure of deniability (similar to that mentioned in section 12.1), as any Verifier in possession of the secret key could also have produced any claims that are subsequently leaked, allowing the user to deny that they were produced by the supposed Issuer. (This deniability property holds even without subsequent leaking of old signing keys). The security considerations about salt entropy should probably reference RFC 4086 (BCP 106) or something more up to date (maybe RFC 8937 too). I think section 11.8 about selectively disclosing contraints on the use of a SD-JWT is completely inadequate and will almost certainly lead to attacks. Requiring Verifiers to hard-wire the set of constraints they enforce is likely to be damaging to future evolution of the standard as new security constraints are added. It would seem especially problematic to allow selective disclosure of a “cnf” claim for example, and yet nothing forbids it and the history of JWT usage suggests that anything not forbidden will at some point happen (and even some things that are forbidden). I suggest establishing a registry of claims that are really usage constraints and forbidding issuers from allowing anything in that list to be selectively-disclosed. (There are perhaps some exceptions here, e.g. “aud” is a constraint in this sense but it probably does make sense for it to be selectively disclosed, but only on a per-array-item basis: that way a Holder cannot remove the constraint entirely but can still hide other “recipients”). On a related note, section 11.6 says that to avoid this kind of attack, Verifiers MUST NOT take into account whether a Key Binding is present or not to decide whether to require Key Binding. This seems to preclude Verifiers that can handle different levels of access with different security requirements and is the sort of requirement that makes it near impossible to incrementally evolve tougher security requirements over time. It effectively becomes an all-or-nothing switch that will have the likely effect of making the use of Key Binding less attractive. Security hardening should be the easy path if we want to see it adopted. Best wishes, Neil [1]: URL [2]: URL [3]: URL"} +{"_id":"q-en-oauth-selective-disclosure-jwt-2f6f936928fbfe80a41aa36f616987431cdbcf0bdcf823cdc035390390653a46","text":"to\nIn draft 06 section there is a list of steps: >For presentation to a Verifier, the Holder MUST perform the following (or equivalent) steps: >2. If Key Binding is required, create a Key Binding JWT. >3. Assemble the SD-JWT for Presentation, including the Issuer-signed JWT, the selected Disclosures and, if applicable, the Key Binding JWT. >4. Send the Presentation to the Verifier. Now that draft 06 also has the requirement to add (section ) to Key Binding JWTs, a more natural order for the steps would be to create the SD-JWT for Presentation with the selected Disclosures before the Key Binding JWT, as the presentation is needed for calculating the required digest for the Key Binding JWT. E.g. something like this: >For presentation to a Verifier, the Holder MUST perform the following (or equivalent) steps: >2. Assemble the SD-JWT for Presentation, including the Issuer-signed JWT and the selected Disclosures. >3. If Key Binding is required, create a Key Binding JWT and add it to the Presentation. >4. 
Send the Presentation to the Verifier. With that step 2. constructs the input required for the digest computation in step 3. For extra clarity, the requirement to compute and include the integrity protection digest to the Key Binding JWT could be mentioned also here with a reference to section 5.3.1. for details.\nThat makes sense, thank you!"} +{"_id":"q-en-oauth-selective-disclosure-jwt-1778203ade0b15e5d887ea455c1229cf9393b3f54125ed16a99c2363e3b0e67d","text":"Remove initial underscore on for I do think this change is appropriate but think we should publish a -07 soon if we proceed with it.\nIt makes sense for to start with an underscore, since it is being inserted into a JSON object supplied by the caller and it needs to be something unlikely to conflict with an existing claim. The field appears in the key binding JWT, a format being defined by this document. So it can safely live without the underscore.\nThat's a fair point, I think.\nI think the same argument also applies to . It's just a normal JWT claim, so it can be deconflicted using the usual registry process.\nReally they are all just JWT claims and will be registered. The leading underscore isn't magical and doesn't exempt from registration. But it is a nicety to help avoid conflict, which is particularly relevant with where\/how and are used. They both are in the issuer signed JWT where there is some potential for name conflicts with application supplied data (JWT's whole registered, public and private claims guidance is supposed to help here too but conflicts are still possible). For general clarity and consistency, I agree (I think anyway) with removing the initial underscore on . But is more similar to in usage\/context and those two should keep their leading underscore.\nTo me the difference between and seems to be that is at the top level of the JWT claims, where we're used to dealing with . But can be down in the guts of the claims, which the JWT signer might have less control over. So I would go for , but and .\nRegistered claims (I'm a so-called DE on there so I have some familiarity) don't fully address conflicting names even at the top level with the possibility of the use of private claim names. More could be said about that registry and what it does and doesn't provide in practice but that'd be a digression... Anyway, that and show up in the content of issuer-signed JWT is my point of vew for keeping the underscore on both. Also for keeping those two similar to each other. Lastly note that and have been in the draft for many revisions (back to -02 I think) whereas was just introduced in -06. Changing at this point likely wouldn't impact implementations much but changing would.\nI'd prefer to not see ANY public claim names that start with . It starts a convention which I would prefer not to see repeated.\nIn OpenID Connect, we decided that our \"meta-claims\" would start with underscore to visually distinguish them from normal claims. For instance, see the claims and in the registry at URL I support SD-JWT continuing to follow this convention and so would prefer that the leading underscore be kept for its meta-claims.\nNAME What is the difference between \"normal claims\" and \"meta-claims\" here? And why did OpenID Connect go that way?\n\"Meta-claims\" are claims about the structure of where actual claim values can be found - claim values that are not simply top-level members in the JWT Claims Set. 
They're road-signs saying where to go to get them - not the destinations themselves.\nThanks, that distinction makes sense. Applying that to SD-JWT, it seems directly applicable to , and sort of fits as well, since it tells you how to read the road-signs (to continue NAME metaphor). I'm OK with these keeping ``. But is not a road-sign, it's the main event. If you ignore or , you miss out on some claims, but nothing fundamental breaks. If you ignore , you're not enforcing the key security properties of the system. So I think the original issue here is still valid.\nPR URL would remove the leading underscore on (and only there). Which is the original issue here."} +{"_id":"q-en-oauth-selective-disclosure-jwt-4e737744c10436e87f41319b29ba5689cca856fbd33f44ad01a49027d366b7f3","text":"In an attempt at addressing URL see it live here URL\nApproved with a minor editorial fix."} +{"_id":"q-en-oauth-selective-disclosure-jwt-e4286263dc374d23bb24687a450a34babdd543792c27d5e65f19786b61437501","text":"adding SD-JWT claim Note: need to update the examples to include it once we agree this claim is needed (cc NAME )\nWe need to say in the checking algorithm that verifiers MUST check the alg and MUST only accept hash_algs they understand and deem secure.\ninformation about the hash function should probably be included inside SD-JWT. Based on NAME 's comment to the PR URL\nYes, needed indeed. This and more details about hash formatting (how to transform inputs into byte arrays to be passed to the hash function).\nThe alternative is to use the same hash function as was used for the signature. This is done in OpenID Connect Core at URL in the definition (as well as in the definition. The language is:\nYou might want to use a different hash function to reduce the SD-JWT size (e.g., SHA-256 truncated to 128 bit), or decide on a stronger hash function with iteration to reduce the size of the salt (again, to minimize the size of the artefact). Bottom line, choosing the hash function independently might be useful.\nAs discussed in the call, if we allow different hashes to be used, we may want to make a recommendation that for simplicity an implementor may want to use the same hash algorithm to minimise implementation errors.\nAssuming we're not going to use the trick that OpenID Connect uses to determine the hash algorithm implicitly, then I approve of this approach."} +{"_id":"q-en-oauth-selective-disclosure-jwt-e15a80ecbd39c82eedc7761ae1ce64bfe25deec3cabe03247160cdd4d9743c2b","text":"Clarify validation around no duplicate digests in the payload (directly or recursively) and no unused disclosures at the end of processing (to & )\nSD-JWT validation processes should require that the validating party (Holder or Verifier) validate that the hash of each disclosure object appears in the Issuer JWT claims. If this validation is not performed, then the SD-JWT format is malleable, in the sense that arbitrary additional disclosure objects can be added beyond those that the issuer intended. This creates the opportunity for mischief, from bugs to covert channels. This is not a difficult property to enforce. The already has the validating party compute the lists of (1) hashes of disclosure objects and (2) disclosure hashes in the Issuer JWT. So the only additional step required is to check that (1) is a subset of (2).\nThat requirement that the hash of each disclosure must appear in the Issuer-singed JWT is in the current draft text - near the top of has \"a Holder or a Verifier MUST ensure that ... 
all Disclosures are correct, i.e., their digests are referenced in the Issuer-signed JWT\". Seems it could be made more explicit or clear? But it is there.\nAh, thanks, I had missed that. Even given that though: I would not call this property \"correct\". The only other place that property appears is in 11.2, so maybe it can just be dropped, and we can just say directly \"All disclosures correspond to a hash value that appears in the Issuer JWT\" Given recursive disclosure, though, even that's not really sufficient. A recursively disclosed claim does not in fact appear in the Issuer JWT -- you basically need to verify that it hash-chains into the Issuer JWT via a series of disclosures If we're going to eliminate recursive disclosure, I think the current text probably suffices (modulo \"correct\"). But if we're going to keep it, we need to elaborate the algorithm. Maybe it can be folded into the existing recursive algorithm as a termination condition. If you're adding the hashes inside disclosed values as you restore them, then if you get to the point where none of the claim hashes are in the set of disclosure hashes, then if the set of disclosure hashes is non-empty, the object is invalid.\nAssuming here in this ticket at least that we are not going to eliminate recursive disclosure. The use of \"correct\" is indeed awkward and that whole bit of text could be improved. There's room for a lot of improvement there. I've always understood text like \"referenced in the Issuer-signed JWT\" as intending to mean included directly in that JWT but also included recursively. But being more explicit is probably worthwhile. I think it'd be sufficient to say somewhere something like \"all disclosures must correspond to a hash value that appears (either directly or recursively) in the Issuer-signed JWT\". That somewhere could be in place of the \"all Disclosures are correct\" text and\/or a step in the validation steps near . To you earlier point - it's not a difficult property to enforce. Even with the recursion. A validating party needs to compute the lists of (1) hashes of the disclosure objects and has to look at (2) all disclosure hashes in the Issuer JWT (including by recursive inclusion). The only additional step is still just to check that (1) is a subset of (2). Which is basically just checking that there were no unused disclosures at the end of processing.\nAgreed, if we can update the algorithm to reject if any hashes are left over, then that would fix this issue.\nGreat, I think we can update the the algorithm to reject if any disclosures are left over. It'd probably go near the check to ensure that no digest value is repeated in the whole of the SD-JWT - either in the payload directly or recursively in any disclosure (that check about duplicate digests needs a bit of improvement too). Also we should tidy up that text with \"correct\".\nFrom the mailing list Hi Jacob, the intention was to cover the first case you listed. We should clarify this. -Daniel Am 20.10.23 um 15:02 schrieb Jacob Ward:\nmy 2 cents at\/from URL\nAlso... um, wouldn't \"More than one Disclosure have[ing] the same digest\" imply a collision in the hash function? And therefore infeasible to actually happen. Agree that it should be clarified. Being precise with language around this stuff is tricky. But my understanding of the intent was to ensure that no digest value is repeated in the whole of the SD-JWT - either in the payload directly or recursively in any Disclosure. 
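A rough, non-normative sketch of the checks discussed above (no digest value repeated anywhere in the SD-JWT, and no presented Disclosure left unreferenced at the end of processing). The function and variable names are illustrative, and the digest list is assumed to have been collected while walking the payload and any recursively disclosed content.

```python
import hashlib
from base64 import urlsafe_b64encode


def disclosure_digest(disclosure: str) -> str:
    raw = hashlib.sha256(disclosure.encode("ascii")).digest()
    return urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")


def check_disclosures(embedded_digests: list, disclosures: list) -> None:
    # embedded_digests: every digest referenced by the Issuer-signed JWT,
    # directly in the payload or recursively via other Disclosures.
    if len(embedded_digests) != len(set(embedded_digests)):
        raise ValueError("a digest value is repeated in the SD-JWT")
    presented = {disclosure_digest(d) for d in disclosures}
    # Every presented Disclosure must correspond to a referenced digest;
    # anything left over is an unused (unreferenced) Disclosure.
    if presented - set(embedded_digests):
        raise ValueError("Disclosure(s) not referenced by the Issuer-signed JWT")
```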
Because of the trickiness of language, I'm not sure if we disagree or not about the intent... CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you."} +{"_id":"q-en-oauth-selective-disclosure-jwt-300fd513d58c5a0ca0cacc89eb51f429f1301cf941b398d082843f3cc146c467","text":"which can be found at\nI think the following change might be cleaner per in Jefferey's review. \"9. Check that the Key Binding JWT is valid in all other respects, per [NAME and [NAME -> \"9. Check that the Key Binding JWT is a valid JWT in all other respects, per [NAME and [NAME\nSure, that's done in 4442c3a06767498ee24154024d8dff987325b08f\nWhile reviewing URL, I had to understand some of the SD-JWT verification algorithms. I noticed some issues, which I've listed here. I haven't filed individual issues yet because I'm not sure how you'll want to group them. Thanks to NAME for doing a first pass; any remaining mistakes are probably because I didn't take his advice. [x] This should follow URL by specifying \"what serialization and serialization features are used\". I see that 1.1. Feature Summary mentions the compact serialization, but that doesn't clearly say that this embedded JWT uses it.[x] \"The claim name, or key, as it would be used in a regular JWT payload. The value MUST be a string.\" <- I assume the \"value\" is the key, and not the key's value in the JSON object? [x] Why is 128 bits only recommended and not required for the salt?[x] This should follow URL by specifying \"what serialization and serialization features are used\". I see that 1.1. Feature Summary mentions the compact serialization, but that doesn't clearly say that this embedded JWT uses it.[x] This mentions \"bytes preceding the KB-JWT in the Presentation\", but several other places mention \"Key Binding is provided by means not defined in this specification\", in which case the relevant bytes don't precede the KB-JWT. These should be consistent.[x] Are \"Upon receiving an SD-JWT, a Holder or a Verifier MUST ensure that\" and \"The Holder or the Verifier MUST perform the following (or equivalent) steps when receiving an SD-JWT\" accomplishing the same thing? [x] \"Separate the SD-JWT\" should cite an algorithm for doing that. I think it's \"5. SD-JWT Data Formats\" which implies it's just \"split on '~'\"? [x] \"Ensure that a signing algorithm was used that was deemed secure\" disclosure and then call a recursive element-processing-and-replacing function, but this isn't too bad. [x] \"Remove all array elements for which the digest was not found\" probably won't confuse anyone for long, but my first reading said that an array in the payload like [1, 2, 3] would wind up empty since no digests were found for the elements. [x] \"Check that the SD-JWT is valid using claims such as nbf, iat, and exp\" should say where those claims are defined and what part of which object they appear on. (URL, and the top level of the payload, not the header, right?) [x] The \"required validity-controlling claim\"s should probably be an input parameter to this algorithm. 
Possibly all the IANA-registered JWT claim handling should be passed in, or this algorithm should return all the claims for the caller to deal with.[x] \"create a Key Binding JWT\" <- refer to the section that says how to do this. [x] \"Assemble the SD-JWT for Presentation\" <- This should at least refer to \"5. SD-JWT Data Formats\" and possibly just write out the algorithm of \"make an array of the Issuer-signed JWT, the selected Disclosures and, if applicable, the Key Binding JWT, and join the array with '~'.\"[x] This is clearer that \"Verifiers MUST ensure that\" is accomplished by the steps in the following algorithm, but I don't like the redundant MUST. You can just say \"Verifiers need to ensure that\" and then leave the normative requirements to the algorithm. [x] \"according to the Verifier's policy\" should be an input parameter to this algorithm, that the verifier provides along with the bytes of the presentation. [x] \"verify the Key Binding according to the method used\" <- how? This might be URL, but you should cite the details. [x] \"Determine the public key for the Holder\" <- how? This might be the 'cnf' claim, but you should say so. [x] \"Ensure that a signing algorithm was used that was deemed secure\" <- as above, this should be an input parameter to this algorithm or computed from the public key. [x] \"Validate the signature\" <- again, cite how to do this. [x] \"the typ of the Key Binding JWT\" <- is that in the header or payload? (cite 5.3. Key Binding JWT and possibly URL) [x] \"within an acceptable window.\" <- should be an input parameter [x] \"validating nonce and aud claims\" <- The instructions for doing this should be an input parameter to this algorithm. [x] \"Check that the Key Binding JWT is valid in all other respects\" should call a particular algorithm from the other specs that it cites.[x] Many of these should be normative. For example, base64url is used all over the normative parts of this specification, and it comes from [RFC7515] in this section.\nThanks for the review Jeffrey! I'm acknowledging it with this comment but also noting that there's a lot here so response\/action will take some time.\n1c1f165ab821dea5350f62225f103bdceff31532 one was an oversight and one was a tooling issue and oversight. I think that covers them tho.\nThose other specs don't have algorithms that can be called nor do they have suitable sections to reference directly.\nSome contributors want to leave open the option of a more computationally expensive hash with smaller salt values.\nYes and I've tried to remove any ambiguity there by just not using the word \"value\" with d015a3de2ae50623b4a294590fb2d0249e7d33b5\nPer , \"JWTs are always represented using the JWS Compact Serialization\" the compact serialization is already specified.\nThat is for the claim in Key Binding JWT so isn't relevant elsewhere.\n& Yeah, with 2a0e270f2e1dbfa9664bb5ce4989a2f6a1765217 I basically applied your suggestion to both places.\nYup 7d68d4ed9b91e49a0aa00254475bf7c116669dd6\nYes but I believe it's sufficiently clear in the context of the draft.\nThis draft is not structured such that it has input parameters to algorithms so suggestions to make something an input parameter to an algorithm are not actionable or practical.\ne9acd5ed665b107776b6b41355680470cde89f2c\nis a JOSE header. Will cite sec 5.3. 
where that's more explicit 64a4c4519668c7a63f1a0be215176657895fa0c8\nThat sentence starts with \"If Key Binding is provided by means not defined in this specification,\" so there are not specific details to cite. mentions that other ways of proving Key Binding are possible. DPoP\/rfc9449 is one possibility but not the only one. Nor a particularly likely one. At least not likely enough to warrant mention here.\nThe very next part of that sentence says \"in the processed payload\" and that's a non-exhaustive list of JWT claims for which a citation\/reference isn't really needed.\nI feel like it's sufficiently clear from context and shouldn't confuse anyone for too long.\nNoted.\nThis draft intentionally doesn't prescribe how keys are distributed, identified, selected, trusted, etc. any more than JWT\/JWS do (e.g. URL, URL, etc). A map from (issuer ID, key ID) pairs to public keys is one application but certainly not the only one.\nreferencing that 072a0605bcb62937e343aaa7812aa8ef86d8379f\nThat's a somewhat existential and loaded question but common usage of the word doesn't preclude it describing nested items and array elements. And this draft is not alone in that. In the intro it also says, '\"Claims\" here refers to both object properties (name-value pairs) as well as array elements.'\n\n\"create a Key Binding JWT\" <- refer to the section that says how to do this. cbee5f8457f447574400377dc6f7c9fe7f1c3c18\na474e2b0b0ff7635af38c2ac95e70b40b2f82798 makes things a little more a bit more prescriptive about suggesting RFC7800 cnf\/jwk be used to convey the Key Binding key and references that section from the \"Verification by the Verifier\" section.\nThank you so much, Brian! overall, agree with all the responses. I made few comments on the PR, but one thing to note here too: I think recommending claim per comment is inconsistent with a comment that says that \"this draft intentionally doesn't prescribe how keys are distributed, identified, selected, trusted, etc. any more than JWT\/JWS do. so I think recommending is redundant and using in examples is sufficient.\nThat latter comment was about (and in reply to a suggestion about) validating the signature on the Issuer-signed JWT and checking that the signing key belongs to this Issuer. While the former comment is about the holder's public key inside the Issuer-signed JWT itself. Different things so I don't believe there's really an inconsistency in my statements. Nor do I think that being a little more a bit more prescriptive and more clear about using RFC7800 cnf\/jwk to convey the Key Binding key is redundant. Rather this review of a review seemed like a good opportunity to provide a little more clarity and guidance in an area the draft that could benefit from such.\none nit, otherwise looks good - thank you, Brian!"} +{"_id":"q-en-oauth-selective-disclosure-jwt-dfa9f84e0eeaea0152f0527cea126671cab477e749f200124036773a03b7c8bc","text":"This addresses Issue . The rest of the text still refers to asymmetric signatures, but technically, an HMAC is now allowed.\nThe scope of this PR was intentionally to only remove the explicit prohibition on MAC. Not to explain how it might work or adjust other language about asymmetric signatures or private keys or similar. 
Just remove the explicit prohibition so that, should some deployment\/ecosystem\/jurisdiction need to MAC for whatever reason, it's not directly going against a normative must."} +{"_id":"q-en-oauth-selective-disclosure-jwt-da9fd5bfc508e9db6247024d2700470e77f6f091fbcd69bf080f82bd35031c66","text":"Editorial updates for more consistent treatment of a Disclosure vs the contents of a Disclosure (to Confusing terminology) Editor's draft PR preview: URL\nHi, Disclosure is used at least two ways in the document. to refer to the array containing the salt, (maybe name), and value. to refer to the base64url encoded string It's easy to understand how this could have happened, as both contain logically the same information. I propose explicitly calling these: disclosure array, and disclosure presentation I also propose calling the hash\/digest of the disclosure the disclosure digest in full in every use. The only time we would use \"disclosure\" without one of these modifiers is talking about the process of disclosing in general. As for the plaintext of the claims that can be disclosed, I propose blinded claim. So the first paragraph of Section 4.1 would read: An SD-JWT consists of: a signed JSON document, a list of disclosure presentations (each of which reveals a blinded claim), and optionally an issuer key binding (discussed in Section 4.3). The signed JSON document (the JWT body) contains plaintext claims; and disclosure digests, each of which either refers to a specific blinded claim or is a decoy. The holder can include zero or more disclosure presentations without breaking the signature of the JWT body. The contents of the blinded claims cannot be modified, because the corresponding digest would no longer match any digest in the JWT body. Blinded claims can be individual object properties (key-value pairs) or array elements.\nThis is appreciated! I agree that we should be more consistent with our terms. I'm not sure about \"disclosure presentation\" as \"presentation\" suggests a specific meaning. Maybe just \"disclosure\" and \"disclosure contents\" is fine.\nThe current document attempts to mostly use \"Disclosure\" to mean the base64url encoded string. I think that works okay. Places where it's referring to the decoded JSON array should be fixed to be more clear. So I think I agree that \"disclosure\" and where needed \"disclosure contents\" would be good.\nPR has editorial updates for more consistent treatment of a Disclosure vs the contents of a Disclosure\nmade one suggestion, but overall, looks good to me! thank you!"} +{"_id":"q-en-oauth-selective-disclosure-jwt-c4e5ef2e872a087b62a8f2bd531a2a5b52ff59fadcede5cd53e7cd87ad1f45b3","text":"Attempt to better explain how salt in the Disclosure makes guessing the preimage of the digest infeasible Consolidate salt entropy and length security consideration subsections\nquestion from an email, my response: The exchange makes me think a brief mention\/explanation what salt does\/provides in the SD-JWT context would be a worthwhile addition. Maybe just add or modify a sentence or two in sec 10.3. And\/or something in URL where the salt value in the Disclosure is introduced\/described.\nHi Brian, just check for understanding: does the issuer send the clear text values of the disclosed claims to the holder and verifier or does the issuer only send the salted hashes? 
[Flattened ASCII diagram showing the SD-JWT flow: the Issuer issues an SD-JWT including all Disclosures to the Holder, who presents an SD-JWT including selected Disclosures to the Verifier]\nI'm sorry but I don't quite understand the question or how it relates to the small change I'm suggesting in this issue.\nFYI, I was asked what the salt does by an IETF attendee yesterday. I support explaining this in the draft.\nPR has some proposed text"} +{"_id":"q-en-oauth-selective-disclosure-jwt-17889bd1781ad13105ae6f5075ab220a8344d28d1fb64835211725300ac867b1","text":"for issue URL\nThe layout of these section delineations seems off. There's an "Example 1" section header, but no other numbers. Some examples are top-level, some are subordinate. Is this intended?\nKinda. has additional examples with some of those other numbers. And a few places refer back to "Example 1" by name. This all could probably be tidied up though.\nlike it"} +{"_id":"q-en-oauth-selective-disclosure-jwt-7550618da45de94fb171c9ccb92dbc356549632c2d70cfe76153cc61442db00e","text":"would\nIt seems like it could be helpful to implementors, allowing them to quickly validate whether what they have is syntactically an SD-JWT or an SD-JWT with key binding. Something like:\nThe actual helpfulness of ABNF IMHO really depends on the readers' familiarity with ABNF. I don't know that such familiarity is particularly prevalent. But, as long as it's correct, it doesn't hurt to include either. And while I'm not overly familiar with ABNF myself, I know enough to know that that isn't valid ABNF and doesn't quite correctly convey the SD-JWT constructs. I've endeavored to fix it up but am not 100% sure this is correct either: \\ with a bit of help from https:\/\/author-URL\nLooks good to me, thank you! Maybe a small improvement would be to introduce a name for disclosure?\nThat's a good improvement, thanks!\nLooks good to me, and appears valid according to the .\n(SD-JWT-KB part of the ABNF depends on another PR)\nneed to wait after is resolved to do a PR.\nmaybe add a KB-JWT line to be even more better? a la:\nNot my religion, but happy to celebrate anyway."} +{"_id":"q-en-oauth-selective-disclosure-jwt-aad38a7aa1fee05a8ede13dea26c9441c147a19451fc4f01ac4478f0e98f67e6","text":"More definitive language around the exclusive use of the cnf claim for enabling Key Binding might just\nNeed to look at text in one or more of the sec considerations too... todo\nRight after the IETF 119 OAuth session yesterday in a talk with NAME and NAME that started with discussion of PR , the question again came up (with some surprise on their part) as to why the claim isn't the one and only way specified to put the holder's public key in the issuer-signed JWT. I honestly had a hard time giving a good rationale for SD-JWT not being more prescriptive with its requirements here. It seems that using the claim exclusively would be more straightforward and much better for interop. NAME has previously made similar comments, "it seems like there's an unnecessary level of ambiguity around how the Issuer-signed JWT binds to a holder public key. Can we not just say ? Why do we need more? This point actually worries me more than [others] -- without clarity on this, there is no hope of writing stacks that interop." Using the claim exclusively could also help with some clarity in the where there currently has to be wording like "[if there's] no recognized Key Binding data is present in the SD-JWT..." and " ... 
But will expand the text defining Holder to better accommodate its use throughout the document. possible\" line of reasoning, the use of the words \"array elements\" to mean array elements is the most concise and meaningful terminology the document authors were able to conceive of to convey the concept of elements in arrays. I believe it would be a disservice to readers to embellish the wording. a different misunderstanding at play here... and Terminology\", sufficient and appropriate for the mention of a piece of terminology established by convention in a previously published RFC. 5.3. DPoP\/RFC9449 is actually a complete sentence and this draft will be fixed so that it will also have a complete sentence. 5.3.2. the cnf claim the way to enable key binding with that increased interoperability as a primary aim. factually incorrect. The current wording might not be as descriptive as you'd like but it is correct. In what way does the encoding make it not a SHA-256 Hash? Asking rhetorically. opinion, that supports the assertion that any of that is \"best practice\". There are two occurrences of \"[like or such as] nbf, iat, and exp\" in the draft. In URL they are mentioned in the context of \"claims carrying time information\", which is what they all are. In URL they are mentioned in the context of claims to check regarding the validity of the SD-JWT. The \"nbf\" claim, as you likely know from (I think?) having written the text in RFC7519 , has normative requirements around when the \"JWT MUST NOT be accepted for processing.\" As such, it is entirely appropriate in the context of mentioning some validity-controlling claims. The only normative requirement on the \"iat\" claim , however, is about the syntax of its value. Rejecting a token based on the \"iat\" value can be required by a specific profile of JWT, such as was done in DPoP, but is otherwise something that is at the discretion of an implementation\/deployment and something that has apparently been the cause of some interoperability problems . Arguably \"iat\" shouldn't be in that list. 10.7. belong in this list. place of content omitted from the main payload. The ellipsis is often used in place of content omitted from the context at hand. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you.\nThanks very much for addressing the majority of my comments and for replying to the rest of them. I’m replying inline to the remaining ones that I believe should still be addressed. I’ve deleted the ones from this thread that I’m satisfied with the responses to. Best wishes, -- Mike From: Kristina Yasuda Sent: Wednesday, August 14, 2024 6:07 AM To: Brian Campbell Cc: Michael Jones ; EMAIL Subject: Re: [OAUTH-WG] Re: Review of draft-ietf-oauth-selective-disclosure-jwt-10 Thank you very much, Mike. Majority of your comments have been incorporated in this PR URL, which has been merged. Below, in bold, please find explanations for the points that have not been reflected. We appreciate your review. Best, Kristina Introduction: The usage of “Holder” in “used a number of times by the user (the \"Holder\" of the JWT)” inconsistent with its usage in the definitions, which defines “Holder” as being “an entity”. 
The usage here in the introduction makes the holder into a person rather than an entity. Please adjust the language here to not confuse the user who utilizes the holder with the holder itself. -> Holder is defined as an entity and a person can be considered an entity. We also have a phrase \"the user (the \"Holder\" of the JWT)\" in the introduction. Hence the editors consider the current wording to be sufficient. Since “Holder” is sometimes used in the spec to refer to the Wallet software and sometimes used to refer to the person using the Wallet, I believe that readers would be well served to make a clear terminological distinction between the two. Introduction: In “\"Claims\" here refers to both object properties (name-value pairs) as well as array elements.” change “array elements” to “elements in arrays that are the values of name\/value pairs” (or something like that). Without saying what the array elements are, readers will be confused about what’s being referred to. I’d apply this clarification in 4.1. SD-JWT and Disclosures and other applicable locations as well. -> There is a misunderstanding. the intended meaning here is exactly what the text says: \"each element in the array (regardless if it is a name\/value pair or not, an array element as a string counts too) can be selectively disclosed\". The fact that a misunderstanding is possible and occurred leads me to believe that a clarification of this phrase is needed. Readers won’t necessarily know where the “array elements” being referred to occur in the data structures. That was certainly my experience, hence my initial comment. Possible wording that’s simpler than my initial proposed wording could be “array elements occurring within Claim values”. Or clarify it with different wording of your choosing. Feature Summary: In “A format for enabling selective disclosure in nested JSON data structures, supporting selectively disclosable object properties (name-value pairs) and array elements”, consider expanding “array elements” in the same manner as the preceding comment to make the meaning more evident. -> same as above. There is a misunderstanding. the intended meaning here is exactly what the text says: \"each element in the array (regardless if it is a name\/value pair or not, an array element as a string counts too) can be selectively disclosed\". Same response as my previous comment. 1.2. Conventions and Terminology: I suggest moving the “base64url” definition to the Terms and Definitions section and use a parallel structure to that at URL Specifically, say “The “base64url” term denotes the URL-safe base64 encoding without padding defined in Section 2 of [RFC7515].” Then introduce the rest of the definitions with the phrase “These terms are defined by this specification:” as is done in RFC 7519. The current presentation is fairly jarring. -> Current text already points to Section 2 of RFC7515. The editors consider the current definition sufficient and clear enough. Yes, the definition is clear, but it’s in the wrong place. Definitions belong in the Terms and Definitions section. The current location leaves it strangely hang in a different section by itself, separate from the other definitions in the spec. Key Binding JWT: Change “MUST NOT be none.” to “It MUST NOT be \"none\".”. -> Changed to “It MUST NOT be .”. it is exactly the same wording as in DPoP RFC. The current wording is a sentence fragment – not a complete sentence. The fact that that error slipped through in DPoP isn’t a reason not to correct it here. 5.3.2. 
Validating the Key Binding JWT: In “if the SD-JWT contains a cnf value with a jwk member, the Verifier could parse the provided JWK and use it to verify the Key Binding JWT.”, change “could” to “MUST”. Make this normative to increase interoperability! -> the sentence you point at is an example, so we cannot add a normative statement here. We changed \"should\" to \"would\" to reflect the intent behind the suggestion. OK. Please address this then by adding this normative sentence at an appropriate place in the specification. “When the SD-JWT contains a cnf value with a jwk member, this key MUST be used to very the Key Binding JWT.” 6.1. Issuance: There are many places from here on where the label “SHA-256 Hash” is used, for instance “SHA-256 Hash: jsu9yVulwQQlhFlM3JlzMaSFzglhQG0DpfayQwLUK4”. Change all of these to “Base64url-Encoded SHA-256 Hash” for correctness. -> Editors consider it to be sufficiently clear from the specification text that it is a base64url-endoded hash. This would also require changes to the tooling and might unintentionally make the examples harder to read. The fact remains that the current wording is factually incorrect and therefore needs to be corrected. If this can be done by updating the tooling, that’s actually good, since it means that it will be consistently corrected everywhere the error occurs. Verification of the SD-JWT: Delete “nbf” from “claims such as nbf, iat, and exp” and everywhere else in the spec, other than in 10.7. Selectively-Disclosable Validity Claims, where both “iat” and “nbf” should be listed, as described in my comment there. “iat” (Issued At) is a standard validity claim in JWT tokens (for instance, ID Tokens), whereas “nbf” (Not Before) is rarely used, since it is only useful when future-dating tokens, which is rare. -> \"nbf\" is used merely as an example here. the fact that it might be used rarely does not mean it cannot be used as an example, since it is a registered claim in JWT RFC. Sure, it’s an example, but the examples should nonetheless reflect best practices. Please delete “nbf” here. 10.7. Selectively-Disclosable Validity Claims: Add “iat (Issued At) to the validity claims list before “nbf”, and change “nbf (Not Before)” to “nbf (Not Before) (if present)”. “iat” (Issued At) is a standard validity claim in JWT tokens (for instance, ID Tokens), whereas “nbf” (Not Before) is rarely used, since it is only useful when future-dating tokens, which is rare. Change all uses of “nbf” to “iat” elsewhere in the spec. -> same as above. \"nbf\" is used merely as an example here. the fact that it might be used rarely does not mean it cannot be used as an example, since it is a registered claim in JWT RFC. As above, the examples should reflect best practices. Please at least add “iat” to this list, even if you choose to retain “nbf”. Everywhere: Consider changing the name “…” to something more indicative of its function, such as “sdelement” or “sd_item”. Or alternatively, at least provide rationale for why that non-obvious name was chosen. -> we have extensively bikeshedded this claim name, and this one was chosen because it is concise and natural since '...' is what is used when something is omitted in a list. there seems to have been some misunderstanding how disclosure in the arrays work, so hopefully with the clarifications above, this claim also feels more appropriate than before. I wouldn’t say that there’s a misunderstanding in how disclosure in arrays work. 
Rather, I’d say that the current exposition of array disclosure can leave readers wondering what is meant, and could be improved by the suggested clarifications above. Thank you for explaining that “…” is intended to be understood as the ellipsis character, and the intuition behind it. Its usage in that way is slightly odd, since the ellipsis character is used in place of omitted content, whereas in this case it’s used with the opposite meaning to introduce content that is NOT omitted (content that is disclosed). Be that as it may, I understand very well the reasons for keeping the claim name as it is. Nonetheless, I think it would be helpful to readers to add a comment when the claim name is introduced about why this otherwise non-obvious claim name was chosen. The sentence you add could be something like “The claim name “…” was chosen because the ellipsis character, typically entered as three period characters, is used in contexts where content is omitted (not disclosed).” Or choose your own wording. Give readers the intuition behind the choice. Finally, this comment was incompletely addressed: Section 11.1. Storage of Signed User Data: Please change “OpenID Connect” to “OpenID Connect [OpenID.Core]��� (adding the missing citation). Best wishes, -- Mike"} +{"_id":"q-en-oauth-selective-disclosure-jwt-2563acf78420d4880c43ced2a96fe27deb8bbe8abf65914f96bd942049514413","text":"Introduced the phrase in the end of the Section 8.1 on Verifying the SD-JWT for\nFair question. But I did add this one mention of the term in the location that NAME requested it in URL And it was already used in the text at the end of . And the just merged URL sprinkled in a few casual uses of it too. So I think it's sufficient enough. Also we have a fair number of kinda similar terms\/phrases that are used in the doc but not formally introduced in the the terminology section. Which I think is fine. But putting this one there might be somewhat inconsistent and beg the question about other similar terms\/phrases being there too.\nWe had the discussion in SD-JWT VC how to refer to the dehydrated (or maybe hydrated) result (Issuer-signed JWT payload) of the . It would help if we could give it a name to allow us to be more precise in other specs that are building on top of SD-JWT. The discussion occurred when adding the Schema for SD-JWT VCs that uses that result of the SD-JWT processing section to match the JSON schema against: URL We also have another issue that asks for an example of the result as well which would benefit from giving it a defined name: URL\nPerhaps you can also confirm that this result in fact contains things like , , etc., and would only remove , , .\nI was typing the following over in when it closed out from under me in favor of this issue.\nwhy is not good enough? (in any case, would really like to avoid using hydrated\/dehydrated since that term is kind of confusing and i used for something related but different in ISO I think)\nI don't think that helps when referencing from another spec such as SD-JWT VC. The text you are referring to is from 8.3 Verification by the Verifier which is not general enough. I'd suggest to add one sentence to URL, e.g., \"7. Let xzy be the processed result of the asdf algorithm ...\" ... and make xzy a defined term, so it can be referenced. I don't think we can use from SD-JWT VC since it is not really defined in SD-JWT. If you don't define it in SD-JWT, we will define it in SD-JWT VC which I prefer not to.\nBtw. 
if you call it processed payload, that is fine by me but it should be added to 8.1.\n+ 1000000000% !\nDefining terms is good. Dehydrated\/hydrated would be bad. ;-)\nPR URL has a small addition to sec 8.1 for this\nagree to common law definition"} +{"_id":"q-en-oauth-selective-disclosure-jwt-8cd54a9ec280fea117a368503f35bd2f774d0265ebd75f86f5595c2b85aca24d","text":"resolves URL which is really based on URL\nsome of URL likely applies here too\nNotes on section 6: URL say more about the input JWT Claims Set URL show the processed SD-JWT payload somewhere in this section\nnotes on appendix: URL say more about the input JWT Claims Set URL show the processed SD-JWT payload somewhere after URL say more about the input JWT Claims Set URL use the processed SD-JWT payload name URL say more about the input JWT Claims Set URL names URL input URL name"} +{"_id":"q-en-oauth-selective-disclosure-jwt-b4667f37ec02b76fd3b483a7be263c3e41b70c2fc6276f2c94e7aa11aebd72b2","text":"Despite or maybe in light of some recentish discussion on the mailing list, the topic seems like it deserves more prominent placement."} +{"_id":"q-en-oauth-selective-disclosure-jwt-38ee95d4d2b709dbe6eb464cd53deae80d930325eef7dbc2f3380347de88be29","text":"to\nAs far as I know, the idea of burning private keys to facilitate repudiation or reduced data value didn't resonate with any of the audiences for whom it was intended. I think the document would be better off without the paragraph copied below from URL And we might avoid some potentially difficult\/confusing\/painful\/etc conversations in various latter stage IETF process steps.\nI'm told that my statement that it \"didn't resonate with any of the audiences for whom it was intended\" might be premature.\nI agree with deleting the text about publishing private keys. Yes, it's a technique that could be used with SD-JWTs but it's in no way specific to SD-JWTs. It could be used with normal JWTs, CWTs, SAML tokens, X.509 certs, etc. It doesn't add to the specification to describe this here.\nleaving this open with no action for the time being because, as Dr. NAME says, \"it's complicated\".\nagreed to remove."} +{"_id":"q-en-oauth-selective-disclosure-jwt-ff0e62b6fcab0d5eb9ee429077ddd280bb797d8e4ba09918125da72b4c186bf8","text":"URL and I don't know why my response isn't on the mail archive yet but here's a copy: !\noh hey that's fun:\nsadness addressed in 9b7ad2d010e0f49e3dd8b5833c59fe4642e2892b (on the surface anyway)\nisn't part of this duplicate of ?\nYeah, I did it here first just to fix things. Then decided fixing the build process should be decoupled from addressing one more of Mike's previous review comments.\nNow overcome by events. I'm going to resurrect exactly one of my previous review comments that was not addressed. The original comment was: Issuance: There are many places from here on where the label \"SHA-256 Hash\" is used, for instance \"SHA-256 Hash: jsu9yVulwQQlhFlM_3JlzMaSFzglhQG0DpfayQwLUK4\". Change all of these to \"Base64url-Encoded SHA-256 Hash\" for correctness. Brian responded \"The current wording might not be as descriptive as you'd like but it is correct.\" I'll water down my request if you're not willing to change all the occurrences to \"Base64url-Encoded SHA-256 Hash\" to then please at least add a textual caveat before the first such occurrence along the lines of: In the text below and in other locations in this specification, the label \"SHA-256 Hash:\" is used as a shorthand for the label \"Base64url-Encoded SHA-256 Hash:\". 
As I said in my initial review, I look forward to this specification being published as an RFC. Best wishes, -- Mike From: Rifaat Shekh-Yusef Sent: Tuesday, September 3, 2024 3:39 AM To: oauth Subject: [OAUTH-WG] WGLC for SD-JWT All, As per the discussion in Vancouver, this is a WG Last Call for the SD-JWT document. URL Please, review this document and reply on the mailing list if you have any comments or concerns, by Sep 17th. Regards, Rifaat & Hannes"} +{"_id":"q-en-oauth-selective-disclosure-jwt-ae2be335df22bc134fb3418f11906dce14b479d113b7d4a067b568a1815156ca","text":"URL\nDanke! Thanks Neil, That is indeed an error. Thanks for catching that. We'll get it fixed. I see how that other part is a bit confusing too and will look at improving how those pieces flow together. And also maybe fix some other stuff in that area while we're at it, like inadequate salt length in at least one of the disclosures in 5.2.3. On Wed, Sep 4, 2024 at 9:17 AM Neil Madden wrote: CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you."} +{"_id":"q-en-oauth-selective-disclosure-jwt-528d93203ae3a9066243ea314f79505b20b47a8ceb545ed5c995d16257af18c1","text":"From this conversation URL this PR tries to somewhat better explain that explicit typing, per JWT BCP before it, is about distinguishing different types of SD-JWTs. And also be a little more open to other ways of typing, such as cty.\nThanks, looks COMPLETELY REASONABLE to me. inline this time... On Mon, Sep 30, 2024 at 6:30 AM Dick Hardt wrote: While the use of \"typ\" is unnecessary. So this first choice issue didn't need to be an issue at all. That's the issue. Elevating an unnecessary and arguable flawed construct to a MUST would only further engrain the issue. that we are still talking past one another and there remains some misunderstanding as distinguishing between an sd-jwt and other tokens isn't what explicit typing is about. I'll take another look at the language in the draft around typing and see if there's anything that can reasonably be done to make it less confusing or potentially problematic. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you."} +{"_id":"q-en-oauth-selective-disclosure-jwt-20fc0af08d6daa4c9993038778ddaa813e955a566db2047883831ee04a6ccd38","text":"should prob add something in history just for posterity - even just a -14 with a \"stuff from WGLC part deux\""} +{"_id":"q-en-oauth-selective-disclosure-jwt-ff521547f733cee374de924af640c038393d4f420d4b7cb0555dba5c797c1903","text":"No good nit goes unaccepted\nIn Section 4.1.1 Hash Function Claim it's stated that: I've seen this reference and similar phrasing in other recent specifications and it's never indicated whether these values are case-sensitive or not. Perhaps this is fault of RFC 6920 not being explicit on the matter, but being vague here creates interop issues; I've already witnessed one regarding this very claim. 
Should these values be considered case-sensitive or not?\nI guess it doesn't say so explicitly but the intent is absolutely that the values be considered case-sensitive. We can maybe add a brief statement to that effect.\nadds one sentence indicating case-sensitivity.\nExcellent, thank you!\nJust for clarification, are sha-256 and SHA-256 different claim?\nyes, sha-256 and SHA-256 are different\nGood Thank you :)\nassuming my small nit gets accepted"} +{"_id":"q-en-oauth-selective-disclosure-jwt-f9c8fbc5f92f32bade48b23086c8f268ea3c59ba85cb21fe4bfd53071082bd5a","text":"Issue and . Describing the concept of holder binding and giving minimum directions on: find pubKey in SD-JWT Sign SD-JWT-R using privKey bound to a pubKey in SD-JWT but leaving details to the profiles and implementations."} +{"_id":"q-en-oauth-selective-disclosure-jwt-8a673e491dbb76799c068db5fc6c27aaa6aacf83733af7fdeb523aa32f3fce19","text":"In some use cases of SD-JWT, the key for what we currently call Holder Binding may not be bound to a holder (but, for example, to a device). We should therefore consider renaming Holder Binding to Key Binding.\nPRs have been merged\nThe spec currently does not contain a reference for RFC 7519 (JWT), which it clearly needs to. It also uses RFC 7515 (JWS) as the citation for JWT in several place, which is incorrect. Please add a RFC 7519 (JWT) reference and cite it the first time that the term JSON Web Token (JWT) is used in the body of the spec (but not in the abstract). And look at the current uses of RFC 7515 and correct those that should be RFC 7519."} +{"_id":"q-en-oauth-selective-disclosure-jwt-3cd9c245db86c9f25decdbac0fc045c99262f00b8c1f4649a5ec6cb73703bd35","text":"feat: Minimum length of the salt feat: weak hash algorithms to not use\nmh ... we already have in Section\nNAME I commited your suggestion I'm not sure that the normative language could be used in the considerations but it's good anyway to me, it's intellegibile :)"} +{"_id":"q-en-oauth-step-up-authn-challenge-0762d2f7c55fb808d008d5f4e28149a1d9c3373cfb910e0cd6f286cba2906d73","text":"changes\/clarifications from Httpdir telechat review URL Thank you for the review Mark. I've replied inline below with some context or explanation as best I can. And I'll put together a PR with corresponding changes\/clarifications. mechanism. Rather the intent is to provide a new error code and two new parameters for the \"Bearer\" authentication scheme challenge from RFC6750 (and other OAuth schemes like \"DPoP\" that use the RFC6750 challenge params). for the Bearer authentication scheme challenge defined by [RFC6750] not the WWW-Authenticate response header in general. RFC6749 and used throughout the OAuth family of specs providing useful context and disambiguation for OAuth roles and functionality etc. I agree with Aaron about adding a terminology paragraph to the draft to make it more explicit. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. 
Thank you."} +{"_id":"q-en-oauth-step-up-authn-challenge-a72d2876a0766ed7be8591da1ae55b0dac43b4ecbf960bcb4acb985eabac7cd2","text":"updates from Lars Eggert's IESG review\/ballot URL Lars Eggert has entered the following ballot position for draft-ietf-oauth-step-up-authn-challenge-14: No Objection When responding, please keep the subject line intact and reply to all email addresses included in the To and CC lines. (Feel free to cut this introductory paragraph, however.) Please refer to URL for more information about how to handle DISCUSS and COMMENT positions. The document, along with other ballot positions, can be found here: URL COMMENT: CC NAME Thanks to Christer Holmberg for the General Area Review Team (Gen-ART) review (URL). Found terminology that should be reviewed for inclusivity; see URL for background and more guidance: Term ; alternatives might be , , , , , , , , , , , , , , , , All comments below are about very minor potential issues that you may choose to address in some way - or ignore - as you see fit. Some were flagged by automated tools (via URL), so there will likely be some false positives. There is no need to let me know what you did with these suggestions. These URLs in the document can probably be converted to HTTPS: URL This word seems to be formatted incorrectly. Consider fixing the spacing or removing the hyphen completely. Consider using \"after\". The verb form \"needs\" does not seem to match the subject \"servers\". An apostrophe may be missing. This review is in the [\"IETF Comments\" Markdown format][ICMF], You can use the [ tool][ICT] to automatically convert this review into individual GitHub issues. Review generated by the [][IRT]. [ICMF]: URL [ICT]: URL [IRT]: URL"} +{"_id":"q-en-oauth-transaction-tokens-5f3f3d45a2127ef9c777eec7af8dc7e5e165ee6585d9f1a5dac2f5abeab17f0c","text":"Added text to address issue\nThe following text in 2.2.2 is too open ended and should be refactored. First, the text should include specific guidance on replacement tokens to require scopes to remain the same or be down-scoped on replacement. Second, the draft should include a mechanism to ensure that scopes cannot be increased, or an increase in scope can be detected by downstream services. Finally, add text to the security considerations section regarding the risks of replacement tokens and\/or a malicious txn-token-sts.\nAs far as I understand it, replacement txn-tokens is just a profile to Token Exchange. The Token Exchange RFC defines that an issued token can only have the same or a smaller scope than the presented token. I would assume this applies to replacement tokens as well. If the replacement token mechanism is intended to deviate from Token Exchange in this regard, this should probably be mentioned specifically.\nNAME please review PR and see if I addressed your comments.\nCompleted. Looks good to me.\nLGTM."} +{"_id":"q-en-oauth-transaction-tokens-1f74826d4161d0a2d28f3ef7d62204e3172c0165977e386199baf15c714ce909","text":"Added text to address issue regarding the value of the claim within the Txn-Token\nFrom Yaron's\nText from Yaron's email: : I think txn should be OPTIONAL. While it is very useful, there may be architectural reasons why transaction ID issuance in an organization is independent of transaction tokens.\nI'm curious to the use cases? I'd prefer a way to make the claim REQUIRED and allow an organization to provide it's own value in that context. Maybe we could go with RECOMMENDED? 
This is one of those cases where people who know what they are doing can potentially remove it or possibly provide a \"N_A\" value and it will be fine. However, if it's OPTIONAL most developers will not specify the value and then they will lose a lot of value from the Transaction Token.\nNo specific use cases, it's just that we're assuming that an organization can easily integrate its (preexisting) transaction ID-issuing service with this one. Sometimes it can, sometimes it can't. I'm good with RECOMMENDED.\nNAME please review PR which addresses this issue"} +{"_id":"q-en-oauth-transaction-tokens-e185719147b806b4736a12446433622f6c60a445f78af172710d7731a8ebb5b4","text":"Added a section to clarify the intent of the claim and how it is different from scope values addressing issue .\nFrom Yaron's : need a lot more discussion of this claim, also it may be OPTIONAL too. Also, why not call it \"scope\" if that's what it is?\nFrom NAME In an external to internal flow, the scopes tend to be broad. The TraT can be set to a more specific to narrow the use of the TraT. We could call it scope, but it could be confused with the OAuth scope One could set the value to the actual API that was called to be very specific We should add a sub-section to describe the above in the draft."} +{"_id":"q-en-oauth-transaction-tokens-ebbbc1909706a059ec880ee7e735dc6b56b8ed2c62badfd32add93afba902a3f","text":"We currently use the URI: \"urn:ietf:params:oauth:token-type:txn-token\" to denote a Txn-Token in requests or in the token itself. However, if I look at other entries in the OAuth URI Subregistry here (URL), they use the \"token\" style for the token types. We also use that style when the subject-token-type is \"selfsigned\". So I'd like to propose that we use the URI: \"urn:ietf:params:oauth:token-type:txn_token\" for the transaction token URI.\nAgreed with aligning to the common practices elsewhere."} +{"_id":"q-en-oauth-transaction-tokens-6e9ae4bca9f499f18734baea231a19461ab847c909c3d0d079effb4a7c4835f2","text":"The notes copy-pasted from the WG notes do say \"when\", but based on our recollection of the events during that session, the real ask was to clarify that one could use unsigned JSON objects. That is indicated in the action item (i.e. the last line in that issue description)\nthe following came up in the IETF 120 session about TraTs: \"Add an outline on when self-signed may be useful Yaron thought self-signed wil be a footgun Suggest a simple JSON object as replacement.\" The description of in section 7.1 is too rigid as it only allows an inbound token or a self-signed token. It should make it clear that an implementation may choose to have any different format for the subjecttoken.\nJust a few very minor comments"} +{"_id":"q-en-oauth-transaction-tokens-9576e83b377e8644c3676f3191cd33e9c07b4ce893dfef33de36b7884858fd3d","text":"The following issue came up during the IETF 120 session on TraTs: There was a comment about some of the processing rules that was the equivalent of NULL with extra processing steps (N_A)\nGeorge and Atul recommend that we should drop the sentences after \"REQUIRED A unique transaction identifier as defined in Section 2.2 of ].\" Since this is a normative section, it need not have any usage guidance in there. 
If required we can add non-normative usage guidance such as \"The txn value MAY be used to identify the call chain\""} +{"_id":"q-en-oauth-transaction-tokens-d50cd164ce035804f861aee9b51e3cb4c2a98321f3273e64aeb92ef088d8d682","text":"Is there a reason why you did not use uppercase terms like MUST NOT or SHOULD? Because it's supposed to be informative?\nYes, the section is non-normative.\nCurrently logging guidance says \"Txn-Tokens SHOULD NOT be logged if they contain Personally Identifiable Information (PII)\". I have few questions on this one Risks with logging is that token is still valid for few minutes and if logs are rotated to any other central archival storage with access to broader set of people or someone has access to the server with logs, then they can reuse the token. So why not just say MUST not log token? Also how would a service know if incoming token has PII or not. They will have to validate the token, extract claims and then somehow know infer that a claim is PII or not. On similar lines, for PII , can we mention that TTS SHOULD evaluate encrypting or tokenizing PII claims rather than adding them in plaintext. Adding in plaintext is a risk because TTS isn't aware where the token flows.\nTxn-Tokens already contain the subject identifier which is enough to be considered PII in some environments. Our own policy states that PII data can be logged, but the logs must be deleted after a few days (max 7) days. We forbid logging of Txn-Tokens in our integrations for security reasons, not just because they could contain PII.\nOur implementation in my company, we do follow the same guidance i.e do not log tokens at all, nothing to do with PII. Tokens are pretty much considered as credentials as validating parties leverage the claims for authorizing the request."} +{"_id":"q-en-oauth-transaction-tokens-7fe02d6430b62d4fc1003611863ae088ffffe4ba4cac0bfed670607b53523c81","text":"See issue Question for reviewers. Do we need the final sentence on how a workload is invoked? If the sentence was not there would it change how we think about a trust domain? Do we need the second sentence as an example? Should we add more examples (e.g. trust domains can be defined as all the applications or workloads that recognise a specific issuer).\nThe definition of a Trust Domain feels very specific to network segmentation. Is there a industry level Trust Domain definition we can use instead? For example, should we use a definition that defines the trust domain in terms of the shared policies that apply? Something like: \"A trust domain refers to a collection of systems, applications, or workloads that share a common security policy\".\nI'm good with this"} +{"_id":"q-en-oauth-transaction-tokens-04dd2ce9233869071ccc49ecac2754bc43e04f32e3d3341cbd2a44018bbb8375","text":"Related to issue\nSection 7.2 includes a \"access_token\", which is part of the token exchange protocol. RFC 8693 makes it clear that it des not need to be an OAuth token, but perhaps we can reference that this parameter is define in RFC 8693? Perhaps something like: \"The following describes required values of the token exchange protocol as defined in RFC 8693 which must be included in the Txn-Token Response:\" or \"A successful response to a Txn-Token Request by a Transaction Token Service is called a Txn-Token Response. The Txn-Token Response is a profile of the token exchange response defined in RFC 8693. If the Transaction Token Service responds with an error, the error response is as described in Section 5.2 of RFC6749. 
The following describes required values of the token exchange protocol as defined in RFC 8693 which must be included in the Txn-Token Response:\"\nI'm not seeing access_token mentioned in section 7.2 other than implicitly when referencing the token types defined by 8693. Did you mean section 7.4? I'm fine with the first suggestion.\nYes, section 7.4.\nI'm good with this"} +{"_id":"q-en-oauth-transaction-tokens-781228f4462e629f8d57362231ba7096f2f32eef2ba35df143c429da1f3516a3","text":"See issue\nIn section 2.2.1 the context that may be included in a transaction token includes \"The external authorization token (e.g., the OAuth access token)\". To ensure this is not interpreted to mean that an access token is included in a transaction token, I would suggest the following additions, in line with section 9.3 in Security Considerations: \"A reference to the external authorization token (e.g., the OAuth access token), including scopes or claims included in the authorization token, but not the unmodified authorization token (see Security Considerations, Section 9.3)\"\nPieter, in that section of the spec we are describing what needs to be sent to the Transaction token service, I think in that case the full access token should be passed. However, when generating the request context, we want to ensure that the full access token is not included. Did I get my spec sections wrong?\nMaybe instead we shouldn't say \"This context MAY include:\" but rather ... \"The information provided to the Txn-Token Service MAY include\" to not confuse readers who might think this data should be included in the request context.\nAgreed - I missed that this was for initial token creation. I did create a PR with your suggested text.\nPR approved\nLooks good."} +{"_id":"q-en-oauth-transaction-tokens-aa6ab3394adc29210dd56ea9498d0f64dae937d5f156549b47b464561b8fed4f","text":"See issue\nGood practice is to explain why something is a MUST. Suggestion: From: The value of the aud claim MUST remain unchanged in a replacement Txn-Token. To: The value of the aud claim MUST remain unchanged in a replacement Txn-Token to prevent the Txn-Token from being accepted outside it's current Trust Domain. Alternatively, we cann add security considerations for the draft as well.\nI'm good with that text.\nI think the current PR is making the change in the wrong place (unless I'm horribly mistaken)\nLooks good"} +{"_id":"q-en-oauth-transaction-tokens-741c7bd91451b88ca0a9cc0c6dcf230f324ed5f37a3b517ee44e60343f65b848","text":"In response to Updated section on: Mutual Authentication Security Considerations for Client Authentication Added security considerations for protecting the workload configuration NAME NAME\nI wonder if we can clarify the guidance in Section 7.4 a bit: From: It SHOULD rely on mechanisms, such as Spiffe or some other means of performing MTLS [RFC8446], to securely authenticate the requester. To: It SHOULD rely on JWT or X.509 credentials, which may be provisioned using SPIFFE or other mechanisms, to securely authenticate the requester. The final sentence in section 7 probably also needs a bit of clarification: It SHOULD rely on mechanisms, such as [Spiffe], to securely authenticate the Transaction Token Service before making a Txn-Token Request. 
I think the requirement here should be that the Transaction Server should be authenticated to the workload using a JWT or X.509 certificate, which may be provisioned using SPIFFE or another mechanism and used with a secure protocol like MTLS or using the WIMSE service-to-service authentication mechanisms.\nI think clarifying this would be good (Section 7.6:) For me the key is that any workload invoking the TTS must use some form of strong client authentication. That could be SPIFFE, privatesecretjwt, mTLS, ??? The normative requirement should be \"strong client authentication\" and the others can be examples of such. I'm not sure how the workload can \"pre-authenticate\" the TTS. I like the pre-defined endpoint (or maybe \"pre-configured\" endpoint) concept. Basically, the TTS should be at a well known location. I guess there is a security issue if the workload sends an accesstoken to the wrong server especially if the external accesstoken is a bearer token. Is this something we should just cover in the Security Considerations section?\nI think you touch on a few things here: We don't address discoverability of transaction token servers. Should we add something (e.g. define a well known endpoint, include discovery metadata in the authorization server metadata etc?) Can you say more about \"pre-authenticate\"? Do you mean some mechanism whereby the workload can trust the TTS or the location of the TTS? Agreed on the security considerations for ensuring that the access token is not sent to an incorrect endpoint (I can imagine an adversary getting into a build server, modifying the TTS location for a workload, and then have the TTS send tokens to it. I can also imagine this happening by accident, especially if there are multiple TTS's in a deployment.\nRegarding the second point: Yes, should there be a way for the workload to determine it can trust the TTS before sending it an external token? This isn't something that is common in OAuth specs. It is assumed that the client knows whether to send the request or not.\nRegarding 1: I think we wanted to talk about it at the last IETF and ran out of time. I seem to remember a few conversations around \"discovery\" but nothing concrete was decided.\nI found the issue it's"} +{"_id":"q-en-oauth-transaction-tokens-1138a091d848dbb2176a32fd7e4b558cbbc1360aa6cdb08e0dfbb1ad87f41582","text":"Addresses issue - clarifying the value for when requesting a replacement Txn-Token\nIn : Does this assume that should be ? Should we call it out explicitly?\nYes, I think that makes sense."} +{"_id":"q-en-oauth-transaction-tokens-15e277bbc43dcd69858d055d893824ec8acd229412309e28ee3003d06f19de0f","text":"We received the following email from IANA: Before the IETF meeting, we check working group agendas for documents with IANA-related issues. We have notes about the current version of this document: URL 1) We haven’t been asked to update the change controller field for existing registrations, but the IESG prefers that the IETF be listed as the change controller for new registrations. 2) A few minor editorial notes about the media type registration: The applicant name and email field should be omitted. Where those fields appear on the IANA website, they’re an artifact of the web form, and not officially part of the template. 
We’re gradually removing those fields from existing registrations.The media type registrations don’t appear to include a change controller field.The quotation marks around type, subtype, and required and optional parameters can be removed, as should the “(RFC 2046)” after “application.” Finally, please note that RFC 6838 recommends asking the mailing list for an informal review. If you have any questions, just let us know. If you'd like to talk in person, you can find us next to the RFC Editor's table from Monday through Thursday. You can also request another review at any time by contacting us at . For more information about IANA Considerations section requirements, please see URL Best regards, Amanda Baber IANA Operations Manager"} +{"_id":"q-en-oauth-transaction-tokens-2d94ca2f1b7261e2a86af64e85e710a014422dd22f04316cbc567420617258cb","text":"Update Transaction Token Service responsibility when creating replacement tokens. Also contains minor changes for consistent language. Related URL"} +{"_id":"q-en-oauth-transaction-tokens-67fc11ec3cb968c49b54e806c6676bd2da2fb3828cb213c7cfc3a50e0dd587fe","text":"Based on discussion and feedback in the IETF 117 Session, I've split the single flow diagram into three different flow diagrams."} +{"_id":"q-en-oauth-transaction-tokens-1d4d93ebbf09a46d113705bf57f70d56ba01c09c73d0607ef9f554a533988dfd","text":"The current draft had a request parameter named \"azc\" and also the claim within the TraT is called \"azc\", which leads to confusion whether the request parameter content is directly embedded into the token. So I am renaming the request parameter to \"rctx\".\nAhh... I think this parameter is for cases where the requestor wants to add some specific authorization data into the overall TxTkn. I would prefer to rename the parameter 'authz_details' which would be a JSON object and then add it as a sub-object in the of the TxTkn.\nFrom Dr. Kelley W. Burgin Section 6.2: What does the AS do with the values in “azc”? Are they included in the transaction token? Can I put anything I want in “azc”?"} +{"_id":"q-en-oauth-transaction-tokens-47d3662cbb055512edb338c27eedf2c0e0b360cdd8ab4cb1ff5c7e2380589b4b","text":"is already registered (URL). Do we need to register it again? I'm not sure what the guidance here is.\nI believe we should add an IANA consideration to propose the registration of azc claim. A list of registered claims can be found here: URL\nI believe we should add an IANA consideration and propose to register .\nThanks for this email and adding the issues to GitHub. I will update the spec shortly.\nImplementation feedback. The \"tid\" claim clashes with a claim commonly used in AAD for the tenant ID. Perhaps we can use an alternative like idt, txi, txnid, txn_id or tti?\nDiscussed in the identity chaining call. Recommendation is to use the 'txn' claim name as defined in RFC 8417.\nAdd reference to SET add RFC 8417\nLooks good to me"} +{"_id":"q-en-oauth-transaction-tokens-396f3b5caa2714268c0eea21048f54c3d5e828ea385e6d7bfc16b80c13d1993d","text":"See issue\nImplementation feedback. The \"tid\" claim clashes with a claim commonly used in AAD for the tenant ID. Perhaps we can use an alternative like idt, txi, txnid, txn_id or tti?\nDiscussed in the identity chaining call. 
Recommendation is to use the 'txn' claim name as defined in RFC 8417.\nAdd reference to SET add RFC 8417"} +{"_id":"q-en-oauth-transaction-tokens-884ba59adb89419c8201a35693b2da8df124fea1c0784629c0c6dd0e709e05e5","text":"Constrain Txn-Token to be shorter than the Access Token Lifetime (upper bound on lifetime) Editorial update to security considerations"} +{"_id":"q-en-oauth-transaction-tokens-84e5bf2822c47219b4a538bce8d8e800f88fbc103e2c9bc0ca834df961f0df5c","text":"Looks good to me. Left a comment, but nothing major.\nLeft a comment, but looks good to me."} +{"_id":"q-en-oauth-transaction-tokens-aeb37fa0edee848ed4aae52d2830381374f65a03b8241517b6df5f7b937f5ea7","text":"I moved the Security Considerations section before the IANA registry text and added a Privacy Considerations section as raised in the IETF 118 meeting."} +{"_id":"q-en-oauth-transaction-tokens-377ba13d7aa9f58689e42fbbb0016c0d909e715f66571c1f5297d10942b2f4f9","text":"Updated the text to more clearly profile the token exchange specification for use with Transaction Tokens.\nIn PR , the token exchange parameter is profiled to carry the purpose or intent of the transaction and its value is copied into the claim of the resulting Txn-Token. Is this how we want to handle the 'purp' claim?\nI propose that the draft should allow value of the claim in the TraT request be independent of the claim in the TraT, because the requesting service may not know sufficient details about how the TraT is actually going to be used. For example, the requester may say the is \"buy stock\", whereas the claim could have a value like \"equity trade\"\nOk, so allow the TTS to transform the input scope to the appropriate value (if necessary). I'm ok with that.\nRecommended change that the TTS MUST take the value of the parameter to determine the claim of the TraT.\nBased on this feedback, the claim is being made REQUIRED.\nShould we allow the use of and to be used as a means of client authentication for the Transaction Token Service? If not, should explicitly prohibit the use of these parameters in the profile of the Token Exchange spec.\nSince RFC8693 (Token Exchange) refers to the as \"A security token that represents the identity of the acting party\", we should not use it in the TraT request. I thought we wanted to have some way to convey the inbound token to the TraT service, and that's why we were using , but that is inconsistent with RFC8693. I think we should neither require nor disallow the use of because some implementations may want that for client auth, and some implementations may want to do something else (e.g. mTLS) for client auth.\nThis is kind of what the spec says today. It's not required and up to the implementation. It is just referenced as an example. However, I'm fine removing the example and just being silent in the spec on the topic.\nRecommendation to update example and be silent on use of and . Add a section to Security Considerations to talk about client authentication and add some non-normative examples.\nRemoved the additional text regarding possible client authentication methods and just left it that the client MUST authenticate itself to the Transaction Token Service and that the specific client authentication method is out of scope for this specification.\nPR raised a number of issues around processing of authorization details How are these details determined by the Transaction Token Service (i.e. where do they come from)? Should all the claims be visible to all workloads? 
or should they be restricted to a subset of workloads? Who is authoritative for specifying the claims of the object\nSome thoughts: I think the spec should be un-opinionated about how the TraT service generates the value of claim. I can see in some instances that the requester has more control over what goes into the claim, and in some cases, the TraT service has more control. A TraT service could implement selective disclosure, although we could recommend in the spec that one should implement it by encrypting certain fields. Stating this makes the actual mechanism for selective disclosure outside the scope of the spec. I believe the TraT service MUST be authoritative for the claims of the object, because it is signing the TraT.\nRegarding (1) -- I think we still need a way for the client to pass in data to the TTS that it can use to generate the values. Even if how all that works is out of scope for the specificatio. Regarding (2) and (3) -- I'm ok with leaving the rules out of scope for the specification. The TTS will be authoritative for the resulting object and whether it should be protected in some way or not. I am curious to hear from others as to their thoughts.\nSuggestion to rename to and then add a processing rule to the effect of... \"The TTS SHOULD propagate the data from the object into claims in the object as authorized by the TTS authorization policy for the requesting client\"\nThis has been fixed AFAIK. Please reopen the issue if it hasn't been fixed by George's PR.\nI suspect in many cases the AS that issues an accesstoken used to obtain an Txn-Token is going to be different than the Transaction Token Service. In that context, the issuer of the 'subject' of the transaction token is different than the issuer of the transaction token itself. Section 3.2.3 of RFC 9493 allows for the specification of an issuer and sub within the object. Should we require this format? or just leave out of scope of the specification what values are present in the claim of the transaction token?\nI would definitely not require that format. And would again suggest that RFC 9493 isn't needed in the Txn-Token context and good enough.\nMy concern with relying on the JWT's subject is that the Transaction Token is issued by the Transaction Token Service which is potentially different than the AS that say issued the access token from which the claim is determined. In that context, I think the Transaction Token should carry the issuer of the claim which means the default and claims of the JWT are not sufficient. Or if the entity requesting the Transaction token is the service that receives inbound mail, and that service wants a transaction token with a purpose of , is the Transaction Token Service the correct issuer of the which is an email address?\nI feel like there might be an undue emphasis on a relationship between and that doesn't truly exist. RFC7519 only mentions uniqueness in the context of an issuer URL but not ownership or correctness or semantic meaning.\nIn the context of OpenID and id_tokens there is a clear relationship between the and the such that the globally unique identification is the combination of and . I've always viewed the relationship of those two claims in that light. Maybe that is an OpenID Connect centric view.\nYeah, I am somewhat familiar with OpenID Connect ;) But this isn't OpenID Connect. 
With a Transaction Token the subject just needs to be identified within a single trust domain.\nSo that assumes that the claim MUST be unique within the single trust domain as identified by the claim. This might require the Transaction Token Service (TTS) to do some sort of identifier transformation as well as requiring both the AS and the TTS to both be authoritative for the claim value. This is easy if the AS is managing the transaction token request endpoint but if the TTS is it's own service and deployed in a distributed way, would that still be true? I can see the TTS being authoritative for the single trust domain but not necessarily for the identifier. However, I'm ok with moving in the direction you are suggesting Brian; I just want to make sure we aren't prohibiting some deployments by making that simplification.\nFrom discussion... 'iss' claim is optional be clear that there is no relationship between iss and sub claim sub is unique within context of the 'aud' or trust domain go back just supporting the claim (no sub_id)\nAdded a new commit to PR to address this issue.\nA few months ago, I also suggested to support the sub claim as per JWT RFC. As the current spec has replaced sub_id with sub, I appreciate that the sub claim is supported now.\nHi Kai, Thanks so much for your input in this process. Atul\nSince we have removed sub_ids now, I will close this issue.\nWkn"} +{"_id":"q-en-oauth-transaction-tokens-1191446bd56ba4d27f0591cf34221fd6160be975624a775b84e1a839a832c961","text":"Addresses Issue\nWe need to add in the section, information on how services may use Txn-Tokens securely, by possibly using them in conjunction with SPIFFE or other service-to-service security mechanisms. (based on feedback by Kai Lehmann (NAME\ntalks about the same issue, but this is a broader statement.\nWe will use a new header named \"Txn-Token\""} +{"_id":"q-en-oauth-transaction-tokens-67f47ae825d39b1060fa8f207f4adc7cdcacfbb84aa6ebffed30f8abcb39b1c1","text":"See meeting notes here where we agreed to remove this: URL\nThe language in section needs to refer to the RFC and specify how to use them with Txn-Tokens"} +{"_id":"q-en-oauth-transaction-tokens-c9874b0392ea07179b2a304f1ee5dc190cfa3ae576a6b5e022cc40bb63bdaf28","text":"In the Terminology section, the Trust Domain value is being defined and referenced in the aud claim as universal resource identifier. It should be spelled Universal Resource Identifier and a reference to RFC3986 should be added. However, later in the document, the aud claim value is specified as defined in RFC7519. This RFC defines the aud claim as \"StringOrURI\". The different passages should be aligned. Furthermore, the examples for the aud claim values should be aligned accordingly. In section Txn-Token Request the POST request example to the token service uses: If an URI is used, I suggest to use 'https' instead of 'http'. Later the JWT body example of the Txn-Token just uses a string:"} +{"_id":"q-en-oauth-transaction-tokens-8933a8813b207d45b623d930d50b5fae2d22f5420b7ad817e600ba4ab678f811","text":"I updated the text to require the claim remain unchanged when requesting a replacement transaction token. I also added a mechanism to allow for tracking the workloads that have requested replacement transaction tokens. This was necessary as the current text required the to remain unchanged in the replacement transaction token and the object contains the requesting workload identifier. I used a mechanisms similar to x-forwarded-for. 
This needs lots of review.\nIt would be good to show how, in a replacement txn-token, the identity of the previous subid is preserved. The replacement token may have a new subid that represent the workload that requested the replacement token. Originally posted by NAME in URL\nCan we close this since it was addressed in PR ?"} +{"_id":"q-en-oauth-transaction-tokens-d0accb8e3bdb34767edbe49d334fc6357a81b85fccf4f1be84553df1062d7a2e","text":"Changed MUST to MAY. Cardinality of the txn-token service is 0..1, not exactly one. MAY not be the appropriate framing, I'm open to alternative text here to describe this more correctly.\nI've updated the PR with the language provided by NAME in two places where the previous language was found in the draft."} +{"_id":"q-en-oauth-transaction-tokens-8a8b14d98557dae03cba7688b4b553155dc49a737000436868430a50bb3dceeb","text":"I'll create a new issue to track that logging concern\nFrom Yaron's : : salted SHA256. : also, in most cases txn tokens MUST NOT be logged because they contain PII (e.g. a subject that's an email address).\nI'm ok with these changes. I think we could give some more advise around logging that could be useful to the industry."} +{"_id":"q-en-oauth-transaction-tokens-523417f9e86e238d2a528b9b1e021775790d7c0e4420c1c6a5ba3111561dc6d4","text":"a replacement request. Removed vague language\nThere was a point in Yaron's about specifying in 7.4.1 that sub must be unchanged, and although this is mentioned in 7.4, perhaps we should add bullet points in 7.4.1 (Txn-Token Service Responsibilities) that specify that MUST NOT be modified and MUST NOT be modified.\nLGTM"} +{"_id":"q-en-oauth-transaction-tokens-570593f7b7d8bd14068acade6921a321259aaed254c510148714abe3d49230b9","text":"Yup. Thanks. I realized I forgot to add the IANA section after I sent out the PR. I'll update it soon.\nNAME Regarding your comment about wrong approach: I mentioned something along those lines as well. I personally don't think that a new token type is actually necessary. A JWT is a good fit as it can carry the necessary data, but whether it should be signed by the workload (or what additional claims and associated checks should be incorporated) could be a decision by the individual TTS\/trust domain.\nNAME My point is that even the JWT structure\/encoding is not needed. No need to sign anything, no need to encode stuff in base64 (and either of them adds potential security issues). All the semantics we need can be carried in a simple JSON object.\nI tend to agree with Yaron, that perhaps self-signed JWTs aren't the best way to express this. We can discuss this on the call tomorrow.\nGiven the requirement that the TTS\/AS always has to authenticate the identity of the requesting workload, conveying this info in an unsigned thing (the subject token could even contain just plain JSON) would probably be okay. Though it seems like the kind of thing that might lead to mistakes in deployment and resultant security issues.\nLetting the input be any kind of JSON structure feels problematic - not least because of interop. Using a JWT as the container and then putting constraints on it (i.e. a minimal set of supported claims) will help transaction token services in parsing and making some basic determinations about security. Self-signing has pros and cons - we can make it optional for those deployments that have a risk profile that would get additional assurances from it, but as Brian notes, signing is easy to get wrong.\nWell, you'd put some constraints on the JSON content too. 
I don't see how that'd be any more problematic from an interop perspective.\nAs long as there are some constraints on the format and contents that everyone would expect. Why not just use a JWT at that point though (there are libraries that support it etc)?\nNAME On we discussed this topic, and while there are valid opinions on both approaches, for now I am keeping the request as a self-signed JWT unless we think this approach is problematic.\nLet's go ahead and merge and then file issues if there is stuff we want to change."} +{"_id":"q-en-oauth-transaction-tokens-4a6046ec60b776772f005a2d8326105f3cc20ab3f09286ace3ffd6019484bcc9","text":"To address issue\nFrom Yaron's : how is \"azd\" different from \"rctx\"? There's a whole section about \"rctx\" and nothing about \"azd\".\nDiscussed in meeting on 06\/14\/2024: Add a subsection for azd, like we agreed to add a section for purp.\nLooks good"} +{"_id":"q-en-oauth-v2-1-5d4c5ecd1fb4bce4e88ede55bb325ebf8a5ec6ccd8a4be4fb4a0d48c427c77ac","text":"An invalid link to the OpenID core spec has been fixed, as well as a section number referencing the draft."} +{"_id":"q-en-oauth-v2-1-6d34cce201f9ea16a4ae073a920eb7ee9041742bd03debd9e52ac0d94f116a29","text":"OAuth 2 gives a clear definition of public clients, which is where OAuth 2.1 says isn't one of the forms \/ attributes that could be referred to as credentials? Public clients do have them, they didn't go away, right? ... while they don't have clientsecret of course.\nA is not a credential, it is an identifier, and it is always public."} +{"_id":"q-en-oauth-v2-1-be1899b6712124d6eb18d5a7ca05118e05e8a1f28df26670e0fb54edfcc19f7f","text":"for issue\nURL Redirect URIs are not required to be registered if the client is using PAR and is a confidential client.\nDiscussed in the side meeting: Mike suggested that if 2.1 talks about this exception at all, it include a note that PAR defines an exception to the pre-registration requirement, which cites the PAR section. The text should be a heads-up to implementers - not appear to be defining something new."} +{"_id":"q-en-oauth-v2-1-4ac96598a93220f2926577e3505b09b63ce6fe1a573ba238b2388fd67c21a0cb","text":"Fixed URL\nThank you!\nThe following link appears to be broken. URL I have confirmed that the link works correctly by fixing `."} +{"_id":"q-en-oauth-v2-1-d43920750c7bee706c638acd9abcf739b9fd12555c019ab6457c01b263c340ba","text":"Most of the text is copied from the security BCP and only slightly adjusted. I am not sure if 2.1 needs to contain the detailed description of mix-up attacks and variants (56b6b6cdcfccf30baa124d37597d18a7cecf032e) or if it would be sufficient to add mix-up mitigations (2d07d10b9787d3798fab0913ff67ea1cd2e29d89).\nThank you, in general we have been erring on the side of leaving the detailed discussion of the security considerations in the Security BCP, and only moving the mitigation recommendations in 2.1. Would you be able to remove the detailed description from this PR?\nFinally had the time for this quick revert of my latest commit. Do you think the description of mix-up is fine now? Should we add a reference to the detailed description of mix-up attacks in the security BCP?\nThanks, I added a reference to the security BCP in that section"} +{"_id":"q-en-oauth-v2-1-f325ebe286b6ad1284df496e54063d29ff463ba18f0120a67961a84e53210751","text":"This small PR attempts to clarify an unfortunately not that rare mistake of server implementations where they respond with a JSON string and not the expected JSON number. 
In between the client implementations I maintain I get a PRs\/issues at least twice a year which ask that the clients attempt to normalize the Token Endpoint response expires_in value instead of expecting it to be a number.\nJust curious, won't this necessities adding the type for all other parameters? Because a few sentences after the proposed edit, the specification states:\nIt might. In the past this awkward definition in a follow up paragraph might've been seen as necessary because the same response parameters and their descriptions were used for the implicit grant response which does not have any data types associated given it's part of the url fragment. With that portion of the spec gone we might as well put the JSON data type in the descriptions for all token response parameters."} +{"_id":"q-en-oauth-v2-1-e5f4b7eea38b404f93601934198c60499e29b2a90848c9a6bcf2941b196c725f","text":"657c1b620170b7034d0787ef711218c69572dc51 - Changes \"eg.\" to \"e.g.,\". 5289949541404795e56cea41f688d43c272b2ab4 - Changes \"e.g.\" to \"e.g.,\""} +{"_id":"q-en-oauth-v2-1-fdd193b9bd5d7be5369178f453047a92d493c7320c3fac41595b3a7f27924357","text":"cb2616f30c91b68b99f8eb492cce400bf394bc08 - Removes a duplicate \"are\"."} +{"_id":"q-en-oauth-v2-1-9519e692ee4c995d1375834d662e6b69608ce226d92beb03d60644fa0d118b9d","text":"This moves the description of serializations to the appendix part of\nThough i wonder whether the Authorization Response () section should account for the extension - URL with text like \"unless specified otherwise by an extension\".\nyeah I can add that, sounds reasonable\nadded a lot of text around how clients form authorization requests, the language does not however account for the optional POST binding at the authorization endpoint. Sections and do not account for the optional POST binding which uses the request body to carry the encoded parameters. In OIDC there's the request parameters.\nThis has been resolved in , there is now a section that describes the serialization methods in an appendix, which is referenced elsewhere in the spec: URL\nLGTM"} +{"_id":"q-en-oauth-v2-1-89a36fbf44a861ba91d67f04a9f26dc2053f1284a86445b6dfd7c03952cef987","text":"Edited intro to reflect what this document is about changed some OAuth 2.0 references to OAuth 2.1 when referring to this document changed document name to follow ID standard name (repo and filename should be updated as well?) pulled out acknowledgments as this doc will be different"} +{"_id":"q-en-oauth-v2-1-068492e154059800f97d77b8e5bc163ce00fea537b850ab45cf9a8eb7d70f37a","text":"align terminology with RFC7231 term. I'd consider switching to httpbis-semantics URL\nHi NAME NAME it would be great to have some feedback on this PR. In case I could brush even more the spec to ensure compatibility with the ongoing HTTP work :)\nThanks! I'm working on this this week so I will take a look soon!\nNAME let me know if you need more feedback ;)\nThis looks great. I'm not sure of the other implications of switching the references to draft-ietf-httpbis-semantics so we can discuss that in a new issue if you'd like."} +{"_id":"q-en-oauth-v2-1-f114e45b81ce72302c9d51ba3782fab9e1153b0f1c1125036b2078fe72cfaf6b","text":"These are suggested changes for a quite trivial issue. 
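For readers skimming this, the change amounts to the following (sketched here; see the linked explanation for the actual text): the example responses use Content-Type: application\/json without a charset parameter, and Cache-Control: no-store without an accompanying Pragma: no-cache header, since the meaning of Pragma: no-cache in responses is not specified.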
Please, take a look at my explanation in URL for more details.\nURL URL \" An example successful response: HTTP\/1.1 200 OK Content-Type: application\/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache\" Pretty sure application\/json shouldn't have a charset (see note at end of section URL) and I've long thought that \"Pragma: no-cache\" shouldn't be there (see URL ) Note that these apply to most of the example responses in the draft. Examples in RFC 6750 and RFC 6749 as well as some normative text in section of RFC 6749 use a \"Pragma: no-cache\" HTTP response header. However, both RFC 2616 and the shiny new RFC 7234 make special note along the lines of the following to say that it doesn't work as response header: 'Note: Because the meaning of \"Pragma: no-cache\" in responses is not specified, it does not provide a reliable replacement for \"Cache-Control: no-cache\" in them.' The header doesn't hurt anything, I don't think, so having it in these documents isn't really a problem. But it seems like it'd be better to not further perpetuate the \"Pragma: no-cache\" response header myth in actual published RFCs. So with that said, two questions: 1) Do folks agree that 6747\/6750 are using the \"Pragma: no-cache\" response header inappropriately? 2) If so, does this qualify as errata?\nBeen working on this on and off for a while now (it's not exactly short at 80+ pages, various other priorities, etc.) but wanted to share my thoughts from an initial review of the OAuth 2.1 draft before the interim next week where it is on the agenda URL So for better or worse, here's that review: Abstract: \"replaces and obsoletes the OAuth 2.0 Authorization Framework described in RFC 6749.\" I think \"replaces\" is probably unnecessary here and, to be pedantic, is arguably inaccurate because published RFCs don't ever go away or get replaced. Probably should also consider using the official \"obsoletes\" attribute marker too for 6749 URL and probably also \"updates\"\/\"obsoletes\" for others based on the scope of the rest of the document (not sure how this even works with a BCP or if you can or would want to update or obsolete a BCP) if this work proceeds. That scope could be better described in the abstract too as discussed somewhat in the thead URL and also the 1st paragraph of URL What is and isn't in scope is another larger question that I\"m not even sure how to ask. What's included vs. what's referenced? Should this doc be incorporating bits of BCP 212 - OAuth 2.0 for Native Apps? Seems kinda antithetical to what a BCP is supposed to be but it's also a bit hard to tell how much was used. I mean, what happens if\/when the BCP is updated? And that kinda begs the question of if it should also incorporate parts of or even replace the browser based apps draft? I guess that's a TBD circa page 68. There was talk about the device grant being in scope but I'm not seeing it (not saying it should or shouldn't be there but it was talked about). I dunno exactly but those are some scope questions that come to mind. Speaking of obsoleting, I do want to ensure that existing extensions and profiles of RFC 6749 that use legitimate extension points still present and unchanged in OAuth 2.1 aren't inadvertently impacted by this effort. I'm not sure how that should work in practice but want to be aware of it as\/if this work progresses. 
URL \"is designed for use with HTTP ([RFC2616]).\" I was momentarily proud of myself for knowing that RFC 2616 is obsolete but then remembered that the nits tool can automate the check for other obsolete or problematic references URL so take a look at those issues. Probably should also check the errata of any documents this one intends to update\/obsolete or just borrowed a lot of text from to see if anything should be applied. URL \"The interaction between the authorization server and resource server is beyond the scope of this specification.\" Don't want to try and change the scope but perhaps a mention that things like RFC 7662 Token Introspection and self-contained tokens a la JWT exist here (or sec 1.4 or 7 or...) would be worthwhile. URL \" Steps (3), (4), (5), and (6) are outside the scope of this specification, as described in Section 7.\" But Section 7 incorporates parts of RFC 6750 so being out of scope isn't really true. URL Not too sure about this section in this document but seems that it should at least reflect the fact that some things like RFC 8414 Authorization Server Metadata do now exist. URL All the cool drafts are doing BCP 14 [RFC2119] [RFC8174] these days. URL Mentioning the existence of RFC 7591 Dynamic Client Registration here seems appropriate. And I think it's be super useful to say that even when RFC 7591 isn't being used directly, it (and registered extensions to it URL) imply a common general data model for clients that's pretty widely used and useful even with so called static registration that \"typically involve end-user interaction with an HTML registration form.\" URL \"Authorization servers SHOULD NOT allow clients to influence their \"clientid\" or \"sub\" value or any other claim if that can cause confusion with a genuine resource owner.\" This text taken from draft-ietf-oauth-security-topic is out of context and somewhere between meaningless and confusing here in this document. Removing it or maybe something like this might be more appropriate: \"Authorization servers SHOULD NOT allow clients to influence their \"clientid\" value in such a way that it might cause confusion with the identifier of a genuine resource owner during subsequent protocol interactions.\" URL \"The client MAY omit the [clientsecret] parameter if the client secret is an empty string.\" Let's remove that sentence. As Michael Peck noted in his review, it doesn't even make sense in the context of this section describing authentication of confidential clients using a password\/client secret. And that sentence has been the source of some other problems. More detail is in this thread URL URL \"Other Authorization Methods\" should be \"Other Authentication Methods\" It's probably worthwhile to note or reference the other client authentication methods that have been specified since this text was written in 6749 and\/or point to the OAuth Token Endpoint Authentication Methods registry URL for them and also maybe mention that, despite the Token Endpoint in the name, they are general methods of client auth to the AS when making direct requests. 
URL \"The means through which the client obtains the location of the authorization endpoint are beyond the scope of this specification, but the location is typically provided in the service documentation.\" Maybe add something like \"or in the authorization server's metadata document [RFC8414].\" \"Request and response parameters MUST NOT be included more than once.\" please clarify that sentence to say: \"Request and response parameters defined by this specification MUST NOT be included more than once.\" URL \" The authorization server MUST compare the two URIs using simple string comparison as defined in [RFC3986], Section 6.2.1.\" There's absolutely no context here for understanding what two URIs are being compared. Mandating full redirecturi comparison is maybe more appropriate in 4.1.1 with the redirecturi parameter somewhere. And 3.1.2.3. and 3.1.2.2 needs some attention with respect to this too. But also an exception (for the port part) to full redirecturi comparison is needed for loopback redirecturis on native clients as in URL and URL and URL URL As Michael Peck noted in his review, it's probably okay now to just mandate TLS for HTTP redirect URIs. Although custom scheme or loopback redirect URIs for native apps wouldn't use or require TLS. URL and URL Seem to still allow for registration of partial redirection URIs. Which isn't gonna work with an exact match requirement. URL \"Request and response parameters MUST NOT be included more than once.\" please clarify that sentence to say: \"Request and response parameters defined by this specification MUST NOT be included more than once.\" \" The means through which the client obtains the location of the token endpoint are beyond the scope of this specification, but the location is typically provided in the service documentation.\" Maybe add something like \"or in the authorization server's metadata document [RFC8414].\" URL \"Client authentication is critical when an authorization code is transmitted to the redirection endpoint over an insecure channel or when the redirection URI has not been registered in full.\" Isn't full registration of redirection URI now required (other than maybe the port for native apps) by virtue of requiring a full comparison be done when validating the authz request? And other than a native app using the loopback, maybe it's time to move to require that redirection URIs be accessed via secure channel? URL \"Clients are permitted to use \"plain\" only if they cannot support \"S256\" for some technical reason and know via out-of-band configuration that the server supports \"plain\".\" With codechallengemethodssupported from Authorization Server Metadata \/ RFC 8414 it doesn't have to be out-of-band anymore. URL \" Typically, the \"codechallenge\" and \"codechallengemethod\" values are stored in encrypted form in the \"code\" itself but could alternatively \" 'Typically' - really? URL \" ensure that the \"redirecturi\" parameter is present if the \"redirecturi\" parameter was included in the initial authorization request as described in Section 4.1.1, and if included ensure that their values are identical.\" The reference should be to 4.1.1.3. 
URL \" An example successful response: HTTP\/1.1 200 OK Content-Type: application\/json;charset=UTF-8 Cache-Control: no-store Pragma: no-cache\" Pretty sure application\/json shouldn't have a charset (see note at end of section URL) and I've long thought that \"Pragma: no-cache\" shouldn't be there (see URL ) Note that these apply to most of the example responses in the draft. URL \" specifying the grant type using an absolute URI (defined by the authorization server) as the value of the \"granttype\" parameter\" The words in the parenthetical have led to questions in AD review of documents making use of the grant type extension point (see https:\/\/mozphab-URL) and I think \"understood by the authorization server\" might be better phrasing or even removing the parenthetical all together. RFC7522 is mentioned here as an example extension grant but maybe worth also including mention of others like RFC7523, RFC8628, RFC8693, and maybe even non IETF ones. URL \" Sender-constrained refresh tokens: the authorization server cryptographically binds the refresh token to a certain client instance by utilizing [I-D.ietf-oauth-token-binding] or [RFC8705].\" Given the relative immaturity of ways to do this, maybe something more open ended would be appropriate? This reads like token-binding or MTLS are the only ways allowed. I'd think wording that would allow for DPoP or some yet-to-be-defined method would be better here. Also maybe drop the token-binding reference all together (it's long expired and doesn't look like that's gonna change). URL The SHOULDs\/RECOMMENDEDs in this section seem a little overzealous and\/or too specific. URL Seems to be a lot of overlap and duplication between 9.4. Access Tokens (under 9. Security Considerations) and section 7.4. Access Token Security Considerations, which could\/should maybe be reconciled. \"(#poptokens)\" hints that some text was copied from elsewhere but the markdown references weren't fixed\/updated URL \"(#refreshtokenprotection)\" same thing about markdown references not fixed\/updated URL \"(#insufficienturivalidation)\", \"(#mixup)\", \"(#openredirectoronclient)\",\"(#csrfcountermeasures)\", again \"(#mixup)\", and \"(#redirect307)\" URL \"Attacker A4 in (#secmodel)\" \"The use of PKCE is RECOMMENDED to this end.\" PKCE is required elsewhere so this doesn't seem quite right. Similar comments about text ijn URL that talks as though PKCE might not be there. URL \"(#rediruriopenredir)\" URL \"(#clientimpersonating)\" URL \" * Redirect URIs must be compared using exact string matching as per Section 4.1.3 of [I-D.ietf-oauth-security-topics]\" Should that maybe be qualified to cover dynamic ports on ephemeral local http servers used for redirect URI with native clients? BTW, does [I-D.ietf-oauth-security-topics] need to make a similar allowance? URL \"TBD\" Given the potentially high visibility of an OAuth 2.1 effort, I think it'd be worthwhile to list organizational affiliations of individuals here in the acknowledgements along with their names. Something like what was done in the first part of URL This can help with visibility and justification of (sometimes not insignificant) time spent on the work by non-authors\/editors. CONFIDENTIALITY NOTICE: This email may contain confidential and privileged material for the sole use of the intended recipient(s). Any review, use, distribution or disclosure by others is strictly prohibited. 
If you have received this communication in error, please notify the sender immediately by e-mail and delete the message and any file attachments from your computer. Thank you._"} +{"_id":"q-en-oauth-v2-1-7ff76f3c1b7d44b0ff7853de454afb0f07bb750e5ed03cc5435f7ad2f23fc6aa","text":"RFC7230 references HTTP Messaging aka HTTP\/1.1. OAuth2.1 is agnostic of the HTTP version instead. Consider that httpbis moves further definitions to including \"user agent\" and the and provides some details on validation"} +{"_id":"q-en-oauth-v2-1-cdc5047bbb4485f55270cc5012aff024d0f3a53648b66cc25efaabd0e05a9361","text":"uses the ::boilerplate macro of URL\nI'm not sure what I'm doing wrong but when I try this I get an error: I'm using kramdown-rfc2629 version 1.4.3\nfixed in URL NAME\nFriendly reminder ;)\nPTAL"} +{"_id":"q-en-oauth-v2-1-03d8edf44865979c5ce37d871af29cd157176f85e953c74f5a3621e8c78b2ac4","text":"rewords a circular definition in §2.3\nSome editorial improvements [x] definition of user agent ( is an abnf syntax)"} +{"_id":"q-en-oauth-v2-1-5128fa52a6c5055be5fee651c8262c10758b6b8783070c28b0ce0cc9fd1841b8","text":"consistently use \"confidential or credentialed\"\nSome editorial improvements [x] definition of user agent ( is an abnf syntax)"} +{"_id":"q-en-oauth-v2-1-b835eabe03c0355f71c24db5a21ed66697806652980d46cdf40fd0363146f08c","text":"From Vittorio: The “insufficientscope” description here is problematic. The privileges the AT carries\/points to are not necessarily (or exclusively) represented by the included scopes (eg the RO might have granted document:read to the client, but RO might have no privileges for the particular document being requested in this particular call). It might be useful to specify that “invalidscope” should be used for authorization errors that can be actually expressed in terms of delegated authorization, leaving to RS implementers the freedom to handle other authorization issues (eg user privileges, RBAC, etc) with a different error code. Or at least, we should be clear that authorization logic not expressed via scopes is out of scope (pun not intended) for this specification. Note, this isn’t an abstract problem: there are SDKs out there that use “invalid_scope” for every permission issues. Very confusing.\nto me, insufficientscope => the client does not have scope required for the request invalidscope => when I see invalid_scope, I think that the RS does not understand what scope the client has\nI'll get clarity for what Vittorio is asking for.\nHere is my restatement from thread with Vittorio: \"insufficientscopes\" - is the correct error to return if the application has not been granted the scopes required for the request Vittorio: \"to me the highest order but is ensuring that the reader doesn’t abuse insufficientscopes and realizes other error codes are possible.\" I've submitted a pull request with suggested language changes"} +{"_id":"q-en-oauth-v2-1-fc5f77ba3e52c5024f989eab2dc2af074558f762fa44cae690844c3c77877cc5","text":"Thanks as lot Vittorio! You gave us a lot of homework but I think the draft will be improved a lot based on it. Re OIDC implicit: I‘m reluctant to explicitly endorse use of OIDC implicit (response type „idtoken“ or „code idtoken“) as there are examples in the wild where the idtoken is used as access token. Moreover, I‘m not aware of any systematic security threat analysis of those flows. I‘m fine with pointing out to readers that omission of response type „token“ does not deprecate other extension response types. WDYT? 
Thank you, I am so glad you think so! I hear you on the idtoken abuse. That would be easily solved by appending a “provided that the resulting idtoken is not abused by using it as access token”, in fact that would explicitly address one of the most common abuses we witness in this space by finally providing explicit language on the matter. I had frequent clashes with the Kubernetes crowd about it, and they required nuanced arguments, making them grok the concept of audience etc etc- all stuff that could have been avoided by having straightforward language along the lines of the above. We could argue whether that language belongs to the OIDC spec more than the OAuth2.1: my position is that we should take this opportunity to bring extra clarity, nothing prevents repeating that if the OIDC people will do their own updates in the future. I also hear you on the open endorsement, however I suspect that just saying what you suggest without mentioning OIDC at all will not solve the problem of people thinking this deprecates those OIDC flows, too. Perhaps a compromise would make it explicit that the security considerations that led to the omission of implicit for the token response type in oauth2.1 do not apply to those flows in OIDC, provided that the idtoken is not used as access token. So non an endorsement, but an explicit scoping statement would that sound more balanced?\nBest place to add this is probably URL\nWe got consensus in the interim meeting to add the proposed text."} +{"_id":"q-en-rfc5033bis-8830181565d7cbbab0fad7de112defbfd5308a7137ca00ae54f5a7d7cdfc070c","text":"I think this is the tip of a larger iceberg, and I'd suggest we'd need to go deeper, if the IETF now thinks they may produce specs in this space. As I recall (others may add to the list), there are challenges to be considered by a proponent\/IETF, which include: 1. Does the proposal rely on technologies specified by other Forums\/SDOs? 2. Does the IETF have sufficient experience of the DC requirements and to review the proposals? 3. Does the proposal include experience of interoperability between different implementations? or a plan for this? 4. Does the DC traffic share resources with Internet traffic? 5. Specifically: does the method to be used between DCs or only within DCs? - How is this usage scoped? And what happens when the protocol is bridged across an Internet path? One or more of these has in the past thwarted progress towards adoption in the IETF, can we say something about this?\nDo you want to take a shot at it? Do we need a completely different section about DC algorithms?\nOK, it might be good to keep some version of this as a guidance for evaluating general-purpose protocols in datacenters, AND have a totally different section for DC-specific algorithms.\nFrom IETF 119: Data centers with really small RTTs should be discussed as a special case\nNAME We need to be specific about what we mean by datacenter (e.g. Hadoop queries), please give some details about the data center scenarios that we want to take into account NAME Typically you see proposals that rely on the fabric doing non-internet-y things, often there's combined implementation with the data plane, so we need to be specific there. Also saw some presentations this week about non-ethernet fabrics, need to describe what you see from your fabric and what you can then do from a congestion control perspective. NAME Some spillover from IPPM, doesn't work on public internet and not intended to. 
Number of things being deployed today, massive clusters, 100,000 endpoints constantly talking to each other. Special congestion controllers are very much needed. Trying to keep CPU\/GPU non-idle all the time. We need something that is fast within 1-2 RTTs, doesn't matter if done on endpoints, NICs, switches.\nThere are sets of specific network characteristics; some of this relies on non-IETF specs and it would help to know the interop requirements between implementations and also what the coexistence requirements are for other flows.\nApproving this text - LGTM, I propose to move the additional context on data centres to a separate issue, and resolve in a small separate PR against Section 2 & 3."} +{"_id":"q-en-rfc5033bis-691c0bfe1a1f0b9b77769cf8ecc380a9946f12edfed7b594d1e28a9486a1ce6c","text":"The second paragraph of 5 reads: I probably agree that some of the scenarios are statistically small, and may remain that way in the future, but others aren't all that statistically small even today and\/or could become less small in the foreseeable future. AQM is now reasonably widely deployed, anyone with Starlink internet sees varying delay, most Wi-Fi links exhibit periodic transient link disconnections during channel scanning, etc. I'd suggest something like: The first paragraph of 5.7 Transient Events reads: The [Tools] link takes one to URL, where Section 17 provides us with: Was this intended to be a joke? .... Actually, I think you meant Section 16.\nThis is indeed long-since expired: title: Tools for the Evaluation of Simulation and Testbed Scenarios target: \"URL\" seriesinfo: Work in Progress date: 2007-7 author: - ins: S. Floyd - ins: E. Kohler.\nWFM"} +{"_id":"q-en-rfc5033bis-394290060c77ab1784336adba675b36cfcbecdf5ea12d67c27e515bca880c6c8","text":"This is an attempt to address issue\nCurrent version looks good to me as well, FWIW.\nThanks! Looks good to me, FWIW. :-)\nI'm satisfied but won't merge until Gorry is satisfied too.This seems fine."} +{"_id":"q-en-rfc5033bis-11537485c813ae5fda5774ff8471d19ed456652bb1c951927fe9e76f16e0bae2","text":"…ms...\" AFAICT the goal is to consider harm induced relative to the harm induced by Reno\/CUBIC. There may be other experimental algorithms that become \"defined\", which are not intended as a yardstick by which to measure harm.\ncc: NAME NAME NAME NAME"} +{"_id":"q-en-rfc5033bis-796fdd67fc560086bc8c96a998d639bffcb61532c28567de416780db5f092ea6","text":"Fixed an editorial problem resulting in removing the end of the sentence, which went on to say that you can't assume DSCP, etc. Also changed \/many deployments\/ to simply say this happens without saying it is often.\nFrom NAME review: Section 3.2.2: Although there are clearly a number of network especially access networks that do deploy and use DSCP classes for its real-time media traffic I think this sentence is misleading. I would think that the majority of the WebRTC traffic carrying audio and video will in fact not be marked or have DSCP markings that survive into the general Internet. I would at least suggest to rewrite the above first sentence to indicate that although this may occur, the CC algorithm under consideration will have to deal with this type of traffic."} +{"_id":"q-en-rfc5033bis-882cfa85cf49e9301f44bb07f7c58256b5e4234bf8c523a033951895be53318f","text":"To address review comments and closing Issue\nFrom NAME review: I reacted to the use of “forbidden” in this sentence. Would not “not intended at all for use in the Internet” be a better formulation? 
Because forbidden want me to invoke the Protocol Police if someone ever attempted to use it.\nFor me, this was meant to be stronger than \"not intended\", more like \"confined to a limited to a controlled environment\". RFC 8799 describes some of these cases, and could be a useful reference?\nSo I think a stronger wording is fine. But, an alternative formulation would be good that doesn't imply that you might get the protocol police after you."} +{"_id":"q-en-rfc5033bis-7456088a822725db5e8978c380403aede4cf5ceb1248a36d037f44cd47cce958","text":"Please do not merge yet, I will tray to update to include a few sentences to address the issue related to in-network Circuit breakers.\nFrom NAME review: I do wonder if this document needs to bring up the circuit breaker specifications (RFC 8083, RFC 8084) we do have and potential interaction with them. I would expect a congestion control algorithm to work well within the envelope which would trigger them. The question is if one needs to take them into account when dealing with them. I will note that traffic applying a circuit breaker can be dynamic in transmission pattern and rates and otherwise be very non-responsive in relation to congestion signals. Only if they exceed the circuit breaker threshold will something happen.\nThis looks good, but please don't mangle my name, or that of Lisong Xu, or ACM Queue reference. Minor edit suggestion on the wording of the last paragraph in \"## Paths with Varying Delay {#delay}\""} +{"_id":"q-en-rfc5033bis-39a767e643f48de544160a99adb7f4f418dc70b8d8a710257707edb8a3817c5d","text":"Proposed text to add MP-TCP RFCs.\nFrom NAME review: Section 5.8: What is the verdict on RFC 6356? Is it not worth bringing this up in relation to concurrent usage? RFC 8041 do bring up additional experience with also other algorithms for MP-TCP that have been tried. I would at least claim “At the time of writing, there are no IETF standards for concurrent multipath congestion control in the general Internet.” that this sentence is factual wrong as RFC 6356 is an IETF experimental specification. I do think this section brings up important aspects to consider. However, it also point to some interesting challenges in reasoning and responsibility between functionalities when managing different paths. Looking at the text I think we have scheduler application data, path availability checks as well as congestion control all impacting what is happening. What is truly congestion control here?"} +{"_id":"q-en-rfc5033bis-9adaf94369d4c8216d209e878331a939cc2b723b273e07d88dad5e06e1bc2cc3","text":"Thanks for the edits. I am a big fan of losing fewer words, so I reverted changes that went in the wrong direction."} +{"_id":"q-en-rfc5033bis-aaccfe6f9acac875bb95708ea1b602f8c57c76458132e3f41ade37c10f9499be","text":"Make it clear this is about guidelines for authors, not status of this document avoid redundant guidance avoid \"subjective\" guidance such as \"believes\"\nthanks"} +{"_id":"q-en-rfc5033bis-2f2833a7ad06915099ade628508ab20c9e4202c0b112515f01683b62006fdb99","text":"I suggest that we swap the section 3 and 2 in the sequence of sections so that the readers understand the interpretation of normative text before they read about it. URL"} +{"_id":"q-en-rfc5033bis-ca45301dc5c70b2707437a589ef974fa91818df1918f2b3df1481d0a72126a41","text":"The title of section 5.2.1 mentions general-purpose transports but the content really about comparing against standard congestion control algorithms, I suggest to reflect that in the section title. 
URL\nHmmm: This was originally conceived as General-Purpose v Real-Time (next section). If we change this to \"standard general-purpose congestion control algorithms\", does it then make \"real-time\" look non-standard - so ought that now be \"standard real-time congestion control algorithms\"? Would that help?\nIt was not that clear that it was supposed to convey \"general-purpose\" VS \"real-time\". But my main point was that if this is all about congestion control for general-purpose transport then let's say that. Competing with general purpose congestion control Competing with real-time congestion control"} +{"_id":"q-en-rfc5033bis-511023c743c4adc29d7d98e317e80bbe62d9299fd0f8b6e65f8c0d86133834b0","text":"This paragraph is a good summary of what is available related to congestion control when it comes to real-time traffic in the context of this specification. However, all of those referenced resources were developed following the requirements set out by URL ; also, RFC8836 describes the aspects of real-time conversational media that help scope what \"real-time\" means. Hence, I suggest we add a reference to RFC8836. URL"} +{"_id":"q-en-rfc5033bis-436ba1ce6de89e1a88cd1152a9f9e9df77350e342516bd5c3ff20553d1d4941d","text":"This responds to an AD question in review. The text is proposed to shed some light on how to go about evaluating a new CC algorithm, and when to use different approaches. It could be enough, or it could be improved, please think about how best to say this.\nThanks for this - this looks like a very good expression of the expectations on how one can think of evaluating candidate congestion control algorithms without going into lengthy details. For the sake of clarity, it seems like the word \"overload\" (in the quoted sentence below) needs some explanation of what exactly we mean here - I am assuming we are referring to typical congestion scenarios on the path due to a packet inflow and outflow mismatch in a bottleneck network node. >>\"For many algorithms, an initial evaluation will consider individual protocol mechanisms in a simulator to analyse their stability and safety across a wide range of conditions, including overload\".\noverload, aka a lot more traffic than the path can handle... rather than typical congestion. Martin may have suggested wording...\nWhile section 5 expects a well-rounded set of evaluations, it does not specify where or how to do the evaluation. Do congestion control proposals provide their evaluation results from a simulator, a real lab setup, a controlled part of the Internet, or over the Internet? While it is understandable that an accurate description of the evaluation environment is tough to define, the specification should at least mention what a good enough environment to evaluate in is, or state that it does not really matter. 
NAME I am not sure how we add guidance … surely all of these methods can be appropriate (more so for some of the evaluations) when bringing new work to the IETF. When last calling a draft - it would be good to see more than one approach where possible. Simulation ought to be better in understanding stability and identifying worst case impact; a real lab or controlled testbed could be good for understanding typical interactions with other traffic. I still wonder what we can best advise? Gorry Postscript: Reading again: my comment is about testing specific new CC mechanisms, if we're talking about a new CC algorithm (not just a new technique), then I agree with Martin below.\nThis is covered in Section 3, which is becoming Section 2: Is that good enough?\nI have noticed that section. I think it is great that we are somewhat defining the criteria for a PS. But still we are not giving any hint on - say, whether simulation results are good enough to be considered as experimental, or whether we at least want to see some controlled testbed evaluations, or put it like this - what is considered the bare minimum of empirical evidence? It would be great if we can put some clarity here.\nI think we ought to expect some experiment results ... but how detailed, rather depends on what is being proposed. Having some experiments (even at large scale) and no simulation or theory (e.g. on how a method treats overload) would also seem very dangerous to me. This has always been the case in the past - i.e. TSVWG has pushed back to ICCRG to see experimental results AND some simulation\/analysis when it comes to CC.\nNAME are you saying we are just fine with some expectation of evidence from experiments without specifying the minimum required environment to test in? we do have test cases defined for new algorithm testing, can't we at least say get inspiration from or use them in the experiments?\nAha - I think you're suggesting we provide examples (rather than requirements) and we could refer to RFCs? If so, I'll try a PR.\nI am not 100% sure this is necessary but given the AD review I'm happy to include it now. Just one little nit, but it looks good."} +{"_id":"q-en-rfc5033bis-07dd9247442b07ba22f47074af743876ffc23d8159af289a62f11c014ed96d81","text":"All I did in that sentence is expand BDP. If you would like to change its content, please file a different PR.\nThanks Sean Turner: Questions: s3.2: This section includes the following: This document applies to proposals for congestion control algorithms that seek Experimental or Standards Track status. So ... if I get an Informational RFC I can do whatever I want? Wondering why Informational isn't listed here. s5.3.2: The 1st para invokes some requirements-like language such as \"ought to\" and \"it would be helpful\". Any reason there isn't BCP 14 language used in this para? s6: Is \"MUST find\" another way of saying there must be community consensus? If so maybe: OLD: Unless a proposed congestion control specification explicitly forbids use on the public Internet, the community MUST find that it meets the criteria in these scenarios for the proposed congestion control algorithm to progress. NEW: Unless a proposed congestion control specification explicitly forbids use on the public Internet, the community MUST reach consensus that it meets the criteria in these scenarios for the proposed congestion control algorithm to progress. 
Nits: s5.1.3, 2nd para: expand BDP. s7.9, lastt para: s\/(see Section 4.\/(see Section 4).\nI think this could be read as endorsing the change to BBR - but that isn't the case, maybe: \/fixed\/addressed\/ or something similar, to show the intent."} +{"_id":"q-en-rfc5033bis-86ca246a6e59ead24980547d915ca0218577851e1ccd1b73a94945e4150f121c","text":"From Juergen Schoenwaelder: The document provides guidelines for the IETF when evaluating new proposed congestion control algorithms. While important, this document is not directly influencing network operations. This update grows the document from 10 pages () to 25 pages. I found the draft well structured and easy to read and all content appears to be well justified. The draft provides helpful advice to everybody involved in the development and evaluation of congestion control algorithms. I am wondering whether section 7.1.1 really should be a sub-section of 7.1, which implies that a network circuit breaker are viewed as a special kind of an active queue management technique. Keeping the sub-sections a flat list of special cases may simplify things. I also found the title of section 7.1.1 a bit longish compared to the other section titles. Perhaps turning 7.1.1 into \"7.2 Interaction with Network Transport Circuit Breakers\" or just \"7.2 Network Transport Circuit Breakers\" leads to a simpler structure. Similarly, I wonder whether sub-section 7.7.1 should be lifted up as well. Path changes are not necessarily a transient event. Perhaps 7.7.1 should become \"7.X Changes in the Path\" (dropping sudden as well). The point made in the text is that paths are not static, they may change. Perhaps the above two comments make no sense, then just ignore them. I just thought I share them since they came up during my first time read of the document.\nThe organisation could be better, but I don't see a network circuit breaker (as policing) as a kind of an active queue management technique (as a scheduling\/queuing action), they are rather different in operation and design, so I'd rather keep these separate, and just move CB's up one level? If we did that, i wouldn't object to 7.7.1 being made 7.8."} +{"_id":"q-en-rfc5033bis-f51c47d286318e5b76fcf4b040b181e1e183da1fe8ef15f251e545464ecf5a33","text":"Suggested final edits. do not infer that BBR is \"fixed\", but rather say that it contains a \"fix\" for this. don't overstate the importance:-) fix to start by saying what the para is about (i.e. the section title)."} +{"_id":"q-en-rfc5033bis-cf3a14fc56f441209decf03562e7ef50690015bceb6f14e7d1c52431c1b46f5f","text":"Proposed resolutions\nAppears ready to merge - but has conflicts I do not know how to resolve.\nYou could merge it as is, but I don't like the remaining passive form too much."} +{"_id":"q-en-rfc5033bis-0ee905bfd7f80c4a1936385b3da79ea8f1e0c2804325f9ae21ac7fb8cb573ecb","text":"A few clarifying nits, as resolving the tension between a global \"MUST evaluate\" and some local \"SHOULD evaluate\"s.\nMerge!"} +{"_id":"q-en-rfc5033bis-0ee48628b54cb862fa404a5d8b715ded88e20ed0f3f2a95e24d135764d6606e7","text":"required -> REQUIRED\nI agree with this change. The use of REQUIRED\/RECOMMENDED for process rather than SHOULD\/MUST for protocol, is not entirely consistent, but I think it is fine and unambiguous. Please do merge"} +{"_id":"q-en-rfc5033bis-2a1ff55f4b7fab4b2d0ac7891647e48e457afed2c96997c0544b0f9c64b897a9","text":"Addresses issue\nNAME You wrote \"Some comments on Bufferbloat\" and I am pretty sure these are interesting comments, but I cannot see any. 
I am afraid something went wrong and they were not published.\nA good improvement. I wonder if framing it like \"use the full bandwidth and nothing more\" would be a more holistic approach that covers a few different requirements at once, but I have no actual text to propose.\nThis looks like reasonable first text pin this topic, thanks Christian (my previous comments appear already to be resolved - so happy to approve this)"} +{"_id":"q-en-rfc5033bis-0912a5c5f9ec0e4d668f6aede74c56db88ff69826f37a0a9fb955272610c7afa","text":"at the time 5033 was written, only TCP, DCCP and SCTP were around, using CC; nowadays, many more protocols use CC - with QUIC standing out as a here, probably warranting it's explicit mentioning.\nYes. Note that there is a bit of sadness here, because QUIC was forced to describe use of Reno in RFC 9002, while most deployments use Cubic or BBR. But the WG could not go beyond Reno... because of RFC 5033!\nThis one seems simple, but is not. The introduction text says: This formulation, \"Impact on standard TCP\", assumes that the congestion control algorithm is an integral part of the TCP specification. The situation has changed, and I think this part should be rewritten as: The reference here should be RFC9000, the specification of the transport protocol, rather than RFC9002, the specification of default loss recovery and congestion control algorithms. And the text should then be modified to mention that our goal is to not wantonly disrupt existing deployments -- whether they use Reno, Cubic or BBR. The point is \"don't disrupt the Internet\", not \"don't offend the pride of the IETF\".\nLooks good."} +{"_id":"q-en-rfc5033bis-60ef6e3104cbc9f4f03f6587bcb56eab521125a7137f730fa37f0e81ad6f9206","text":"The guidelines section as discussion of \"Difficult Environments\", starting with \"The proposed algorithms should be assessed in difficult environments such as paths containing wireless links.\" Opinion may vary, but wireless links are probably now part of something between \"a vast majority\" and \"a plurality\" of the paths on the Internet.\nI agree. Wireless serves the vast majority and are still increasing.\nGorry and Martin: wireless constraints deserve their own section. Three different phenomena to discuss Non-congestion loss Variable bandwidth on the path Jitter due to media access and link-layer retransmission\nI changed based on the thread with Christian, and agreed to these insights. I think this ready."} +{"_id":"q-en-rfc5033bis-e63bd4bb010944d59708e507817cdf92185893309c81a81fb57b3e00c4c3a2ef","text":"This takes the initial part of that charter and adds to the introduction as a starting point for the new document.\nLet's add text to the Introduction with our motivation and intention for revising RFC 5033, i.e., why and what do we want to update. The goal is to improve our alignment early on. This text could draw from the CCWG charter and prior discussions."} +{"_id":"q-en-rfc5033bis-333e71a5501eb071e25b2563b7c9df239b4f1bb3cd970240197504fc7bf59449","text":"Update the QS text and various editorial suggestions. if merged.\nI'll propose a PR with some editorial updates to the first part of the ID."} +{"_id":"q-en-rfc5033bis-32941112f4de723d1f9ffab43f5c96d4acaf5f6459fc95e1ee0e6fae18a7c612","text":"I resisted the urge to resolve other issues with this, so there are a bunch of TODOs in the text. Most of the big blocks of text are moved from elsewhere in the document.\nThe guidelines overlap in odd ways. 
The first one is essentially TCP-friendliness, and I guess it's implied to be talking about \"normal\" wired networks. Then different guidelines say the algorithm should be evaluated with all sorts of environments and\/or link parameters, presumably for friendliness. Then, in \"Minimum Requirements\" some of these are listed as mandatory to satisfy, and the rest as something the submitter has to report on, but the community may decide to accept even if it's bad. There are several different concepts that are being muddled here. Alternatively, we can structure it as follows: (A) Metrics to evaluate Does the new algorithm match its share of the existing bottleneck bandwith better than existing algorithms? (not underutilizing, not bufferbloating or congestion collapsing) in isolation or with multiple flows? Are flows using widely deployed algorithms not severely degraded by the introduction of the new algorithm? Is there a plausible incremental deployment plan? Something about short flows? (B) Mandatory Domains: must answer positively to (A) Wired networks, \"normal\" range of BDPs Wireless networks, \"normal\" range of BDPs (loss & jitter) Internet-scale deployment is sufficient for this? (C) Optional Domains: must evaluate (A) & report IoT Satellite Misbehaving nodes Response to sudden & transient events Tunnels for explicit-signal protocols Extreme packet reordering\nI agree we need to restructure, I like the idea of A. I think defining heterogeneity in B is likely helpful - although we really ought to stop using \"bandwidth\" when we mean \"capacity\" - because it really confuses SDOs that actually do cap[acity v bandwidth tradeoffs:-) I'll fight that Internet-Scale deployment is sufficient;-). That plays to the big operators, and I do not like that. It worries me quite a lot that we target \"normal BDPs\" - that's not something I would welcome. It places all satellite outside normal, which again, I don't like - some satellite is definitely odd, but I'd really like to see broadband services as \"normal\", if nothing else, to include those people who can only use this technology. So, main question first: can we factor the last section as different in a particular perspective, rather than \"labelling\" networks as optional to support? i.e. XXXX is a property, and IoT is an example, transients and abrupt capacity variation is an example of radio networks; etc.?\nI agree that we need a better structure. Including having actual subsections per issue, rather than the current list format. Two points: short flows, and transient events. Short flows are pretty much the norm, with reports of 90% of connections never going out of slow start. There is an associated problem: capacity discovery algorithm like slow start will rapidly increase the amount of data sent on a path, with a high potential for disrupting current users of that path. That's not good. There are potentials solutions such as quickly evaluating the capacity (e.g., packet pair, squirting, variants), remembering past connections (careful resume), or potentially getting input from the network (maybe something like Prague\/ECN). These solutions have been explored for \"special\" links (satellite, deep space), but the problem is not just special links. I would like to see that discussed in the \"main\" section (A). Transients are also the norm. Congestion control is a form of control algorithm, and control algorithms must absolutely deal with transients. 
For example, if we deal with wireless, we need to deal with \"entering a tunnel\" -- and exiting. We may want to deal with . That should belong to the core (A), not the \"weird configuration\" (C).\nWait to land PRs, then do this with no substantive change, then do other PRs -- this change will be editorially disruptive.\nI like the flow after this refactoring!Good to merge, the text here looks good to me. There are still things to think about, but these are probably best down on the basis of the entire document."} +{"_id":"q-en-rfc5033bis-5b90739822c95fcad2a7b518eae0aaf414a63eedc9f2ef8880ae393f5831f7e5","text":"Define congestion collapse in terms of excess overhead. Rename the prior congestion collapse section \"Implement full backoff\" Closes issue\nEditors: This metric is very interesting, but wonder if it's really a requirement for loss recovery, not congestion control (granted, these two are often intertwined). This would put it out of scope. Our proposal would be to revert this, though perhaps we are misunderstanding the intent.\nDefine congestion collapse in terms of excess overhead under adverse conditions. This is how it was defined circa 1987. Rename the current \"congestion collapse\" section.\nNAME would you like to take a crack at finding some text that would be in scope to address the intent behind this issue?\nClosing this issue, as it's through IETF Last Call."} +{"_id":"q-en-rfc5033bis-3d23a85b5082e94b9c55e2823fb47149663b9a3482322f94fc033f127cf55fd9","text":"It closes the issue by explicitly stating the I-D describes two use- cases: One we know more about, one much less. There is no intention to add a range of other use-cases in between. I hope this calls out some of the issues and that new proposers could understand where they ought to evaluate.\nMP variants of TCP and QUIC, as well as SCTP may see some deployments - should this document mention something aroud coupled congestion control vs. single path congestion control also?\nYes. But we should note that it is still in research.\nI have mixed feelings there. If users have payed for a wireless subscription and a land line subscription, why should they not use the full capacity of both paths? I see the \"selfish\" case for proper scheduling, such as not sending delay sensitive traffic on a lossy path. But I don't see anything with the same moral level as \"don't cause congestion\" or \"don't cause buffer bloat\".\nDiscussed at IETF 117: Christian: Not enough info to make a recommendation Gorry: We should think about it Ian: Inclination is that we don't have enough information to make a recommendation Matt: Impact is no worse than browsers using multiple connections\nGorry will take a look if he has anything constructive to write here; if so, we will run the text by the community. otherwise, close with no action.\nSee PR for some initial text.\nI am really not sure that we should say much about multipath at all, besides the minimum -- making sure that any use of a path is not worse than creating a separate connection on that path. In PR , Gorry is trying to separate \"multipath for failover\" and \"multipath for load balancing\". But these are only two points in the spectrum of things that can be done. 
For example, in various QUIC implementations, I have also seen: multipath of ACK -- using the lowest latency path to carry ACKs for the other paths, multipath of loss recovery -- resending the content of packets lost on path A on the hopefully more reliable path B, multipath redundancy -- sending duplicate traffic on path A and B, so the receiver can pick whichever arrives first, multipath redundancy with FEC -- same as above, but send FEC redundancy instead of merely duplicating traffic, multipath affinity -- subsets of application traffic are tied to a specific path as long as that path is available. Not all of these have obvious congestion control consequences, but some do. For example: Multipath of ACK is widely implemented, and the draft multipath specifically supports it. It is particularly useful when one of the links is high bandwidth and high latency. But then, this means the multipath connection will enjoy a shorter RTT than the other connections sharing the same bottleneck, which generally results in a larger share of the capacity. Multipath redundancy was pioneered by the QUIC team at Ali-Baba, as a way to get lower latency when sending video streams. They do that when one of the paths is considered a bit suspect. Of course, it leads to using more bandwidth.\nThis was intentional - Failover has been om the RFC series for sometime, and has been deployed. Concurrent multipath is something that has been on the edge for quite a while, but was at the other extreme - in the middle there are many things ... If you have an exhaustive list of the CC-related ones, that could perhaps be useful, I was avoiding making that list. I note FEC has interesting interactions with CC.\nI wouldn't mind text that made it clear that \"concurrent multipath\" is a continuum of immature ideas. But Gorry is right that failover is well established and everything else involves hand-waving.\nRegarding FEC and CC, it depends whether FEC is below CC or above it. With TCP, it has to be below, but with QUIC it is typically above, with FEC frames being controlled by CC like datagram or Stream data frames. I missed one interesting data point in the previous list, interactions between loss recovery and multipath scheduling.\nDiscussed at IETF 118: Asking for folks to take a look and comment here\nLooks good."} +{"_id":"q-en-rfc5033bis-522d17eba1b332f6a9e0c65c368b02e4f4d8352e2fb94e8aeef4751f476418f2","text":"Starting to unpick where equal access is important and where harm for others becomes important. If merged, this\nIssue possibly related to , but more focussed on refocus away from a fairness evaluation. When a transport uses a path to send packets (i.e. a flow), this can impact other Internet flows (possibly from or to other endpoints) that share the capacity of any common network device or link (i.e., are multiplexed) along the path. In turn, this can result in variation in capacity, loss, or additional latency. Transports and network layer scheduling can be designed to offer fairness between sharing flows. This comes with both strengths and weaknesses - it can reduce collateral damage to other flows; but it also can motivate the use of additional flows to gain additional network resource. Transport proposals need to be evaluate and avoid inducing flow starvation to the other flows that share resources along the path they use. 
Transport proposals need to be evaluated to ensure they treat a loss of all feedback (e.g., expiry of a retransmission time out) as an indication of persistent congestion.\nGorry will start with a PR that defines the terms; then we can talk about what to emphasize.\nAlso is related\nOne of the guidelines is \"Fairness within the Alternate Congestion Control Algorithm\". This is a great consideration, but behind that are quite a few problems. The classic fairness evaluation derives from the equation about asymptotic behavior drawn by Matt Mathis, which ends up requiring a non-linear response to the frequency of a control signal such as packet loss or ECN marks. There are of course two problems. First, time scale, because many connections don't last long enough to approach asymptotic behavior. Second, scaling, as discussed for example in RFC3649. If an algorithm requires ever increasing spacing between control events to sustain larger data rates, then the control loop becomes very long, or, in the case of packet losses, very noisy as \"true\" control events get intermixed with random losses. In contrast, algorithms with linear response scale very well, but do not naturally provide fairness. Which is kind of a bummer, and also a research topic. I think the Bis draft will have to recognize that tension somehow, and encourage experimentation.\nThe Guidelines section asks developers of new congestion control algorithms to discuss the \"Impact on Standard TCP, SCTP ], and DCCP ].\" Maybe I am overly sensitive, but this way of discussing the issue feels dated, from a time when transport and congestion control were specified jointly. I think that ceased to be the case many years ago, for example with implementations of TCP providing choices between multiple congestion algorithms. (Indeed, DCCP itself provides a choice between two algorithms.) The Bis draft should probably present the issue differently.\nSpeaking of fairness, I also wonder if there is an accurate definition for it. There has been activity on the topic at the IETF (and in academia): URL URL URL should this be discussed in the draft ?\nDiscussed at IETF 117: Matt: Replace fairness with freedom from starvation. Fairness only makes sense at low bandwidth. Christian: Not focus entirely at fairness, non-starvation goal is good. Gorry: Suggestion to put it in other draft, but not in this one Overall, get rid of it all together for now\nIn academia a next metric is to look at harm rather than fairness: URL\nThat's a pretty good reference, Mirja!\nThe reality is that Cubic has a horrific effect on the Internet: it is pretty efficient at building queues and making life terrible for others. The classic example is the parent busy on a video conference from home, and then seeing the perceived quality go down sharply because the teenager started downloading a movie... Which means I would take \"fairness with Cubic\" with a huge grain of salt. As is, yes, let it run and get some bandwidth, but make sure that it gets seriously penalized, so there is an incentive to replace it by something better...\nA more serious problem is harm to tiny transactions using any protocol. They have to rely on timeouts for loss recovery. I suggest replacing this with absolute upper bounds on loss rates. E.g., see: URL Likewise for excessive standing queues, and to a lesser extent excessive CE marks.\nModulo the comments I've already posted, LGTM\nThe last changes by NAME probably mitigate my issues. 
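For reference, the asymptotic relation mentioned above (the Mathis et al. formula) is roughly rate = (MSS \/ RTT) * (C \/ sqrt(p)), where p is the loss-event probability and C is a constant close to 1; the sqrt(p) dependence is exactly why matching it forces a non-linear response to loss, and why sustaining ever higher rates requires ever lower loss rates, which is the RFC3649 scaling concern raised here.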
I think we will have to come back to the definition of harm and to the \"self compete\" part, but this is good enough for the next iteration of the draft."}
+{"_id":"q-en-rfc5033bis-2e89866aa7f8081572732101787d27ff85cd368ee15d5c5d0980f065f278690f","text":"Reverts ietf-wg-ccwg\/rfc5033bis. This was accidentally submitted without review. The conclusion of the editors is that this is out of scope for the document."}
+{"_id":"q-en-rfc5033bis-6e9d4e49ec7d6401a9a161b5cb20f83a414311bbc7bf7a25fc1289b433fc8730","text":"rfc7141 partially updates rfc5033; include relevant sections in the -bis document\n7141 does not mention 5033, so can you be more specific?\nOct 12: Pinged NAME to follow up\nIIRC, it was due to rfc7141 updating rfc2914; rfc2914 is mentioned and referenced at numerous points throughout the 5033 document. 5033bis should heed the advice from 7141 around marking probability, taking into consideration packet sizes rather than only the packets themselves. At least this train of thought is the most likely one I had in mind at the time.\nEditors: we suspect that this question is orthogonal to the purpose of this document; will look more carefully at the reference.\nIt turns out that 7141 does have recommendations on response to congestion signals from the network, not just AQM recommendations (See Sec 2.3 and 2.4 for a good summary)."}
+{"_id":"q-en-rfc5033bis-8091bebdbc94a6c4e68f3c2910e811e72dcd580a5fdf6f9d08544b5c1c6c324d","text":"Created a separate issue to discuss ECN.\nThe guideline section discusses \"Investigating a Range of Environments\", with a focus especially on \"Random Early Detection (RED) and Drop-Tail.\" The IETF and the IRTF have produced several different proposals for Active Queue Management, with somewhat different characteristics. Interestingly, there is a tension there between developing Queue Management that \"plays well\" with existing congestion control algorithms, and developing congestion control algorithms that \"take advantage\" of AQM. Not sure what to say there, but at a minimum we should develop expectations about \"signals\" produced by AQM (e.g., the ECN markings of L4S) and reaction by congestion control algorithms. To go further, there could be a discussion of \"honest signals\", such as delays, that are very hard to \"fake\" in routing nodes, other signals such as ECN marks that could be produced by the network at very little cost, and packet losses that fall kind of in between. But we are not writing for the IRTF, so this could be kept brief.\nOn the topic of AQM, IETF published Characterization Guidelines (RFC7928) and Recommendations (RFC7567) that could be exploited for further discussions in this work.\nMartin and Gorry: add a reference to 7567 for AQM behavior and evaluation criteria, which is not in scope for this document; talk about evaluating CC interaction with AQM\nSomething like \"CCs should be evaluated against AQMs that are prevalent in the internet\" -- is there a reference that provides a taxonomy?\nEditors: FQ-CoDel, PIE, L4S (RFC9332) have RFCs. Are there others that should be evaluated, and do they have available specs?\nDiscussed at IETF 118: Many operators are not particularly aware of AQM or these concepts in general. Include FIFO in the list. Also do at least an AQM: \"you should evaluate the scenario with FIFO queues and should also consider the impact of various common AQMs and run detailed simulations against them\", while maybe name-checking a few particularly relevant ones. 
Yes, should have criteria for this, and also should not ask people to run an NxN interop matrix. Out of scope to mandate AQM\nI suggest: Probably ought to include a suitably configured RED also. ... and not suggest ALL of these need to be checked, more encourage considering AQM and FIFO scenarios.\nI fixed a typo, but otherwise this looks good."}
+{"_id":"q-en-rfc5033bis-4db2ecf4445a592ecd83e332a1b21428477d036a5838a5034f46d75d32ff5ef9","text":"Editors: just focus on what needs to be evaluated (the last sentence) and toss the rest. We are not going to satisfy anyone with a discussion of the bigger picture.\nIn the Internet, most connections never exit Slow-Start. RFC 5033 kind of explicitly focuses on the long duration connections, because at the time of writing the accepted rule was that the 10% of long-duration connections were causing 90% of traffic. I don't know whether that's still true.\nWith the usage of QUIC, more short-lived connections are aggregated into a long connection.\nLooks OK. Thanks for updating this."}
+{"_id":"q-en-rfc5033bis-37627f05b9f3cad75e7d72a154a21e002c76751213d911f49e6ac9c6817523ec","text":"and .\nAlso\nThe text currently reads: \"The community should also consider other non-standard congestion controls known to be widely deployed,\" This is a compromise to land a mostly unrelated PR. NAME doesn't think it's fair to expect participants to test algorithms that aren't fully specified. I would argue that if such specifications exist and the algorithm is systemically important, it absolutely should be modeled. Moreover, if the algorithm is rather loosely described, in many cases there is open source code (often in a Linux fork!) that can be tested on a host, or in simulation frameworks like ns3. If an algorithm is a true black box, with no description besides marketing material, and no open-source code, I agree there is not much to be done. I also believe that while algorithms like this exist, they are not systemically important to the internet.\ngorry: ns3 code is not super high fidelity.\nDiscussed at IETF 118: There's a danger if you see a realtime flow that goes faster than a best effort flow, since realtime is usually constrained by other factors (potentially needing time to move forwards to generate more data). Seems hard to imagine that something not specified is able to place a burden on anybody else. If you want others to do a thing, write it down and document it. Of course, there is no protocol police: we cannot prevent people from deploying whatever they want. Can we provide a carrot for people to play by the rules: Could be a SHOULD to consider requirements and how to interact. We have: guidance for people who are writing specs that aren't yet part of the community. How do we evaluate those protocols? How do we ensure that fairness can exist with these? Need to have a commitment from this WG to help and not just throw up new requirements for data and for those bringing new CCs to engage positively. 
Perhaps spelling this out clearly. NAME to file issue to write something encouraging for people early on in the document. Christian notes that separate queues between realtime and best effort are not actually a strong requirement. To measure capacity you do need some sort of queue, or else you cannot measure\nThis info RFC has some sensible guidelines on RTP feedback frequency, if we need this: \"Sending RTP Control Protocol (RTCP) Feedback for Congestion Control in Interactive Multimedia Conferences\".\nRelated to\nCondition (1) is concerned with the behavior with respect to Reno and, thanks to our edits, Cubic. But there are of course other congestion controls out there, both standard and non-standard, in support of real-time and other non-greedy applications. Should we mention those as a class? Namecheck some examples?\nEditors meeting: 8836 is the requirements document; 8867 and 8868 are test cases, but informational. Reese: standard stuff is not widely deployed, widely deployed stuff is not standardized. But RMCAT is new. Maybe there aren't well-documented protocols to evaluate against, so it's hard to require anything here. I'll take a look, will probably have to go to the community.\nEditors: We need to consider the interaction between real time and not-real time. Is evaluating real-time in scope? See 8836, 8867, 8868.\nLooks good enough. Thanks for the changes."}
+{"_id":"q-en-rfc5033bis-978f2d680bd3b610966baeb2c2d85b00239b8b9900e06d8c7903ae0ef77ecfae","text":"I am not an IoT guy but I took a stab at it.\nNAME do you have suggested text, or should I submit this as a start and you can open a new issue?\nSection 5.1 is just a TODO to write about this.\nOK, that's a start. But there are two possible angles: could resource constrained devices use this spec? And, how would this spec coexist with IoT specific transports, such as \"single buffer\" implementations? Looks like a start (I fixed a typo). We don't speak about scale ... in that although the sending rate is often low (or short flows), the number of endpoints can be high."}
+{"_id":"q-en-rfc5033bis-54a855e1608856aba0b5954e09d4a32b9cfb2177e5d0bf04f254b567bacc3f72","text":"Add some text to indicate that proponents don't need a complete report on all the criteria before coming to IETF. They can arrive having thought about these problems a little and have a willingness to do the work implied in this draft.\nWe need to be nice, but not too nice. This phrase is targeting folks like researchers or PhD candidates. They should be aware that this topic is not going to be a walk in the park. But showing why the problem is hard could also be an encouragement -- you don't get a PhD for solving again a problem that 100s have already solved..."}
+{"_id":"q-en-rfc5033bis-2cd6917d81cfbbbdbac588ccb6998fd4a4e82225c3cc1311b74f5f2909acc310","text":"Added the missing 'd' - thanks Neal.\nI'm separating AQM from ECN, although ECN relies on AQM design and deployment - ECN can be a force for good when detecting bottlenecks and latency - aka RFC 8087.\nWell, yes. ECN, by itself, is one of the signals that can be used by CC algorithms, together with packet loss, delay variation, and packet acks (input to bandwidth measurement). Makes sense to separate it. However, the L4S folks have introduced a tight coupling between a very specific use of ECN in the L4S AQM, and a fine-grained mandate of the expected behavior of the CCA. And I cringe each time I read their dual queue formula based on the asymptotic limit of Reno -- but not Cubic. 
I wish we could walk that back, or describe it in more neutral terms. With L4S, ECT(1) is a signal from the sending host to the network that CE marks should be applied at incipient congestion, instead of waiting for actual loss-generating queue buildup as in RFC 3168. Maybe that should be true even for algorithms that are not described in L4S. Of course, we have to deal with what we have. Maybe we should just quote all the relevant RFCs, and denote the difference between ECT(1)\/CE and ECT(0)\/CE. Senders using ECT(1) should consider that the rate of CE marking signals the intensity of the congestion, while senders using ECT(0) should consider individual CE marks as denoting actual congestion.\nI think the ECN text ought to approach this with something very generic, like: setting ECT can provide nice properties [RFC8087], but comes with requirements for reacting to ECN-CE marking. This reaction to ECN-CE marks needs to be evaluated to ensure this conforms with the style of ECN being used. I'll make a PR.\nsee Neal's comment"}
+{"_id":"q-en-rfc5033bis-00ed85d13bacc7ff0ee47807b9f5e1778550336ec2219518c97dee1db93d70e3","text":"Adding a scope that excludes multicast.\nUh, is a PR addressing the same issue -- you were supposed to review it?\nWould if this variant is merged\nDiscussed at IETF-118. At least we can mention Sect 4 of Multicast UDP Usage Guidelines in RFC8085.\nIETF 118: Possibility that we could rule this out of scope. Also some interest in this as a separate document that comes after the -bis\nSo just to provide the background for my question. The Reliable Multicast Transport WG (RMT) did produce two experimental congestion control specifications for multicast: So these specifications exist. Most multicast deployments are done in controlled environments; however, with [Automatic Multicast Tunneling] (URL), multicast traffic these days has a higher probability of crossing links where this traffic would share bottlenecks with other unicast traffic. However, this amount of traffic is still normally bit-rate bound so it might be considered just inelastic and consuming the bit-rate it does. I also want to state that I am fine with declaring multicast out of scope, but maybe some information that such algorithms exist should be included in the draft. So an explicit out of scope with some background section\/appendix.\nWe will put it out of scope and point to RFC 8085\nSorry, just reading through these changes now -- Why does a PR on multipath close an issue on multicast? Am I missing something here?\nWhoops, that does not seem correct, re-opened this issue.\nTo answer NAME more directly, I don't think this draft is in the business of listing algorithms unless it requires new proposals to be measured against them, and not even that if the topic is out of scope. 5033bis is not a congestion control tutorial.\nNAME I think it is a question of how one resolves this and whether one declares it out of scope. Are we explicitly stating something like this: Multicast congestion control algorithms are out of scope, and the rules and requirements within this specification have not been evaluated for their applicability to multicast congestion control. 
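To make the distinction drawn in the ECN discussion above concrete (individual CE marks treated as a congestion event for ECT(0), versus the rate of CE marking treated as a congestion-intensity signal for ECT(1)), here is a minimal sketch; the structure and constants are illustrative assumptions in the spirit of an RFC 3168-style reaction and a DCTCP/L4S-style reaction, not text from any draft:

    # Illustrative sketch only: two simplified reactions to CE marks.

    def classic_ecn_reaction(cwnd, ce_seen_this_rtt):
        # ECT(0) / RFC 3168 style: any CE mark within an RTT is treated
        # like a loss event, so reduce the window once, multiplicatively.
        if ce_seen_this_rtt:
            cwnd = max(cwnd * 0.5, 1.0)  # Reno-style halving, floor of 1 segment
        return cwnd

    def l4s_style_reaction(cwnd, alpha, marked_fraction, g=1.0 / 16):
        # ECT(1) / DCTCP-like style: the fraction of marked packets per
        # window drives a proportional reduction via a moving average.
        alpha = (1 - g) * alpha + g * marked_fraction  # EWMA of marking rate
        if marked_fraction > 0:
            cwnd = max(cwnd * (1 - alpha / 2), 1.0)
        return cwnd, alpha

The only point of the contrast is that the second reaction scales its response with the marking fraction, which is why the intensity-versus-event distinction matters when evaluating a proposal against the style of ECN it uses.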
Future work on multicast congestion control to specify new algorithms or update existing ones [webrc][TFMCC] will have to evaluate what is applicable and whether additional requirements are necessary.\nAdd a changelog item (as in ) and it LGTM."}
+{"_id":"q-en-rfc5033bis-e1562d43570c1225a69e9fdeb2e58076fb1bbcd887c571dad846381a668045eb","text":"Added first-cut of minimum delay text. Intended to\nDelay-based congestion control protocols make an assumption that the observed RTT is the sum of a constant \"min delay\" and a variable queuing delay. This assumption may not be valid on wireless links, as delays vary when the distance between the mobile endpoint and the base station increases, or when changing transmission conditions cause the radios to adopt different modulation and coding strategies. I suspect we also have these \"variable delays\" in satellite services, especially when using constellations of low orbit satellites. The state of the art is not very good. BBR for example recognizes its dependency on an accurate estimate of the min RTT, but has to resort to some kind of periodic reset if measurements of RTT remain above RTT min for 10 seconds -- which is both quite long for some apps, and quite annoying for real time apps suffering a transmission issue every ten seconds. LEDBAT++ does something similar. I think these \"variable delay\" links are good candidates for \"special case\" links, to be mentioned in a separate section.\nThe variable delay part applies to most satellite broadband access using GEO also - not because of propagation usually, but more a result of radio resource sharing. It might well apply to some other technologies that use a shared capacity pool.\nMin delay variation strikes me as a bit of a corner case -- perhaps it can be an optional case to evaluate. The other piece is delay variation is not always congestion -- this is IIUC very common in wireless and is part of that mandatory case.\nI think this is something like: An Internet Path can include simple links, where the minimum delay is the propagation delay, and any additional delay can be attributed to link buffering. In this case, a congestion controller could reduce the sending rate to the point of minimum buffering while still preserving the maximum throughput. An Internet Path can also include complex subnetworks where the minimum delay changes over various time scales. This occurs when a subnet changes the forwarding path to optimise capacity, resilience, etc. It could also arise when a subnet uses a capacity management method where the available resource is periodically distributed among the active nodes and where a node might then have to buffer data until an assigned transmission opportunity. Variation also results when a higher-priority diffserv traffic class preempts the transmission of a lower class. In these cases, the delay varies as a function of external factors and attempting to infer congestion from an increase in the delay results in reduced throughput.\nThe delay will also change when the wireless path between two wireless nodes becomes longer, or slower, or shorter, or faster. Or when a wireless router changes the schedule of transmission and starts polling a wireless station more often, or less often. To give an extreme example, suppose a spacecraft flying away from Earth at 12,000 km\/h. 
Every hour, the delay will increase by 40 ms.\nSure, however that requirement is very specific to certain systems - my point is that even for fixed cabled and broadband wireless systems, the base delay changes.\nI suggest the language \"non-stationary minimum delay\" to encompass all causes that resemble some form of route or path change, explicitly including LEO and mobile motion. It doesn't really cover link layer retransmissions or delays caused by link scheduling algorithms, but these are short term variations that are more likely to be statistically similar to IP queuing delay. I would use the term \"hidden queues\" to refer to unmanaged queues that (almost) never incur significant backlog but do occasionally cause jitter and generally cannot be observed at the IP queuing layer.\nNAME will add something in Sec 5 (\"Special Cases\")"}
+{"_id":"q-en-rfc5033bis-fd08e7e7cb58f6cb32f9b3baa6bdd774b0fae5c93935624d775e129c8052c6d0","text":"If merged this\nOur abstract starts: \"The IETF's standard congestion control schemes have been widely shown to be inadequate for various environments (e.g., high-speed networks, wireless technologies such as 3GPP and WiFi, long distance satellite links) and also in conflict with the needed, more isochronous, behaviors of VoIP, gaming, and videoconferencing traffic.\" ... is it just me, or do \"inadequate\" and \"in conflict\" seem an odd BCP statement from the IETF itself?\nAgreed. How about something closer to: \"The IETF's standard congestion control schemes have been shown to have performance challenges in various environments (e.g., high-speed networks, cellular and WiFi wireless technologies, long distance satellite links) and also for interactive traffic workloads (VoIP, gaming, and videoconferencing).\" cc: NAME NAME"}
+{"_id":"q-en-rfc5033bis-fc19ad361ea29b3bfde25ace8b8b7c473acc473f5f42249a5ed5dfdebb7dc4fb","text":"Suggested AQM text to address issue, when merged will\nThis section is listed as a special case, which it is (kind of), but it seems rather out of place in that section - I think we might wish to move it to the previous section. Since the original BCP, AQM is more common and really ought to be considered in any new design - even if the design does not wish to optimise for this.\nPreviously discussed in\nEditors 1\/16\/24: Just add a sentence that AQMs are hard to detect, and therefore must be thought about -- cannot just ban use of a CC with AQM. But this falls thematically in with the special cases.\nAQMs are only a problem if they introduce unexpected behavior. The classic example is rate limiting, for which BBR had to introduce special code.\nAlthough packet scheduling and policing are useful complementary functions, I don't think they are AQM."}
+{"_id":"q-en-rfc5033bis-d92c53ef6941135f284dad0826a1f13d7bab555eb1b0265ee26cfbe3da670a2e","text":"eliminate some awkward\/unnecessary phrases. Include brief rationale for a bis.\ndone\nOK, how about now?"}
+{"_id":"q-en-rfc5033bis-c0e1dad6c6525d27490c254c2aec4deb56a47373e0cc6af1f58d297503818883","text":"Sec 2 leans heavily on the dated example of HighSpeed TCP and drives pretty much everything to Experimental \"until such time that the community better understands the solution space.\" It says that experimental RFCs can either be those judged to be safe on the internet, or those intended for testbeds. At this point, we probably understand enough to allow proposals to go for Proposed Standard. 
The safe\/testbed distinction also doesn't make sense -- we might want to standardize CCs for controlled environments, like DCTCP. This needs a complete rewrite. It wouldn't hurt for that rewrite to also note that the IRTF is another venue, non-normatively.\nI think a rewrite would help, but I think that could be too radical - we still do have proposals that ought to be EXP either until there is deployment experience, or until we can agree they are \"safe\" for more general use. That doesn't seem to be a problem to me, but we ought to allow PS also. I agree we ought to mention IRTF."}
+{"_id":"q-en-rfc5033bis-c9e80d509ba80a188232e7b7dde508069c05ff0f757f61ee63edc51d207073ab","text":"Also eliminated obsolete\/redundant discussion of AQM.\nWe missed a TODO in \"Wired Networks\"\nApproved. Is \"As\" really \"Because\"?"}
+{"_id":"q-en-rfc5033bis-3deac2c1e669724587b823cbed72cc9f78e6492d3034ab3cade2af3d677c6584","text":"(Contributing text as an individual, chair hat off) Here's some initial proposed text based to Please feel free to tweak my proposal.\nVery nice. Looks good to me. Thanks for this!\nThe Introduction correctly points out the following: The above text says guidelines for specifications are important, but such guidelines only make sense if there is a specification in the first place. So I wonder if it's worth saying a few words here about why it's useful to specify\/standardize congestion control algorithms. A few thoughts on the benefits of specifying congestion control algorithms: Help get a shared understanding between implementers, operators, and other interested parties of how the algorithm works and how it is expected to behave (or explicit discussion of parameters and such which can lead to differences in behavior). As a result of the shared understanding: Make it easier for anyone, not just the initial proponents or main implementers, to point out current issues\/limitations or suggest improvements of a spec that is still in development, and make it easier to converge such changes to a consensus behavior. There may be more benefits. Though I wonder how to best capture the nuance of \"Some documentation or artifacts are better than nothing, but an actual spec is better than just a snapshot for documentation or artifacts\". Do people think it'll be useful to add something along these lines to the document?\nI agree it would be great to add this to the doc, and agree that those two bullet points would be good to integrate. Another more specific benefit that may make sense to add: due to the constraints of open source licenses, some implementors have noted that in some cases they are unable to read open source reference implementations; in such cases it is important to have a detailed spec of the algorithm that is unencumbered by open source license constraints.\nI can see this would be of value to someone considering submitting a proposal. It also may help them to justify the effort in writing this down. If we can find words that work, I'd like to consider adding something on this.\nGuidelines on what to consider are good for anyone (those who design, those who implement and those who document). The only place where guidelines could be used for enforcing something is where people wish to have congestion control algorithms published through the IETF. But that doesn't make them useless in other parts of the process. (My friend who does CC for realtime media is developing his work through iterations of code, not iterations of documentation. 
Guidelines would still be useful for him.)"}
+{"_id":"q-en-rfc5033bis-5550198985107018a54c775ce4b9785ed7108cddc6107084a1a1be8d7745bf03","text":"Minor proposal: the leading sentence in the second paragraph of the section \"Mixed algorithm\" could be made a bit easier to read.\nI think we can merge"}
+{"_id":"q-en-warp-streaming-format-33d80a27c22c7ce2cc61a519c50c3d26531df69d612b0a8640bf35c5bcecddd6","text":"The current draft allows tracks within a catalog to be updated. This causes a few problems: What happens if a track is changed (ex. new profile)? When does this update occur? What happens when the catalog update is delayed? If the publisher changes the codec\/profile in the middle of a track, the decoder will blow up. There's no way to know when the change will take place.\nI don't see how we could easily support tracks being updated. At the very least we would need a group ID when the change occurs, but that won't work in the face of packet loss, because the catalog update could arrive after the starting group. I think the only way to fix this is to put the init information at the start of each group (ex. SPS\/PPS) which is not desirable. I would prohibit it for now. At the very least, some text to say that the init payload MUST NOT change on subsequent updates. We can only add or remove tracks.\nIMO we start with no catalog updates for interop. Then we can build an ADD and REMOVE track message to avoid retransmitting the entire catalog on each update.\nNAME By 'update', I agree that we shouldn't allow their media characteristics (anything described in the init segment or the init itself) to change. But we should allow them to be added and deleted. Agree. I think that having no catalog updates in v1 is unnecessarily restrictive and would lead to missed use-cases, rather than better interop. We also don't need to invent ADD and REMOVE track messages if we allow the CATALOG (and its updates) to define the availability of tracks. The whole purpose of the CATALOG track is to do exactly that. This point is up to the MOQ base protocol however and we need to resolve it there. And \"avoiding retransmitting the entire catalog on each update\" can be addressed by either ignoring it (because catalog updates are small to begin with, even with their init payloads), or by allowing delta updates on the catalog, which wouldn't be complex and I can mock that up if you'd like to see what it might look like?\nYeah, the ADD and REMOVE messages would be delta updates. I think we need some timing information about when a track starts and ends, although I'm not sure.\nDiscussion 4\/20: Need to add a delta update mechanism. Same group, second or subsequent object. Each object is a list of delta updates. Sync'd tracks should be added in the same object. Some text to instruct the player to process all available tracks before performing track selection.\nAlso, if removing a track, the publisher should send the remove along with a last Group, Object number. Update with all tracks removed signals EOS."}
+{"_id":"q-en-warp-streaming-format-2a5e7653b9ec08d109faedf24786134f29fa380f29acdc049bcb23308dfffebe","text":"Changing name to draft-law-moq-warpstreamingformat to match consensus at last authors call."}
+{"_id":"q-en-warp-streaming-format-7a4a2a015fbf7a597291c607459dd0f305dc0a289a7cdea165c3fa83d24fafb5","text":"Modifies document to reference external CMAF packaging draft. .\nAs proposed in , I think we can simplify the mapping: To clarify: There are any number of frames in a chunk (moof\/mdat pair). 
There are any number of chunks in a fragment (styp and moof\/mdat pairs). There are any number of fragments in a segment. (concatenated together) A fragment is independently decodable. In order to minimize latency, there should be a chunk per frame, which is the same recommendation as LL-DASH. If the object has a length in the header, then there should be a chunk per fragment too, but I desperately want to avoid that. That way there would be no latency impact for using a single OBJECT for the entire group.\nThe catalog object has to be more than just a CMAF header, since it must describe multiple tracks and also communicate their subscription names. On the GROUP and OBJECT mapping, I agree. The core requirement is that a sequence of OBJECTS is independently decodable if the client starts with the first object in a GROUP. This is satisfied with the restriction that a Group marks a fragment boundary, which is also satisfied by saying that the GROUP MUST represent a segment boundary, since each CMAF segment holds one or more fragments. Consider the case of a 2s GOP at 30fps. The first OBJECT holds an I frame, the rest Ps. It would be compliant to mark a new GROUP every 30 Objects. It would also be compliant to mark a new group every 60 objects, which is the equivalent of transmitting a 4s segment comprising 2x2s fragments.\nAs an outcome of the discussions at IETF 117, I wrote an ID to describe how CMAF packaged content can be utilized with moq-transport. This ID is available here URL The WARP draft needs to be updated to reference this CMAF packaging draft."}
+{"_id":"q-en-warp-streaming-format-626827addd30b41a3ef29b183d902d0b2ee7eee5bdc792237a3ba5044cdc82f9","text":"This PR adds a definition for a timeline track format, along with details about how to identify timeline tracks in the catalog and how to update timeline tracks. and\nI like it! Regarding \"The publisher MUST publish a complete timeline in the first MOQT Object of each MOQT Group.\" it means that the first object of the group strictly grows with time. For an open range track (for example a 24\/7 video channel), it may become excessively long. We may allow summarization of old times in the timeline csv.\nA special ‘timeline’ track is produced which describes the availability of groups and objects with respect to media time and wallclock time. May also carry media time events which can be used by the player in constructing a UI, for example “Goal”, “Penalty”. This track can be used by the player for seeking to request specific portions of a DVR window in a live stream, or to any portion of a VOD asset. This may be used for advertising insertion at a later date. In the proposed structure below, the metadata field may be empty. If present, it must be enclosed inside double quotes and may hold any string data. In this example, JSON and XML metadata entries are shown.\nI like the idea. Note that some have advocated for KLV as the support for timed metadata for live video. Here is a Demuxed talk about it a couple of years ago URL"}
+{"_id":"q-en-warp-streaming-format-3ae459f663362a5a7f15254b3a38bd77941833eefc7833103e143b7480f5de54","text":"The current version of WARP only references CMAF packaging and does not make it clear that WARP is also intended to support LOC (Low Overhead Container) packaging. LOC is defined by URL Currently the LOC spec defines both a streaming format and a packaging format. Reference for packaging will therefore need to be made to specific subsections of that draft. 
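The timeline-track proposal above refers to a proposed structure and an example that are not reproduced in this snippet. A purely hypothetical sketch of what such CSV-style timeline entries could look like follows; the column names, order, and values are assumptions for illustration only, not taken from the draft. The last column is the optional metadata field, enclosed in double quotes, holding JSON (with CSV-style doubled quotes, another assumption) or XML string data as described above:

    wallclock_time_ms,group_id,object_id,media_time_ms,metadata
    1698765432000,41,0,3600000,
    1698765434000,41,60,3602000,"{""event"": ""Goal""}"
    1698765436000,42,0,3604000,"<event>Penalty</event>"

In this sketch, the first row shows an empty metadata field; a player could seek within a DVR window by scanning the media-time column for the desired position and subscribing starting at the corresponding group and object.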
In the future if LOC packaging is separated from LOC streaming format, then the WARP references can be updated."} +{"_id":"q-en-warp-streaming-format-9d69f328b8ef0b5cfdf050fe5530683103d0d9ac49979cb8ccdb17ede28d8d52","text":"Fixing table references and reformatting certain sections to 80 chars line width.\nTable 2 is declared twice, as is Table 3."}