{"query_id": "q-en-rust-60e0371694df40ceef41dd020298e41d7b7d957227a0df16c72ebd6a2d878467", "query": "I was looking at namely issue when I thought about this. If work is being put into improving error messages when somebody uses a keyword that defines a function in another language, there should also be improved error messages if someone uses the incorrect directive to bring something into the scope of their module or namespace/ Given the following code: The current output is: The error message should preferably suggest the user to use the directive to bring something into scope of their module. $DIR/use_instead_of_import.rs:9:1 | LL | require std::time::Duration; | ^^^^^^^ help: items are imported using the `use` keyword error: expected item, found `include` --> $DIR/use_instead_of_import.rs:12:1 | LL | include std::time::Instant; | ^^^^^^^ help: items are imported using the `use` keyword error: expected item, found `using` --> $DIR/use_instead_of_import.rs:9:5 --> $DIR/use_instead_of_import.rs:15:5 | LL | pub using std::io; | ^^^^^ help: items are imported using the `use` keyword error: aborting due to 2 previous errors error: aborting due to 4 previous errors ", "commid": "rust_pr_100167"}], "negative_passages": []} {"query_id": "q-en-rust-8a04a95acb8c7b7a5bf542d8cc456887460be3e547158e68a47a207a8a6b2db1", "query": "should be to guarantee that we emit an error when that flag is set. This is already done for of other types in rustc. 
$DIR/issue-102989.rs:7:15 | LL | fn ref_Struct(self: &Struct, f: &u32) -> &u32 { | ^^^^ not semantically valid as function parameter | = note: associated functions are those in `impl` or `trait` definitions error[E0412]: cannot find type `Struct` in this scope --> $DIR/issue-102989.rs:7:22 | LL | fn ref_Struct(self: &Struct, f: &u32) -> &u32 { | ^^^^^^ not found in this scope error[E0425]: cannot find value `x` in this scope --> $DIR/issue-102989.rs:11:13 | LL | let x = x << 1; | ^ help: a local variable with a similar name exists: `f` error[E0152]: found duplicate lang item `sized` --> $DIR/issue-102989.rs:5:1 | LL | trait Sized { } | ^^^^^^^^^^^ | = note: the lang item is first defined in crate `core` (which `std` depends on) = note: first definition in `core` loaded from SYSROOT/libcore-*.rlib = note: second definition in the local crate (`issue_102989`) error[E0277]: the size for values of type `{integer}` cannot be known at compilation time --> $DIR/issue-102989.rs:11:15 | LL | let x = x << 1; | ^^ doesn't have a size known at compile-time | = help: the trait `std::marker::Sized` is not implemented for `{integer}` error[E0308]: mismatched types --> $DIR/issue-102989.rs:7:42 | LL | fn ref_Struct(self: &Struct, f: &u32) -> &u32 { | ---------- ^^^^ expected `&u32`, found `()` | | | implicitly returns `()` as its body has no tail or `return` expression | note: consider returning one of these bindings --> $DIR/issue-102989.rs:7:30 | LL | fn ref_Struct(self: &Struct, f: &u32) -> &u32 { | ^ ... LL | let x = x << 1; | ^ error: aborting due to 6 previous errors Some errors have detailed explanations: E0152, E0277, E0308, E0412, E0425. For more information about an error, try `rustc --explain E0152`. 
", "commid": "rust_pr_103003"}], "negative_passages": []} {"query_id": "q-en-rust-ca218e704e3584865a48dcb1c407d1e4e3caa5e14a18dfccc834e183f82ab47f", "query": "I tried this code: I expected to see this happen: I expected to see two diagnostic messages, telling me about the distinct parse errors on lines 1 and 2. Instead, this happened: The first error was treated as non-recoverable, yielding this output (): I'm assuming recovering in the face of this parse error would be a relatively simple task, given that there is already the present that should be a strong hint that someone meant to type . So I'm tagging this with labels indicating that its a good opportunity for someone who wants to acquaint themselves with the code base. $DIR/use-colon-as-mod-sep.rs:3:17 | LL | use std::process:Command; | ^ help: use double colon | = note: import paths are delimited using `::` error: expected `::`, found `:` --> $DIR/use-colon-as-mod-sep.rs:5:8 | LL | use std:fs::File; | ^ help: use double colon error: expected `::`, found `:` --> $DIR/use-colon-as-mod-sep.rs:7:8 | LL | use std:collections:HashMap; | ^ help: use double colon error: expected `::`, found `:` --> $DIR/use-colon-as-mod-sep.rs:7:20 | LL | use std:collections:HashMap; | ^ help: use double colon error: aborting due to 4 previous errors ", "commid": "rust_pr_103443"}], "negative_passages": []} {"query_id": "q-en-rust-308d4d22d171d10640e66fbfe7b3f09c6ccbe23ebd3039ef29b9d7e08149f9fb", "query": "Latest mingw has broken headers () therefore missing definitions locally. This causes error on mingw-w64 since its header is not broken. :( cc\nQuick fix is adding at . mingw-w64 doesn't define them, defines instead. I think we should prepare a plan when mingw also fixes it. (They already fixed third one of on w32api-4.0.3.) However, seems like mingw doesn't increment : both 4.0.0-1 and 4.0.3 have value .\nmingw-w64 gets its' own section in , doesn't it? 
So adding mingw-fix-include to include path in can be conditioned on some macro defined there. As for mingw fixing their headers, I think we can just remove fixed headers from mingw-fix-include when that happens.\nI'm building rust on mingw-w64-32bit but rust thinks it is on mingw. I'm not sure this is \"legal\", but I don't want to cross-build from mingw to mingw-w64-32bit since their difference is small.\nhas three mingw entries: i686-pc-mingw32 (normal mingw), i586-mingw32msvc (anyone who know this?), and x86_64-w64-mingw32 (mingw-w64 64bit). We don't have i686-w64-mingw32 (mingw-w64 32-bit). I don't know if it is good idea of adding it, so quick fix seems sufficient for now.", "positive_passages": [{"docid": "doc-en-rust-58e9ff172c034e4991db0bcc12b7e9048afa66bde5bc5b560f93f53a1b93d50a", "text": "#include_next // mingw 4.0.x has broken headers (#9246) but mingw-w64 does not. #if defined(__MINGW_MAJOR_VERSION) && __MINGW_MAJOR_VERSION == 4 typedef struct pollfd { SOCKET fd; short events;", "commid": "rust_pr_10346.0"}], "negative_passages": []} {"query_id": "q-en-rust-308d4d22d171d10640e66fbfe7b3f09c6ccbe23ebd3039ef29b9d7e08149f9fb", "query": "Latest mingw has broken headers () therefore missing definitions locally. This causes error on mingw-w64 since its header is not broken. :( cc\nQuick fix is adding at . mingw-w64 doesn't define them, defines instead. I think we should prepare a plan when mingw also fixes it. (They already fixed third one of on w32api-4.0.3.) However, seems like mingw doesn't increment : both 4.0.0-1 and 4.0.3 have value .\nmingw-w64 gets its' own section in , doesn't it? So adding mingw-fix-include to include path in can be conditioned on some macro defined there. As for mingw fixing their headers, I think we can just remove fixed headers from mingw-fix-include when that happens.\nI'm building rust on mingw-w64-32bit but rust thinks it is on mingw. 
I'm not sure this is \"legal\", but I don't want to cross-build from mingw to mingw-w64-32bit since their difference is small.\nhas three mingw entries: i686-pc-mingw32 (normal mingw), i586-mingw32msvc (anyone who know this?), and x86_64-w64-mingw32 (mingw-w64 64bit). We don't have i686-w64-mingw32 (mingw-w64 32-bit). I don't know if it is good idea of adding it, so quick fix seems sufficient for now.", "positive_passages": [{"docid": "doc-en-rust-9e89ecf45186ab847f40849727c621b7a719d62a9429c54490cff10fc0ba0a71", "text": "} WSAPOLLFD, *PWSAPOLLFD, *LPWSAPOLLFD; #endif #endif // _FIX_WINSOCK2_H ", "commid": "rust_pr_10346.0"}], "negative_passages": []} {"query_id": "q-en-rust-5f760ab77290f3dcf14cc84fc1b562354518a92b6a28bc31ac8e08938f17f94e", "query": "Instead, this happened: Successful compilation Tested on the playground, all versions and editions. Stable channel: 1.64.0 Beta channel: 1.65.0-beta.3 (2022-10-10 ) Nightly channel: 1.66.0-nightly (2022-10-19 ) I'm labeling this as a regression as I believe it is severe enough of an unplanned change (at best an accidental stabilization?) that I should draw attention to it, even though I haven't found a way to cause miscompilation for example. I also have not tried to do so at all yet. 
label +T-compiler +A-lifetimes +regression-from-stable-to-stable $DIR/lifetime-elision-return-type-requires-explicit-lifetime.rs:45:37 | LL | fn l<'a>(_: &'a str, _: &'a str) -> &str { \"\" } | ------- ------- ^ expected named lifetime parameter | = help: this function's return type contains a borrowed value with an elided lifetime, but the lifetime cannot be derived from the arguments help: consider using the `'a` lifetime | LL | fn l<'a>(_: &'a str, _: &'a str) -> &'a str { \"\" } | ++ error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0106`.", "commid": "rust_pr_103450"}], "negative_passages": []} {"query_id": "q-en-rust-218a411c275ab4f98a728c42dcbc4b6f2d9caf50b9934f1a62e3d2b55b439589", "query": " $DIR/issue-103573.rs:18:5 | LL | fn g<'a>(_: &< as TraitB>::TypeB as TraitA>::TypeA); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `TraitA` is not implemented for `<>::TypeC<'a> as TraitB>::TypeB` | help: consider further restricting the associated type | LL | fn g<'a>(_: &< as TraitB>::TypeB as TraitA>::TypeA) where <>::TypeC<'a> as TraitB>::TypeB: TraitA; | +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. 
", "commid": "rust_pr_103586"}], "negative_passages": []} {"query_id": "q-en-rust-09b5325a02084d26364ee3c1ea80f0a249f5b70f6d05f566605f66c3a8973beb", "query": " for (i, arg) in args.args.iter().enumerate() { // the domain size check is needed because the HIR may not be well-formed at this point for (i, arg) in args.args.iter().enumerate().take(params_in_repr.domain_size()) { if let hir::GenericArg::Type(ty) = arg && params_in_repr.contains(i as u32) { find_item_ty_spans(tcx, ty, needle, spans, seen_representable); }", "commid": "rust_pr_104202"}], "negative_passages": []} {"query_id": "q-en-rust-09b5325a02084d26364ee3c1ea80f0a249f5b70f6d05f566605f66c3a8973beb", "query": " #![crate_type = \"lib\"] struct Apple((Apple, Option(Banana ? Citron))); //~^ ERROR invalid `?` in type //~| ERROR expected one of `)` or `,`, found `Citron` //~| ERROR cannot find type `Citron` in this scope [E0412] //~| ERROR parenthesized type parameters may only be used with a `Fn` trait [E0214] //~| ERROR recursive type `Apple` has infinite size [E0072] ", "commid": "rust_pr_104202"}], "negative_passages": []} {"query_id": "q-en-rust-09b5325a02084d26364ee3c1ea80f0a249f5b70f6d05f566605f66c3a8973beb", "query": " error: invalid `?` in type --> $DIR/issue-103748-ICE-wrong-braces.rs:3:36 | LL | struct Apple((Apple, Option(Banana ? Citron))); | ^ `?` is only allowed on expressions, not types | help: if you meant to express that the type might not contain a value, use the `Option` wrapper type | LL | struct Apple((Apple, Option(Option Citron))); | +++++++ ~ error: expected one of `)` or `,`, found `Citron` --> $DIR/issue-103748-ICE-wrong-braces.rs:3:38 | LL | struct Apple((Apple, Option(Banana ? Citron))); | -^^^^^^ expected one of `)` or `,` | | | help: missing `,` error[E0412]: cannot find type `Citron` in this scope --> $DIR/issue-103748-ICE-wrong-braces.rs:3:38 | LL | struct Apple((Apple, Option(Banana ? 
Citron))); | ^^^^^^ not found in this scope error[E0214]: parenthesized type parameters may only be used with a `Fn` trait --> $DIR/issue-103748-ICE-wrong-braces.rs:3:22 | LL | struct Apple((Apple, Option(Banana ? Citron))); | ^^^^^^^^^^^^^^^^^^^^^^^ only `Fn` traits may use parentheses | help: use angle brackets instead | LL | struct Apple((Apple, Option)); | ~ ~ error[E0072]: recursive type `Apple` has infinite size --> $DIR/issue-103748-ICE-wrong-braces.rs:3:1 | LL | struct Apple((Apple, Option(Banana ? Citron))); | ^^^^^^^^^^^^ ----- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to break the cycle | LL | struct Apple((Box, Option(Banana ? Citron))); | ++++ + error: aborting due to 5 previous errors Some errors have detailed explanations: E0072, E0214, E0412. For more information about an error, try `rustc --explain E0072`. ", "commid": "rust_pr_104202"}], "negative_passages": []} {"query_id": "q-en-rust-3b1d1e66f75b5a2b68b293052bab89f7074a591d053048c50d5a890f4c3ac934", "query": "Our CI system building artifact for aarch64 with the release of Rust 1.65 I tried to reduce the bug a bit: dependencies Build with (same with ) Since Rust 1.65, we get this error: Can still be reproduced in nightly\nI wasn't able to reproduce this (on either stable or nightly).\nYou're right, trying in a fresh project i couldn't reproduce. I realized the original also contained a that was also necessary I edited the original description to add Thanks for looking into this.\nAlso, contrary to what i wrote initially, the bug appeared first in 1.65 (and not 1.64)\nThanks, I can reproduce the issue now. Reduced test case:\nUpstream issue:\nThe upstream issue appears to be fixed. 
What needs to be done now to apply this fix?\nI can still reproduce this on 1.68.0-nightly ( 2022-12-15).\nCan confirm that the issue still exists.\nNew upstream issue:", "positive_passages": [{"docid": "doc-en-rust-b9a5644be7969802135a608b20cb1c6f74015dd483e136bb26929cba6a40a2d3", "text": "[submodule \"src/llvm-project\"] path = src/llvm-project url = https://github.com/rust-lang/llvm-project.git branch = rustc/15.0-2022-08-09 branch = rustc/15.0-2022-12-07 [submodule \"src/doc/embedded-book\"] path = src/doc/embedded-book url = https://github.com/rust-embedded/book.git", "commid": "rust_pr_105415"}], "negative_passages": []} {"query_id": "q-en-rust-3b1d1e66f75b5a2b68b293052bab89f7074a591d053048c50d5a890f4c3ac934", "query": "Our CI system building artifact for aarch64 with the release of Rust 1.65 I tried to reduce the bug a bit: dependencies Build with (same with ) Since Rust 1.65, we get this error: Can still be reproduced in nightly\nI wasn't able to reproduce this (on either stable or nightly).\nYou're right, trying in a fresh project i couldn't reproduce. I realized the original also contained a that was also necessary I edited the original description to add Thanks for looking into this.\nAlso, contrary to what i wrote initially, the bug appeared first in 1.65 (and not 1.64)\nThanks, I can reproduce the issue now. Reduced test case:\nUpstream issue:\nThe upstream issue appears to be fixed. 
What needs to be done now to apply this fix?\nI can still reproduce this on 1.68.0-nightly ( 2022-12-15).\nCan confirm that the issue still exists.\nNew upstream issue:", "positive_passages": [{"docid": "doc-en-rust-017f997a7430e643d73dc38a0754e5c799d6b5b9b8fd5d2420559c9c1117e8b5", "text": " Subproject commit a1232c451fc27173f8718e05d174b2503ca0b607 Subproject commit 3dfd4d93fa013e1c0578d3ceac5c8f4ebba4b6ec ", "commid": "rust_pr_105415"}], "negative_passages": []} {"query_id": "q-en-rust-3b1d1e66f75b5a2b68b293052bab89f7074a591d053048c50d5a890f4c3ac934", "query": "Our CI system building artifact for aarch64 with the release of Rust 1.65 I tried to reduce the bug a bit: dependencies Build with (same with ) Since Rust 1.65, we get this error: Can still be reproduced in nightly\nI wasn't able to reproduce this (on either stable or nightly).\nYou're right, trying in a fresh project i couldn't reproduce. I realized the original also contained a that was also necessary I edited the original description to add Thanks for looking into this.\nAlso, contrary to what i wrote initially, the bug appeared first in 1.65 (and not 1.64)\nThanks, I can reproduce the issue now. Reduced test case:\nUpstream issue:\nThe upstream issue appears to be fixed. 
What needs to be done now to apply this fix?\nI can still reproduce this on 1.68.0-nightly ( 2022-12-15).\nCan confirm that the issue still exists.\nNew upstream issue:", "positive_passages": [{"docid": "doc-en-rust-b0ba11d306adc9de6450acc368f432a7ff1985019abc4fef6e963b28e5d98133", "text": " Subproject commit 3dfd4d93fa013e1c0578d3ceac5c8f4ebba4b6ec Subproject commit 9ad24035fea8d309753f5e39e6eb53d1d0eb39ce ", "commid": "rust_pr_106406"}], "negative_passages": []} {"query_id": "q-en-rust-2587d2270051b55754f2506aa7e21eed9a659b66492590a218232496a8a0c406", "query": " assert!(self_ty.is_none() && parent_substs.is_empty()); assert!(self_ty.is_none()); } let arg_count = Self::check_generic_arg_count(", "commid": "rust_pr_105224"}], "negative_passages": []} {"query_id": "q-en-rust-2587d2270051b55754f2506aa7e21eed9a659b66492590a218232496a8a0c406", "query": " if let ty::Adt(adt_def, _) = qself_ty.kind() { if let ty::Adt(adt_def, adt_substs) = qself_ty.kind() { if adt_def.is_enum() { let variant_def = adt_def .variants()", "commid": "rust_pr_105224"}], "negative_passages": []} {"query_id": "q-en-rust-2587d2270051b55754f2506aa7e21eed9a659b66492590a218232496a8a0c406", "query": " // FIXME(inherent_associated_types): This does not substitute parameters. 
let ty = tcx.type_of(assoc_ty_did); let item_substs = self.create_substs_for_associated_item( span, assoc_ty_did, assoc_segment, adt_substs, ); let ty = tcx.bound_type_of(assoc_ty_did).subst(tcx, item_substs); return Ok((ty, DefKind::AssocTy, assoc_ty_did)); } }", "commid": "rust_pr_105224"}], "negative_passages": []} {"query_id": "q-en-rust-2587d2270051b55754f2506aa7e21eed9a659b66492590a218232496a8a0c406", "query": " // check-pass #![feature(inherent_associated_types)] #![allow(incomplete_features)] struct S(T); impl S { type P = T; } fn main() { type A = S<()>::P; let _: A = (); } ", "commid": "rust_pr_105224"}], "negative_passages": []} {"query_id": "q-en-rust-3fa12eb07998d6dfcecf2cfb96431c1be38074784ef464656403a0954ad7e9e1", "query": "In the following code, I can successfully refer to the private inherent associated type : I expected to see an error denying the access. If the associated item was a constant, this would indeed be the case: : label T-compiler requires-nightly A-associated-items A-visibility F-inherentassociatedtypes $DIR/assoc-inherent-private.rs:10:10 | LL | type P = (); | ------ associated type defined here ... LL | type U = m::T::P; | ^^^^^^^ private associated type error[E0624]: associated type `P` is private --> $DIR/assoc-inherent-private.rs:21:10 | LL | pub(super) type P = bool; | ----------------- associated type defined here ... LL | type V = n::n::T::P; | ^^^^^^^^^^ private associated type error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0624`. ", "commid": "rust_pr_104348"}], "negative_passages": []} {"query_id": "q-en-rust-3fa12eb07998d6dfcecf2cfb96431c1be38074784ef464656403a0954ad7e9e1", "query": "In the following code, I can successfully refer to the private inherent associated type : I expected to see an error denying the access. 
If the associated item was a constant, this would indeed be the case: : label T-compiler requires-nightly A-associated-items A-visibility F-inherentassociatedtypes $DIR/assoc-inherent-unstable.rs:4:13 | LL | type Data = aux::Owner::Data; | ^^^^^^^^^^^^^^^^ | = help: add `#![feature(data)]` to the crate attributes to enable error: aborting due to previous error For more information about this error, try `rustc --explain E0658`. ", "commid": "rust_pr_104348"}], "negative_passages": []} {"query_id": "q-en-rust-3fa12eb07998d6dfcecf2cfb96431c1be38074784ef464656403a0954ad7e9e1", "query": "In the following code, I can successfully refer to the private inherent associated type : I expected to see an error denying the access. If the associated item was a constant, this would indeed be the case: : label T-compiler requires-nightly A-associated-items A-visibility F-inherentassociatedtypes $DIR/item-privacy.rs:119:12 | LL | type A = u8; | ------ associated type defined here ... LL | let _: T::A; | ^^^^ private associated type", "commid": "rust_pr_104348"}], "negative_passages": []} {"query_id": "q-en-rust-49ec9f37f3015e9b3b24c7ec2ddf3b4040d6f440722cb64b31b7c6e3ff8dab25", "query": "I was chatting with today, who mentioned it'd be helpful to mention the newly stabilized APIs from the docs. The benefit of using is that even if the type widens, the result doesn't change. It'd be nice if someone could contribute docs mentioning from . This probably just needs to be a one-liner. $DIR/fn-to-method-deeply-nested.rs:8:9 | LL | z???????????????????????????????????????????????????????????????????????????????????????? 
| ^ not found in this scope error[E0425]: cannot find function `e` in this scope --> $DIR/fn-to-method-deeply-nested.rs:2:13 | LL | a(b(c(d(e( | ^ not found in this scope error[E0425]: cannot find function `d` in this scope --> $DIR/fn-to-method-deeply-nested.rs:2:11 | LL | a(b(c(d(e( | ^ not found in this scope error[E0425]: cannot find function `c` in this scope --> $DIR/fn-to-method-deeply-nested.rs:2:9 | LL | a(b(c(d(e( | ^ not found in this scope error[E0425]: cannot find function `b` in this scope --> $DIR/fn-to-method-deeply-nested.rs:2:7 | LL | a(b(c(d(e( | ^ not found in this scope error[E0425]: cannot find function `a` in this scope --> $DIR/fn-to-method-deeply-nested.rs:2:5 | LL | a(b(c(d(e( | ^ not found in this scope error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_106055"}], "negative_passages": []} {"query_id": "q-en-rust-d6c9f849ab534b9178f4c1c5f3c6f5bd5df94aba4c582fd87c2c2ed642d666fe", "query": "Found with a From : From : Deep recursion involving ? Regression in nightly-2022-10-07, apparently in rollup . I suspect (", "positive_passages": [{"docid": "doc-en-rust-17b720c902de637d51aa34a96d0fbdfb888467fd5b0dbedc2c8e59c223ffbd8c", "text": " fn main() { a((), 1i32 == 2u32); //~^ ERROR cannot find function `a` in this scope //~| ERROR mismatched types } ", "commid": "rust_pr_106055"}], "negative_passages": []} {"query_id": "q-en-rust-d6c9f849ab534b9178f4c1c5f3c6f5bd5df94aba4c582fd87c2c2ed642d666fe", "query": "Found with a From : From : Deep recursion involving ? Regression in nightly-2022-10-07, apparently in rollup . 
I suspect (", "positive_passages": [{"docid": "doc-en-rust-bce566408450f8713a3ee5047f9d88cbc05f1b6b956b38dfe5074cf84e78e5b5", "text": " error[E0308]: mismatched types --> $DIR/check-args-on-fn-err-2.rs:2:19 | LL | a((), 1i32 == 2u32); | ---- ^^^^ expected `i32`, found `u32` | | | expected because this is `i32` | help: change the type of the numeric literal from `u32` to `i32` | LL | a((), 1i32 == 2i32); | ~~~ error[E0425]: cannot find function `a` in this scope --> $DIR/check-args-on-fn-err-2.rs:2:5 | LL | a((), 1i32 == 2u32); | ^ not found in this scope error: aborting due to 2 previous errors Some errors have detailed explanations: E0308, E0425. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_106055"}], "negative_passages": []} {"query_id": "q-en-rust-d6c9f849ab534b9178f4c1c5f3c6f5bd5df94aba4c582fd87c2c2ed642d666fe", "query": "Found with a From : From : Deep recursion involving ? Regression in nightly-2022-10-07, apparently in rollup . I suspect (", "positive_passages": [{"docid": "doc-en-rust-fd562c34adda8629857570dc5437982c77280b3756a4330a5b63392efba5b64c", "text": " fn main() { unknown(1, |glyf| { //~^ ERROR: cannot find function `unknown` in this scope let actual = glyf; }); } ", "commid": "rust_pr_106055"}], "negative_passages": []} {"query_id": "q-en-rust-d6c9f849ab534b9178f4c1c5f3c6f5bd5df94aba4c582fd87c2c2ed642d666fe", "query": "Found with a From : From : Deep recursion involving ? Regression in nightly-2022-10-07, apparently in rollup . I suspect (", "positive_passages": [{"docid": "doc-en-rust-e52e2b89015415c2c6e74e8d625cb6038c6fb13f45fa4f8ae46a50449701e561", "text": " error[E0425]: cannot find function `unknown` in this scope --> $DIR/check-args-on-fn-err.rs:2:5 | LL | unknown(1, |glyf| { | ^^^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. 
", "commid": "rust_pr_106055"}], "negative_passages": []} {"query_id": "q-en-rust-b97137b5fab0276e36704957d9ddc3dfdac20a03d92aabc7691f45d30870c3e2", "query": " $DIR/issue-105732.rs:4:8 | LL | auto trait Foo { | --- auto trait cannot have associated items LL | fn g(&self); | ---^-------- help: remove these associated items error[E0599]: the method `g` exists for reference `&Self`, but its trait bounds were not satisfied --> $DIR/issue-105732.rs:9:14 | LL | self.g(); | ^ | = note: the following trait bounds were not satisfied: `Self: Foo` which is required by `&Self: Foo` `&Self: Foo` = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `g`, perhaps you need to add a supertrait for it: | LL | trait Bar: Foo { | +++++ error: aborting due to 2 previous errors Some errors have detailed explanations: E0380, E0599. For more information about an error, try `rustc --explain E0380`. ", "commid": "rust_pr_105747"}], "negative_passages": []} {"query_id": "q-en-rust-b71a569870ede18901ce0e33aa59e67823054da28e4ed5f006be1f51468bfac1", "query": "Compiling the from failed on nightly due to lifetime error while it works on stable. The code in question looks like this and is in taken from On nightly, this failed with: :\nmodify labels: +regression-from-stable-to-beta\nlabels +E-needs-bisection\nsearched nightlies: from nightly-2022-12-06 to nightly-2022-12-10 regressed nightly: nightly-2022-12-10 searched commit range: regressed commit: use rustc_hir::def::DefKind; use rustc_hir::def_id::DefId; use rustc_middle::ty::{self, TyCtxt}; use rustc_session::lint;", "commid": "rust_pr_106759"}], "negative_passages": []} {"query_id": "q-en-rust-b71a569870ede18901ce0e33aa59e67823054da28e4ed5f006be1f51468bfac1", "query": "Compiling the from failed on nightly due to lifetime error while it works on stable. 
The code in question looks like this and is in taken from On nightly, this failed with: :\nmodify labels: +regression-from-stable-to-beta\nlabels +E-needs-bisection\nsearched nightlies: from nightly-2022-12-06 to nightly-2022-12-10 regressed nightly: nightly-2022-12-10 searched commit range: regressed commit: match item.kind { ItemKind::OpaqueTy(hir::OpaqueTy { .. }) => { ItemKind::OpaqueTy(hir::OpaqueTy { origin: hir::OpaqueTyOrigin::FnReturn(fn_def_id) | hir::OpaqueTyOrigin::AsyncFn(fn_def_id), in_trait, .. }) => { if in_trait { assert!(matches!(tcx.def_kind(fn_def_id), DefKind::AssocFn)) } else { assert!(matches!(tcx.def_kind(fn_def_id), DefKind::AssocFn | DefKind::Fn)) } Some(fn_def_id.to_def_id()) } ItemKind::OpaqueTy(hir::OpaqueTy { origin: hir::OpaqueTyOrigin::TyAlias, .. }) => { let parent_id = tcx.hir().get_parent_item(hir_id); assert_ne!(parent_id, hir::CRATE_OWNER_ID); debug!(\"generics_of: parent of opaque ty {:?} is {:?}\", def_id, parent_id);", "commid": "rust_pr_106759"}], "negative_passages": []} {"query_id": "q-en-rust-b71a569870ede18901ce0e33aa59e67823054da28e4ed5f006be1f51468bfac1", "query": "Compiling the from failed on nightly due to lifetime error while it works on stable. 
The code in question looks like this and is in taken from On nightly, this failed with: :\nmodify labels: +regression-from-stable-to-beta\nlabels +E-needs-bisection\nsearched nightlies: from nightly-2022-12-06 to nightly-2022-12-10 regressed nightly: nightly-2022-12-10 searched commit range: regressed commit: // check-pass // edition: 2021 // known-bug: #105197 // failure-status:101 // dont-check-compiler-stderr #![feature(async_fn_in_trait)] #![feature(return_position_impl_trait_in_trait)]", "commid": "rust_pr_106759"}], "negative_passages": []} {"query_id": "q-en-rust-071cac4fb8e120957cf8521185557e94eeedea5183127ae84b1e8cf0eb7cce04", "query": " $DIR/missing-semicolon.rs:6:7 | LL | () | ^ help: add `;` here LL | } | - unexpected token error: expected `;`, found `}` --> $DIR/missing-semicolon.rs:32:7 | LL | () | ^ help: add `;` here LL | } | - unexpected token error[E0618]: expected function, found `{integer}` --> $DIR/missing-semicolon.rs:5:13 | LL | let x = 5; | - `x` has type `{integer}` LL | let y = x | ^- help: consider using a semicolon here to finish the statement: `;` | _____________| | | LL | | () | |______- call expression requires function error[E0618]: expected function, found `{integer}` --> $DIR/missing-semicolon.rs:11:13 | LL | let x = 5; | - `x` has type `{integer}` LL | let y = x | ^- help: consider using a semicolon here to finish the statement: `;` | _____________| | | LL | | (); | |______- call expression requires function error[E0618]: expected function, found `{integer}` --> $DIR/missing-semicolon.rs:16:5 | LL | let x = 5; | - `x` has type `{integer}` LL | x | ^- help: consider using a semicolon here to finish the statement: `;` | _____| | | LL | | () | |______- call expression requires function error[E0618]: expected function, found `{integer}` --> $DIR/missing-semicolon.rs:31:13 | LL | let y = 5 | ^- help: consider using a semicolon here to finish the statement: `;` | _____________| | | LL | | () | |______- call expression requires function 
error[E0618]: expected function, found `{integer}` --> $DIR/missing-semicolon.rs:35:5 | LL | 5 | ^- help: consider using a semicolon here to finish the statement: `;` | _____| | | LL | | (); | |______- call expression requires function error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0618`. ", "commid": "rust_pr_114474"}], "negative_passages": []} {"query_id": "q-en-rust-ba3948f9877a268e7602b81101d87c181536386828f01dbb092f7426894cab36", "query": "rust version: Do you think it would be useful to add a hint here? Something in the lines of $DIR/float_iterator_hint.rs:4:14 | LL | for i in 0.2 { | ^^^ `{float}` is not an iterator | = help: the trait `Iterator` is not implemented for `{float}` = note: if you want to iterate between `start` until a value `end`, use the exclusive range syntax `start..end` or the inclusive range syntax `start..=end` = note: required for `{float}` to implement `IntoIterator` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_106740"}], "negative_passages": []} {"query_id": "q-en-rust-ba3948f9877a268e7602b81101d87c181536386828f01dbb092f7426894cab36", "query": "rust version: Do you think it would be useful to add a hint here? Something in the lines of $DIR/test.rs:104:20 | LL | missing => \"./missing-message-ref.ftl\" | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: you may have meant to use a variable reference (`{$message}`) error: aborting due to 11 previous errors For more information about this error, try `rustc --explain E0428`.", "commid": "rust_pr_107096"}], "negative_passages": []} {"query_id": "q-en-rust-c5c9e8e1c1dde85efd8b843671d395fb20c6b55b2af7867f2c7ab2660ea9ec70", "query": "searched nightlies: from nightly-2023-01-27 to nightly-2023-01-28 regressed nightly: nightly-2023-01-28 searched commit range: regressed commit: // Erase and shadow everything that could be passed to the new infcx. 
let ty = tcx.erase_regions(moved_place.ty(self.body, tcx).ty); let method_substs = tcx.erase_regions(method_substs); if let ty::Adt(def, substs) = ty.kind() && Some(def.did()) == tcx.lang_items().pin_type() && let ty::Ref(_, _, hir::Mutability::Mut) = substs.type_at(0).kind()", "commid": "rust_pr_107422"}], "negative_passages": []} {"query_id": "q-en-rust-c5c9e8e1c1dde85efd8b843671d395fb20c6b55b2af7867f2c7ab2660ea9ec70", "query": "searched nightlies: from nightly-2023-01-27 to nightly-2023-01-28 regressed nightly: nightly-2023-01-28 searched commit range: regressed commit: // run-rustfix use std::pin::Pin; fn foo(_: &mut ()) {} fn main() { let mut uwu = (); let mut r = Pin::new(&mut uwu); foo(r.as_mut().get_mut()); foo(r.get_mut()); //~ ERROR use of moved value } ", "commid": "rust_pr_107422"}], "negative_passages": []} {"query_id": "q-en-rust-c5c9e8e1c1dde85efd8b843671d395fb20c6b55b2af7867f2c7ab2660ea9ec70", "query": "searched nightlies: from nightly-2023-01-27 to nightly-2023-01-28 regressed nightly: nightly-2023-01-28 searched commit range: regressed commit: // run-rustfix use std::pin::Pin; fn foo(_: &mut ()) {} fn main() { let mut uwu = (); let mut r = Pin::new(&mut uwu); foo(r.get_mut()); foo(r.get_mut()); //~ ERROR use of moved value } ", "commid": "rust_pr_107422"}], "negative_passages": []} {"query_id": "q-en-rust-c5c9e8e1c1dde85efd8b843671d395fb20c6b55b2af7867f2c7ab2660ea9ec70", "query": "searched nightlies: from nightly-2023-01-27 to nightly-2023-01-28 regressed nightly: nightly-2023-01-28 searched commit range: regressed commit: error[E0382]: use of moved value: `r` --> $DIR/pin-mut-reborrow-infer-var-issue-107419.rs:10:9 | LL | let mut r = Pin::new(&mut uwu); | ----- move occurs because `r` has type `Pin<&mut ()>`, which does not implement the `Copy` trait LL | foo(r.get_mut()); | --------- `r` moved due to this method call LL | foo(r.get_mut()); | ^ value used here after move | note: `Pin::<&'a mut T>::get_mut` takes ownership of the receiver `self`, 
which moves `r` --> $SRC_DIR/core/src/pin.rs:LL:COL help: consider reborrowing the `Pin` instead of moving it | LL | foo(r.as_mut().get_mut()); | +++++++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0382`. ", "commid": "rust_pr_107422"}], "negative_passages": []} {"query_id": "q-en-rust-f966b13242129b3fd386e19fbd758f692b68d67fd30fba87fc516c8b16bf1063", "query": " $DIR/in-closure.rs:7:12 | LL | fn a() {} | ^ | note: the lint level is defined here --> $DIR/in-closure.rs:3:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ error: constant `A` is never used --> $DIR/in-closure.rs:13:11 | LL | const A: usize = 1; | ^ error: aborting due to 2 previous errors ", "commid": "rust_pr_108315"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-be3076d11d9fb71044a605d7bfff2ebfdf2e07cf5dfee7afe3ba1da48307ddcc", "text": "// through and constrain Pi. 
let mut subcomponents = smallvec![]; let mut subvisited = SsoHashSet::new(); compute_components_recursive(tcx, ty.into(), &mut subcomponents, &mut subvisited); compute_alias_components_recursive(tcx, ty, &mut subcomponents, &mut subvisited); out.push(Component::EscapingAlias(subcomponents.into_iter().collect())); } }", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-0e5187b435a93a81d78a1b2e383fc74748a37bbc89414eb65ac2a54fc467dc53", "text": "/// /// This should not be used to get the components of `parent` itself. /// Use [push_outlives_components] instead. pub(super) fn compute_components_recursive<'tcx>( pub(super) fn compute_alias_components_recursive<'tcx>( tcx: TyCtxt<'tcx>, alias_ty: Ty<'tcx>, out: &mut SmallVec<[Component<'tcx>; 4]>, visited: &mut SsoHashSet>, ) { let ty::Alias(kind, alias_ty) = alias_ty.kind() else { bug!() }; let opt_variances = if *kind == ty::Opaque { tcx.variances_of(alias_ty.def_id) } else { &[] }; for (index, child) in alias_ty.substs.iter().enumerate() { if opt_variances.get(index) == Some(&ty::Bivariant) { continue; } if !visited.insert(child) { continue; } match child.unpack() { GenericArgKind::Type(ty) => { compute_components(tcx, ty, out, visited); } GenericArgKind::Lifetime(lt) => { // Ignore late-bound regions. if !lt.is_late_bound() { out.push(Component::Region(lt)); } } GenericArgKind::Const(_) => { compute_components_recursive(tcx, child, out, visited); } } } } /// Collect [Component]s for *all* the substs of `parent`. 
/// /// This should not be used to get the components of `parent` itself. /// Use [push_outlives_components] instead. fn compute_components_recursive<'tcx>( tcx: TyCtxt<'tcx>, parent: GenericArg<'tcx>, out: &mut SmallVec<[Component<'tcx>; 4]>,", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-9b71b5fde73fcc1554bd061a4d5624f89293614d24d87cb9fa0dd02db921ffba", "text": "// the problem is to add `T: 'r`, which isn't true. So, if there are no // inference variables, we use a verify constraint instead of adding // edges, which winds up enforcing the same condition. let is_opaque = alias_ty.kind(self.tcx) == ty::Opaque; if approx_env_bounds.is_empty() && trait_bounds.is_empty() && (alias_ty.needs_infer() || alias_ty.kind(self.tcx) == ty::Opaque) && (alias_ty.needs_infer() || is_opaque) { debug!(\"no declared bounds\"); self.substs_must_outlive(alias_ty.substs, origin, region); let opt_variances = is_opaque.then(|| self.tcx.variances_of(alias_ty.def_id)); self.substs_must_outlive(alias_ty.substs, origin, region, opt_variances); return; }", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-383533e1ffc9f9a7de0e73c3da075c5b1e57f57297d1fe819bf1b82e65982a45", "text": "self.delegate.push_verify(origin, GenericKind::Alias(alias_ty), region, verify_bound); } #[instrument(level = \"debug\", skip(self))] fn substs_must_outlive( &mut self, substs: SubstsRef<'tcx>, origin: infer::SubregionOrigin<'tcx>, region: ty::Region<'tcx>, opt_variances: Option<&[ty::Variance]>, ) { let constraint = origin.to_constraint_category(); for k in substs { for (index, k) in substs.iter().enumerate() { match k.unpack() { GenericArgKind::Lifetime(lt) => { self.delegate.push_sub_region_constraint( origin.clone(), region, lt, constraint, ); let variance = if let Some(variances) = opt_variances { variances[index] } else { ty::Invariant }; if variance == ty::Invariant { self.delegate.push_sub_region_constraint( origin.clone(), region, lt, constraint, ); } } GenericArgKind::Type(ty) => { self.type_must_outlive(origin.clone(), ty, region, constraint);", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-8bcbda56104edc5078582ff7c2e72c90a938cb9771dc46356b113f653ee82ddc", "text": " use crate::infer::outlives::components::{compute_components_recursive, Component}; use crate::infer::outlives::components::{compute_alias_components_recursive, Component}; use crate::infer::outlives::env::RegionBoundPairs; use crate::infer::region_constraints::VerifyIfEq; use crate::infer::VerifyBound;", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-1270aafa272eb93aceab26cbd0ba0cc14f1cbf6980560701cf1c359c824a93ae", "text": "// see the extensive comment in projection_must_outlive let recursive_bound = { let mut components = smallvec![]; compute_components_recursive(self.tcx, alias_ty_as_ty.into(), &mut components, visited); compute_alias_components_recursive( self.tcx, alias_ty_as_ty.into(), &mut components, visited, ); self.bound_from_components(&components, visited) };", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-c3ee31413b6576b1a5d350cb10c34cdf54769ab42733d4ca07f1928444a2c03d", "text": " // check-pass #![feature(type_alias_impl_trait)] struct MyTy<'a>(Vec, &'a ()); impl MyTy<'_> { fn one(&mut self) -> &mut impl Sized { &mut self.0 } fn two(&mut self) -> &mut (impl Sized + 'static) { self.one() } } type Opaque<'a> = impl Sized; fn define<'a>() -> Opaque<'a> {} fn test<'a>() { None::<&'static Opaque<'a>>; } fn one<'a, 'b: 'b>() -> &'a impl Sized { &() } fn two<'a, 'b>() { one::<'a, 'b>(); } fn main() {} ", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-9870c6488605aefa85aac4fcef8e656ab69c3410ae4e47694e0eba8019bbb511", "query": "Some regressions of that I previously reported in , but they were missed: I guess the way to fix this is to ignore non-captured lifetimes of opaque types here cc label C-bug regression-from-stable-to-stable T-types T-compiler A-impl-trait\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-3f9f0805fa0df31fe1c63fa293eeb4ec5b7e32339ba025b0b5c19fff77ee52a6", "text": " // check-pass #![feature(type_alias_impl_trait)] fn opaque<'a: 'a>() -> impl Sized {} fn assert_static(_: T) {} fn test_closure() { let closure = |_| { assert_static(opaque()); }; closure(&opaque()); } type Opaque<'a> = impl Sized; fn define<'a>() -> Opaque<'a> {} fn test_tait(_: &Opaque<'_>) { None::<&'static Opaque<'_>>; } fn main() {} ", "commid": "rust_pr_110399"}], "negative_passages": []} {"query_id": "q-en-rust-a2df77a32258d6df5fbc08b622c64309d8c279125404d166b1f8b2ffab410e2c", "query": "The compiler currently allows safe default method implementations to be marked with : which I don't think is allowed in . For reference, is not allowed on trait implementations: From my limited testing, this doesn't seem to be unsound however, as the compiler seems to consider all implementations of as having . 
cc label T-lang T-compiler C-bug F-targetfeature11 $DIR/trait-impl.rs:22:5 | LL | #[target_feature(enable = \"sse2\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot be applied to safe trait method LL | LL | fn foo(&self) {} | ------------- not an `unsafe` function error: `#[target_feature(..)]` cannot be applied to safe trait method --> $DIR/trait-impl.rs:13:5 | LL | #[target_feature(enable = \"sse2\")]", "commid": "rust_pr_108983"}], "negative_passages": []} {"query_id": "q-en-rust-a2df77a32258d6df5fbc08b622c64309d8c279125404d166b1f8b2ffab410e2c", "query": "The compiler currently allows safe default method implementations to be marked with : which I don't think is allowed in . For reference, is not allowed on trait implementations: From my limited testing, this doesn't seem to be unsound however, as the compiler seems to consider all implementations of as having . cc label T-lang T-compiler C-bug F-targetfeature11 $DIR/issue-109071.rs:8:17 | LL | type Item = &[T]; | ^ explicit lifetime name needed here error[E0107]: missing generics for struct `Windows` --> $DIR/issue-109071.rs:7:9 | LL | impl Windows { | ^^^^^^^ expected 1 generic argument | note: struct defined here, with 1 generic parameter: `T` --> $DIR/issue-109071.rs:5:8 | LL | struct Windows {} | ^^^^^^^ - help: add missing generic argument | LL | impl Windows { | +++ error[E0658]: inherent associated types are unstable --> $DIR/issue-109071.rs:8:5 | LL | type Item = &[T]; | ^^^^^^^^^^^^^^^^^ | = note: see issue #8995 for more information = help: add `#![feature(inherent_associated_types)]` to the crate attributes to enable error: aborting due to 3 previous errors Some errors have detailed explanations: E0107, E0637, E0658. For more information about an error, try `rustc --explain E0107`. 
", "commid": "rust_pr_113030"}], "negative_passages": []} {"query_id": "q-en-rust-28e8335a47c9051eb17099d5f18ed01bd34acb8a37ae1a38fd86c2f4d02930c3", "query": " $DIR/issue-109071.rs:8:17 | LL | type Item = &[T]; | ^ explicit lifetime name needed here error[E0107]: missing generics for struct `Windows` --> $DIR/issue-109071.rs:7:9 | LL | impl Windows { | ^^^^^^^ expected 1 generic argument | note: struct defined here, with 1 generic parameter: `T` --> $DIR/issue-109071.rs:5:8 | LL | struct Windows {} | ^^^^^^^ - help: add missing generic argument | LL | impl Windows { | +++ error: aborting due to 2 previous errors Some errors have detailed explanations: E0107, E0637. For more information about an error, try `rustc --explain E0107`. ", "commid": "rust_pr_113030"}], "negative_passages": []} {"query_id": "q-en-rust-8595b953672ab85db04d23cb938f64affbd687e1464ab447377436863ae6ca30", "query": " $DIR/issue-105069.rs:1:5 | LL | use self::A::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `V` to disambiguate note: `V` could also refer to the variant imported here --> $DIR/issue-105069.rs:3:5 | LL | use self::B::*; | ^^^^^^^^^^ = help: consider adding an explicit import of `V` to disambiguate error: aborting due to previous error", "commid": "rust_pr_112495"}], "negative_passages": []} {"query_id": "q-en-rust-14d87f94d6dd2f60938cc3099809c6fbd53892083d65f782d3f0ae98e232297d", "query": " $DIR/issue-109153.rs:11:5 | LL | use bar::bar; | ^^^ ambiguous name | = note: ambiguous because of multiple glob imports of a name in the same module note: `bar` could refer to the module imported here --> $DIR/issue-109153.rs:1:5 | LL | use foo::*; | ^^^^^^ = help: consider adding an explicit import of `bar` to disambiguate note: `bar` could also refer to the module imported here --> $DIR/issue-109153.rs:12:5 | LL | use bar::*; | ^^^^^^ = help: consider adding an explicit import of `bar` to disambiguate error: aborting due to previous error For more information about this error, 
try `rustc --explain E0659`. ", "commid": "rust_pr_112495"}], "negative_passages": []} {"query_id": "q-en-rust-da4be0531ed16fb4ad1cf49c21351dac4496e37ff0bc56e9e1e80d1c71797d7c", "query": "Found this while minimizing :smile: So maybe related? didn't seem like duplicates. : $DIR/issue-90014-tait2.rs:44:27 | LL | fn make_fut(&self) -> Box Trait<'a, Thing = Fut<'a>>> { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^query stack during panic: #0 [typeck] type-checking `::make_fut` #1 [type_of] computing type of `Fut::{opaque#0}` #2 [check_mod_item_types] checking item types in top-level module #3 [analysis] running analysis passes on this crate end of query stack error[E0792]: expected generic lifetime parameter, found `'a` error: aborting due to previous error For more information about this error, try `rustc --explain E0792`. ", "commid": "rust_pr_113648"}], "negative_passages": []} {"query_id": "q-en-rust-da4be0531ed16fb4ad1cf49c21351dac4496e37ff0bc56e9e1e80d1c71797d7c", "query": "Found this while minimizing :smile: So maybe related? didn't seem like duplicates. : $DIR/under-binder.rs:6:5 | LL | type Opaque<'a> = impl Sized + 'a; | -- this generic parameter must be used with a generic lifetime parameter ... LL | f | ^ error: aborting due to previous error For more information about this error, try `rustc --explain E0792`. ", "commid": "rust_pr_113648"}], "negative_passages": []} {"query_id": "q-en-rust-c3e177adf51992d723f6435c72558fdd03c1dd0c2365e5cdef770d9565fffc27", "query": " $DIR/array_subslice.rs:7:21 | LL | pub fn subslice_array(x: [u8; 3]) { | - help: consider changing this to be mutable: `mut x` ... LL | let [ref y, ref mut z @ ..] = x; | ^^^^^^^^^ cannot borrow as mutable error[E0596]: cannot borrow `f` as mutable, as it is not declared as mutable --> $DIR/array_subslice.rs:10:5 | LL | let [ref y, ref mut z @ ..] = x; | - calling `f` requires mutable binding due to mutable borrow of `x` ... 
LL | f(); | ^ cannot borrow as mutable | help: consider changing this to be mutable | LL | let mut f = || { | +++ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0596`. ", "commid": "rust_pr_109680"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-c88e4bb2d1c33f7d2f89bea814e6341e642686b820ea4feeac11b7e87e41e25f", "text": " //@ assembly-output: emit-asm //@ compile-flags:-Copt-level=3 //@ only-x86_64 #![crate_type = \"lib\"] #[no_mangle] type T = u8; type T1 = (T, T, T, T, T, T, T, T); type T2 = [T; 8]; #[no_mangle] // CHECK-LABEL: foo1a // CHECK: cmp // CHECK-NEXT: set // CHECK-NEXT: ret pub fn foo1a(a: T1, b: T1) -> bool { a == b } #[no_mangle] // CHECK-LABEL: foo1b // CHECK: mov // CHECK-NEXT: cmp // CHECK-NEXT: set // CHECK-NEXT: ret pub fn foo1b(a: &T1, b: &T1) -> bool { a == b } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. 
I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-1f1622d7078c48d54749e4b08b4f15fc029376c2b730dbd3ca6d59da6353f53c", "text": " //@ compile-flags: -O //@ min-llvm-version: 17 #![crate_type = \"lib\"] #[no_mangle] // CHECK-LABEL: @foo // CHECK: getelementptr inbounds // CHECK-NEXT: load i64 // CHECK-NEXT: icmp eq i64 // CHECK-NEXT: br i1 #[no_mangle] pub fn foo(input: &mut &[u64]) -> Option { let (first, rest) = input.split_first()?; *input = rest; Some(*first) } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-35f320816bcb0ba41943216784220e3866b16b66c408ed09581f2e2415ac3af1", "text": " //@ compile-flags: -O // XXX: The x86-64 assembly get optimized correclty. But llvm-ir output is not until llvm 18? 
//@ min-llvm-version: 18 #![crate_type = \"lib\"] pub enum K{ A(Box<[i32]>), B(Box<[u8]>), C(Box<[String]>), D(Box<[u16]>), } #[no_mangle] // CHECK-LABEL: @get_len // CHECK: getelementptr inbounds // CHECK-NEXT: load // CHECK-NEXT: ret i64 // CHECK-NOT: switch pub fn get_len(arg: &K)->usize{ match arg { K::A(ref lst)=>lst.len(), K::B(ref lst)=>lst.len(), K::C(ref lst)=>lst.len(), K::D(ref lst)=>lst.len(), } } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-03f975a8d25650388370b27a01341ee6f7e6a2234bcf6a0a1d827001db81941d", "text": " //@ compile-flags: -O // This regress since Rust version 1.72. //@ min-llvm-version: 18.1.4 #![crate_type = \"lib\"] use std::convert::TryInto; const N: usize = 24; #[no_mangle] // CHECK-LABEL: @example // CHECK-NOT: unwrap_failed pub fn example(a: Vec) -> u8 { if a.len() != 32 { return 0; } let a: [u8; 32] = a.try_into().unwrap(); a[15] + a[N] } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. 
I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-e281b5a1f8828b3cc909f2562a53e8540735d74a2381090c9a5386068291d804", "text": " //@ compile-flags: -O //@ min-llvm-version: 17 #![crate_type = \"lib\"] // CHECK-LABEL: @write_u8_variant_a // CHECK: getelementptr // CHECK-NEXT: icmp ugt #[no_mangle] pub fn write_u8_variant_a( bytes: &mut [u8], buf: u8, offset: usize, ) -> Option<&mut [u8]> { let buf = buf.to_le_bytes(); bytes .get_mut(offset..).and_then(|bytes| bytes.get_mut(..buf.len())) } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. 
Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-7838ca78214a6ee13a087be8cc2ae6aca58d0f0c720a96241c39cf2d7fdcad1f", "text": " // in Rust 1.73, -O and opt-level=3 optimizes differently //@ compile-flags: -C opt-level=3 //@ min-llvm-version: 17 #![crate_type = \"lib\"] use std::cmp::max; #[no_mangle] // CHECK-LABEL: @foo // CHECK-NOT: slice_start_index_len_fail // CHECK-NOT: unreachable pub fn foo(v: &mut Vec, size: usize)-> Option<&mut [u8]> { if v.len() > max(1, size) { let start = v.len() - size; Some(&mut v[start..]) } else { None } } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-d65100dd9bb73ba9d0b249df14f671d2e895bf46f659dab449ab263ba1f1de22", "text": " //@ compile-flags: -O //@ min-llvm-version: 18 #![crate_type = \"lib\"] // CHECK-LABEL: @div2 // CHECK: ashr i32 %a, 1 // CHECK-NEXT: ret i32 #[no_mangle] pub fn div2(a: i32) -> i32 { a.div_euclid(2) } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. 
I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-5a11803cbc7421cb836b5645b72141c0f0ddad499d0c81bd40e088f1edb0a766", "text": " //@ compile-flags: -O #![crate_type = \"lib\"] const N: usize = 3; pub type T = u8; #[no_mangle] // CHECK-LABEL: @split_mutiple // CHECK-NOT: unreachable pub fn split_mutiple(slice: &[T]) -> (&[T], &[T]) { let len = slice.len() / N; slice.split_at(len * N) } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. 
Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-0035aa3b0eaca9981e8ef57f78a52308876952e6cb8366776db1ab4385ec1dfa", "text": " //@ compile-flags: -O //@ min-llvm-version: 17 #![crate_type = \"lib\"] #[no_mangle] // CHECK-LABEL: @foo // CHECK: {{.*}}: // CHECK: ret // CHECK-NOT: unreachable pub fn foo(arr: &mut [u32]) { for i in 0..arr.len() { for j in 0..i { assert!(j < arr.len()); } } } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-79b9f12a7f6135db51f907d02595959b24d0aeb50b91a53659beea77b1f95054", "query": "Impl 1: Impl 2: Godbolt: These functions should be equivalent, but the second function generates one more if than the first one. This also happens when using and then doing the slicing manually. I expected the impls to generate the same code.\nQuoting myself on Zulip:\nThis is a metadata preservation failure during SimplifyCFG speculation: Preserving here used to be illegal, but thanks to we are allowed to preserve it now. Only need to drop .\nUpstream patch:\nFixed by , needs codegen test.", "positive_passages": [{"docid": "doc-en-rust-bf221456c6835cf0d5b999371402841d3ffd3eca443d079f355c1b5157df7edb", "text": " //@ compile-flags: -O //@ min-llvm-version: 18 #![crate_type = \"lib\"] use std::ptr::NonNull; // CHECK-LABEL: @slice_ptr_len_1 // CHECK: {{.*}}: // CHECK-NEXT: ret i64 %ptr.1 #[no_mangle] pub fn slice_ptr_len_1(ptr: *const [u8]) -> usize { let ptr = ptr.cast_mut(); if let Some(ptr) = NonNull::new(ptr) { ptr.len() } else { // We know ptr is null, so we know ptr.wrapping_byte_add(1) is not null. 
NonNull::new(ptr.wrapping_byte_add(1)).unwrap().len() } } ", "commid": "rust_pr_125347"}], "negative_passages": []} {"query_id": "q-en-rust-d02e9f3b3d49fa1766e4e7761723325f86377035290cd16126e07b1da835aea1", "query": "(The actual desired output should have colours and minuses on the removed derive, but to give the idea) Many users are not familiar with the terminology being used by the compiler error (i.e. the different flavours of procedural macro, derive vs attribute) and might be confused as to what they need to do in order to fix the error. Since the fix is straight-forward (i.e. remove the ), the compiler should show a code snippet that fixes the issue. No response No response $DIR/macro-path-prelude-fail-4.rs:1:10 | LL | #[derive(inline)] | ^^^^^^ = help: add as non-Derive macro `#[inline]` error: aborting due to previous error", "commid": "rust_pr_109638"}], "negative_passages": []} {"query_id": "q-en-rust-d02e9f3b3d49fa1766e4e7761723325f86377035290cd16126e07b1da835aea1", "query": "(The actual desired output should have colours and minuses on the removed derive, but to give the idea) Many users are not familiar with the terminology being used by the compiler error (i.e. the different flavours of procedural macro, derive vs attribute) and might be confused as to what they need to do in order to fix the error. Since the fix is straight-forward (i.e. remove the ), the compiler should show a code snippet that fixes the issue. 
No response No response $DIR/macro-path-prelude-fail-5.rs:4:17 | LL | #[derive(Debug, inline)] | ^^^^^^ not a derive macro | help: remove from the surrounding `derive()` --> $DIR/macro-path-prelude-fail-5.rs:4:17 | LL | #[derive(Debug, inline)] | ^^^^^^ = help: add as non-Derive macro `#[inline]` error: expected derive macro, found built-in attribute `inline` --> $DIR/macro-path-prelude-fail-5.rs:7:10 | LL | #[derive(inline, Debug)] | ^^^^^^ not a derive macro | help: remove from the surrounding `derive()` --> $DIR/macro-path-prelude-fail-5.rs:7:10 | LL | #[derive(inline, Debug)] | ^^^^^^ = help: add as non-Derive macro `#[inline]` error: aborting due to 2 previous errors ", "commid": "rust_pr_109638"}], "negative_passages": []} {"query_id": "q-en-rust-d02e9f3b3d49fa1766e4e7761723325f86377035290cd16126e07b1da835aea1", "query": "(The actual desired output should have colours and minuses on the removed derive, but to give the idea) Many users are not familiar with the terminology being used by the compiler error (i.e. the different flavours of procedural macro, derive vs attribute) and might be confused as to what they need to do in order to fix the error. Since the fix is straight-forward (i.e. remove the ), the compiler should show a code snippet that fixes the issue. No response No response $DIR/macro-namespace-reserved-2.rs:53:10 | LL | #[derive(my_macro_attr)] | ^^^^^^^^^^^^^ = help: add as non-Derive macro `#[my_macro_attr]` error: can't use a procedural macro from the same crate that defines it --> $DIR/macro-namespace-reserved-2.rs:56:10", "commid": "rust_pr_109638"}], "negative_passages": []} {"query_id": "q-en-rust-d02e9f3b3d49fa1766e4e7761723325f86377035290cd16126e07b1da835aea1", "query": "(The actual desired output should have colours and minuses on the removed derive, but to give the idea) Many users are not familiar with the terminology being used by the compiler error (i.e. 
the different flavours of procedural macro, derive vs attribute) and might be confused as to what they need to do in order to fix the error. Since the fix is straight-forward (i.e. remove the ), the compiler should show a code snippet that fixes the issue. No response No response $DIR/macro-namespace-reserved-2.rs:50:10 | LL | #[derive(crate::my_macro)] | ^^^^^^^^^^^^^^^ = help: add as non-Derive macro `#[crate::my_macro]` error: cannot find macro `my_macro_attr` in this scope --> $DIR/macro-namespace-reserved-2.rs:28:5", "commid": "rust_pr_109638"}], "negative_passages": []} {"query_id": "q-en-rust-d02e9f3b3d49fa1766e4e7761723325f86377035290cd16126e07b1da835aea1", "query": "(The actual desired output should have colours and minuses on the removed derive, but to give the idea) Many users are not familiar with the terminology being used by the compiler error (i.e. the different flavours of procedural macro, derive vs attribute) and might be confused as to what they need to do in order to fix the error. Since the fix is straight-forward (i.e. remove the ), the compiler should show a code snippet that fixes the issue. No response No response $DIR/tool-attributes-misplaced-2.rs:1:10 | LL | #[derive(rustfmt::skip)] | ^^^^^^^^^^^^^ = help: add as non-Derive macro `#[rustfmt::skip]` error: expected macro, found tool attribute `rustfmt::skip` --> $DIR/tool-attributes-misplaced-2.rs:5:5", "commid": "rust_pr_109638"}], "negative_passages": []} {"query_id": "q-en-rust-d1ddbb515108ccc115e14e79e4b7b6846e9a8596f46e5d2dd60cc06985e4b983", "query": " $DIR/issue-109831.rs:4:24 | LL | struct A; | --------- similarly named struct `A` defined here ... 
LL | fn f(b1: B, b2: B, a2: C) {} | ^ | help: a struct with a similar name exists | LL | fn f(b1: B, b2: B, a2: A) {} | ~ help: you might be missing a type parameter | LL | fn f(b1: B, b2: B, a2: C) {} | +++ error[E0425]: cannot find value `C` in this scope --> $DIR/issue-109831.rs:7:16 | LL | struct A; | --------- similarly named unit struct `A` defined here ... LL | f(A, A, B, C); | ^ help: a unit struct with a similar name exists: `A` error[E0061]: this function takes 3 arguments but 4 arguments were supplied --> $DIR/issue-109831.rs:7:5 | LL | f(A, A, B, C); | ^ - - - unexpected argument | | | | | expected `B`, found `A` | expected `B`, found `A` | note: function defined here --> $DIR/issue-109831.rs:4:4 | LL | fn f(b1: B, b2: B, a2: C) {} | ^ ----- ----- ----- help: remove the extra argument | LL - f(A, A, B, C); LL + f(/* B */, /* B */, B); | error: aborting due to 3 previous errors Some errors have detailed explanations: E0061, E0412, E0425. For more information about an error, try `rustc --explain E0061`. ", "commid": "rust_pr_109850"}], "negative_passages": []} {"query_id": "q-en-rust-84fcbfe2dfb6d5a723b82e44468381e2a3da70b15ab70d0a99a9e4d0ba3092c5", "query": "If you combine LTO, anything that uses alloc and debug compilation mode, the compiler fails. The minimum example to reproduce is this : With this : To reproduce, run (or just use ): You should see this error: The above error does not reproduce for other compiler versions or options, all of the below work: I found reporting something similar, but for them is throwing the error, so I opened a new issue instead. $DIR/drop-tracking-error-body.rs:6:11 | LL | match true {} | ^^^^ | = note: the matched value is of type `bool` help: ensure that all possible cases are being handled by adding a match arm with a wildcard pattern as shown | LL ~ match true { LL + _ => todo!(), LL ~ } | error: aborting due to previous error For more information about this error, try `rustc --explain E0004`. 
", "commid": "rust_pr_111533"}], "negative_passages": []} {"query_id": "q-en-rust-e51565f320ad311d045da620ea05c215884bcb634149e53228e3cf3e30d0adab", "query": "Apologies for the not-very-readable reproduction. It was generated by a fuzzer and this is the best I can minimise it to. The program has no UB under , and Miri prints 22, this is also the result with Very strangely, if you drop to 0 or 1, the result becomes 11 which is wrong. Reproducible on both Apple Silicon macOS and x86_64 Linux cc $DIR/issue-112547.rs:8:4 | LL | }> V: IntoIterator | ^ not found in this scope | help: you might be missing a type parameter | LL | pub fn bar() | +++ warning: the feature `non_lifetime_binders` is incomplete and may not be safe to use and/or cause compiler crashes --> $DIR/issue-112547.rs:1:12 | LL | #![feature(non_lifetime_binders)] | ^^^^^^^^^^^^^^^^^^^^ | = note: see issue #108185 for more information = note: `#[warn(incomplete_features)]` on by default error: aborting due to previous error; 1 warning emitted For more information about this error, try `rustc --explain E0412`. ", "commid": "rust_pr_113396"}], "negative_passages": []} {"query_id": "q-en-rust-f490abedac653d0fa1c9912b12de56dea9ce8afdf417da43388a99b7e40152b7", "query": " $DIR/ice-112822-expected-type-for-param.rs:4:5 | LL | const move || { | ^^^^^ | = note: see issue #106003 for more information = help: add `#![feature(const_closures)]` to the crate attributes to enable error: ~const can only be applied to `#[const_trait]` traits --> $DIR/ice-112822-expected-type-for-param.rs:3:32 | LL | const fn test() -> impl ~const Fn() { | ^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. ", "commid": "rust_pr_119255"}], "negative_passages": []} {"query_id": "q-en-rust-208d8c8595059e456b4a0daa106e55c058a03a1df9faea9482242cb8e7f51118", "query": "is left set to a dangling pointer if unwinds. This is unsound because a later call to will use that dangling pointer.
There is a similar issue with and unwinding. since you're on the Stable MIR cc list: cc\nif you can add it as a dependency, using would probably be better than rolling your own version.\nYea, that makes sense. I think we should look into using it for the module, too, which is how we get a within the compiler itself", "positive_passages": [{"docid": "doc-en-rust-caa7d289f72a336861164ee21a83029df0a07dd3f5ef8f71c02e9be83ddb8cdb", "text": "\"rustc_hir\", \"rustc_middle\", \"rustc_span\", \"scoped-tls\", \"tracing\", ]", "commid": "rust_pr_113251"}], "negative_passages": []} {"query_id": "q-en-rust-208d8c8595059e456b4a0daa106e55c058a03a1df9faea9482242cb8e7f51118", "query": "is left set to a dangling pointer if unwinds. This is unsound because a later call to will use that dangling pointer. There is a similar issue with and unwinding. since you're on the Stable MIR cc list: cc\nif you can add it as a dependency, using would probably be better than rolling your own version.\nYea, that makes sense. I think we should look into using it for the module, too, which is how we get a within the compiler itself", "positive_passages": [{"docid": "doc-en-rust-90bdb397e5c6d1fac402541aacf9b144ba116531a890bcc9baed4098d9b9d315", "text": "rustc_middle = { path = \"../rustc_middle\", optional = true } rustc_span = { path = \"../rustc_span\", optional = true } tracing = \"0.1\" scoped-tls = \"1.0\" [features] default = [", "commid": "rust_pr_113251"}], "negative_passages": []} {"query_id": "q-en-rust-208d8c8595059e456b4a0daa106e55c058a03a1df9faea9482242cb8e7f51118", "query": "is left set to a dangling pointer if unwinds. This is unsound because a later call to will use that dangling pointer. There is a similar issue with and unwinding. since you're on the Stable MIR cc list: cc\nif you can add it as a dependency, using would probably be better than rolling your own version.\nYea, that makes sense. 
I think we should look into using it for the module, too, which is how we get a within the compiler itself", "positive_passages": [{"docid": "doc-en-rust-ea2dd1d7e340e0f45157c785cfc94ee342421a693b421d65a01eb6bcad7b2b4b", "text": "// Make this module private for now since external users should not call these directly. mod rustc_smir; #[macro_use] extern crate scoped_tls; ", "commid": "rust_pr_113251"}], "negative_passages": []} {"query_id": "q-en-rust-208d8c8595059e456b4a0daa106e55c058a03a1df9faea9482242cb8e7f51118", "query": "is left set to a dangling pointer if unwinds. This is unsound because a later call to will use that dangling pointer. There is a similar issue with and unwinding. since you're on the Stable MIR cc list: cc\nif you can add it as a dependency, using would probably be better than rolling your own version.\nYea, that makes sense. I think we should look into using it for the module, too, which is how we get a within the compiler itself", "positive_passages": [{"docid": "doc-en-rust-601b28195caaab5ccade86eac8c3b9407320098e3bd0cf5c039de7c262243fdd", "text": "fn rustc_tables(&mut self, f: &mut dyn FnMut(&mut Tables<'_>)); } thread_local! { /// A thread local variable that stores a pointer to the tables mapping between TyCtxt /// datastructures and stable MIR datastructures. static TLV: Cell<*mut ()> = const { Cell::new(std::ptr::null_mut()) }; } // A thread local variable that stores a pointer to the tables mapping between TyCtxt // datastructures and stable MIR datastructures scoped_thread_local! 
(static TLV: Cell<*mut ()>); pub fn run(mut context: impl Context, f: impl FnOnce()) { assert!(TLV.get().is_null()); assert!(!TLV.is_set()); fn g<'a>(mut context: &mut (dyn Context + 'a), f: impl FnOnce()) { TLV.set(&mut context as *mut &mut _ as _); f(); TLV.replace(std::ptr::null_mut()); let ptr: *mut () = &mut context as *mut &mut _ as _; TLV.set(&Cell::new(ptr), || { f(); }); } g(&mut context, f); }", "commid": "rust_pr_113251"}], "negative_passages": []} {"query_id": "q-en-rust-208d8c8595059e456b4a0daa106e55c058a03a1df9faea9482242cb8e7f51118", "query": "is left set to a dangling pointer if unwinds. This is unsound because a later call to will use that dangling pointer. There is a similar issue with and unwinding. since you're on the Stable MIR cc list: cc\nif you can add it as a dependency, using would probably be better than rolling your own version.\nYea, that makes sense. I think we should look into using it for the module, too, which is how we get a within the compiler itself", "positive_passages": [{"docid": "doc-en-rust-78d6916150aec0d9f9610d8db89f016724133b6d92a9556a03457e8dd54a16bf", "text": "/// Loads the current context and calls a function with it. /// Do not nest these, as that will ICE. 
pub(crate) fn with(f: impl FnOnce(&mut dyn Context) -> R) -> R { let ptr = TLV.replace(std::ptr::null_mut()) as *mut &mut dyn Context; assert!(!ptr.is_null()); let ret = f(unsafe { *ptr }); TLV.set(ptr as _); ret assert!(TLV.is_set()); TLV.with(|tlv| { let ptr = tlv.get(); assert!(!ptr.is_null()); f(unsafe { *(ptr as *mut &mut dyn Context) }) }) }", "commid": "rust_pr_113251"}], "negative_passages": []} {"query_id": "q-en-rust-155cd2de5458ece9b7a21da1108f2eed3e9d98dd1e7af41689210d209eb656d6", "query": " $DIR/issue-116186.rs:4:28 | LL | fn something(path: [usize; N]) -> impl Clone { | ^ not found in this scope | help: you might be missing a const parameter | LL | fn something(path: [usize; N]) -> impl Clone { | +++++++++++++++++++++ error[E0730]: cannot pattern-match on an array without a fixed length --> $DIR/issue-116186.rs:7:9 | LL | [] => 0, | ^^ error: aborting due to 2 previous errors Some errors have detailed explanations: E0425, E0730. For more information about an error, try `rustc --explain E0425`. ", "commid": "rust_pr_117046"}], "negative_passages": []} {"query_id": "q-en-rust-f97049d8193e1ee5129dab5f70cea16e0cd45605ad741e1470c33a45e39dd126", "query": " $DIR/recursive-fn-tait.rs:14:5 | LL | move |x: usize| m(n(x)) | ^^^^^^^^^^^^^^^^^^^^^^^ expected `{closure@$DIR/recursive-fn-tait.rs:7:5: 7:16}`, got `{closure@$DIR/recursive-fn-tait.rs:14:5: 14:20}` | note: previous use here --> $DIR/recursive-fn-tait.rs:7:5 | LL | |_: usize |loop {} | ^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_116801"}], "negative_passages": []} {"query_id": "q-en-rust-00ca41b0cb2eabc1bd6392bbdf19aa6df4d638312eeb6bf958140446f865d5ad", "query": "I tried this code: and I expected to see this happen: All tests passing. 
No doc tests running when using Instead, this happened: $DIR/fresh-lifetime-from-bare-trait-obj-114664.rs:5:24 | LL | fn ice() -> impl AsRef { | ^^^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see = note: `#[warn(bare_trait_objects)]` on by default help: use `dyn` | LL | fn ice() -> impl AsRef { | +++ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/fresh-lifetime-from-bare-trait-obj-114664.rs:5:24 | LL | fn ice() -> impl AsRef { | ^^^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see help: use `dyn` | LL | fn ice() -> impl AsRef { | +++ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/fresh-lifetime-from-bare-trait-obj-114664.rs:5:24 | LL | fn ice() -> impl AsRef { | ^^^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see help: use `dyn` | LL | fn ice() -> impl AsRef { | +++ warning: 3 warnings emitted ", "commid": "rust_pr_114667"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": "Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. 
A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-256fa959aa7ee708685bcd7f40498fda9ba80898b43d090bfaa104c6834a3925", "text": "return; } }; let (sig, map) = tcx.instantiate_bound_regions(sig, |br| { let (unnormalized_sig, map) = tcx.instantiate_bound_regions(sig, |br| { use crate::renumber::RegionCtxt; let region_ctxt_fn = || {", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": "Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. 
There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-25bd64db2ce2e460b55c407dc974551fbef355198a903eea0434e5a3feb88f53", "text": "region_ctxt_fn, ) }); debug!(?sig); debug!(?unnormalized_sig); // IMPORTANT: We have to prove well formed for the function signature before // we normalize it, as otherwise types like `<&'a &'b () as Trait>::Assoc` // get normalized away, causing us to ignore the `'b: 'a` bound used by the function.", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": "Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. 
not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. 
A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-597f5444e94753f6cc13498508357cb412e65b4911259ceb2639fcd7d7f1a1d2", "text": "// // See #91068 for an example. self.prove_predicates( sig.inputs_and_output.iter().map(|ty| { unnormalized_sig.inputs_and_output.iter().map(|ty| { ty::Binder::dummy(ty::PredicateKind::Clause(ty::ClauseKind::WellFormed( ty.into(), )))", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": "Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. 
There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-9a3a5d54df7a4f0f0b5a9bf9e8b39c54120ff9c10768cb4013a0f337e8e93026", "text": "term_location.to_locations(), ConstraintCategory::Boring, ); let sig = self.normalize(sig, term_location); let sig = self.normalize(unnormalized_sig, term_location); // HACK(#114936): `WF(sig)` does not imply `WF(normalized(sig))` // with built-in `Fn` implementations, since the impl may not be // well-formed itself. 
if sig != unnormalized_sig { self.prove_predicates( sig.inputs_and_output.iter().map(|ty| { ty::Binder::dummy(ty::PredicateKind::Clause( ty::ClauseKind::WellFormed(ty.into()), )) }), term_location.to_locations(), ConstraintCategory::Boring, ); } self.check_call_dest(body, term, &sig, *destination, *target, term_location); // The ordinary liveness rules will ensure that all", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": "Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. 
There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-b27bc0c8c0c31a771e387a12370bab75454de050ee36f37ad3a7c324ad51e1cb", "text": " // fn whoops( s: String, f: impl for<'s> FnOnce(&'s str) -> (&'static str, [&'static &'s (); 0]), ) -> &'static str { f(&s).0 //~^ ERROR `s` does not live long enough } // fn extend(input: &T) -> &'static T { struct Bounded<'a, 'b: 'static, T>(&'a T, [&'b (); 0]); let n: Box Bounded<'static, '_, T>> = Box::new(|x| Bounded(x, [])); n(input).0 //~^ ERROR borrowed data escapes outside of function } // fn extend_mut<'a, T>(input: &'a mut T) -> &'static mut T { struct Bounded<'a, 'b: 'static, T>(&'a mut T, [&'b (); 0]); let mut n: Box Bounded<'static, '_, T>> = Box::new(|x| Bounded(x, [])); n(input).0 //~^ ERROR borrowed data escapes outside of function } fn main() {} ", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-451a3d5bb03baf213804b5882fa7a0623f3c98ed98a735eb087b60e7b230f92b", "query": 
"Bug originally found by , which I've then reduced. I tried this code: I expected to see this happen: Instead, this happened: code compiles successfully and onwards modify labels: +I-unsound +regression-from-stable-to-stable -regression-untriaged This does not occur if the implied-lifetime-bounds array is in closure parameter position; so it looks like a bug w.r.t. not properly checking the closure output type implied bounds?does not help (as of )\nBisected to nightly-2022-08-10: <- looks maybe interesting\nyou beat me by one minute :) cc for being possible cause of this unsoundness?\nWG-prioritization assigning priority (). label -I-prioritize +P-high +T-compiler -needs-triage\nOh, neat it did turn out to be a new soundness bug instead of a dupe of . Thanks for figuring out this was a regression,\nfor we have a terminator calling , i.e. a function which returns . This does not include the implied bounds. Checking that is WF requires proving which succeeds without checking the usable implied bounds of the trait-ref. edit: there's a bit more to this than I thought\nThis soundness bug is effectively only reproducible via the traits because the impls for it are builtin and so a little screwy. An attempt to write an explicit impl for the builtin impls would require an explicit bound that holds which is not present on the builtin impl. There is an assumption that all it takes for an assoc type to be wf is the where clauses on it and the impl hold, and that all arguments in the traitref are wf. The builtin impls violate this by not having the required implied bound on the impl so we end up with being able to be proven wf without proving the implied bounds. When calling we check that the fn sig of is wf which is . Before we would check the normalized signature is wf so the return type would be which would cause us to prove . Now that we check the unnormalized_ signature is wf we end up never seeing the normalized form with the which introduces the implied bound. 
It is generally an invariant of the type system that holding should imply also holds. The fact that can be well formed but is not, is concerning. In an ideal world proving the traits would require also proving all the implied bounds hold but right now that is not possible afaik. A simple fix (albeit hacky until we make implied bounds desugared to actual bounds) would be to make us also check not just .", "positive_passages": [{"docid": "doc-en-rust-845f15e032a32ccafbb42a11825b3ecb1205bc128c545b3fe71dc21dc9f0a60a", "text": " error[E0597]: `s` does not live long enough --> $DIR/check-normalized-sig-for-wf.rs:7:7 | LL | s: String, | - binding `s` declared here ... LL | f(&s).0 | --^^- | | | | | borrowed value does not live long enough | argument requires that `s` is borrowed for `'static` LL | LL | } | - `s` dropped here while still borrowed error[E0521]: borrowed data escapes outside of function --> $DIR/check-normalized-sig-for-wf.rs:15:5 | LL | fn extend(input: &T) -> &'static T { | ----- - let's call the lifetime of this reference `'1` | | | `input` is a reference that is only valid in the function body ... LL | n(input).0 | ^^^^^^^^ | | | `input` escapes the function body here | argument requires that `'1` must outlive `'static` error[E0521]: borrowed data escapes outside of function --> $DIR/check-normalized-sig-for-wf.rs:23:5 | LL | fn extend_mut<'a, T>(input: &'a mut T) -> &'static mut T { | -- ----- `input` is a reference that is only valid in the function body | | | lifetime `'a` defined here ... LL | n(input).0 | ^^^^^^^^ | | | `input` escapes the function body here | argument requires that `'a` must outlive `'static` error: aborting due to 3 previous errors Some errors have detailed explanations: E0521, E0597. For more information about an error, try `rustc --explain E0521`. 
", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-3bfeb8dcf4b50a8cebde732b1aab3bebff2d7442cfe11787c90d10bb10b91037", "query": " $DIR/escaping_bound_vars.rs:11:35 | LL | (): Test<{ 1 + (<() as Elide(&())>::call) }>, | -^ | | | lifetime defined here error: aborting due to previous error ", "commid": "rust_pr_116561"}], "negative_passages": []} {"query_id": "q-en-rust-7c36428dc9c127fa8fd850649f973efe53e19a13db6b40da72d3f5447abc7e08", "query": "() never finishes building the crate (0.11.3) when building with optimizations. Regression is exactly in the LLVM 17 upgrade: Subproject commit 42263494d29febc26d3c1ebdaa7b63677573ec47 Subproject commit d404cba4e39df595710869988ded7cbe1104b52f ", "commid": "rust_pr_116227"}], "negative_passages": []} {"query_id": "q-en-rust-2ef192352c3becb16fe1385e30228a297c150b1b4840d1330e4c4610850c3607", "query": "The documentation of the macro currently does not mention at all that it also accepts optional format arguments, for example: Other macros like at least have the following sentence under \"Panics\": So maybe it would be good to do the following? [ ] Update the \"Panics\" section to be similar to [ ] Extend / adjust example to also show usage of custom message $DIR/location-insensitive-scopes-issue-116657.rs:18:1 | LL | fn call(x: Self) -> Self::Output; | --------------------------------- `call` from trait ... 
LL | impl Callable for T { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ missing `call` in implementation error: unconstrained opaque type --> $DIR/location-insensitive-scopes-issue-116657.rs:22:19 | LL | type Output = impl PlusOne; | ^^^^^^^^^^^^ | = note: `Output` must be used in combination with a concrete type within the same impl error[E0700]: hidden type for `impl PlusOne` captures lifetime that does not appear in bounds --> $DIR/location-insensitive-scopes-issue-116657.rs:28:5 | LL | fn test<'a>(y: &'a mut i32) -> impl PlusOne { | -- ------------ opaque type defined here | | | hidden type `<&'a mut i32 as Callable>::Output` captures the lifetime `'a` as defined here LL | <&mut i32 as Callable>::call(y) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | help: to declare that `impl PlusOne` captures `'a`, you can add an explicit `'a` lifetime bound | LL | fn test<'a>(y: &'a mut i32) -> impl PlusOne + 'a { | ++++ error: aborting due to 3 previous errors Some errors have detailed explanations: E0046, E0700. For more information about an error, try `rustc --explain E0046`. ", "commid": "rust_pr_116960"}], "negative_passages": []} {"query_id": "q-en-rust-f0e95a387e8a156dd08e418c5a37474b0aa64ccd01d6250272ee9a3b96bdbb2c", "query": "File: reduced: original: Version information Command: Program output:\ncc\nSome notes dump: the original test fuzzed into this MCVE is . NLLs and do compute the same scopes on that original difference in scopes here is on a somewhat ambiguous case in the current implementation: the loan ultimately flows into via member constraints, not a regular outlives constraint. NLL still computes a scope for it, doesn't since it considers the loan to escape the we ignored the member constraint here and computed the scope anyways, it would result in the same results as NLLs and not trigger the assert. A handful of other tests would then fail of SCC member constraints difference between this test and these other 4 seems to me to be about applied member constraints. 
That is, in this test the member constraint is likely not applied, so maybe the loan should only be considered to escape via these. We had discussed this very subject before, but thought the distinction didn't matter for the purpose of computing scopes -- maybe we were wrong in these edge cases. That should be a correct fix and should work on these 5 tests. So I'll look into that next.", "positive_passages": [{"docid": "doc-en-rust-308cd5fea4cbad523a75d0f51a447a62e7e7ff03b387aad6adfe6b2926ce1bac", "text": " // This is a non-regression test for issue #116657, where NLL and `-Zpolonius=next` computed // different loan scopes when a member constraint was not ultimately applied. // revisions: nll polonius // [polonius] compile-flags: -Zpolonius=next #![feature(impl_trait_in_assoc_type)] trait Callable { type Output; fn call(x: Self) -> Self::Output; } trait PlusOne {} impl<'a> PlusOne for &'a mut i32 {} impl Callable for T { //[nll]~^ ERROR not all trait items implemented //[polonius]~^^ ERROR not all trait items implemented type Output = impl PlusOne; //[nll]~^ ERROR unconstrained opaque type //[polonius]~^^ ERROR unconstrained opaque type } fn test<'a>(y: &'a mut i32) -> impl PlusOne { <&mut i32 as Callable>::call(y) //[nll]~^ ERROR hidden type for `impl PlusOne` captures lifetime //[polonius]~^^ ERROR hidden type for `impl PlusOne` captures lifetime } fn main() {} ", "commid": "rust_pr_116960"}], "negative_passages": []} {"query_id": "q-en-rust-1bd493e9377663935672203a53ef0aa23698c7d862c10706d21aad60ec37c99d", "query": "File: auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/issue-116781.rs:4:16 | LL | field: fn(($),), | ^ expected pattern error: expected pattern, found `$` --> $DIR/issue-116781.rs:4:16 | LL | field: fn(($),), | ^ expected pattern | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error: aborting due to 2 previous errors ", "commid": "rust_pr_116889"}], "negative_passages": []} {"query_id": 
"q-en-rust-ec56a9ddfa3a7e44a11c759d1032a19c3a6a8016fbecab0d22abcb58a9956912", "query": " $DIR/semi-in-let-chain.rs:7:23 | LL | && let () = (); | ^ expected `{` | note: you likely meant to continue parsing the let-chain starting here --> $DIR/semi-in-let-chain.rs:8:9 | LL | && let () = () | ^^^^^^ help: consider removing this semicolon to parse the `let` as part of the same chain | LL - && let () = (); LL + && let () = () | error: expected `{`, found `;` --> $DIR/semi-in-let-chain.rs:15:20 | LL | && () == (); | ^ expected `{` | note: the `if` expression is missing a block after this condition --> $DIR/semi-in-let-chain.rs:14:8 | LL | if let () = () | ________^ LL | | && () == (); | |___________________^ error: expected `{`, found `;` --> $DIR/semi-in-let-chain.rs:23:20 | LL | && () == (); | ^ expected `{` | note: you likely meant to continue parsing the let-chain starting here --> $DIR/semi-in-let-chain.rs:24:9 | LL | && let () = () | ^^^^^^ help: consider removing this semicolon to parse the `let` as part of the same chain | LL - && () == (); LL + && () == () | error: aborting due to 3 previous errors ", "commid": "rust_pr_117743"}], "negative_passages": []} {"query_id": "q-en-rust-ad5e83d0e43c3b5a5895734829182b7bc96b295caa87a24c827c078caec24548", "query": "An iterator returns None if it is empty, but Peekable's is_empty is defined like this: This needs to be changed to use is_none(). In fact, the code from test_peekable_is_empty() fails. It looks like this escaped detection because of a missing \"#[test]\"\nWhoops! Feel free to submit a pull request fixing it. :)", "positive_passages": [{"docid": "doc-en-rust-c462bf71c374ac00909cc74ea397060d40df06c6d092d3e24364238f4762ee54", "text": "/// Check whether peekable iterator is empty or not. 
#[inline] pub fn is_empty(&mut self) -> bool { self.peek().is_some() self.peek().is_none() } }", "commid": "rust_pr_11788"}], "negative_passages": []} {"query_id": "q-en-rust-ad5e83d0e43c3b5a5895734829182b7bc96b295caa87a24c827c078caec24548", "query": "An iterator returns None if it is empty, but Peekable's is_empty is defined like this: This needs to be changed to use is_none(). In fact, the code from test_peekable_is_empty() fails. It looks like this escaped detection because of a missing \"#[test]\"\nWhoops! Feel free to submit a pull request fixing it. :)", "positive_passages": [{"docid": "doc-en-rust-902682a2b36b189e483eac2f5d7be796a5f9c54d46e72d6fd357c7a7a842d3ee", "text": "assert_eq!(ys, [5, 4, 3, 2, 1]); } #[test] fn test_peekable_is_empty() { let a = [1]; let mut it = a.iter().peekable();", "commid": "rust_pr_11788"}], "negative_passages": []} {"query_id": "q-en-rust-3ea70e43c8af036ad414c23fdc3c12118f19a891d5ecd7fa3401cf30e9ed8d53", "query": "Hi there! I encountered a weird situation when writing a device driver using . Despite getting correct temperature readings in byte form from my thermocouple converter, all readings were incorrect after casting them to floats and applying some fundamental transformations. After much confusion, I tried using instead and found it worked perfectly! However, I knew the transformations should be correct either way: I tested this exact setup and formula using Linux and the crate. That led me to test those transformations on a few different machines. Please take a look at the results below to gain better insight. Arduino Device Other machines for all below MacBook Pro M1 on macOS and Asahi Linux () Linux () macOS () Ryzen desktop machine running Linux () Milk-V Duo () I can do more if you'd like! If there's any additional context I can give, or checking to be done, please let me know, and I'll happily look into it! Also, I did check the crate to see if it was working properly! 
I assume that since the and ufmt_float'd are the same, it must be an issue with itself. Please see and of the original test to get more info. Thank you for taking a look at my problem!\ntrying to figure out if this is a regression. I see that you tested on , does it also reproduce on other versions of the rust compiler? Did you try using a stable compiler (if usable for your project)?\nAVR doesn't support 64-bit floats in Rust (operations on them should end up being linker errors, though ): The fundamental issue is that AVR intrinsics (e.g. operations for i32 or f32) require a special calling convention that hasn't been yet exposed from LLVM, preventing Rust's compiler-builtins from chiming in with its own implementation. Some intrinsics (such as those for i32, u32, f32 etc.) get implemented by linking the resulting AVR binaries with (that's why working with AVR requires avr-gcc), but GCC doesn't provide implementations for f64. tl;dr f64 is not supported, but (probably) should throw a linker error instead of trying to link with compiler-builtin's implementation (because that implementation uses a different calling convention than the LLVM's codegen expects) Edit: GCC does seem to contain proper impls nowadays (, actually) - maybe LLVM gets the calling convention wrong? I'll take a look over the weekend.\nAlright, got it! -- fix at: (note that merging that merge request in compiler-builtins doesn't yet solve the issue here, because then I'll need to prepare another merge request bumping compiler-builtins inside rustc - all should be ready within a few days) As for the cause - since AVRs don't have a native support for floating-point operations, those must be implemented in software; usually implementations for things like that end up in a crate called compiler-builtins (linked above), which implements them in Rust using simple bitwise operations and whatnot. 
There's a difference, though, between the ABI as expected by LLVM and the code that's present inside compiler-builtins - for \"bigger\" types (like i64, i128, f32 etc.), AVR requires a special calling convention (which defines things like \"which register should contain which value\") that is not yet supported by compiler-builtins (and not so easy to expose there). So instead, we pull hand-written functions (\"intrinsics\") from GCC's standard library (which is why avr-gcc is required as the linker) - those functions use the correct registers -- but the linking order is that stuff from compiler-builtins gets linked before GCC's, that's why those invalid intrinsics have to be commented out in compiler-builtins (so that the avr-gcc can provide its own, correct definitions). This is exactly the same case as we used to have .\nWoah, thanks for the fix! Sorry for not updating this issue for a bit - my wired internet hasn't been working, so I haven't had access to my Linux machine to test this functionality. Thank you for taking the time to get it working (and explain it to me)!", "positive_passages": [{"docid": "doc-en-rust-bc5e3f154233518cf97f5ef95b9b7c9ed8173827a3421be664c961ddd84b8b3a", "text": "[[package]] name = \"compiler_builtins\" version = \"0.1.103\" version = \"0.1.104\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"a3b73c3443a5fd2438d7ba4853c64e4c8efc2404a9e28a9234cc2d5eebc6c242\" checksum = \"99c3f9035afc33f4358773239573f7d121099856753e1bbd2a6a5207098fc741\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_118645"}], "negative_passages": []} {"query_id": "q-en-rust-3ea70e43c8af036ad414c23fdc3c12118f19a891d5ecd7fa3401cf30e9ed8d53", "query": "Hi there! I encountered a weird situation when writing a device driver using . 
Despite getting correct temperature readings in byte form from my thermocouple converter, all readings were incorrect after casting them to floats and applying some fundamental transformations. After much confusion, I tried using instead and found it worked perfectly! However, I knew the transformations should be correct either way: I tested this exact setup and formula using Linux and the crate. That led me to test those transformations on a few different machines. Please take a look at the results below to gain better insight. Arduino Device Other machines for all below MacBook Pro M1 on macOS and Asahi Linux () Linux () macOS () Ryzen desktop machine running Linux () Milk-V Duo () I can do more if you'd like! If there's any additional context I can give, or checking to be done, please let me know, and I'll happily look into it! Also, I did check the crate to see if it was working properly! I assume that since the and ufmt_float'd are the same, it must be an issue with itself. Please see and of the original test to get more info. Thank you for taking a look at my problem!\ntrying to figure out if this is a regression. I see that you tested on , does it also reproduce on other versions of the rust compiler? Did you try using a stable compiler (if usable for your project)?\nAVR doesn't support 64-bit floats in Rust (operations on them should end up being linker errors, though ): The fundamental issue is that AVR intrinsics (e.g. operations for i32 or f32) require a special calling convention that hasn't been yet exposed from LLVM, preventing Rust's compiler-builtins from chiming in with its own implementation. Some intrinsics (such as those for i32, u32, f32 etc.) get implemented by linking the resulting AVR binaries with (that's why working with AVR requires avr-gcc), but GCC doesn't provide implementations for f64. 
tl;dr f64 is not supported, but (probably) should throw a linker error instead of trying to link with compiler-builtin's implementation (because that implementation uses a different calling convention than the LLVM's codegen expects) Edit: GCC does seem to contain proper impls nowadays (, actually) - maybe LLVM gets the calling convention wrong? I'll take a look over the weekend.\nAlright, got it! -- fix at: (note that merging that merge request in compiler-builtins doesn't yet solve the issue here, because then I'll need to prepare another merge request bumping compiler-builtins inside rustc - all should be ready within a few days) As for the cause - since AVRs don't have a native support for floating-point operations, those must be implemented in software; usually implementations for things like that end up in a crate called compiler-builtins (linked above), which implements them in Rust using simple bitwise operations and whatnot. There's a difference, though, between the ABI as expected by LLVM and the code that's present inside compiler-builtins - for \"bigger\" types (like i64, i128, f32 etc.), AVR requires a special calling convention (which defines things like \"which register should contain which value\") that is not yet supported by compiler-builtins (and not so easy to expose there). So instead, we pull hand-written functions (\"intrinsics\") from GCC's standard library (which is why avr-gcc is required as the linker) - those functions use the correct registers -- but the linking order is that stuff from compiler-builtins gets linked before GCC's, that's why those invalid intrinsics have to be commented out in compiler-builtins (so that the avr-gcc can provide its own, correct definitions). This is exactly the same case as we used to have .\nWoah, thanks for the fix! Sorry for not updating this issue for a bit - my wired internet hasn't been working, so I haven't had access to my Linux machine to test this functionality. 
Thank you for taking the time to get it working (and explain it to me)!", "positive_passages": [{"docid": "doc-en-rust-f0ce4fbc6c2722a46eb43518590757541d4634bb50374f43d1f96690963779f0", "text": "panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\", public = true } libc = { version = \"0.2.150\", default-features = false, features = ['rustc-dep-of-std'], public = true } compiler_builtins = { version = \"0.1.103\" } compiler_builtins = { version = \"0.1.104\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" } hashbrown = { version = \"0.14\", default-features = false, features = ['rustc-dep-of-std'] }", "commid": "rust_pr_118645"}], "negative_passages": []} {"query_id": "q-en-rust-3ea70e43c8af036ad414c23fdc3c12118f19a891d5ecd7fa3401cf30e9ed8d53", "query": "Hi there! I encountered a weird situation when writing a device driver using . Despite getting correct temperature readings in byte form from my thermocouple converter, all readings were incorrect after casting them to floats and applying some fundamental transformations. After much confusion, I tried using instead and found it worked perfectly! However, I knew the transformations should be correct either way: I tested this exact setup and formula using Linux and the crate. That led me to test those transformations on a few different machines. Please take a look at the results below to gain better insight. Arduino Device Other machines for all below MacBook Pro M1 on macOS and Asahi Linux () Linux () macOS () Ryzen desktop machine running Linux () Milk-V Duo () I can do more if you'd like! If there's any additional context I can give, or checking to be done, please let me know, and I'll happily look into it! Also, I did check the crate to see if it was working properly! I assume that since the and ufmt_float'd are the same, it must be an issue with itself. Please see and of the original test to get more info. 
Thank you for taking a look at my problem!\ntrying to figure out if this is a regression. I see that you tested on , does it also reproduce on other versions of the rust compiler? Did you try using a stable compiler (if usable for your project)?\nAVR doesn't support 64-bit floats in Rust (operations on them should end up being linker errors, though ): The fundamental issue is that AVR intrinsics (e.g. operations for i32 or f32) require a special calling convention that hasn't been yet exposed from LLVM, preventing Rust's compiler-builtins from chiming in with its own implementation. Some intrinsics (such as those for i32, u32, f32 etc.) get implemented by linking the resulting AVR binaries with (that's why working with AVR requires avr-gcc), but GCC doesn't provide implementations for f64. tl;dr f64 is not supported, but (probably) should throw a linker error instead of trying to link with compiler-builtin's implementation (because that implementation uses a different calling convention than the LLVM's codegen expects) Edit: GCC does seem to contain proper impls nowadays (, actually) - maybe LLVM gets the calling convention wrong? I'll take a look over the weekend.\nAlright, got it! -- fix at: (note that merging that merge request in compiler-builtins doesn't yet solve the issue here, because then I'll need to prepare another merge request bumping compiler-builtins inside rustc - all should be ready within a few days) As for the cause - since AVRs don't have a native support for floating-point operations, those must be implemented in software; usually implementations for things like that end up in a crate called compiler-builtins (linked above), which implements them in Rust using simple bitwise operations and whatnot. 
There's a difference, though, between the ABI as expected by LLVM and the code that's present inside compiler-builtins - for \"bigger\" types (like i64, i128, f32 etc.), AVR requires a special calling convention (which defines things like \"which register should contain which value\") that is not yet supported by compiler-builtins (and not so easy to expose there). So instead, we pull hand-written functions (\"intrinsics\") from GCC's standard library (which is why avr-gcc is required as the linker) - those functions use the correct registers -- but the linking order is that stuff from compiler-builtins gets linked before GCC's, that's why those invalid intrinsics have to be commented out in compiler-builtins (so that the avr-gcc can provide its own, correct definitions). This is exactly the same case as we used to have .\nWoah, thanks for the fix! Sorry for not updating this issue for a bit - my wired internet hasn't been working, so I haven't had access to my Linux machine to test this functionality. 
Thank you for taking the time to get it working (and explain it to me)!", "positive_passages": [{"docid": "doc-en-rust-48a7309f63f33832ab46e397e9c8e2bf056542c359004058db2d193f1bce1691", "text": "wasi = { version = \"0.11.0\", features = ['rustc-dep-of-std'], default-features = false } [target.'cfg(target_os = \"uefi\")'.dependencies] r-efi = { version = \"4.2.0\", features = ['rustc-dep-of-std']} r-efi-alloc = { version = \"1.0.0\", features = ['rustc-dep-of-std']} r-efi = { version = \"4.2.0\", features = ['rustc-dep-of-std'] } r-efi-alloc = { version = \"1.0.0\", features = ['rustc-dep-of-std'] } [features] backtrace = [", "commid": "rust_pr_118645"}], "negative_passages": []} {"query_id": "q-en-rust-48108cff7ac5a8bea68b96482e292c8a1540becf8b4e9dcbd791c196ef6d2c8f", "query": " $DIR/bad-let-else-statement.rs:9:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = ({ LL | 1 LL ~ }) else { | error: `for...else` loops are not supported --> $DIR/bad-let-else-statement.rs:18:7 | LL | let foo = for i in 1..2 { | --- `else` is attached to this loop LL | break; LL | } else { | _______^ LL | | LL | | return; LL | | }; | |_____^ | = note: consider moving this `else` clause to a separate `if` statement and use a `bool` variable to control if it should run error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:29:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = (if true { LL | 1 LL | } else { LL | 0 LL ~ }) else { | error: `loop...else` loops are not supported --> $DIR/bad-let-else-statement.rs:38:7 | LL | let foo = loop { | ---- `else` is attached to this loop LL | break; LL | } else { | _______^ LL | | LL | | return; LL | | }; | |_____^ | = note: consider moving this `else` clause to a separate `if` statement and use a `bool` variable to control if it should run error: right curly brace `}` before `else` in a `let...else` statement not allowed --> 
$DIR/bad-let-else-statement.rs:48:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = (match true { LL | true => 1, LL | false => 0 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:58:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = (X { LL | a: 1 LL ~ }) else { | error: `while...else` loops are not supported --> $DIR/bad-let-else-statement.rs:67:7 | LL | let foo = while false { | ----- `else` is attached to this loop LL | break; LL | } else { | _______^ LL | | LL | | return; LL | | }; | |_____^ | = note: consider moving this `else` clause to a separate `if` statement and use a `bool` variable to control if it should run error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:76:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = (const { LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:85:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = &({ LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:95:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = bar = ({ LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:104:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = 1 + ({ LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:113:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = 1..({ LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a 
`let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:122:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = return ({ LL | () LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:131:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = -({ LL | 1 LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:140:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = do yeet ({ LL | () LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:149:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = become ({ LL | () LL ~ }) else { | error: right curly brace `}` before `else` in a `let...else` statement not allowed --> $DIR/bad-let-else-statement.rs:158:5 | LL | } else { | ^ | help: wrap the expression in parentheses | LL ~ let foo = |x: i32| ({ LL | x LL ~ }) else { | error: aborting due to 17 previous errors ", "commid": "rust_pr_118880"}], "negative_passages": []} {"query_id": "q-en-rust-57b341974f5339a3472388e2cccf34f83ae10943d38242f86046712940c925b4", "query": "and currently store pointers as , which is not ideal: They should really work with pointers and strict provenance APIs like , , etc. See a PR fixing a similar issue for some context: (note: in that one I ended up rewriting the whole thing, I think this issue requires far less changes). 
$DIR/storage-live-dead-var.rs:LL:CC | LL | val = 42; | ^^^^^^^^ accessing a dead local variable | = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information = note: BACKTRACE: = note: inside `main` at $DIR/storage-live-dead-var.rs:LL:CC note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace error: aborting due to 1 previous error ", "commid": "rust_pr_126154"}], "negative_passages": []} {"query_id": "q-en-rust-5e590cb4f2f085f106567f7c40bd9ef193db45fdd18a9b603338f2461d885111", "query": "The inliner strategy for dealing with storage statements is simple. If a callee local already has some storage statements, they are preserved as is when integrating the callee into the caller. There are no new storage statements for such locals. Turns out this approach is unsound due to the peculiar semantics of MIR. It is well defined to return from a function while there are still some live locals. At the same time, it is undefined behaviour to execute StorageLive for an already live local. Effectively, the inliner is obliged to end the storage for locals that are still live when the callee returns, which it doesn't do at the moment. Arguably this is more of a bug in MIR semantics than one in the inliner\nhey would you like to first T-opsem discuss this? Maybe have this topic on a meeting and figure out a plan to fix this?\nOh, interesting catch. Cc Maybe we should just allow StorageLive on an already live local (and declare it to first implicitly free the local, and then make it live again)...", "positive_passages": [{"docid": "doc-en-rust-5cff5c04f8d7e1830b40b78254a5fe59b3de353ed4492ecb03dabd3562cf3f4a", "text": " #![feature(core_intrinsics, custom_mir)] use std::intrinsics::mir::*; #[custom_mir(dialect = \"runtime\")] fn main() { mir!
{ let val: i32; let _val2: i32; { StorageLive(val); val = 42; StorageLive(val); // reset val to `uninit` _val2 = val; //~ERROR: uninitialized Return() } } } ", "commid": "rust_pr_126154"}], "negative_passages": []} {"query_id": "q-en-rust-5e590cb4f2f085f106567f7c40bd9ef193db45fdd18a9b603338f2461d885111", "query": "The inliner strategy for dealing with storage statements is simple. If a callee local already has some storage statements, they are preserved as is when integrating the callee into the caller. There are no new storage statements for such locals. Turns out this approach is unsound due to the peculiar semantics of MIR. It is well defined to return from a function while there are still some live locals. At the same time, it is undefined behaviour to execute StorageLive for an already live local. Effectively, the inliner is obliged to end the storage for locals that are still live when the callee returns, which it doesn't do at the moment. Arguably this is more of a bug in MIR semantics than one in the inliner\nhey would you like to first T-opsem discuss this? Maybe have this topic on a meeting and figure out a plan to fix this?\nOh, interesting catch. 
Cc Maybe we should just allow StorageLive on an already live local (and declare it to first implicitly free the local, and then make it live again)...", "positive_passages": [{"docid": "doc-en-rust-66951e5725a3cc789c54b71ffc16306e75648566a96e666efae5f89aeaa808fa", "text": " error: Undefined Behavior: constructing invalid value: encountered uninitialized memory, but expected an integer --> $DIR/storage-live-resets-var.rs:LL:CC | LL | _val2 = val; | ^^^^^^^^^^^ constructing invalid value: encountered uninitialized memory, but expected an integer | = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information = note: BACKTRACE: = note: inside `main` at $DIR/storage-live-resets-var.rs:LL:CC note: some details are omitted, run with `MIRIFLAGS=-Zmiri-backtrace=full` for a verbose backtrace error: aborting due to 1 previous error ", "commid": "rust_pr_126154"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . 
Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-56af6338b9be8e520f3afe0791424870f0db744c4b2a775991d8dc66d21db7a1", "text": "], Applicability::MachineApplicable, ); } else { // FIXME: we may suggest array::repeat instead err.help(\"consider using `core::array::from_fn` to initialize the array\"); err.help(\"see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information\"); } if self.tcx.sess.is_nightly_build()", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-e2410335bbfb023d5582d97a66885ac8750e75ae5d99b721741173417919b495", "text": "| ^^^^^^^^^^^^^^^^^^ the trait `Copy` is not implemented for `Header<'_>` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information help: consider annotating `Header<'_>` with `#[derive(Copy)]` | LL + #[derive(Copy)]", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-5bcec63070916e7bef220ef0383c9b6acdfd6cf89bc288930a4a1402688f1c0e", "text": "| ^^^^^^^^^^^^^^^^^^^ the trait `Copy` is not implemented for `Header<'_>` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information help: consider annotating `Header<'_>` with `#[derive(Copy)]` | LL + #[derive(Copy)]", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-8b5193701bf4f062116d2842a6fb8aca2875a662cd1ec72df4c9b0747a5bf3c6", "text": "| ^ the trait `Copy` is not implemented for `T` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information help: consider restricting type parameter `T` | LL | fn g(x: T) -> [T; N] {", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-15fe4e7ff26dfe7fa9d5cf04f89af2113f9671afae288ce0db09eb9298970419", "text": "| = note: required for `Option` to implement `Copy` = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information help: consider annotating `Bar` with `#[derive(Copy)]` | LL + #[derive(Copy)]", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-af970157c447ffcdb8fd16c3576d6392474be24eaab77bdc50f8eed975d37bc0", "text": "| ^ the trait `Copy` is not implemented for `Foo` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information help: consider annotating `Foo` with `#[derive(Copy)]` | LL + #[derive(Copy)]", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-44eae88b5136554fd5a69153a4580f91b4f47aed105b76159aa4b3003324c732", "text": " fn foo() -> String { String::new() } fn main() { let string_arr = [foo(); 64]; //~ ERROR the trait bound `String: Copy` is not satisfied } ", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-8ff14fa7b1b4f5f7d0740c7353d05ba532b7e5b439b4ae2a3fed27d1e2103814", "query": "For this snippet (): rustc gives: Since , for const-able values, rustc suggests the creation of a constant binding. Here this is not possible because is not const fn. However, we can suggest instead.\nPersonally, I prefer over a separate binding. Even in the case, the generated code will call the initialization function many times. Maybe in the case where bindings are possible, we can suggest both? Where they are not possible, we don't suggest anything right now, so at the least we should suggest there. what do you think?\nOnce is stable we can suggest that :3. But until then sounds reasonable.\nAgreed, good point about . It seems to have a stabilization PR open with an FCP, which is blocked: . Maybe there might be progress on it this month: . Even if we stabilize inline const, it might make sense to wait with suggesting it for a while so that we don't make code depend on the latest MSRV immediately. 
So therefore, I think that it might still make sense to implement this in the meantime.\nI think this should (in the future) suggest instead, since the function would only be called once if were , whereas calls it for every element.", "positive_passages": [{"docid": "doc-en-rust-271f0f84ca77e0e0c73c67ba5fbb64deb74c3620abc8d89864e194b5baf900cf", "text": " error[E0277]: the trait bound `String: Copy` is not satisfied --> $DIR/issue-119530-sugg-from-fn.rs:4:23 | LL | let string_arr = [foo(); 64]; | ^^^^^ the trait `Copy` is not implemented for `String` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider using `core::array::from_fn` to initialize the array = help: see https://doc.rust-lang.org/stable/std/array/fn.from_fn.html# for more information error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_119805"}], "negative_passages": []} {"query_id": "q-en-rust-b25c02e917baabfcb71e7bd0d2a05bb27690aee0a64d8e1cbadc2cd4b3a41432", "query": "The compiler struggles to optimize the for position on Iterators. 
Simplifying the implementation has increased the efficiency in various scenarios OMM: Old position(): #[inline] fn check bool, { #[inline] fn check( mut predicate: impl FnMut(T) -> bool, ) -> impl FnMut(usize, T) -> ControlFlow { fn check<'a, T>( mut predicate: impl FnMut(T) -> bool + 'a, acc: &'a mut usize, ) -> impl FnMut((), T) -> ControlFlow + 'a { #[rustc_inherit_overflow_checks] move |i, x| { if predicate(x) { ControlFlow::Break(i) } else { ControlFlow::Continue(i + 1) } move |_, x| { if predicate(x) { ControlFlow::Break(*acc) } else { *acc += 1; ControlFlow::Continue(()) } } } self.try_fold(0, check(predicate)).break_value() let mut acc = 0; self.try_fold((), check(predicate, &mut acc)).break_value() } /// Searches for an element in an iterator from the right, returning its", "commid": "rust_pr_119599"}], "negative_passages": []} {"query_id": "q-en-rust-cedbd099e1c44f50a85ea61cd27431b7a9e5cba8d117b32d1f883865bb9e0745", "query": "run Output: Miri: The underlying issue was found when reviewing the implementation of . The code is a reproduction to trigger the unsoundness. The implementation in question features: Where is turned into and dropped. Yes, that\u2019s , not . The code for seems to be doing the same thing. For comparison: The code for (and the same thing for ) seems to properly use a \u2026 no actually, that\u2019s a with zero lengths\u2026 notably the allocator is correct though. So the buggy code for should just switch to create a presumably. 
label I-unsound requires-nightly T-libs", "positive_passages": [{"docid": "doc-en-rust-c084cb288ab91115219674ed16bed5966fa641b475079dbe9b41431759ef365c", "text": "// Free the allocation without dropping its contents let (bptr, alloc) = Box::into_raw_with_allocator(src); let src = Box::from_raw(bptr as *mut mem::ManuallyDrop); let src = Box::from_raw_in(bptr as *mut mem::ManuallyDrop, alloc.by_ref()); drop(src); Self::from_ptr_in(ptr, alloc)", "commid": "rust_pr_119801"}], "negative_passages": []} {"query_id": "q-en-rust-70daf1f5826d557c780a13435ace358cd9e5a1c6f7ea879457ed85c0cc6ba09f", "query": "creates an error string dynamically, then passes it as an unsafe pointer to , which then casts the string to , then to , then throws it to another task. Nominating.", "positive_passages": [{"docid": "doc-en-rust-b746631e9f114a61df3674f0a8c62f82c71736bd800c0de2e9c1b92fda220dfe", "text": "//! Runtime calls emitted by the compiler. use c_str::ToCStr; use c_str::CString; use libc::c_char; use cast; use option::Some; #[cold] #[lang=\"fail_\"]", "commid": "rust_pr_12357"}], "negative_passages": []} {"query_id": "q-en-rust-70daf1f5826d557c780a13435ace358cd9e5a1c6f7ea879457ed85c0cc6ba09f", "query": "creates an error string dynamically, then passes it as an unsafe pointer to , which then casts the string to , then to , then throws it to another task. Nominating.", "positive_passages": [{"docid": "doc-en-rust-1ad2fe09beaecb7ace3efd0bd2af1f9469bfe2244409bb0d31e66889f9ee7e8b", "text": "pub fn fail_bounds_check(file: *u8, line: uint, index: uint, len: uint) -> ! { let msg = format!(\"index out of bounds: the len is {} but the index is {}\", len as uint, index as uint); msg.with_c_str(|buf| fail_(buf as *u8, file, line)) let file_str = match unsafe { CString::new(file as *c_char, false) }.as_str() { // This transmute is safe because `file` is always stored in rodata. 
Some(s) => unsafe { cast::transmute::<&str, &'static str>(s) }, None => \"file wasn't UTF-8 safe\" }; ::rt::begin_unwind(msg, file_str, line) } #[lang=\"malloc\"]", "commid": "rust_pr_12357"}], "negative_passages": []} {"query_id": "q-en-rust-fb7be0ddfc9ba3f215fe784ce5671e523cf6b1c1b96c90d756b87c85bd5708c1", "query": " $DIR/spec-effectvar-ice.rs:12:15 | LL | trait Foo {} | - help: mark `Foo` as const: `#[const_trait]` LL | LL | impl<T> const Foo for T {} | ^^^ | = note: marking a trait with `#[const_trait]` ensures all default method bodies are `const` = note: adding a non-const method body in the future would be a breaking change error: const `impl` for trait `Foo` which is not marked with `#[const_trait]` --> $DIR/spec-effectvar-ice.rs:16:15 | LL | trait Foo {} | - help: mark `Foo` as const: `#[const_trait]` ... LL | impl<T> const Foo for T where T: const Specialize {} | ^^^ | = note: marking a trait with `#[const_trait]` ensures all default method bodies are `const` = note: adding a non-const method body in the future would be a breaking change error: `const` can only be applied to `#[const_trait]` traits --> $DIR/spec-effectvar-ice.rs:16:40 | LL | impl<T> const Foo for T where T: const Specialize {} | ^^^^^^^^^^ error[E0207]: the const parameter `host` is not constrained by the impl trait, self type, or predicates --> $DIR/spec-effectvar-ice.rs:12:9 | LL | impl<T> const Foo for T {} | ^^^^^ unconstrained const parameter | = note: expressions using a const parameter must map each value to a distinct output value = note: proving the result of expressions other than the parameter are unique is not supported error[E0207]: the const parameter `host` is not constrained by the impl trait, self type, or predicates --> $DIR/spec-effectvar-ice.rs:16:9 | LL | impl<T> const Foo for T where T: const Specialize {} | ^^^^^ unconstrained const parameter | = note: expressions using a const parameter must map each value to a distinct output value = note: proving the result of expressions other
than the parameter are unique is not supported error: specialization impl does not specialize any associated items --> $DIR/spec-effectvar-ice.rs:16:1 | LL | impl<T> const Foo for T where T: const Specialize {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | note: impl is a specialization of this impl --> $DIR/spec-effectvar-ice.rs:12:1 | LL | impl<T> const Foo for T {} | ^^^^^^^^^^^^^^^^^^^^^^^ error: could not resolve generic parameters on overridden impl --> $DIR/spec-effectvar-ice.rs:16:1 | LL | impl<T> const Foo for T where T: const Specialize {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0207`. ", "commid": "rust_pr_125865"}], "negative_passages": []} {"query_id": "q-en-rust-c807f2b23c3a52401709ae22ae320eac387cfe2e8d9eb01d7a06f9f6cacb2538", "query": " $DIR/opaque-used-in-extraneous-argument.rs:5:39 | LL | fn frob() -> impl Fn<P, Output = T> + '_ {} | ^^ expected named lifetime parameter | = help: this function's return type contains a borrowed value, but there is no value for it to be borrowed from help: consider using the `'static` lifetime, but this is uncommon unless you're returning a borrowed value from a `const` or a `static`, or if you will only have owned values | LL | fn frob() -> impl Fn<P, Output = T> + 'static {} | ~~~~~~~ error[E0412]: cannot find type `P` in this scope --> $DIR/opaque-used-in-extraneous-argument.rs:5:22 | LL | fn frob() -> impl Fn<P, Output = T> + '_ {} | ^ not found in this scope | help: you might be missing a type parameter | LL | fn frob
<P>() -> impl Fn<P, Output = T> + '_ {} | +++ error[E0412]: cannot find type `T` in this scope --> $DIR/opaque-used-in-extraneous-argument.rs:5:34 | LL | fn frob() -> impl Fn<P, Output = T> + '_ {} | ^ not found in this scope | help: you might be missing a type parameter | LL | fn frob<T>() -> impl Fn<P, Output = T> + '_ {} | +++ error[E0658]: the precise format of `Fn`-family traits' type parameters is subject to change --> $DIR/opaque-used-in-extraneous-argument.rs:5:19 | LL | fn frob() -> impl Fn<P, Output = T> + '_ {} | ^^^^^^^^^^^^^^^^^ help: use parenthetical notation instead: `Fn(P) -> T` | = note: see issue #29625 for more information = help: add `#![feature(unboxed_closures)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error[E0061]: this function takes 0 arguments but 1 argument was supplied --> $DIR/opaque-used-in-extraneous-argument.rs:16:20 | LL | let old_path = frob(\"hello\"); | ^^^^ ------- | | | unexpected argument of type `&'static str` | help: remove the extra argument | note: function defined here --> $DIR/opaque-used-in-extraneous-argument.rs:5:4 | LL | fn frob() -> impl Fn<P, Output = T> + '_ {} | ^^^^ error[E0061]: this function takes 0 arguments but 1 argument was supplied --> $DIR/opaque-used-in-extraneous-argument.rs:19:5 | LL | open_parent(&old_path) | ^^^^^^^^^^^ --------- | | | unexpected argument of type `&impl FnOnce<{type error}, Output = {type error}> + Fn<{type error}> + 'static` | help: remove the extra argument | note: function defined here --> $DIR/opaque-used-in-extraneous-argument.rs:11:4 | LL | fn open_parent<'path>() { | ^^^^^^^^^^^ error: aborting due to 6 previous errors Some errors have detailed explanations: E0061, E0106, E0412, E0658. For more information about an error, try `rustc --explain E0061`. 
", "commid": "rust_pr_120056"}], "negative_passages": []} {"query_id": "q-en-rust-8e9923fb018fde38964429005b55eb96bb0e271c23a44c28554fc1275db50c7c", "query": " $DIR/no-gat-position.rs:8:56 | LL | fn next<'a>(&'a mut self) -> Option>; | ^^^^^^^^^ associated type not allowed here error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0229`. ", "commid": "rust_pr_121406"}], "negative_passages": []} {"query_id": "q-en-rust-8e9923fb018fde38964429005b55eb96bb0e271c23a44c28554fc1275db50c7c", "query": " $DIR/bad-suggestion-on-missing-assoc.rs:1:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default warning: the feature `non_lifetime_binders` is incomplete and may not be safe to use and/or cause compiler crashes --> $DIR/bad-suggestion-on-missing-assoc.rs:3:12 | LL | #![feature(non_lifetime_binders)] | ^^^^^^^^^^^^^^^^^^^^ | = note: see issue #108185 for more information error: defaults for generic parameters are not allowed in `for<...>` binders --> $DIR/bad-suggestion-on-missing-assoc.rs:20:9 | LL | for T: TraitA>, | ^^^^^^^^^^^^^^^^^^^^^^ error[E0562]: `impl Trait` is not allowed in bounds --> $DIR/bad-suggestion-on-missing-assoc.rs:20:49 | LL | for T: TraitA>, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: `impl Trait` is only allowed in arguments and return types of functions and methods error: aborting due to 2 previous errors; 2 warnings emitted For more information about this error, try `rustc --explain E0562`. ", "commid": "rust_pr_121406"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . 
In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. 
You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-f5705b1b8bcdc16efb7595022f88f2d3c99cf2d937ba27704e3e71052a07a0f3", "text": "#![allow(rustc::untranslatable_diagnostic)] use either::Either; use hir::ClosureKind; use hir::{ClosureKind, Path}; use rustc_data_structures::captures::Captures; use rustc_data_structures::fx::FxIndexSet; use rustc_errors::{codes::*, struct_span_code_err, Applicability, Diag, MultiSpan};", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. 
If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-b0b9863b403080f447e63ae0cf9327cf902a07dddbb167336016e902e8fc32b9", "text": "use rustc_middle::bug; use rustc_middle::hir::nested_filter::OnlyBodies; use rustc_middle::mir::tcx::PlaceTy; use rustc_middle::mir::VarDebugInfoContents; use rustc_middle::mir::{ self, AggregateKind, BindingForm, BorrowKind, CallSource, ClearCrossCrate, ConstraintCategory, FakeBorrowKind, FakeReadCause, LocalDecl, LocalInfo, LocalKind, Location, MutBorrowKind,", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . 
In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. 
You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-a509f91547d230f0edfb1c98b24d4e762d10f8579721a92891c17994ca92a66a", "text": "self.suggest_cloning(err, ty, expr, None, Some(move_spans)); } } if let Some(pat) = finder.pat { self.suggest_ref_for_dbg_args(expr, place, move_span, err); // it's useless to suggest inserting `ref` when the span don't comes from local code if let Some(pat) = finder.pat && !move_span.is_dummy() && !self.infcx.tcx.sess.source_map().is_imported(move_span) { *in_pattern = true; let mut sugg = vec![(pat.span.shrink_to_lo(), \"ref \".to_string())]; if let Some(pat) = finder.parent_pat {", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. 
For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. 
You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-2128e9fd06f41f49832d4e566037b422794f3aa8ff576207120a9095511b4968", "text": "} } // for dbg!(x) which may take ownership, suggest dbg!(&x) instead // but here we actually do not check whether the macro name is `dbg!` // so that we may extend the scope a bit larger to cover more cases fn suggest_ref_for_dbg_args( &self, body: &hir::Expr<'_>, place: &Place<'tcx>, move_span: Span, err: &mut Diag<'infcx>, ) { let var_info = self.body.var_debug_info.iter().find(|info| match info.value { VarDebugInfoContents::Place(ref p) => p == place, _ => false, }); let arg_name = if let Some(var_info) = var_info { var_info.name } else { return; }; struct MatchArgFinder { expr_span: Span, match_arg_span: Option, arg_name: Symbol, } impl Visitor<'_> for MatchArgFinder { fn visit_expr(&mut self, e: &hir::Expr<'_>) { // dbg! is expanded into a match pattern, we need to find the right argument span if let hir::ExprKind::Match(expr, ..) = &e.kind && let hir::ExprKind::Path(hir::QPath::Resolved( _, path @ Path { segments: [seg], .. 
}, )) = &expr.kind && seg.ident.name == self.arg_name && self.expr_span.source_callsite().contains(expr.span) { self.match_arg_span = Some(path.span); } hir::intravisit::walk_expr(self, e); } } let mut finder = MatchArgFinder { expr_span: move_span, match_arg_span: None, arg_name }; finder.visit_expr(body); if let Some(macro_arg_span) = finder.match_arg_span { err.span_suggestion_verbose( macro_arg_span.shrink_to_lo(), \"consider borrowing instead of transferring ownership\", \"&\", Applicability::MachineApplicable, ); } } fn report_use_of_uninitialized( &self, mpi: MovePathIndex,", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. 
If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-829ef606971383bd0beae6327fad90a3896165fda6edcecc2d2448e74bf86f4c", "text": " fn s() -> String { let a = String::new(); dbg!(a); return a; //~ ERROR use of moved value: } fn m() -> String { let a = String::new(); dbg!(1, 2, a, 1, 2); return a; //~ ERROR use of moved value: } fn t(a: String) -> String { let b: String = \"\".to_string(); dbg!(a, b); return b; //~ ERROR use of moved value: } fn x(a: String) -> String { let b: String = \"\".to_string(); dbg!(a, b); return a; //~ ERROR use of moved value: } macro_rules! my_dbg { () => { eprintln!(\"[{}:{}:{}]\", file!(), line!(), column!()) }; ($val:expr $(,)?) => { match $val { tmp => { eprintln!(\"[{}:{}:{}] {} = {:#?}\", file!(), line!(), column!(), stringify!($val), &tmp); tmp } } }; ($($val:expr),+ $(,)?) 
=> { ($(my_dbg!($val)),+,) }; } fn test_my_dbg() -> String { let b: String = \"\".to_string(); my_dbg!(b, 1); return b; //~ ERROR use of moved value: } fn test_not_macro() -> String { let a = String::new(); let _b = match a { tmp => { eprintln!(\"dbg: {}\", tmp); tmp } }; return a; //~ ERROR use of moved value: } fn get_expr(_s: String) {} fn test() { let a: String = \"\".to_string(); let _res = get_expr(dbg!(a)); let _l = a.len(); //~ ERROR borrow of moved value } fn main() {} ", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? 
Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. (I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-505d050da985bd66ee4158271f64ac7c98fc819265ee9fa355078232a1333a64", "text": " error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:4:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(a); | ------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:10:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | dbg!(1, 2, a, 1, 2); | ------------------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(1, 2, &a, 1, 2); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:16:12 | LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | dbg!(a, b); | ---------- value moved 
here LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(a, &b); | + error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:22:12 | LL | fn x(a: String) -> String { | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let b: String = \"\".to_string(); LL | dbg!(a, b); | ---------- value moved here LL | return a; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | dbg!(&a, b); | + error[E0382]: use of moved value: `b` --> $DIR/dbg-issue-120327.rs:46:12 | LL | tmp => { | --- value moved here ... LL | let b: String = \"\".to_string(); | - move occurs because `b` has type `String`, which does not implement the `Copy` trait LL | my_dbg!(b, 1); LL | return b; | ^ value used here after move | help: consider borrowing instead of transferring ownership | LL | my_dbg!(&b, 1); | + help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: use of moved value: `a` --> $DIR/dbg-issue-120327.rs:57:12 | LL | let a = String::new(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _b = match a { LL | tmp => { | --- value moved here ... LL | return a; | ^ value used here after move | help: borrow this binding in the pattern to avoid moving the value | LL | ref tmp => { | +++ error[E0382]: borrow of moved value: `a` --> $DIR/dbg-issue-120327.rs:65:14 | LL | let a: String = \"\".to_string(); | - move occurs because `a` has type `String`, which does not implement the `Copy` trait LL | let _res = get_expr(dbg!(a)); | ------- value moved here LL | let _l = a.len(); | ^ value borrowed here after move | help: consider borrowing instead of transferring ownership | LL | let _res = get_expr(dbg!(&a)); | + error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0382`. 
", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-dd66f626e30f9a422ef543deab970d3dbd7b6175d2cb3eb638f16edd4a301a95", "query": "No response No response No response\nI thought the intent with was to use it in a passthrough style, so you can insert it into existing expressions ie. becomes . In which case, maybe the lint should be for not using the value resulting from evaluating .\nNot sure about the majority of users but for me is a convenience over . I use it without using the return value almost all of the time.\nProbably a common thing to do, sure, but if we're talking about adding a lint, why not nudge the user in the direction of the intended and documented usage? You could at least do both: \"insert into an existing expression or pass it a borrow\" etc.\nThe fix in will only report a hint when there is already a happened. For example this code: before this fix the error is: with the fix, since there is already a error, the lint will try to find out the original macro, and suggest a borrow: The suggestion is not 100% right, since if we change it to , we need one more extra fix now(the return type is not expecting a reference), but there is a different issue, at least we have already moved one step on. If the original code is: Then suggest a borrow is a great and correct hint.\nSimilar in the scenario of function argument: original error is: new errors from the fix:\nWould not also be a good hint? The suggestion wouldn't compile anyway right? Because would need to change to take ? Without involved, the compiler does indeed suggest that: Once that's addressed (whichever way it's done), once again becomes transparent, and eg. or works. does not, however, and I see why a \"consider borrowing\" hint would be good there. However, if you make the change first and leave as-is, the resulting error does not include the suggestion to change , which is unhelpful. 
(I agree that the message is even less helpful though, I'm not disputing that.)\nYes, it's a good hint, but from the implementation view of a hint, it's much harder to analyze the following code from the error point and then construct a new expression. Even it seems an easy thing for humans. You are right, the suggestion is not 100% right, we may need an extra follow-up fix after applying the hint, which is also a common thing in rustc diagnostics.", "positive_passages": [{"docid": "doc-en-rust-054012863feed619bb0fee6ee44e9da98f182f51d6ada3d677a535788e56b7b4", "text": "| ^^^^^^^ value used here after move | = note: this error originates in the macro `dbg` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider borrowing instead of transferring ownership | LL | let _ = dbg!(&a); | + error: aborting due to 1 previous error", "commid": "rust_pr_120990"}], "negative_passages": []} {"query_id": "q-en-rust-8a55363f066ba4d7b354eec6162bb20ccc5d3db1122c0d96b8cab2ba3b99609b", "query": " $DIR/cfg-value-for-cfg-name-duplicate.rs:8:7 | LL | #[cfg(value)] | ^^^^^ | = help: expected names are: `bar`, `bee`, `cow`, `debug_assertions`, `doc`, `doctest`, `foo`, `miri`, `overflow_checks`, `panic`, `proc_macro`, `relocation_model`, `sanitize`, `sanitizer_cfi_generalize_pointers`, `sanitizer_cfi_normalize_integers`, `target_abi`, `target_arch`, `target_endian`, `target_env`, `target_family`, `target_feature`, `target_has_atomic`, `target_has_atomic_equal_alignment`, `target_has_atomic_load_store`, `target_os`, `target_pointer_width`, `target_thread_local`, `target_vendor`, `test`, `unix`, `windows` = help: to expect this configuration use `--check-cfg=cfg(value)` = note: see for more information about checking conditional configuration = note: `#[warn(unexpected_cfgs)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_120435"}], "negative_passages": []} {"query_id": "q-en-rust-066ba5cc0aab9218cb800bae3c5d3c854573f12d88ddb2d7b2a20b6e43ed3ef3", 
"query": "The user wanted something with \"linux\", and there is a name-value cfg that has the value \"linux\", so they probably wanted that. No response", "positive_passages": [{"docid": "doc-en-rust-36d1045473ee29dce8f278fa04f6967933c22f234890904889088c6841c296ce", "text": " // #120427 // This test checks that when a single cfg has a value for user's specified name // // check-pass // compile-flags: -Z unstable-options // compile-flags: --check-cfg=cfg(foo,values(\"my_value\")) --check-cfg=cfg(bar,values(\"my_value\")) #[cfg(my_value)] //~^ WARNING unexpected `cfg` condition name: `my_value` fn x() {} fn main() {} ", "commid": "rust_pr_120435"}], "negative_passages": []} {"query_id": "q-en-rust-066ba5cc0aab9218cb800bae3c5d3c854573f12d88ddb2d7b2a20b6e43ed3ef3", "query": "The user wanted something with \"linux\", and there is a name-value cfg that has the value \"linux\", so they probably wanted that. No response", "positive_passages": [{"docid": "doc-en-rust-f16fac23ffe751f0044d38ce0bdae01d0645740c7d13285f8f5cc01e411416f9", "text": " warning: unexpected `cfg` condition name: `my_value` --> $DIR/cfg-value-for-cfg-name-multiple.rs:8:7 | LL | #[cfg(my_value)] | ^^^^^^^^ | = help: expected names are: `bar`, `debug_assertions`, `doc`, `doctest`, `foo`, `miri`, `overflow_checks`, `panic`, `proc_macro`, `relocation_model`, `sanitize`, `sanitizer_cfi_generalize_pointers`, `sanitizer_cfi_normalize_integers`, `target_abi`, `target_arch`, `target_endian`, `target_env`, `target_family`, `target_feature`, `target_has_atomic`, `target_has_atomic_equal_alignment`, `target_has_atomic_load_store`, `target_os`, `target_pointer_width`, `target_thread_local`, `target_vendor`, `test`, `unix`, `windows` = help: to expect this configuration use `--check-cfg=cfg(my_value)` = note: see for more information about checking conditional configuration = note: `#[warn(unexpected_cfgs)]` on by default help: found config with similar value | LL | #[cfg(foo = \"my_value\")] | ~~~~~~~~~~~~~~~~ 
help: found config with similar value | LL | #[cfg(bar = \"my_value\")] | ~~~~~~~~~~~~~~~~ warning: 1 warning emitted ", "commid": "rust_pr_120435"}], "negative_passages": []} {"query_id": "q-en-rust-066ba5cc0aab9218cb800bae3c5d3c854573f12d88ddb2d7b2a20b6e43ed3ef3", "query": "The user wanted something with \"linux\", and there is a name-value cfg that has the value \"linux\", so they probably wanted that. No response", "positive_passages": [{"docid": "doc-en-rust-aa8a9c19dcb8e76b1bb6f62946eef8a84f989dea993d0345de119c7d9d6137fb", "text": " // #120427 // This test checks that when a single cfg has a value for user's specified name // suggest to use `#[cfg(target_os = \"linux\")]` instead of `#[cfg(linux)]` // // check-pass // compile-flags: -Z unstable-options // compile-flags: --check-cfg=cfg() #[cfg(linux)] //~^ WARNING unexpected `cfg` condition name: `linux` fn x() {} // will not suggest if the cfg has a value #[cfg(linux = \"os-name\")] //~^ WARNING unexpected `cfg` condition name: `linux` fn y() {} fn main() {} ", "commid": "rust_pr_120435"}], "negative_passages": []} {"query_id": "q-en-rust-066ba5cc0aab9218cb800bae3c5d3c854573f12d88ddb2d7b2a20b6e43ed3ef3", "query": "The user wanted something with \"linux\", and there is a name-value cfg that has the value \"linux\", so they probably wanted that. 
No response", "positive_passages": [{"docid": "doc-en-rust-6cfc7e8047950d4b7d1402a12e9d29d4d851fe2e9055d4bf388db0ef86fbc982", "text": " warning: unexpected `cfg` condition name: `linux` --> $DIR/cfg-value-for-cfg-name.rs:9:7 | LL | #[cfg(linux)] | ^^^^^ help: found config with similar value: `target_os = \"linux\"` | = help: expected names are: `debug_assertions`, `doc`, `doctest`, `miri`, `overflow_checks`, `panic`, `proc_macro`, `relocation_model`, `sanitize`, `sanitizer_cfi_generalize_pointers`, `sanitizer_cfi_normalize_integers`, `target_abi`, `target_arch`, `target_endian`, `target_env`, `target_family`, `target_feature`, `target_has_atomic`, `target_has_atomic_equal_alignment`, `target_has_atomic_load_store`, `target_os`, `target_pointer_width`, `target_thread_local`, `target_vendor`, `test`, `unix`, `windows` = help: to expect this configuration use `--check-cfg=cfg(linux)` = note: see for more information about checking conditional configuration = note: `#[warn(unexpected_cfgs)]` on by default warning: unexpected `cfg` condition name: `linux` --> $DIR/cfg-value-for-cfg-name.rs:14:7 | LL | #[cfg(linux = \"os-name\")] | ^^^^^^^^^^^^^^^^^ | = help: to expect this configuration use `--check-cfg=cfg(linux, values(\"os-name\"))` = note: see for more information about checking conditional configuration warning: 2 warnings emitted ", "commid": "rust_pr_120435"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. 
Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. (Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-cda7d8f90282aa8cc965ef2baa96c25ca9f80dd402475b8d74e53d2d743bfe77", "text": "type summary add -F lldb_lookup.summary_lookup -e -x -h \"^(core::([a-z_]+::)+)RefMut<.+>$\" --category Rust type summary add -F lldb_lookup.summary_lookup -e -x -h \"^(core::([a-z_]+::)+)RefCell<.+>$\" --category Rust type summary add -F lldb_lookup.summary_lookup -e -x -h \"^(core::([a-z_]+::)+)NonZero<.+>$\" --category Rust type summary add -F lldb_lookup.summary_lookup -e -x -h \"^core::num::([a-z_]+::)*NonZero.+$\" --category Rust type summary add -F lldb_lookup.summary_lookup -e -x -h \"^(std::([a-z_]+::)+)PathBuf$\" --category Rust type summary add -F lldb_lookup.summary_lookup -e -x -h \"^&(mut )?(std::([a-z_]+::)+)Path$\" --category Rust type category enable Rust", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second 
time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. (Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-2daf96f681e90b0faf8e73311a65573fd97198b7133c5dcbd4c01e57cbd27e0c", "text": "if rust_type == RustType.STD_NONZERO_NUMBER: return StdNonZeroNumberSummaryProvider(valobj, dict) if rust_type == RustType.STD_PATHBUF: return StdPathBufSummaryProvider(valobj, dict) if rust_type == RustType.STD_PATH: return StdPathSummaryProvider(valobj, dict) return \"\"", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. 
Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. (Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-672148d0f0327da9f82c34210e2a7dd5c15c2c1898e595eac9d570cff74af32d", "text": "return '\"%s\"' % data def StdPathBufSummaryProvider(valobj, dict): # type: (SBValue, dict) -> str # logger = Logger.Logger() # logger >> \"[StdPathBufSummaryProvider] for \" + str(valobj.GetName()) return StdOsStringSummaryProvider(valobj.GetChildMemberWithName(\"inner\"), dict) def StdPathSummaryProvider(valobj, dict): # type: (SBValue, dict) -> str # logger = Logger.Logger() # logger >> \"[StdPathSummaryProvider] for \" + str(valobj.GetName()) length = valobj.GetChildMemberWithName(\"length\").GetValueAsUnsigned() if length == 0: return '\"\"' data_ptr = valobj.GetChildMemberWithName(\"data_ptr\") start = data_ptr.GetValueAsUnsigned() error = SBError() process = data_ptr.GetProcess() data = process.ReadMemory(start, length, error) if PY3: try: data = data.decode(encoding='UTF-8') except UnicodeDecodeError: return '%r' % data return '\"%s\"' % data class StructSyntheticProvider: \"\"\"Pretty-printer for structs and struct enum variants\"\"\"", "commid": "rust_pr_120557"}], "negative_passages": []} 
{"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. 
(Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-49ee4e4ef741dffadf904d236a1fbc4f8d79ab9038e1c2e5d81690c2f5c3a28e", "text": "STD_REF_MUT = \"StdRefMut\" STD_REF_CELL = \"StdRefCell\" STD_NONZERO_NUMBER = \"StdNonZeroNumber\" STD_PATH = \"StdPath\" STD_PATHBUF = \"StdPathBuf\" STD_STRING_REGEX = re.compile(r\"^(alloc::([a-z_]+::)+)String$\")", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. 
(Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-52470893c6e9441a70c897d474610a648a9a910cb7c910bf58f6b4b58d2e3859", "text": "STD_REF_MUT_REGEX = re.compile(r\"^(core::([a-z_]+::)+)RefMut<.+>$\") STD_REF_CELL_REGEX = re.compile(r\"^(core::([a-z_]+::)+)RefCell<.+>$\") STD_NONZERO_NUMBER_REGEX = re.compile(r\"^(core::([a-z_]+::)+)NonZero<.+>$\") STD_PATHBUF_REGEX = re.compile(r\"^(std::([a-z_]+::)+)PathBuf$\") STD_PATH_REGEX = re.compile(r\"^&(mut )?(std::([a-z_]+::)+)Path$\") TUPLE_ITEM_REGEX = re.compile(r\"__d+$\")", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. 
(Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-d5b711272d579d925303b1b8fdd1817eb6e01d69d30d0fd0201600b587186a83", "text": "RustType.STD_REF_CELL: STD_REF_CELL_REGEX, RustType.STD_CELL: STD_CELL_REGEX, RustType.STD_NONZERO_NUMBER: STD_NONZERO_NUMBER_REGEX, RustType.STD_PATHBUF: STD_PATHBUF_REGEX, RustType.STD_PATH: STD_PATH_REGEX, } def is_tuple_fields(fields):", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-429c2ddf70db3d5d9bb2d74d1570c170a0c25f578c9f7fca6c4e8f16dace6f07", "query": "I tried this code (as package ): Run the following, which will break at , step the next 4 lines () to define the variables, then print twice -- the first time will succeed, the second time will fail: I expected to see this happen: would work the same no matter how many times I called it, instead everything seems to be undefined after the first call. Below I show that seems to work (including multiple times), works (including multiple times), but after a single call to , neither of these works anymore: Oddly, if I break on filename / line number instead of function name, it seems to work as expected: I'm using from a git checkout from yesterday:\nRealized was not a but an ; have changed to . Don't think it changes the rest, behavior still the same with .\nWorks just fine in gdb for me: Rust-lldb doesn't want to load for me at all, but plain lldb works fine too. For reference I'm using lldb version 14.0.6.\nIf only gdb worked on aarch64-darwin! But thanks for prompting me to try other OS -- I can confirm that I do not see the bug using rust-lldb on x86_64-linux, so this may be isolated to one of darwin / aarch64 / aarch64-darwin. 
(Still repros for me on aarch64-darwin with lldb 16.0.6.)", "positive_passages": [{"docid": "doc-en-rust-761e54ad2c40c71056600557f3d85bce73afe1ca485c01f736901170de77ca22", "text": " //@ ignore-gdb //@ compile-flags:-g // === LLDB TESTS ================================================================================= // lldb-command:run // lldb-command:print pathbuf // lldb-check:[...] \"/some/path\" { inner = \"/some/path\" { inner = { inner = size=10 { [0] = '/' [1] = 's' [2] = 'o' [3] = 'm' [4] = 'e' [5] = '/' [6] = 'p' [7] = 'a' [8] = 't' [9] = 'h' } } } } // lldb-command:po pathbuf // lldb-check:\"/some/path\" // lldb-command:print path // lldb-check:[...] \"/some/path\" { data_ptr = [...] length = 10 } // lldb-command:po path // lldb-check:\"/some/path\" use std::path::Path; fn main() { let path = Path::new(\"/some/path\"); let pathbuf = path.to_path_buf(); zzz(); // #break } fn zzz() { () } ", "commid": "rust_pr_120557"}], "negative_passages": []} {"query_id": "q-en-rust-0ccc9e608114cb255caf59388e6498b60506553bb3cbb795d8739ec6e76e2008", "query": "Crates with long type names result in a strange-looking error, see ! I've tested this on Chrome and Firefox, and by building the docs locally from I believe the text is rendering vertically due to long type names like which are auto-generated from OpenAPI. Link to original issue $DIR/field-access-after-const-eval-fail-in-ty.rs:4:10 | LL | [(); loop {}].field; | ^^^^^^^ | = note: this lint makes sure the compiler doesn't get stuck due to infinite loops in const eval. If your compilation actually takes a long time, you can safely allow the lint. 
help: the constant being evaluated --> $DIR/field-access-after-const-eval-fail-in-ty.rs:4:10 | LL | [(); loop {}].field; | ^^^^^^^ = note: `#[deny(long_running_const_eval)]` on by default error: aborting due to 1 previous error ", "commid": "rust_pr_120616"}], "negative_passages": []} {"query_id": "q-en-rust-f339cdca44db426320b4d6aa7c2e2e590395664b6df80f469071c5265785159c", "query": " $DIR/fail-const-eval-issue-121099.rs:9:31 | LL | global_asm!(\"/* {} */\", const 1 << 500); | ^^^^^^^^ attempt to shift left by `500_i32`, which would overflow error[E0080]: evaluation of constant value failed --> $DIR/fail-const-eval-issue-121099.rs:11:31 | LL | global_asm!(\"/* {} */\", const 1 / 0); | ^^^^^ attempt to divide `1_i32` by zero error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_122691"}], "negative_passages": []} {"query_id": "q-en-rust-17ab2abae56635e18b3cf3d11e7bd1f4f16d6e25e0fc21f6d9ae8201ecca5f47", "query": " $DIR/index-bounds.rs:4:14 | LL | let _n = [64][200]; | ^^^^^^^^^ index out of bounds: the length is 1 but the index is 200 | = note: `#[deny(unconditional_panic)]` on by default error: this operation will panic at runtime --> $DIR/index-bounds.rs:8:14 | LL | let _n = [64][u32::MAX as usize - 1]; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ index out of bounds: the length is 1 but the index is 4294967294 error: aborting due to 2 previous errors ", "commid": "rust_pr_125821"}], "negative_passages": []} {"query_id": "q-en-rust-874306207f9a879e4f39021370b30a8d98692ef46b5f3fc2924e3483bb84b43f", "query": " $DIR/broken_format.rs:1:32 | LL | #[diagnostic::on_unimplemented(message = \"{{Test } thing\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: `#[warn(unknown_or_malformed_diagnostic_attributes)]` on by default warning: positional format arguments are not allowed here --> $DIR/broken_format.rs:6:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {}\")] | ^^^^^^^^^^^^^^^^^^^ | = help: only named format 
arguments with the name of one of the generic types are allowed in this context warning: positional format arguments are not allowed here --> $DIR/broken_format.rs:11:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {1:}\")] | ^^^^^^^^^^^^^^^^^^^^^ | = help: only named format arguments with the name of one of the generic types are allowed in this context warning: invalid format specifier --> $DIR/broken_format.rs:16:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:123}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: no format specifier are supported in this position warning: expected `'}'`, found `'!'` --> $DIR/broken_format.rs:21:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:!}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^ warning: unmatched `}` found --> $DIR/broken_format.rs:21:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:!}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^ warning: unmatched `}` found --> $DIR/broken_format.rs:1:32 | LL | #[diagnostic::on_unimplemented(message = \"{{Test } thing\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error[E0277]: {{Test } thing --> $DIR/broken_format.rs:35:13 | LL | check_1(()); | ------- ^^ the trait `ImportantTrait1` is not implemented for `()` | | | required by a bound introduced by this call | help: this trait has no implementations, consider adding one --> $DIR/broken_format.rs:4:1 | LL | trait ImportantTrait1 {} | ^^^^^^^^^^^^^^^^^^^^^ note: required by a bound in `check_1` --> $DIR/broken_format.rs:28:20 | LL | fn check_1(_: impl ImportantTrait1) {} | ^^^^^^^^^^^^^^^ required by this bound in `check_1` warning: positional format arguments are not allowed here --> $DIR/broken_format.rs:6:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {}\")] | ^^^^^^^^^^^^^^^^^^^ | = help: only named format arguments with the name of one of the generic types are allowed in this context = note: duplicate diagnostic emitted due to `-Z 
deduplicate-diagnostics=no` error[E0277]: Test {} --> $DIR/broken_format.rs:37:13 | LL | check_2(()); | ------- ^^ the trait `ImportantTrait2` is not implemented for `()` | | | required by a bound introduced by this call | help: this trait has no implementations, consider adding one --> $DIR/broken_format.rs:9:1 | LL | trait ImportantTrait2 {} | ^^^^^^^^^^^^^^^^^^^^^ note: required by a bound in `check_2` --> $DIR/broken_format.rs:29:20 | LL | fn check_2(_: impl ImportantTrait2) {} | ^^^^^^^^^^^^^^^ required by this bound in `check_2` warning: positional format arguments are not allowed here --> $DIR/broken_format.rs:11:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {1:}\")] | ^^^^^^^^^^^^^^^^^^^^^ | = help: only named format arguments with the name of one of the generic types are allowed in this context = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error[E0277]: Test {1} --> $DIR/broken_format.rs:39:13 | LL | check_3(()); | ------- ^^ the trait `ImportantTrait3` is not implemented for `()` | | | required by a bound introduced by this call | help: this trait has no implementations, consider adding one --> $DIR/broken_format.rs:14:1 | LL | trait ImportantTrait3 {} | ^^^^^^^^^^^^^^^^^^^^^ note: required by a bound in `check_3` --> $DIR/broken_format.rs:30:20 | LL | fn check_3(_: impl ImportantTrait3) {} | ^^^^^^^^^^^^^^^ required by this bound in `check_3` warning: invalid format specifier --> $DIR/broken_format.rs:16:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:123}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: no format specifier are supported in this position = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error[E0277]: Test () --> $DIR/broken_format.rs:41:13 | LL | check_4(()); | ------- ^^ the trait `ImportantTrait4` is not implemented for `()` | | | required by a bound introduced by this call | help: this trait has no implementations, consider adding one --> 
$DIR/broken_format.rs:19:1 | LL | trait ImportantTrait4 {} | ^^^^^^^^^^^^^^^^^^^^^ note: required by a bound in `check_4` --> $DIR/broken_format.rs:31:20 | LL | fn check_4(_: impl ImportantTrait4) {} | ^^^^^^^^^^^^^^^ required by this bound in `check_4` warning: expected `'}'`, found `'!'` --> $DIR/broken_format.rs:21:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:!}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` warning: unmatched `}` found --> $DIR/broken_format.rs:21:32 | LL | #[diagnostic::on_unimplemented(message = \"Test {Self:!}\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error[E0277]: Test {Self:!} --> $DIR/broken_format.rs:43:13 | LL | check_5(()); | ------- ^^ the trait `ImportantTrait5` is not implemented for `()` | | | required by a bound introduced by this call | help: this trait has no implementations, consider adding one --> $DIR/broken_format.rs:26:1 | LL | trait ImportantTrait5 {} | ^^^^^^^^^^^^^^^^^^^^^ note: required by a bound in `check_5` --> $DIR/broken_format.rs:32:20 | LL | fn check_5(_: impl ImportantTrait5) {} | ^^^^^^^^^^^^^^^ required by this bound in `check_5` error: aborting due to 5 previous errors; 12 warnings emitted For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_122402"}], "negative_passages": []} {"query_id": "q-en-rust-b184ec4d22533e543a02ed5b5a8df734a0fb46efa2799bf46f84bb864de8ebb7", "query": "Replacing with in makes it compile, so it feels to me the RPIT version should be accepted too. This is minimized from the . 
bisected: searched nightlies: from nightly-2024-03-14 to nightly-2024-03-20 regressed nightly: nightly-2024-03-16 searched commit range: regressed commit: //@ check-pass #![feature(type_alias_impl_trait)] fn spawn(future: F) -> impl Sized where F: FnOnce() -> T, { future } fn spawn_task(sender: &'static ()) -> impl Sized { type Tait = impl Sized + 'static; spawn::(move || sender) } fn main() {} ", "commid": "rust_pr_125008"}], "negative_passages": []} {"query_id": "q-en-rust-c3209d3dd253d13503cded1883128d218e6060c1dd5cb715659271307f570380", "query": " $DIR/dont-canonicalize-re-error.rs:25:26 | LL | impl Constrain<'missing> for W {} | - ^^^^^^^^ undeclared lifetime | | | help: consider introducing lifetime `'missing` here: `'missing,` error[E0119]: conflicting implementations of trait `Tr<'_>` for type `W<_>` --> $DIR/dont-canonicalize-re-error.rs:21:1 | LL | impl<'a, A: ?Sized> Tr<'a> for W {} | ----------------------------------- first implementation here LL | struct W(A); LL | impl<'a, A: ?Sized> Tr<'a> for A where A: Constrain<'a> {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `W<_>` error: aborting due to 2 previous errors Some errors have detailed explanations: E0119, E0261. For more information about an error, try `rustc --explain E0119`. ", "commid": "rust_pr_122907"}], "negative_passages": []} {"query_id": "q-en-rust-4b5f9773f814c4b7c5309facad95a02ba39d5a3aaf14224b0611c1d9a07fffa3", "query": " $DIR/trait-solver-overflow-123573.rs:12:5 | LL | impl Test for &Local {} | ^^^^^^^^^^^^^^^^^^^^^^^ | = help: move this `impl` block outside the of the current function `main` = note: an `impl` definition is non-local if it is nested inside an item and may impact type checking outside of that item. This can be the case if neither the trait or the self type are at the same nesting level as the `impl` = note: one exception to the rule are anon-const (`const _: () = { ... 
}`) at top-level module and anon-const at the same nesting as the trait or type = note: this lint may become deny-by-default in the edition 2024 and higher, see the tracking issue = note: `#[warn(non_local_definitions)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_123594"}], "negative_passages": []} {"query_id": "q-en-rust-ab546b4f59ed309566604f98dc049d5a41384eb1acb20f8ec9218b3fc688340b", "query": " $DIR/dont-ice-when-body-tainted-by-errors.rs:19:23 | LL | trait_error::<()>(); | ^^ the trait `Impossible` is not implemented for `()` | help: this trait has no implementations, consider adding one --> $DIR/dont-ice-when-body-tainted-by-errors.rs:7:1 | LL | trait Impossible {} | ^^^^^^^^^^^^^^^^ note: required by a bound in `trait_error` --> $DIR/dont-ice-when-body-tainted-by-errors.rs:8:19 | LL | fn trait_error() {} | ^^^^^^^^^^ required by this bound in `trait_error` error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_123834"}], "negative_passages": []} {"query_id": "q-en-rust-58479c4424c9c0ef84197cc5803c3e17ebacd16a312c2d57e3eae1f751f0ab0c", "query": "Same issue as which was for blocks. $DIR/async-block-control-flow-static-semantics.rs:32:9 | LL | / async { LL | | break 0u8; | | ^^^^^^^^^ cannot `break` inside of an `async` block | | ^^^^^^^^^ cannot `break` inside `async` block LL | | }; | |_____- enclosing `async` block error[E0267]: `break` inside of an `async` block error[E0267]: `break` inside `async` block --> $DIR/async-block-control-flow-static-semantics.rs:39:13 | LL | / async { LL | | break 0u8; | | ^^^^^^^^^ cannot `break` inside of an `async` block | | ^^^^^^^^^ cannot `break` inside `async` block LL | | }; | |_________- enclosing `async` block", "commid": "rust_pr_124777"}], "negative_passages": []} {"query_id": "q-en-rust-58479c4424c9c0ef84197cc5803c3e17ebacd16a312c2d57e3eae1f751f0ab0c", "query": "Same issue as which was for blocks. 
$DIR/break-inside-coroutine-issue-124495.rs:8:5 | LL | async fn async_fn() { | _____________________- LL | | break; | | ^^^^^ cannot `break` inside `async` function LL | | } | |_- enclosing `async` function error[E0267]: `break` inside `gen` function --> $DIR/break-inside-coroutine-issue-124495.rs:12:5 | LL | gen fn gen_fn() { | _________________- LL | | break; | | ^^^^^ cannot `break` inside `gen` function LL | | } | |_- enclosing `gen` function error[E0267]: `break` inside `async gen` function --> $DIR/break-inside-coroutine-issue-124495.rs:16:5 | LL | async gen fn async_gen_fn() { | _____________________________- LL | | break; | | ^^^^^ cannot `break` inside `async gen` function LL | | } | |_- enclosing `async gen` function error[E0267]: `break` inside `async` block --> $DIR/break-inside-coroutine-issue-124495.rs:20:21 | LL | let _ = async { break; }; | --------^^^^^--- | | | | | cannot `break` inside `async` block | enclosing `async` block error[E0267]: `break` inside `async` closure --> $DIR/break-inside-coroutine-issue-124495.rs:21:24 | LL | let _ = async || { break; }; | --^^^^^--- | | | | | cannot `break` inside `async` closure | enclosing `async` closure error[E0267]: `break` inside `gen` block --> $DIR/break-inside-coroutine-issue-124495.rs:23:19 | LL | let _ = gen { break; }; | ------^^^^^--- | | | | | cannot `break` inside `gen` block | enclosing `gen` block error[E0267]: `break` inside `async gen` block --> $DIR/break-inside-coroutine-issue-124495.rs:25:25 | LL | let _ = async gen { break; }; | ------------^^^^^--- | | | | | cannot `break` inside `async gen` block | enclosing `async gen` block error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0267`. 
", "commid": "rust_pr_124777"}], "negative_passages": []} {"query_id": "q-en-rust-16acbb4914bb9992ed29a1710c87156554dd8685769f60cdee2f321edfbae7e4", "query": "The idea of changing a field to unit type to preserve field numbering makes sense for fields in the middle of a tuple. However, if the unused field is at the end, or it's the only field, then deleting it won't affect field numbering of any other field. No response No response $DIR/tuple-struct-field.rs:8:26 error: fields `1`, `2`, `3`, and `4` are never read --> $DIR/tuple-struct-field.rs:8:28 | LL | struct SingleUnused(i32, [u8; LEN], String); | ------------ ^^^^^^^^^ LL | struct UnusedAtTheEnd(i32, f32, [u8; LEN], String, u8); | -------------- ^^^ ^^^^^^^^^ ^^^^^^ ^^ | | | field in this struct | fields in this struct | = help: consider removing these fields note: the lint level is defined here --> $DIR/tuple-struct-field.rs:1:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ help: consider changing the field to be of unit type to suppress this warning while preserving the field numbering, or remove the field error: field `0` is never read --> $DIR/tuple-struct-field.rs:13:27 | LL | struct UnusedJustOneField(i32); | ------------------ ^^^ | | | field in this struct | LL | struct SingleUnused(i32, (), String); | ~~ = help: consider removing this field error: fields `0`, `1`, `2`, and `3` are never read --> $DIR/tuple-struct-field.rs:13:23 error: fields `1`, `2`, and `4` are never read --> $DIR/tuple-struct-field.rs:18:31 | LL | struct MultipleUnused(i32, f32, String, u8); | -------------- ^^^ ^^^ ^^^^^^ ^^ LL | struct UnusedInTheMiddle(i32, f32, String, u8, u32); | ----------------- ^^^ ^^^^^^ ^^^ | | | fields in this struct | help: consider changing the fields to be of unit type to suppress this warning while preserving the field numbering, or remove the fields | LL | struct MultipleUnused((), (), (), ()); | ~~ ~~ ~~ ~~ LL | struct UnusedInTheMiddle(i32, (), (), u8, ()); | ~~ ~~ ~~ error: aborting due to 2 previous 
errors error: aborting due to 3 previous errors ", "commid": "rust_pr_124580"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": if param.index == 0 { if let ty::GenericParamDefKind::Lifetime = param.kind { tcx.lifetimes.re_erased.into() } else if param.index == 0 && param.name == kw::SelfUpper { possible_rcvr_ty.into() } else if param.index == closure_param.index { closure_ty.into()", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": if ocx.select_all_or_error().is_empty() { if ocx.select_all_or_error().is_empty() && count > 0 { diag.span_suggestion_verbose( tcx.hir().body(*body).value.peel_blocks().span.shrink_to_lo(), \"dereference the return value\",", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": //@ known-bug: rust-lang/rust#124563 use std::marker::PhantomData; pub trait Trait {} pub trait Foo { type Trait: Trait; type Bar: Bar; fn foo(&mut self); } pub struct FooImpl<'a, 'b, A: Trait>(PhantomData<&'a &'b A>); impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> where T: Trait, { type Trait = T; type Bar = BarImpl<'a, 'b, T>; fn foo(&mut self) { self.enter_scope(|ctx| { BarImpl(ctx); }); } } impl<'a, 'b, T> FooImpl<'a, 'b, T> where T: Trait, { fn enter_scope(&mut self, _scope: impl FnOnce(&mut Self)) {} } pub trait Bar { type Foo: Foo; } pub struct BarImpl<'a, 'b, T: Trait>(&'b mut FooImpl<'a, 'b, T>); impl<'a, 'b, T> Bar for BarImpl<'a, 'b, T> where T: Trait, { type Foo = FooImpl<'a, 'b, T>; } ", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": // #125634 struct Thing; // Invariant in 'a, Covariant in 'b struct TwoThings<'a, 'b>(*mut &'a (), &'b mut ()); impl Thing { 
fn enter_scope<'a>(self, _scope: impl for<'b> FnOnce(TwoThings<'a, 'b>)) {} } fn foo() { Thing.enter_scope(|ctx| { SameLifetime(ctx); //~ ERROR lifetime may not live long enough }); } struct SameLifetime<'a>(TwoThings<'a, 'a>); fn main() {} ", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": error: lifetime may not live long enough --> $DIR/account-for-lifetimes-in-closure-suggestion.rs:13:22 | LL | Thing.enter_scope(|ctx| { | --- | | | has type `TwoThings<'_, '1>` | has type `TwoThings<'2, '_>` LL | SameLifetime(ctx); | ^^^ this usage requires that `'1` must outlive `'2` | = note: requirement occurs because of the type `TwoThings<'_, '_>`, which makes the generic argument `'_` invariant = note: the struct `TwoThings<'a, 'b>` is invariant over the parameter `'a` = help: see for more information about variance error: aborting due to 1 previous error ", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": // #124563 use std::marker::PhantomData; pub trait Trait {} pub trait Foo { type Trait: Trait; type Bar: Bar; fn foo(&mut self); } pub struct FooImpl<'a, 'b, A: Trait>(PhantomData<&'a &'b A>); impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> where T: Trait, { type Trait = T; type Bar = BarImpl<'a, 'b, T>; //~ ERROR lifetime bound not satisfied fn foo(&mut self) { self.enter_scope(|ctx| { //~ ERROR lifetime may not live long enough BarImpl(ctx); //~ ERROR lifetime may not live long enough }); } } impl<'a, 'b, T> FooImpl<'a, 'b, T> where T: Trait, { fn enter_scope(&mut self, _scope: impl FnOnce(&mut Self)) {} } pub trait Bar { type Foo: Foo; } pub struct BarImpl<'a, 'b, T: Trait>(&'b mut FooImpl<'a, 'b, T>); impl<'a, 'b, T> Bar for BarImpl<'a, 'b, T> where T: Trait, { type Foo = FooImpl<'a, 'b, T>; } fn main() {} ", "commid": "rust_pr_126884"}], 
"negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": error[E0478]: lifetime bound not satisfied --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:19:16 | LL | type Bar = BarImpl<'a, 'b, T>; | ^^^^^^^^^^^^^^^^^^ | note: lifetime parameter instantiated with the lifetime `'a` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:6 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ note: but lifetime parameter must outlive the lifetime `'b` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:10 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:23:21 | LL | self.enter_scope(|ctx| { | --- | | | has type `&'1 mut FooImpl<'_, '_, T>` | has type `&mut FooImpl<'2, '_, T>` LL | BarImpl(ctx); | ^^^ this usage requires that `'1` must outlive `'2` error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:22:9 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | -- -- lifetime `'b` defined here | | | lifetime `'a` defined here ... LL | / self.enter_scope(|ctx| { LL | | BarImpl(ctx); LL | | }); | |__________^ argument requires that `'a` must outlive `'b` | = help: consider adding the following bound: `'a: 'b` = note: requirement occurs because of a mutable reference to `FooImpl<'_, '_, T>` = note: mutable references are invariant over their type parameter = help: see for more information about variance error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0478`. 
", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": // Test a method call where the parameter `B` would (illegally) be // inferred to a region bound in the method argument. If this program // were accepted, then the closure passed to `s.f` could escape its // argument. //@ run-rustfix struct S; impl S { fn f(&self, _: F) where F: FnOnce(&i32) -> B { } } fn main() { let s = S; s.f(|p| *p) //~ ERROR lifetime may not live long enough } ", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": //@ run-rustfix struct S;", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-ea6c2f742466aca108571f2c909618d07c310be9bf346fc494cfcf755c7dcffc", "query": ": --> $DIR/regions-escape-method.rs:15:13 --> $DIR/regions-escape-method.rs:16:13 | LL | s.f(|p| p) | -- ^ returning this value requires that `'1` must outlive `'2` | || | |return type of closure is &'2 i32 | has type `&'1 i32` | help: dereference the return value | LL | s.f(|p| *p) | + error: aborting due to 1 previous error", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-10e9d1703cf19b4cdf1742156dad2e2e46fe77614e662b75a2792519cf298f3a", "query": "The for that function state (emphasis mine) However, the way this function gets called does not ensure that the alignment is at least the size of a word. Cc (discovered by failing CI in )", "positive_passages": [{"docid": "doc-en-rust-6b491b97957f22b6ea5e5dd9d3c18e23f2d747fe4e1b44d1a2d1945a8a275690", "text": "cfg_if::cfg_if! 
{ if #[cfg(any( target_os = \"android\", target_os = \"illumos\", target_os = \"redox\", target_os = \"solaris\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\",", "commid": "rust_pr_124798"}], "negative_passages": []} {"query_id": "q-en-rust-571da560101451a8f9d137c6e3ba9f2fe26cf3e8ab89b5854079b9fabdbdb6b5", "query": "This crashes with every edition except 2015 :thinking: $DIR/cycle-import-in-std-1.rs:5:11 | LL | use ops::{self as std}; | ^^^^^^^^^^^ no external crate `ops` | = help: consider importing one of these items instead: core::ops std::ops error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0432`. ", "commid": "rust_pr_126065"}], "negative_passages": []} {"query_id": "q-en-rust-571da560101451a8f9d137c6e3ba9f2fe26cf3e8ab89b5854079b9fabdbdb6b5", "query": "This crashes with every edition except 2015 :thinking: $DIR/cycle-import-in-std-2.rs:5:11 | LL | use ops::{self as std}; | ^^^^^^^^^^^ no external crate `ops` | = help: consider importing one of these items instead: core::ops std::ops error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0432`. 
", "commid": "rust_pr_126065"}], "negative_passages": []} {"query_id": "q-en-rust-50635d701f713a0327f8a17f571efab941578631e301b92ccd34a211e21f479d", "query": " $DIR/skipped-ref-pats-issue-125058.rs:8:8 | LL | struct Foo; | ^^^ | = note: `#[warn(dead_code)]` on by default warning: unused closure that must be used --> $DIR/skipped-ref-pats-issue-125058.rs:12:5 | LL | / || { LL | | LL | | if let Some(Some(&mut x)) = &mut Some(&mut Some(0)) { LL | | let _: u32 = x; LL | | } LL | | }; | |_____^ | = note: closures are lazy and do nothing unless called = note: `#[warn(unused_must_use)]` on by default warning: 2 warnings emitted ", "commid": "rust_pr_125084"}], "negative_passages": []} {"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. 
bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). 
But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. 
That's right, and on macOS we wouldn't look for in the \"x8664-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-077dfc1161c651344e1d269055e62dcf2731521cfa7ea2d4db3a177ba91dc09f", "text": "codegen_ssa_select_cpp_build_tool_workload = in the Visual Studio installer, ensure the \"C++ build tools\" workload is selected codegen_ssa_self_contained_linker_missing = the self-contained linker was requested, but it wasn't found in the target's sysroot, or in rustc's sysroot codegen_ssa_shuffle_indices_evaluation = could not evaluate shuffle_indices at compile time codegen_ssa_specify_libraries_to_link = use the `-l` flag to specify native libraries to link", "commid": "rust_pr_125263"}], "negative_passages": []} {"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? 
Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? 
To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. 
how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x86_64-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-7452057a42604899636274d10c67d7116af1c40e3c5245383d83c7be48316ca9", "text": "let self_contained_linker = self_contained_cli || self_contained_target; if self_contained_linker && !sess.opts.cg.link_self_contained.is_linker_disabled() { let mut linker_path_exists = false; for path in sess.get_tools_search_paths(false) { let linker_path = path.join(\"gcc-ld\"); linker_path_exists |= linker_path.exists(); cmd.arg({ let mut arg = OsString::from(\"-B\"); arg.push(path.join(\"gcc-ld\")); arg.push(linker_path); arg }); } if !linker_path_exists { // As a sanity check, we emit an error if none of these paths exist: we want // self-contained linking and have no linker. sess.dcx().emit_fatal(errors::SelfContainedLinkerMissing); } } // 2. Implement the \"linker flavor\" part of this feature by asking `cc` to use some kind of", "commid": "rust_pr_125263"}], "negative_passages": []} {"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work.
See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. 
Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. 
If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x86_64-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-b6e2a05bdcd0b4bb1deec57b0fc7371c017906dfb63146bb2609acbdbc8330b7", "text": "pub struct MsvcMissingLinker; #[derive(Diagnostic)] #[diag(codegen_ssa_self_contained_linker_missing)] pub struct SelfContainedLinkerMissing; #[derive(Diagnostic)] #[diag(codegen_ssa_check_installed_visual_studio)] pub struct CheckInstalledVisualStudio;", "commid": "rust_pr_125263"}], "negative_passages": []} {"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work.
See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. 
Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. 
If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x86_64-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target. That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-692794394728d193c742cf59c51d7757e5ab50a3f9c95a588a2b43910a463b08", "text": "PathBuf::from_iter([sysroot, Path::new(&rustlib_path), Path::new(\"lib\")]) } /// Returns a path to the target's `bin` folder within its `rustlib` path in the sysroot. This is /// where binaries are usually installed, e.g. the self-contained linkers, lld-wrappers, LLVM tools, /// etc.
pub fn make_target_bin_path(sysroot: &Path, target_triple: &str) -> PathBuf { let rustlib_path = rustc_target::target_rustlib_path(sysroot, target_triple); PathBuf::from_iter([sysroot, Path::new(&rustlib_path), Path::new(\"bin\")]) } #[cfg(unix)] fn current_dll_path() -> Result { use std::ffi::{CStr, OsStr};", "commid": "rust_pr_125263"}], "negative_passages": []} {"query_id": "q-en-rust-5f1ff9fed6de345ede55017963c0b6d14798edc4f39e85a8a1ffec4b4df5b5dc", "query": "This is a recent regression, starting with when is set, in my case to where I have prepared a locally built sysroot (that's ), then rustc now assumes it can find the linker in the sysroot, which does not work. See for some more details. It is new to me that the sysroot is supposed to contain the linker as well, is that expected behavior? Also the error message in that case is really unhelpful.^^ Maybe rustc should check that the path it sets for actually exists before starting ?\nYes. The blog post has a summary of that PR\nThat blog post doesn't mention the term \"sysroot\" so it doesn't help with my issue.\nFWIW what I would have expected is that rustc searches for the linker binary somewhere around . Isn't that how we find some other things as well that are usually installed with rustc by rustup?\nHow do you handle windows-gnu? Imo this also uses a self-contained linker by default.\nNo idea. CI passes on Windows, whatever github gives us when we say .\nYeah the rust-lld linker has been in the sysroot since , is used by default in e.g. some of the wasm and aarch64 targets, and rustc had flags to opt-into using it since that PR. What is new is that it is now used by nightlies we distribute. bootstrap puts it in the sysroot when is true (and it's now the default on x64 linux under certain conditions), so you can also set that when -ing if you want it. 
It works with or without download-ci-llvm.\nWe resolve the sysroot from rustc driver, not current executable.\nYes, the surprise is just that the linker is searched in the sysroot and thus affected by --sysroot, rather than being searched directly. But I guess this is working as intended, I just need to add some more stuff to these custom sysroots then.\nThis broke cargo's CI since the build-std tests try to verify that nothing in the sysroot is required. Would it be possible to change it so that it will only require rust-lld if it exists in the sysroot (similar to windows-gnu and other targets)?\nI don't know what we do on windows-gnu but we can do something similar, yes. I'll open a PR.\nFWIW I managed to add tests for windows-gnu and they work like a charm. So no copying of files seems to be required there. Though I do recall xargo copying some things for windows-gnu so maybe I am not testing enough... but my tests do involve building a small program against the sysroot and running it, so maybe those files are just not needed any more.\nWe actually hit a binutils linker bug last week on windows-gnu (not windows-gnullvm), and the linker flavor there is gnu-cc, so do these targets really use rust-lld by default?\nHow does this work with cross-compilation -- I assume we are always taking the host sysroot even when building for another target? To me that seems like a sign that this shouldn't be part of the sysroot, it should be part of the rustc distribution. The sysroot is for target-related things.\nIn particular, I would expect that if I set to a folder that only contains a sysroot for some foreign target, then that should work if I also set appropriately (and use the right linker and everything). But currently that will then likely fail as it tries to search the host target in the folder, and with it will instead silently fall back to the slow linker.\nThey do use a self-contained linker from sysroot by default, but not rust-lld. 
We ship from binutils for those targets.\nThanks for the clarification. That explains why there's no existence check (and your 2nd hack for older GCCs in that other issue :). Locally, it was just using mingw's ld probably because of $PATH differences.\nLinkers are generally target-dependent, so it's pretty reasonable to ship them in target sysroot, I guess. rust-lld is just applicable to a wide range of targets by being a cross-linker. Also there's no target vs host separation for sysroot in rustc right now, there's just one sysroot. Maybe there's some space for improvements here, it's just not many people use to notice any issues.\nI still hope to migrate both rust-lld and mingw linker to the common directory layout scheme based on .\nrust-lld is just applicable to a wide range of targets by being a cross-linker. If I'm on macOS and want to build for a Linux target, then the binary shipped in the sysroot is going to be completely useless to me. So in that sense shipping them in the target sysroot seems pointless. (Of course our cross-compile story is not great so such cross-building will hardly ever work, but it demonstrates the fundamental issue I was pointing out.) Even when I am on and building for I probably want to use a 64bit program for the linking, and not a binary shipped in the i686 sysroot -- this is e.g. how Firefox is (or was) built as 4GB of RAM (the maximum accessible to 32bit programs) are just not enough for linking. Given that these are binaries that need to be run on the host (i.e. the machine where the build happens), IMO the only sensible location is together with the other host stuff, i.e., where lives.\nAh I think I understand. That's right, and on macOS we wouldn't look for in the \"x86_64-unknown-linux-gnu sysroot\" . You can try this locally on a helloworld (look for the path to ), from with a target:\nOh, so we always assume that contains a sysroot for the host and the current target.
That's news to me as well (and Miri certainly doesn't guarantee it). But the fallback in your PR should solve that as well, I think.", "positive_passages": [{"docid": "doc-en-rust-6b26c2bb3a9fef9ba74911216bae8c090841d91259cf8d6cb20d6df9d13d0410", "text": ") } /// Returns a list of directories where target-specific tool binaries are located. /// Returns a list of directories where target-specific tool binaries are located. Some fallback /// directories are also returned, for example if `--sysroot` is used but tools are missing /// (#125246): we also add the bin directories to the sysroot where rustc is located. pub fn get_tools_search_paths(&self, self_contained: bool) -> Vec { let rustlib_path = rustc_target::target_rustlib_path(&self.sysroot, config::host_triple()); let p = PathBuf::from_iter([ Path::new(&self.sysroot), Path::new(&rustlib_path), Path::new(\"bin\"), ]); if self_contained { vec![p.clone(), p.join(\"self-contained\")] } else { vec![p] } let bin_path = filesearch::make_target_bin_path(&self.sysroot, config::host_triple()); let fallback_sysroot_paths = filesearch::sysroot_candidates() .into_iter() .map(|sysroot| filesearch::make_target_bin_path(&sysroot, config::host_triple())); let search_paths = std::iter::once(bin_path).chain(fallback_sysroot_paths); if self_contained { // The self-contained tools are expected to be e.g. in `bin/self-contained` in the // sysroot's `rustlib` path, so we add such a subfolder to the bin path, and the // fallback paths. search_paths.flat_map(|path| [path.clone(), path.join(\"self-contained\")]).collect() } else { search_paths.collect() } } pub fn init_incr_comp_session(&self, session_dir: PathBuf, lock_file: flock::Lock) {", "commid": "rust_pr_125263"}], "negative_passages": []} {"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) 
I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-74f641873a99df58de71303c85d821ea2911ef32272e95c5920f9f3e1ce326cf", "text": "(\"avx512bw\", Unstable(sym::avx512_target_feature)), (\"avx512cd\", Unstable(sym::avx512_target_feature)), (\"avx512dq\", Unstable(sym::avx512_target_feature)), (\"avx512er\", Unstable(sym::avx512_target_feature)), (\"avx512f\", Unstable(sym::avx512_target_feature)), (\"avx512fp16\", Unstable(sym::avx512_target_feature)), (\"avx512ifma\", Unstable(sym::avx512_target_feature)), (\"avx512pf\", Unstable(sym::avx512_target_feature)), (\"avx512vbmi\", Unstable(sym::avx512_target_feature)), (\"avx512vbmi2\", Unstable(sym::avx512_target_feature)), (\"avx512vl\", Unstable(sym::avx512_target_feature)),", "commid": "rust_pr_125498"}], "negative_passages": []} {"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-ef862ee23e87868ec40c16e608a512cbc441fc0f8d3eb74eed3eb933aa216368", "text": "println!(\"avx512bw: {:?}\", is_x86_feature_detected!(\"avx512bw\")); println!(\"avx512cd: {:?}\", is_x86_feature_detected!(\"avx512cd\")); println!(\"avx512dq: {:?}\", is_x86_feature_detected!(\"avx512dq\")); println!(\"avx512er: {:?}\", is_x86_feature_detected!(\"avx512er\")); println!(\"avx512f: {:?}\", is_x86_feature_detected!(\"avx512f\")); println!(\"avx512ifma: {:?}\", is_x86_feature_detected!(\"avx512ifma\")); println!(\"avx512pf: {:?}\", is_x86_feature_detected!(\"avx512pf\")); println!(\"avx512vbmi2: {:?}\", is_x86_feature_detected!(\"avx512vbmi2\")); println!(\"avx512vbmi: {:?}\", is_x86_feature_detected!(\"avx512vbmi\")); println!(\"avx512vl: {:?}\", is_x86_feature_detected!(\"avx512vl\"));", "commid": "rust_pr_125498"}], "negative_passages": []} {"query_id": 
"q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-a1e9c41af4aac74ee90f5ed2f8df12b25b981a340697d5c43668b5d39627d668", "text": "LL | cfg!(target_feature = \"zebra\"); | ^^^^^^^^^^^^^^^^^^^^^^^^ | = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512er`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512pf`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, and `bmi2` and 188 more = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, and `bulk-memory` and 186 more = note: see for more information about checking conditional configuration warning: 27 warnings emitted", "commid": "rust_pr_125498"}], "negative_passages": []} {"query_id": "q-en-rust-df27dcf85bb7e3c2b632ee80f5aed56fb82be88c8d63d6956c90d681989f5e8d", "query": "For example, from (And many more.) 
I think this is due to Should we just drop these from ?\ncc", "positive_passages": [{"docid": "doc-en-rust-a67618a960415c46a5456831532fcb49d62a169b899105d69988cf581e5106b5", "text": "LL | target_feature = \"_UNEXPECTED_VALUE\", | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512er`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512pf`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, `bulk-memory`, `c`, `cache`, `cmpxchg16b`, `crc`, `crt-static`, `d`, `d32`, `dit`, `doloop`, `dotprod`, `dpb`, `dpb2`, `dsp`, `dsp1e2`, `dspe60`, `e`, `e1`, `e2`, `edsp`, `elrw`, `ermsb`, `exception-handling`, `extended-const`, `f`, `f16c`, `f32mm`, `f64mm`, `fcma`, `fdivdu`, `fhm`, `flagm`, `float1e2`, `float1e3`, `float3e4`, `float7e60`, `floate1`, `fma`, `fp-armv8`, `fp16`, `fp64`, `fpuv2_df`, `fpuv2_sf`, `fpuv3_df`, `fpuv3_hf`, `fpuv3_hi`, `fpuv3_sf`, `frecipe`, `frintts`, `fxsr`, `gfni`, `hard-float`, `hard-float-abi`, `hard-tp`, `high-registers`, `hvx`, `hvx-length128b`, `hwdiv`, `i8mm`, `jsconv`, `lahfsahf`, `lasx`, `lbt`, `lor`, `lse`, `lsx`, `lvz`, `lzcnt`, `m`, `mclass`, `movbe`, `mp`, `mp1e2`, `msa`, `mte`, `multivalue`, `mutable-globals`, `neon`, `nontrapping-fptoint`, `nvic`, `paca`, `pacg`, `pan`, `pclmulqdq`, `pmuv3`, `popcnt`, `power10-vector`, `power8-altivec`, `power8-vector`, `power9-altivec`, `power9-vector`, `prfchw`, `rand`, `ras`, `rclass`, `rcpc`, `rcpc2`, `rdm`, `rdrand`, `rdseed`, `reference-types`, `relax`, `relaxed-simd`, `rtm`, `sb`, `sha`, `sha2`, `sha3`, `sign-ext`, `simd128`, `sm4`, `spe`, `ssbs`, `sse`, `sse2`, `sse3`, `sse4.1`, `sse4.2`, `sse4a`, `ssse3`, `sve`, `sve2`, `sve2-aes`, `sve2-bitperm`, `sve2-sha3`, `sve2-sm4`, `tbm`, 
`thumb-mode`, `thumb2`, `tme`, `trust`, `trustzone`, `ual`, `unaligned-scalar-mem`, `v`, `v5te`, `v6`, `v6k`, `v6t2`, `v7`, `v8`, `v8.1a`, `v8.2a`, `v8.3a`, `v8.4a`, `v8.5a`, `v8.6a`, `v8.7a`, `vaes`, `vdsp2e60f`, `vdspv1`, `vdspv2`, `vfp2`, `vfp3`, `vfp4`, `vh`, `virt`, `virtualization`, `vpclmulqdq`, `vsx`, `xsave`, `xsavec`, `xsaveopt`, `xsaves`, `zba`, `zbb`, `zbc`, `zbkb`, `zbkc`, `zbkx`, `zbs`, `zdinx`, `zfh`, `zfhmin`, `zfinx`, `zhinx`, `zhinxmin`, `zk`, `zkn`, `zknd`, `zkne`, `zknh`, `zkr`, `zks`, `zksed`, `zksh`, and `zkt` = note: expected values for `target_feature` are: `10e60`, `2e3`, `3e3r1`, `3e3r2`, `3e3r3`, `3e7`, `7e10`, `a`, `aclass`, `adx`, `aes`, `altivec`, `alu32`, `atomics`, `avx`, `avx2`, `avx512bf16`, `avx512bitalg`, `avx512bw`, `avx512cd`, `avx512dq`, `avx512f`, `avx512fp16`, `avx512ifma`, `avx512vbmi`, `avx512vbmi2`, `avx512vl`, `avx512vnni`, `avx512vp2intersect`, `avx512vpopcntdq`, `bf16`, `bmi1`, `bmi2`, `bti`, `bulk-memory`, `c`, `cache`, `cmpxchg16b`, `crc`, `crt-static`, `d`, `d32`, `dit`, `doloop`, `dotprod`, `dpb`, `dpb2`, `dsp`, `dsp1e2`, `dspe60`, `e`, `e1`, `e2`, `edsp`, `elrw`, `ermsb`, `exception-handling`, `extended-const`, `f`, `f16c`, `f32mm`, `f64mm`, `fcma`, `fdivdu`, `fhm`, `flagm`, `float1e2`, `float1e3`, `float3e4`, `float7e60`, `floate1`, `fma`, `fp-armv8`, `fp16`, `fp64`, `fpuv2_df`, `fpuv2_sf`, `fpuv3_df`, `fpuv3_hf`, `fpuv3_hi`, `fpuv3_sf`, `frecipe`, `frintts`, `fxsr`, `gfni`, `hard-float`, `hard-float-abi`, `hard-tp`, `high-registers`, `hvx`, `hvx-length128b`, `hwdiv`, `i8mm`, `jsconv`, `lahfsahf`, `lasx`, `lbt`, `lor`, `lse`, `lsx`, `lvz`, `lzcnt`, `m`, `mclass`, `movbe`, `mp`, `mp1e2`, `msa`, `mte`, `multivalue`, `mutable-globals`, `neon`, `nontrapping-fptoint`, `nvic`, `paca`, `pacg`, `pan`, `pclmulqdq`, `pmuv3`, `popcnt`, `power10-vector`, `power8-altivec`, `power8-vector`, `power9-altivec`, `power9-vector`, `prfchw`, `rand`, `ras`, `rclass`, `rcpc`, `rcpc2`, `rdm`, `rdrand`, `rdseed`, `reference-types`, 
`relax`, `relaxed-simd`, `rtm`, `sb`, `sha`, `sha2`, `sha3`, `sign-ext`, `simd128`, `sm4`, `spe`, `ssbs`, `sse`, `sse2`, `sse3`, `sse4.1`, `sse4.2`, `sse4a`, `ssse3`, `sve`, `sve2`, `sve2-aes`, `sve2-bitperm`, `sve2-sha3`, `sve2-sm4`, `tbm`, `thumb-mode`, `thumb2`, `tme`, `trust`, `trustzone`, `ual`, `unaligned-scalar-mem`, `v`, `v5te`, `v6`, `v6k`, `v6t2`, `v7`, `v8`, `v8.1a`, `v8.2a`, `v8.3a`, `v8.4a`, `v8.5a`, `v8.6a`, `v8.7a`, `vaes`, `vdsp2e60f`, `vdspv1`, `vdspv2`, `vfp2`, `vfp3`, `vfp4`, `vh`, `virt`, `virtualization`, `vpclmulqdq`, `vsx`, `xsave`, `xsavec`, `xsaveopt`, `xsaves`, `zba`, `zbb`, `zbc`, `zbkb`, `zbkc`, `zbkx`, `zbs`, `zdinx`, `zfh`, `zfhmin`, `zfinx`, `zhinx`, `zhinxmin`, `zk`, `zkn`, `zknd`, `zkne`, `zknh`, `zkr`, `zks`, `zksed`, `zksh`, and `zkt` = note: see for more information about checking conditional configuration warning: unexpected `cfg` condition value: `_UNEXPECTED_VALUE`", "commid": "rust_pr_125498"}], "negative_passages": []} {"query_id": "q-en-rust-8d9c1a3661356bfdf58db0ec2c24c2f76ace1a0396f38e4f264716f0f2f1ec3d", "query": "() Like and the other ABIs, should not trigger and for non-FFI-safe types. Related: , ; tracking issue . No response No response_", "positive_passages": [{"docid": "doc-en-rust-cd097b0cc0bbfa6370f4d528c7e1fa2a04e0ea57746c006d42fbd8d2dd114744", "text": "} fn is_internal_abi(&self, abi: SpecAbi) -> bool { matches!(abi, SpecAbi::Rust | SpecAbi::RustCall | SpecAbi::RustIntrinsic) matches!( abi, SpecAbi::Rust | SpecAbi::RustCall | SpecAbi::RustCold | SpecAbi::RustIntrinsic ) } /// Find any fn-ptr types with external ABIs in `ty`.", "commid": "rust_pr_130667"}], "negative_passages": []} {"query_id": "q-en-rust-8d9c1a3661356bfdf58db0ec2c24c2f76ace1a0396f38e4f264716f0f2f1ec3d", "query": "() Like and the other ABIs, should not trigger and for non-FFI-safe types. Related: , ; tracking issue . 
No response No response_", "positive_passages": [{"docid": "doc-en-rust-27a0ec0e26af47c44354400a16f0ef10ea5a9e85d96458e33e7d8e0fadbed124", "text": " //@ check-pass #![feature(rust_cold_cc)] // extern \"rust-cold\" is a \"Rust\" ABI so we accept `repr(Rust)` types as arg/ret without warnings. pub extern \"rust-cold\" fn f(_: ()) -> Result<(), ()> { Ok(()) } extern \"rust-cold\" { pub fn g(_: ()) -> Result<(), ()>; } fn main() {} ", "commid": "rust_pr_130667"}], "negative_passages": []} {"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-d61fa04713df96c055e94ca2fb8db9db90cfba759ac2952880298b6d37a042b5", "text": "AR_loongarch64_unknown_linux_gnu=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_linux_gnu=loongarch64-unknown-linux-gnu-g++ # We re-use the Linux toolchain for bare-metal, because upstream bare-metal # target support for LoongArch is only available from GCC 14+. 
# # See: https://github.com/gcc-mirror/gcc/commit/976f4f9e4770 ENV CC_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-gcc AR_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_none=loongarch64-unknown-linux-gnu-g++ CFLAGS_loongarch64_unknown_none=\"-ffreestanding -mabi=lp64d\" CXXFLAGS_loongarch64_unknown_none=\"-ffreestanding -mabi=lp64d\" CC_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-gcc AR_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-ar CXX_loongarch64_unknown_none_softfloat=loongarch64-unknown-linux-gnu-g++ CFLAGS_loongarch64_unknown_none_softfloat=\"-ffreestanding -mabi=lp64s -mfpu=none\" CXXFLAGS_loongarch64_unknown_none_softfloat=\"-ffreestanding -mabi=lp64s -mfpu=none\" ENV HOSTS=loongarch64-unknown-linux-gnu ENV TARGETS=$HOSTS ENV TARGETS=$TARGETS,loongarch64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none-softfloat ENV RUST_CONFIGURE_ARGS --enable-extended ", "commid": "rust_pr_127150"}], "negative_passages": []} {"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . 
This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-bf34f7f24483ff5be36e5ad187d0e7753a0f9fe70be49b346a898715da130514", "text": "--enable-profiler --disable-docs ENV SCRIPT python3 ../x.py dist --host $HOSTS --target $HOSTS ENV SCRIPT python3 ../x.py dist --host $HOSTS --target $TARGETS ", "commid": "rust_pr_127150"}], "negative_passages": []} {"query_id": "q-en-rust-99d5b1f75e53201a11da405ce1e3581f854c121b0855e6c1ad4e8b9537f27a6f", "query": "When downloading the (and variant) target, the that ships with the compiler is broken: Upon further inspection, the symbols are indeed built for x86-64 and not LoongArch: It's not clear what causes this, but contains no corresponding definitions for . This issue appears related, in that it would catch whatever causes this:\ncc", "positive_passages": [{"docid": "doc-en-rust-65934332ae8269fd7485b645a5fe8aa49babf7bbbdaf8c0ca31f9961b68355e4", "text": "ENV TARGETS=$TARGETS,armv7-unknown-linux-musleabi ENV TARGETS=$TARGETS,i686-unknown-freebsd ENV TARGETS=$TARGETS,x86_64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none ENV TARGETS=$TARGETS,loongarch64-unknown-none-softfloat ENV TARGETS=$TARGETS,aarch64-unknown-uefi ENV TARGETS=$TARGETS,i686-unknown-uefi ENV TARGETS=$TARGETS,x86_64-unknown-uefi", "commid": "rust_pr_127150"}], "negative_passages": []} {"query_id": "q-en-rust-0437dd8ba1fa0903e87a8b83f97256473d6ed1501c67f7aa6c8a6e93b258a164", "query": " $DIR/dangling-alloc-id-ice.rs:12:1 | LL | const FOO: &() = { | ^^^^^^^^^^^^^^ | ^^^^^^^^^^^^^^ constructing invalid value: encountered a dangling reference (use-after-free) | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior. 
= note: the raw bytes of the constant (size: $SIZE, align: $ALIGN) { HEX_DUMP } error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_126426"}], "negative_passages": []} {"query_id": "q-en-rust-b1521adfb989f38f32348ff8b24b981929409aef73ea7602860c96db097d9127", "query": " $DIR/dangling-zst-ice-issue-126393.rs:7:1 | LL | pub static MAGIC_FFI_REF: &'static Wrapper = unsafe { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ constructing invalid value: encountered a dangling reference (use-after-free) | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior. = note: the raw bytes of the constant (size: $SIZE, align: $ALIGN) { HEX_DUMP } error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_126426"}], "negative_passages": []} {"query_id": "q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/uninhabited.rs:65:9 --> $DIR/uninhabited.rs:63:9 | LL | assert!(false); | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:65:9 | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:63:9 | = note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0080]: evaluation of constant value failed --> $DIR/uninhabited.rs:87:9 | LL | assert!(false); | ^^^^^^^^^^^^^^ the evaluated program panicked at 'assertion failed: false', $DIR/uninhabited.rs:87:9 | = note: this error originates in the macro `assert` (in Nightly builds, run with -Z macro-backtrace for more info)", "commid": "rust_pr_126493"}], "negative_passages": []} {"query_id": 
"q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/uninhabited.rs:49:41 | LL | assert::is_maybe_transmutable::<(), Void>(); | ^^^^ `yawning_void::Void` is uninhabited | ^^^^ `yawning_void_struct::Void` is uninhabited | note: required by a bound in `is_maybe_transmutable` --> $DIR/uninhabited.rs:10:14 | LL | pub fn is_maybe_transmutable() | --------------------- required by a bound in this function LL | where LL | Dst: BikeshedIntrinsicFrom | |__________^ required by this bound in `is_maybe_transmutable` error[E0277]: `()` cannot be safely transmuted into `yawning_void_enum::Void` --> $DIR/uninhabited.rs:71:41 | LL | assert::is_maybe_transmutable::<(), Void>(); | ^^^^ `yawning_void_enum::Void` is uninhabited | note: required by a bound in `is_maybe_transmutable` --> $DIR/uninhabited.rs:10:14", "commid": "rust_pr_126493"}], "negative_passages": []} {"query_id": "q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/uninhabited.rs:70:43 --> $DIR/uninhabited.rs:92:43 | LL | assert::is_maybe_transmutable::(); | ^^^^^^^^^^^ at least one value of `u128` isn't a bit-valid value of `DistantVoid`", "commid": "rust_pr_126493"}], "negative_passages": []} {"query_id": "q-en-rust-7ad0b1d7110279401731d0f8966d551330e698e91ad9b40ad0a37050a3d2d1bd", "query": " $DIR/suggest-import-ice-issue-127302.rs:3:1 | LL | mod config; | ^^^^^^^^^^^ | = help: to create the module `config`, create file \"$DIR/config.rs\" or \"$DIR/config/mod.rs\" = note: if there is a `mod config` elsewhere in the crate already, import it with `use crate::...` instead error: format argument must be a string literal --> $DIR/suggest-import-ice-issue-127302.rs:10:14 | LL | println!(args.ctx.compiler.display()); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | help: you might be missing a string literal to format with | LL | println!(\"{}\", args.ctx.compiler.display()); | +++++ error[E0425]: cannot find value `args` in this scope --> 
$DIR/suggest-import-ice-issue-127302.rs:6:12 | LL | match &args.cmd { | ^^^^ not found in this scope | help: consider importing this function | LL + use std::env::args; | error[E0532]: expected unit struct, unit variant or constant, found module `crate::config` --> $DIR/suggest-import-ice-issue-127302.rs:7:9 | LL | crate::config => {} | ^^^^^^^^^^^^^ not a unit struct, unit variant or constant error: aborting due to 4 previous errors Some errors have detailed explanations: E0425, E0532, E0583. For more information about an error, try `rustc --explain E0425`. ", "commid": "rust_pr_127310"}], "negative_passages": []} {"query_id": "q-en-rust-eb7a1ea219dd01e52e217d471571566fa031423cddcd67b8e84229deaf8bf730", "query": " $DIR/suggest-import-issue-120074.rs:12:35 | LL | println!(\"Hello, {}!\", crate::bar::do_the_thing); | ^^^ unresolved import | help: a similar path exists | LL | println!(\"Hello, {}!\", crate::foo::bar::do_the_thing); | ~~~~~~~~ help: consider importing this module | LL + use foo::bar; | help: if you import `bar`, refer to it directly | LL - println!(\"Hello, {}!\", crate::bar::do_the_thing); LL + println!(\"Hello, {}!\", bar::do_the_thing); | error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0433`. ", "commid": "rust_pr_127310"}], "negative_passages": []} {"query_id": "q-en-rust-eb7a1ea219dd01e52e217d471571566fa031423cddcd67b8e84229deaf8bf730", "query": " $DIR/suggest-import-issue-120074.rs:10:35 | LL | println!(\"Hello, {}!\", crate::bar::do_the_thing); | ^^^ unresolved import | help: a similar path exists | LL | println!(\"Hello, {}!\", crate::foo::bar::do_the_thing); | ~~~~~~~~ help: consider importing this module | LL + use foo::bar; | help: if you import `bar`, refer to it directly | LL - println!(\"Hello, {}!\", crate::bar::do_the_thing); LL + println!(\"Hello, {}!\", bar::do_the_thing); | error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0433`. 
", "commid": "rust_pr_127310"}], "negative_passages": []} {"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. 
types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . 
I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-baf02ef8ba73b038f865f4c9b85457654fdd48ca272ad2302de2bd5784581f5a", "text": "} fn struct_all_fields_are_public(tcx: TyCtxt<'_>, id: LocalDefId) -> bool { // treat PhantomData and positional ZST as public, // we don't want to lint types which only have them, // cause it's a common way to use such types to check things like well-formedness tcx.adt_def(id).all_fields().all(|field| { let adt_def = tcx.adt_def(id); // skip types contain fields of unit and never type, // it's usually intentional to make the type not constructible let not_require_constructor = adt_def.all_fields().any(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); if field_type.is_phantom_data() { return true; } let is_positional = field.name.as_str().starts_with(|c: char| c.is_ascii_digit()); if is_positional && tcx .layout_of(tcx.param_env(field.did).and(field_type)) .map_or(true, |layout| layout.is_zst()) { return true; } field.vis.is_public() }) field_type.is_unit() || field_type.is_never() }); not_require_constructor || adt_def.all_fields().all(|field| { let field_type = tcx.type_of(field.did).instantiate_identity(); // skip fields of PhantomData, // cause it's a common way to check things like well-formedness if field_type.is_phantom_data() { return true; } field.vis.is_public() }) } /// check struct and its fields are public or not,", "commid": "rust_pr_128104"}], "negative_passages": []} 
{"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. 
The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. 
And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-df4373c2782bcc924202fa885b51dc980e0fea46448d7254a61d1435ca835dc3", "text": "#![forbid(dead_code)] #[derive(Debug)] pub struct Whatever { //~ ERROR struct `Whatever` is never constructed pub struct Whatever { pub field0: (), field1: (), field1: (), //~ ERROR fields `field1`, `field2`, `field3`, and `field4` are never read field2: (), field3: (), field4: (),", "commid": "rust_pr_128104"}], "negative_passages": []} {"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. 
types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. 
So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. 
So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-7d49cb0c1c28911544ef219b4dd6e3e21f6fcc4ce45880631ff8903bd807697f", "text": " error: struct `Whatever` is never constructed --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:4:12 error: fields `field1`, `field2`, `field3`, and `field4` are never read --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:6:5 | LL | pub struct Whatever { | ^^^^^^^^ | -------- fields in this struct LL | pub field0: (), LL | field1: (), | ^^^^^^ LL | field2: (), | ^^^^^^ LL | field3: (), | ^^^^^^ LL | field4: (), | ^^^^^^ | = note: `Whatever` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis note: the lint level is defined here --> $DIR/clone-debug-dead-code-in-the-same-struct.rs:1:11 |", "commid": "rust_pr_128104"}], "negative_passages": []} {"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in real world? For the incoherent behavior, we also have: This because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on inhabited types, i.e. 
types that are only meant for type-level usage (for example to implement a trait). I agree we should probably categorize this issue as a false positive of a new lint rather than a regression (technically). Thanks, I can see the idea behind it. I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead--code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. 
So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. 
So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-0d2558f6a742582c622a646607fbb198ef44d9a075f9f76b9ac44bbd9ac2621c", "text": " #![feature(never_type)] #![deny(dead_code)] pub struct T1(!); pub struct T2(()); pub struct T3(std::marker::PhantomData); pub struct T4 { _x: !, } pub struct T5 { _x: !, _y: X, } pub struct T6 { _x: (), } pub struct T7 { _x: (), _y: X, } pub struct T8 { _x: std::marker::PhantomData, } pub struct T9 { //~ ERROR struct `T9` is never constructed _x: std::marker::PhantomData, _y: i32, } fn main() {} ", "commid": "rust_pr_128104"}], "negative_passages": []} {"query_id": "q-en-rust-618a64eeb156c9d000d922e4741d61762a493f4016a9dec1460e9214789d0eae", "query": "The struct is meant to not be constructible. Also the behavior is incoherent between tuple structs (correct behavior) and named structs (incorrect behavior). Also, this is a regression, this didn't use to be the case. No response No response\nlabel +F-never_Type +D-inconsistent\nWe will also get such warnings for private structs with fields of never type (): The struct is never constructed although it is not constructible. This behavior is expected and is not a regression. Did you meet any cases using this way in the real world? For the incoherent behavior, we also have: This is because we only skip positional ZST (PhantomData and generics), this policy is also applied for never read fields:\nCan you elaborate in which sense it is expected? I can understand that it is expected given the current implementation of Rust. It seems between stable and nightly a new lint was implemented. This lint is expected for inhabited types. However it has false positives on uninhabited types, i.e. 
I'm not convinced about it though, but it doesn't bother me either. So let's just keep this issue focused on the false positive rather than the inconsistent behavior.\nHow do you think about private structs with fields of never type?\nI would consider it the same as public structs, because they can still be \"constructed\" at the type-level. So they are dynamic dead-code but not dead-code since they are used statically. It would be dead-code though if the type is never used (both in dynamic code and static code, i.e. types).\nActually, thinking about this issue, I don't think this is a false positive only for the never type. I think the whole lint is wrong. The fact that the type is empty is just a proof that constructing it at term-level is not necessary for the type to be \"alive\" (aka usable). This could also be the case with non-empty types. A lot of people just use unit types for type-level usage only (instead of empty types). So my conclusion would be that linting a type to be dead-code because it is never constructed is brittle as it can't be sure that the type is not meant to be constructed at term-level (and thus only used at type-level). Adding is not a solution because if the type is actually not used at all (including in types) then we actually want the warning. Here is an example: The same thing with a private struct: And now actual dead code:\nI think a better way is: is usually used to do such things like just a proof. Maybe we can add a help for this case.\nBut I agree that we shouldn't emit warnings for pub structs with any fields of never type (and also unit type), this is an intentional behavior.\nOh yes good point. So all good to proceed with adding never type to the list of special types.\nThanks a lot for the quick fix! Once it hits nightly, I'll give it a try. I'm a bit afraid though that this won't fix the underlying problem of understanding which types are not meant to exist at runtime. 
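To make the discussion above concrete, here is a minimal sketch on stable Rust of a type that is never constructed at term level yet is clearly not dead code, because it is used at the type level through a trait impl. All names here are hypothetical, and the empty enum `Never` stands in for the nightly-only never type `!`:

```rust
use std::marker::PhantomData;

// Stable stand-in for `!`: an empty enum has no values, so any struct
// containing it can never be constructed at term level.
enum Never {}

// Type-level-only marker: it carries a parameter `T` but is unconstructible.
pub struct Tag<T>(PhantomData<T>, Never);

trait Describe {
    fn describe() -> &'static str;
}

impl<T> Describe for Tag<T> {
    fn describe() -> &'static str {
        "used purely at the type level"
    }
}

fn main() {
    // `Tag` is "alive" even though no value of it ever exists.
    println!("{}", <Tag<u32> as Describe>::describe());
}
```

A lint that keys only on "never constructed" would flag `Tag` here, even though removing it would break the program.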
For example, sometimes instead of using the never type, I also use empty enums such that I can give a name and separate identity to that type. Let's see if it's an issue. If yes, I guess ultimately there would be 2 options: Just use in those cases. Introduce a trait (or better name) for types that are not meant to exist at runtime like unit, never, and phantom types, but could be extended with user types like empty enums. I'll ping this thread again if I actually hit this issue.\nI could test it (using nightly-2024-08-01) and . I just had to change a to in a crate where I didn't have already. Ultimately, is going to be a type alias for so maybe it's not worth supporting it. And thinking about empty enums, this should never be an issue for my particular usage, which is to build empty types. Because either the type takes no parameters in which case I use an empty enum directly. Or it takes parameters and I have to use a struct with fields, and to make the struct empty I also have to add a never field. So my concern in the previous message is not an issue for me.", "positive_passages": [{"docid": "doc-en-rust-a74ac354ae2d9b5f3dc63346db0c6ff4a68a42ad1ea35a23ca51f7141b1c63b2", "text": " error: struct `T9` is never constructed --> $DIR/unconstructible-pub-struct.rs:30:12 | LL | pub struct T9 { | ^^ | note: the lint level is defined here --> $DIR/unconstructible-pub-struct.rs:2:9 | LL | #![deny(dead_code)] | ^^^^^^^^^ error: aborting due to 1 previous error ", "commid": "rust_pr_128104"}], "negative_passages": []} {"query_id": "q-en-rust-39b479843bd281aed80f459adc0cce2fce8a63b9afa1b90f5aec5eaa619384d4", "query": "I do not have minimised code. This was a sudden CI build failure on nightly for aarch64-unknown-linux-musl () Note that this is a cross compile using cross-rs. It fails on rust version 1.82.0-nightly ( 2024-07-29) but works on 1.82.0-nightly ( 2024-07-28). Needless to say it also works perfectly fine on stable. 
The error is reproducible on rebuild and locally with cross-rs. It is not reproducible with cargo-zigbuild interestingly enough. I also checked that the version of cross-rs didn't change between the successful and failing builds (no new releases of that project since february). The only change between the successful and failing CI runs were a typo fix to an unrelated CI pipeline (the release one, so it didn't even run yet). As such I'm forced to conclude that the blame lies with Rustc. Steps to reproduce: I expected to see this happen: Compilation successful (as it is with stable for same target, or with -gnu and nightly) Instead, this happened () It most recently worked on: 1.82.0-nightly ( 2024-07-28) (or stable 1.80.0 or beta 1.81.0, take your pick for triaging) :\ncc (of cross-rs fame) in case he has any insight on this regression\nI have hit the same issue in a similar situation. Cross compiling to aarch64-unknown-linux-musl from x86_64 linux in a Github CI pipeline using cross-rs. Link to pipeline failing logs:\nSeems rune isn't needed based on the comment by Updated the title.\nThis is probably\nWG-prioritization assigning priority (). label -I-prioritize +P-high\nYeah that's probably it. We used to build the C version of those symbols but switched to Rust versions with the compiler builtins update. Unfortunately we can't enable them on all platforms because LLVM has crashes, and the logic that would allow optional configuration is apparently not working. After that part is fixed, we will need to enable the symbols on all platforms where is f128 (which includes aarch64), since these errors probably come from here I think anyone who needs a quick workaround can add to their dependencies to get the symbols directly, as described near the top of Note that this requires nightly.\nIs it possible to add a dependency conditionally based on if it is nightly? (In a specific range of dates even?) 
I cannot and will not depend on nightly, but I do test my code on nightly in CI (in order to find breakages like this early). For now I just disabled that nightly build on aarch64 musl.\nSame for me. Not using cross-rs though. Logs are here:\nSame for me, both using as host toolchain and cross compiling.\nLooks like it is happening now too for x86_64.\nNot for my projects, so you should probably provide a reproducer on x86-64 GNU if possible, since that is a tier 1 platform (unlike Aarch64 on Musl). Or is this about some other x8664 variation that is also tier 2?\nWoops, sorry, the x86_64 build failed too, but then i looked at the aarch64 build logs :) which still fail.", "positive_passages": [{"docid": "doc-en-rust-f57110111a32d0d828a7fe850a3896b3b9c78a94f656934e9ce7fc08a2b33290", "text": "[[package]] name = \"compiler_builtins\" version = \"0.1.114\" version = \"0.1.117\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"eb58b199190fcfe0846f55a3b545cd6b07a34bdd5930a476ff856f3ebcc5558a\" checksum = \"a91dae36d82fe12621dfb5b596d7db766187747749b22e33ac068e1bfc356f4a\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_128691"}], "negative_passages": []} {"query_id": "q-en-rust-39b479843bd281aed80f459adc0cce2fce8a63b9afa1b90f5aec5eaa619384d4", "query": "I do not have minimised code. This was a sudden CI build failure on nightly for aarch64-unknown-linux-musl () Note that this is a cross compile using cross-rs. It fails on rust version 1.82.0-nightly ( 2024-07-29) but works on 1.82.0-nightly ( 2024-07-28). Needless to say it also works perfectly fine on stable. The error is reproducible on rebuild and locally with cross-rs. It is not reproducible with cargo-zigbuild interestingly enough. I also checked that the version of cross-rs didn't change between the successful and failing builds (no new releases of that project since february). 
The only change between the successful and failing CI runs were a typo fix to an unrelated CI pipeline (the release one, so it didn't even run yet). As such I'm forced to conclude that the blame lies with Rustc. Steps to reproduce: I expected to see this happen: Compilation successful (as it is with stable for same target, or with -gnu and nightly) Instead, this happened () It most recently worked on: 1.82.0-nightly ( 2024-07-28) (or stable 1.80.0 or beta 1.81.0, take your pick for triaging) :\ncc (of cross-rs fame) in case he has any insight on this regression\nI have hit the same issue in a similar situation. Cross compiling to aarch64-unknown-linux-musl from x86_64 linux in a Github CI pipeline using cross-rs. Link to pipeline failing logs:\nSeems rune isn't needed based on the comment by Updated the title.\nThis is probably\nWG-prioritization assigning priority (). label -I-prioritize +P-high\nYeah that's probably it. We used to build the C version of those symbols but switched to Rust versions with the compiler builtins update. Unfortunately we can't enable them on all platforms because LLVM has crashes, and the logic that would allow optional configuration is apparently not working. After that part is fixed, we will need to enable the symbols on all platforms where is f128 (which includes aarch64), since these errors probably come from here I think anyone who needs a quick workaround can add to their dependencies to get the symbols directly, as described near the top of Note that this requires nightly.\nIs it possible to add a dependency conditionally based on if it is nightly? (In a specific range of dates even?) I cannot and will not depend on nightly, but I do test my code on nightly in CI (in order to find breakages like this early). For now I just disabled that nightly build on aarch64 musl.\nSame for me. Not using cross-rs though. 
Logs are here:\nSame for me, both using as host toolchain and cross compiling.\nLooks like it is happening now too for x86_64.\nNot for my projects, so you should probably provide a reproducer on x86-64 GNU if possible, since that is a tier 1 platform (unlike Aarch64 on Musl). Or is this about some other x8664 variation that is also tier 2?\nWoops, sorry, the x86_64 build failed too, but then i looked at the aarch64 build logs :) which still fail.", "positive_passages": [{"docid": "doc-en-rust-458097da950ec73b1580b21c6e2853992946e901d33970069de8e47ddd9f26b4", "text": "[dependencies] core = { path = \"../core\" } compiler_builtins = { version = \"0.1.114\", features = ['rustc-dep-of-std'] } [target.'cfg(not(any(target_arch = \"aarch64\", target_arch = \"x86\", target_arch = \"x86_64\")))'.dependencies] compiler_builtins = { version = \"0.1.114\", features = [\"no-f16-f128\"] } compiler_builtins = { version = \"0.1.117\", features = ['rustc-dep-of-std'] } [dev-dependencies] rand = { version = \"0.8.5\", default-features = false, features = [\"alloc\"] }", "commid": "rust_pr_128691"}], "negative_passages": []} {"query_id": "q-en-rust-39b479843bd281aed80f459adc0cce2fce8a63b9afa1b90f5aec5eaa619384d4", "query": "I do not have minimised code. This was a sudden CI build failure on nightly for aarch64-unknown-linux-musl () Note that this is a cross compile using cross-rs. It fails on rust version 1.82.0-nightly ( 2024-07-29) but works on 1.82.0-nightly ( 2024-07-28). Needless to say it also works perfectly fine on stable. The error is reproducible on rebuild and locally with cross-rs. It is not reproducible with cargo-zigbuild interestingly enough. I also checked that the version of cross-rs didn't change between the successful and failing builds (no new releases of that project since february). The only change between the successful and failing CI runs were a typo fix to an unrelated CI pipeline (the release one, so it didn't even run yet). 
As such I'm forced to conclude that the blame lies with Rustc. Steps to reproduce: I expected to see this happen: Compilation successful (as it is with stable for same target, or with -gnu and nightly) Instead, this happened () It most recently worked on: 1.82.0-nightly ( 2024-07-28) (or stable 1.80.0 or beta 1.81.0, take your pick for triaging) :\ncc (of cross-rs fame) in case he has any insight on this regression\nI have hit the same issue in a similar situation. Cross compiling to aarch64-unknown-linux-musl from x86_64 linux in a Github CI pipeline using cross-rs. Link to pipeline failing logs:\nSeems rune isn't needed based on the comment by Updated the title.\nThis is probably\nWG-prioritization assigning priority (). label -I-prioritize +P-high\nYeah that's probably it. We used to build the C version of those symbols but switched to Rust versions with the compiler builtins update. Unfortunately we can't enable them on all platforms because LLVM has crashes, and the logic that would allow optional configuration is apparently not working. After that part is fixed, we will need to enable the symbols on all platforms where is f128 (which includes aarch64), since these errors probably come from here I think anyone who needs a quick workaround can add to their dependencies to get the symbols directly, as described near the top of Note that this requires nightly.\nIs it possible to add a dependency conditionally based on if it is nightly? (In a specific range of dates even?) I cannot and will not depend on nightly, but I do test my code on nightly in CI (in order to find breakages like this early). For now I just disabled that nightly build on aarch64 musl.\nSame for me. Not using cross-rs though. 
Logs are here:\nSame for me, both using as host toolchain and cross compiling.\nLooks like it is happening now too for x86_64.\nNot for my projects, so you should probably provide a reproducer on x86-64 GNU if possible, since that is a tier 1 platform (unlike Aarch64 on Musl). Or is this about some other x8664 variation that is also tier 2?\nWoops, sorry, the x86_64 build failed too, but then i looked at the aarch64 build logs :) which still fail.", "positive_passages": [{"docid": "doc-en-rust-c70c61db5e9da4571b1e639badafd453f8a77b508d9c7f0a20bd5aa1cec20d8d", "text": "panic_unwind = { path = \"../panic_unwind\", optional = true } panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\", public = true } compiler_builtins = { version = \"0.1.114\" } compiler_builtins = { version = \"0.1.117\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" } hashbrown = { version = \"0.14\", default-features = false, features = [", "commid": "rust_pr_128691"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . .suggestion = use `Box::new()` instead parse_box_syntax_removed_suggestion = use `Box::new()` instead parse_cannot_be_raw_ident = `{$ident}` cannot be a raw identifier", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . 
pub struct BoxSyntaxRemoved<'a> { pub struct BoxSyntaxRemoved { #[primary_span] #[suggestion( code = \"Box::new({code})\", applicability = \"machine-applicable\", style = \"verbose\" )] pub span: Span, pub code: &'a str, #[subdiagnostic] pub sugg: AddBoxNew, } #[derive(Subdiagnostic)] #[multipart_suggestion( parse_box_syntax_removed_suggestion, applicability = \"machine-applicable\", style = \"verbose\" )] pub struct AddBoxNew { #[suggestion_part(code = \"Box::new(\")] pub box_kw_and_lo: Span, #[suggestion_part(code = \")\")] pub hi: Span, } #[derive(Diagnostic)]", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . PResult<'a, (Span, ExprKind)> { let (span, _) = self.parse_expr_prefix_common(box_kw)?; let inner_span = span.with_lo(box_kw.hi()); let code = self.psess.source_map().span_to_snippet(inner_span).unwrap(); let guar = self.dcx().emit_err(errors::BoxSyntaxRemoved { span: span, code: code.trim() }); let (span, expr) = self.parse_expr_prefix_common(box_kw)?; // Make a multipart suggestion instead of `span_to_snippet` in case source isn't available let box_kw_and_lo = box_kw.until(self.interpolated_or_expr_span(&expr)); let hi = span.shrink_to_hi(); let sugg = errors::AddBoxNew { box_kw_and_lo, hi }; let guar = self.dcx().emit_err(errors::BoxSyntaxRemoved { span, sugg }); Ok((span, ExprKind::Err(guar))) }", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . 
| ~~~~~~~~~~~~ | ~~~~~~~~~ + error: `box_syntax` has been removed --> $DIR/removed-syntax-box.rs:10:13", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . | ~~~~~~~~~~~ | ~~~~~~~~~ + error: `box_syntax` has been removed --> $DIR/removed-syntax-box.rs:11:13", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | ~~~~~~~~~ + error: `box_syntax` has been removed --> $DIR/removed-syntax-box.rs:12:13", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . | ~~~~~~~~~~~~~~~~~ | ~~~~~~~~~ + error: `box_syntax` has been removed --> $DIR/removed-syntax-box.rs:13:22", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-7d1a44241c0b1b81e77c871231dc25d1d0e136681c52dc9962b5a1c460da767c", "query": "Found by grepping through the compiler via while reviewing an unrelated PR. File : File : : . or move source file . : . 
= Box::new(()); | ~~~~~~~~~~~~ | ~~~~~~~~~ + error: aborting due to 5 previous errors", "commid": "rust_pr_128496"}], "negative_passages": []} {"query_id": "q-en-rust-ee2e76fa3f7f37b887c4872309d16214ee91a201697970e4a57f1a2ff2110969", "query": " $DIR/inline-tainted-body.rs:7:21 | LL | pub struct WeakOnce(); | ^ unused type parameter | = help: consider removing `T`, referring to it in a field, or using a marker such as `PhantomData` = help: if you intended `T` to be a const parameter, use `const T: /* Type */` instead error: functions with the \"rust-call\" ABI must take a single non-self tuple argument --> $DIR/inline-tainted-body.rs:11:35 | LL | extern \"rust-call\" fn try_get(&self) -> Option> {} | ^^^^^ error[E0308]: mismatched types --> $DIR/inline-tainted-body.rs:11:45 | LL | extern \"rust-call\" fn try_get(&self) -> Option> {} | ------- ^^^^^^^^^^^^^^ expected `Option>`, found `()` | | | implicitly returns `()` as its body has no tail or `return` expression | = note: expected enum `Option>` found unit type `()` error: aborting due to 3 previous errors Some errors have detailed explanations: E0308, E0392. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_128616"}], "negative_passages": []} {"query_id": "q-en-rust-adf668ef77fe46470cf478c31ac88114f86def44be52d776c44ee99ad60314a4", "query": "Right now using to manage the rust-lang/rust repo locally causes to fail: This is a result of which is responsible for adding the character to the failing test. While I would love to get that support to jj upstream I'm assuming it is non-trivial and I'd like a better solution in the short term so I can continue to use while contributing to the compiler. 
suggested I use in the meantime which I'm fine with but I'd like a way to set this in my or some equivalent mechanism that doesn't require manually adding this set of arguments to every invocation of I make.\nnote this is essentially the same as I actually would like an even more general solution, where the flags and configs are completely interchangeable and you can specify either in either place, but a solution just for seems ok too. label T-bootstrap A-contributor-roadblock\nSo it turns out that won't work as a short term fix even because does not work with tidy atm. For now I'm going with but I expect this to break frequently and be a source of annoyance so I'll be looking towards fixing this soon if it does in fact end up being a problem.\nre crlf, as a source control expert I'd recommend never using any kind of autoconversion -- it is a bit of a nightmare to work with in too many cases. Rather, I'd consider checking in the file with CRLFs, and disabling autoconversion so files are always checked out as-is. (Another option is to effectively treat the file as binary content, e.g. use some other means of encoding the file, such as base64 or putting it in a tarball. But I don't know how practical that is)", "positive_passages": [{"docid": "doc-en-rust-8c50c876adfd43f9eb2521ad8b6295d8ba674e4180ebba07ca1b48e12c0ff4be", "text": " warning-crlf.rs eol=crlf warning-crlf.rs -text ", "commid": "rust_pr_128755"}], "negative_passages": []} {"query_id": "q-en-rust-adf668ef77fe46470cf478c31ac88114f86def44be52d776c44ee99ad60314a4", "query": "Right now using to manage the rust-lang/rust repo locally causes to fail: This is a result of which is responsible for adding the character to the failing test. While I would love to get that support added to jj upstream, I'm assuming it is non-trivial and I'd like a better solution in the short term so I can continue to use while contributing to the compiler. 
suggested I use in the meantime which I'm fine with but I'd like a way to set this in my or some equivalent mechanism that doesn't require manually adding this set of arguments to every invocation of I make.\nnote this is essentially the same as I actually would like an even more general solution, where the flags and configs are completely interchangeable and you can specify either in either place, but a solution just for seems ok too. label T-bootstrap A-contributor-roadblock\nSo it turns out that won't work as a short term fix even because does not work with tidy atm. For now I'm going with but I expect this to break frequently and be a source of annoyance so I'll be looking towards fixing this soon if it does in fact end up being a problem.\nre crlf, as a source control expert I'd recommend never using any kind of autoconversion -- it is a bit of a nightmare to work with in too many cases. Rather, I'd consider checking in the file with CRLFs, and disabling autoconversion so files are always checked out as-is. (Another option is to effectively treat the file as binary content, e.g. use some other means of encoding the file, such as base64 or putting it in a tarball. But I don't know how practical that is)", "positive_passages": [{"docid": "doc-en-rust-d7e7843b75e4d048bc140a8fc94e037675fe83706daf54565e7ebfd3939a7fbe", "text": " // ignore-tidy-cr //@ check-pass // This file checks the spans of intra-link warnings in a file with CRLF line endings. The // .gitattributes file in this directory should enforce it. 
/// [error] pub struct A; //~^^ WARNING `error` /// /// docs [error1] //~^ WARNING `error1` /// docs [error2] /// pub struct B; //~^^^ WARNING `error2` /** * This is a multi-line comment. * * It also has an [error]. */ pub struct C; //~^^^ WARNING `error` ", "commid": "rust_pr_128755"}], "negative_passages": []} {"query_id": "q-en-rust-597ef8939b679433bc04ca082ead1a6e979b471e6d5cfd80d8692c02d73316fc", "query": " $DIR/suggest-arg-comma-delete-ice.rs:15:14 | LL | main(rahh\uff09; | ^^ | help: Unicode character '\uff09' (Fullwidth Right Parenthesis) looks like ')' (Right Parenthesis), but it is not | LL | main(rahh); | ~ error[E0425]: cannot find value `rahh` in this scope --> $DIR/suggest-arg-comma-delete-ice.rs:15:10 | LL | main(rahh\uff09; | ^^^^ not found in this scope error[E0061]: this function takes 0 arguments but 1 argument was supplied --> $DIR/suggest-arg-comma-delete-ice.rs:15:5 | LL | main(rahh\uff09; | ^^^^ ---- unexpected argument | note: function defined here --> $DIR/suggest-arg-comma-delete-ice.rs:11:4 | LL | fn main() { | ^^^^ help: remove the extra argument | LL - main(rahh\uff09; LL + main(\uff09; | error: aborting due to 3 previous errors Some errors have detailed explanations: E0061, E0425. For more information about an error, try `rustc --explain E0061`. 
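The `.gitattributes` diff in the passage above implements the "check in the CRLF bytes and disable conversion" approach recommended in the discussion: the entry moves from `eol=crlf` (convert on checkout) to `-text` (no conversion at all). A commented sketch of the two entry forms:

```
# Before: force CRLF at checkout time (fragile when tools rewrite the file as LF)
# warning-crlf.rs eol=crlf

# After: disable all end-of-line conversion; the CRLF bytes committed to
# the repository are checked out verbatim on every platform and VCS frontend.
warning-crlf.rs -text
```

With `-text`, tools like jj that lack CRLF autoconversion support see exactly the same bytes as Git does.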
", "commid": "rust_pr_128864"}], "negative_passages": []} {"query_id": "q-en-rust-c3be9ff8c7b370c66d2cc2e5fc53e059aa7da85c6e30fdb436f526a8522bd11f", "query": " $DIR/suggest-remove-compount-assign-let-ice.rs:13:11 | LL | let x \u2796= 1; | ^^ | help: Unicode character '\u2796' (Heavy Minus Sign) looks like '-' (Minus/Hyphen), but it is not | LL | let x -= 1; | ~ error: can't reassign to an uninitialized variable --> $DIR/suggest-remove-compount-assign-let-ice.rs:13:11 | LL | let x \u2796= 1; | ^^^ | = help: if you meant to overwrite, remove the `let` binding help: initialize the variable | LL - let x \u2796= 1; LL + let x = 1; | error: aborting due to 2 previous errors ", "commid": "rust_pr_128865"}], "negative_passages": []} {"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : // gdb-check:$1 = core::option::Option<&u32>::Some(0x12345678) // gdb-check:$1 = core::option::Option<&u32>::Some(0x[...]) // gdb-command:print none // gdb-check:$2 = core::option::Option<&u32>::None // gdb-command:print full // gdb-check:$3 = option_like_enum::MoreFields::Full(454545, 0x87654321, 9988) // gdb-check:$3 = option_like_enum::MoreFields::Full(454545, 0x[...], 9988) // gdb-command:print empty_gdb.discr // gdb-check:$4 = (*mut isize) 0x1 // gdb-command:print empty // gdb-check:$4 = option_like_enum::MoreFields::Empty // gdb-command:print droid // gdb-check:$5 = option_like_enum::NamedFields::Droid{id: 675675, range: 10000001, internals: 0x43218765} // gdb-check:$5 = option_like_enum::NamedFields::Droid{id: 675675, range: 10000001, internals: 0x[...]} // gdb-command:print void_droid_gdb.internals // gdb-check:$6 = (*mut isize) 0x1 // gdb-command:print void_droid // gdb-check:$6 = option_like_enum::NamedFields::Void // gdb-command:print nested_non_zero_yep // gdb-check:$7 = option_like_enum::NestedNonZero::Yep(10.5, 
option_like_enum::NestedNonZeroField {a: 10, b: 20, c: 0x[...]})", "commid": "rust_pr_129672"}], "negative_passages": []} {"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : // lldb-check:[...] Some(&0x12345678) // lldb-check:[...] Some(&0x[...]) // lldb-command:v none // lldb-check:[...] None // lldb-command:v full // lldb-check:[...] Full(454545, &0x87654321, 9988) // lldb-check:[...] Full(454545, &0x[...], 9988) // lldb-command:v empty // lldb-check:[...] Empty // lldb-command:v droid // lldb-check:[...] Droid { id: 675675, range: 10000001, internals: &0x43218765 } // lldb-check:[...] Droid { id: 675675, range: 10000001, internals: &0x[...] } // lldb-command:v void_droid // lldb-check:[...] Void", "commid": "rust_pr_129672"}], "negative_passages": []} {"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. However, there are three failures that have been ignored in : // If the non-empty variant contains a single non-nullable pointer than the whole // item is represented as just a pointer and not wrapped in a struct. // Unfortunately (for these test cases) the content of the non-discriminant fields // in the null-case is not defined. So we just read the discriminator field in // this case (by casting the value to a memory-equivalent struct). enum MoreFields<'a> { Full(u32, &'a isize, i16),", "commid": "rust_pr_129672"}], "negative_passages": []} {"query_id": "q-en-rust-1b2872af88c774857e577873fafa7afe90711e18518c693fd389f4a7c836e231", "query": "Since GDB used on CI is recent enough to run the new tests. 
However, there are three failures that have been ignored in : = Some(\"abc\"); let none_str: Option<&'static str> = None; let some: Option<&u32> = Some(unsafe { std::mem::transmute(0x12345678_usize) }); let some: Option<&u32> = Some(&1234); let none: Option<&u32> = None; let full = MoreFields::Full(454545, unsafe { std::mem::transmute(0x87654321_usize) }, 9988); let full = MoreFields::Full(454545, &1234, 9988); let empty = MoreFields::Empty; let empty_gdb: &MoreFieldsRepr = unsafe { std::mem::transmute(&MoreFields::Empty) }; let droid = NamedFields::Droid { id: 675675, range: 10000001, internals: unsafe { std::mem::transmute(0x43218765_usize) } internals: &1234, }; let void_droid = NamedFields::Void; let void_droid_gdb: &NamedFieldsRepr = unsafe { std::mem::transmute(&NamedFields::Void) }; let x = 'x'; let nested_non_zero_yep = NestedNonZero::Yep( 10.5, NestedNonZeroField { a: 10, b: 20, c: &x c: &'x', }); let nested_non_zero_nope = NestedNonZero::Nope; zzz(); // #break", "commid": "rust_pr_129672"}], "negative_passages": []} {"query_id": "q-en-rust-e7a320c98519d6fc4b2fd32e4b6eef4cdfa5f12d2bc2a3a5c7d0e5b9670e2657", "query": " $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:9:13 | LL | let _ = (#[inline] #[allow(dead_code)] || println!(\"Hello!\")); | ^ ^ | note: the lint level is defined here --> $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:6:9 | LL | #![deny(unused_parens)] | ^^^^^^^^^^^^^ help: remove these parentheses | LL - let _ = (#[inline] #[allow(dead_code)] || println!(\"Hello!\")); LL + let _ = #[inline] #[allow(dead_code)] || println!(\"Hello!\"); | error: unnecessary parentheses around block return value --> $DIR/unused-parens-for-stmt-expr-attributes-issue-129833.rs:10:5 | LL | (#[inline] #[allow(dead_code)] || println!(\"Hello!\")) | ^ ^ | help: remove these parentheses | LL - (#[inline] #[allow(dead_code)] || println!(\"Hello!\")) LL + #[inline] #[allow(dead_code)] || println!(\"Hello!\") | error: aborting due to 2 previous 
errors ", "commid": "rust_pr_131546"}], "negative_passages": []} {"query_id": "q-en-rust-5923fe45b718fab55ab35d21c8370569fc6087fac5f9f0992fc6d95dce139465", "query": "Ferrocene CI has detected that this test was broken by rust-lang/rust . Specifically, by the change in , shown below: Test output: Reverting that single line diff fixes the test for the QNX7.1 targets, e.g. and can the change be reverted or does QNX7.0 need to use , instead of , here? from looking at libc, it appears that both and are available on QNX7.0 so reverting the change should at least not cause compilation or linking errors. if the case is the latter, then we should use in addition to .\nthanks for the report! I will need to re-test it for QNX 7.0, to see if it is a 7.0 requirement (i don't recall why this was required initially, but clearly I ran into some issues with it). I also plan to do a similar automation to ensure 7.0 builds run without problems on our hardware. Is there a list of tests you see failing on 7.1 that are OK to ignore? I see a list - is this the most relevant one? Thx!\nwe do not ignore any single unit test at the moment and we run library (e.g. libstd) tests as well as (cross) compilation tests using . we don't pass to but instead pass the list of test suites as arguments. we currently run these test suites:\nWG-prioritization assigning priority (). label -I-prioritize +P-low\nfixed in", "positive_passages": [{"docid": "doc-en-rust-858956bfbeacc4a26c50bcf4452304c01e84baadaa75e070afdc168fcca7090c", "text": "run_path_with_cstr(original, &|original| { run_path_with_cstr(link, &|link| { cfg_if::cfg_if! 
{ if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_os = \"nto\"))] { if #[cfg(any(target_os = \"vxworks\", target_os = \"redox\", target_os = \"android\", target_os = \"espidf\", target_os = \"horizon\", target_os = \"vita\", target_env = \"nto70\"))] { // VxWorks, Redox and ESP-IDF lack `linkat`, so use `link` instead. POSIX leaves // it implementation-defined whether `link` follows symlinks, so rely on the // `symlink_hard_link` test in library/std/src/fs/tests.rs to check the behavior.", "commid": "rust_pr_130248"}], "negative_passages": []} {"query_id": "q-en-rust-a4b53c3645bc265e33649832a59ad749f3726a8fed73b58b68318620ba8777f9", "query": " $DIR/global-cache-and-parallel-frontend.rs:15:17 | LL | #[derive(Clone, Eq)] | ^^ the trait `Clone` is not implemented for `T`, which is required by `Struct: PartialEq` | note: required for `Struct` to implement `PartialEq` --> $DIR/global-cache-and-parallel-frontend.rs:18:19 | LL | impl PartialEq for Struct | ----- ^^^^^^^^^^^^ ^^^^^^^^^ | | | unsatisfied trait bound introduced here note: required by a bound in `Eq` --> $SRC_DIR/core/src/cmp.rs:LL:COL = note: this error originates in the derive macro `Eq` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider restricting type parameter `T` | LL | pub struct Struct(T); | +++++++++++++++++++ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_130094"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-c06310be93043158632822254ac7def4489cba1042635e23fa1686770fd84153", "text": "} } pub fn is_async(self) -> bool { matches!(self, CoroutineKind::Async { .. 
}) } pub fn is_gen(self) -> bool { matches!(self, CoroutineKind::Gen { .. }) pub fn as_str(self) -> &'static str { match self { CoroutineKind::Async { .. } => \"async\", CoroutineKind::Gen { .. } => \"gen\", CoroutineKind::AsyncGen { .. } => \"async gen\", } } pub fn closure_id(self) -> NodeId {", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-bfc1eabfac7834a3e820eced388b8602562a46673f8aee8c6701f0f9a76589c5", "text": "ast_passes_bound_in_context = bounds on `type`s in {$ctx} have no effect ast_passes_const_and_async = functions cannot be both `const` and `async` .const = `const` because of this .async = `async` because of this .label = {\"\"} ast_passes_const_and_c_variadic = functions cannot be both `const` and C-variadic .const = `const` because of this .variadic = C-variadic because of this ast_passes_const_and_coroutine = functions cannot be both `const` and `{$coroutine_kind}` .const = `const` because of this .coroutine = `{$coroutine_kind}` because of this .label = {\"\"} ast_passes_const_bound_trait_object = const trait bounds are not allowed in trait object types ast_passes_const_without_body =", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-1dbb75c1a7a05e588bd321a5a3aa81d5201d64f51e69e9fdee68c24fcead2f51", "text": "// Functions cannot both be `const async` or `const gen` if let Some(&FnHeader { constness: Const::Yes(cspan), constness: Const::Yes(const_span), coroutine_kind: Some(coroutine_kind), .. 
}) = fk.header() { let aspan = match coroutine_kind { CoroutineKind::Async { span: aspan, .. } | CoroutineKind::Gen { span: aspan, .. } | CoroutineKind::AsyncGen { span: aspan, .. } => aspan, }; // FIXME(gen_blocks): Report a different error for `const gen` self.dcx().emit_err(errors::ConstAndAsync { spans: vec![cspan, aspan], cspan, aspan, self.dcx().emit_err(errors::ConstAndCoroutine { spans: vec![coroutine_kind.span(), const_span], const_span, coroutine_span: coroutine_kind.span(), coroutine_kind: coroutine_kind.as_str(), span, }); }", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-1095c86314a09036287669a20080bc8a574823c01c9e364c4848fc94b1c106c7", "text": "} #[derive(Diagnostic)] #[diag(ast_passes_const_and_async)] pub(crate) struct ConstAndAsync { #[diag(ast_passes_const_and_coroutine)] pub(crate) struct ConstAndCoroutine { #[primary_span] pub spans: Vec, #[label(ast_passes_const)] pub cspan: Span, #[label(ast_passes_async)] pub aspan: Span, pub const_span: Span, #[label(ast_passes_coroutine)] pub coroutine_span: Span, #[label] pub span: Span, pub coroutine_kind: &'static str, } #[derive(Diagnostic)]", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . 
label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-6dd5ca973f3aaa9fa3c0cc4f59386cd990f3525a0e029376e9223ef5ddd3923f", "text": ") => { eq_closure_binder(lb, rb) && lc == rc && la.map_or(false, CoroutineKind::is_async) == ra.map_or(false, CoroutineKind::is_async) && eq_coroutine_kind(*la, *ra) && lm == rm && eq_fn_decl(lf, rf) && eq_expr(le, re)", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-70650facd78c1e6390bab57b8c75f2bc365c8c58abfdbf638bb4b8df02a564e3", "text": "} } fn eq_coroutine_kind(a: Option, b: Option) -> bool { match (a, b) { (Some(CoroutineKind::Async { .. }), Some(CoroutineKind::Async { .. })) | (Some(CoroutineKind::Gen { .. }), Some(CoroutineKind::Gen { .. })) | (Some(CoroutineKind::AsyncGen { .. }), Some(CoroutineKind::AsyncGen { .. })) | (None, None) => true, _ => false, } } pub fn eq_field(l: &ExprField, r: &ExprField) -> bool { l.is_placeholder == r.is_placeholder && eq_id(l.ident, r.ident)", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . 
label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-5b5bd393c9584aea250df7b8f1fcf9a7ee6b5f975196c68d1aea729c0694d3f4", "text": " //@ edition:2024 //@ compile-flags: -Zunstable-options #![feature(gen_blocks)] const gen fn a() {} //~^ ERROR functions cannot be both `const` and `gen` const async gen fn b() {} //~^ ERROR functions cannot be both `const` and `async gen` fn main() {} ", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-7de2dab580f301c8ce6b2809875c28dcb5912dccad192bd07b5e03d66bc6bf9a", "query": "() Reproduces on the playground using . label +D-incorrect +F-gen_blocks +requires-nightly", "positive_passages": [{"docid": "doc-en-rust-ba73f1c6cde8b8d73ef34d4941f73826bff52342a2c91b54fb0fa1ad04b099c7", "text": " error: functions cannot be both `const` and `gen` --> $DIR/const_gen_fn.rs:6:1 | LL | const gen fn a() {} | ^^^^^-^^^---------- | | | | | `gen` because of this | `const` because of this error: functions cannot be both `const` and `async gen` --> $DIR/const_gen_fn.rs:9:1 | LL | const async gen fn b() {} | ^^^^^-^^^^^^^^^---------- | | | | | `async gen` because of this | `const` because of this error: aborting due to 2 previous errors ", "commid": "rust_pr_130252"}], "negative_passages": []} {"query_id": "q-en-rust-1b3f2877d42a50b0ff66faf0b8e86b25962d0426b8d7fb837c7f3ddc256c18e6", "query": "The error message doesn't say that the problem is mismatching mutability.\nLooks like this can be closed, this is now: Edit: though now the complaint might be that it doesn't single out 'other'\nFlagging as needstest (just in case a test doesn't already exist)", "positive_passages": [{"docid": "doc-en-rust-ce476fea9e1b5201cc7e672a5bd3dbd244b81ebbf04b259dbd82190d9ca05f81", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. trait Foo { fn bar(&mut self, other: &mut Foo); } struct Baz; impl Foo for Baz { fn bar(&mut self, other: &Foo) {} //~^ ERROR method `bar` has an incompatible type for trait: values differ in mutability [E0053] } fn main() {} ", "commid": "rust_pr_16568"}], "negative_passages": []} {"query_id": "q-en-rust-1b3f2877d42a50b0ff66faf0b8e86b25962d0426b8d7fb837c7f3ddc256c18e6", "query": "The error message doesn't say that the problem is mismatching mutability.\nLooks like this can be closed, this is now: Edit: though now the complaint might be that it doesn't single out 'other'\nFlagging as needstest (just in case a test doesn't already exist)", "positive_passages": [{"docid": "doc-en-rust-cb6d5a935649bbd754bfa189aacfa66f7b39a0e06034fa5dee13461cc96b69b9", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![feature(overloaded_calls)] use std::{fmt, ops}; struct Shower { x: T } impl ops::Fn<(), ()> for Shower { fn call(&self, _args: ()) { //~^ ERROR `call` has an incompatible type for trait: expected \"rust-call\" fn but found \"Rust\" fn println!(\"{}\", self.x); } } fn make_shower(x: T) -> Shower { Shower { x: x } } pub fn main() { let show3 = make_shower(3i); show3(); } ", "commid": "rust_pr_16568"}], "negative_passages": []} {"query_id": "q-en-rust-1b3f2877d42a50b0ff66faf0b8e86b25962d0426b8d7fb837c7f3ddc256c18e6", "query": "The error message doesn't say that the problem is mismatching mutability.\nLooks like this can be closed, this is now: Edit: though now the complaint might be that it doesn't single out 'other'\nFlagging as needstest (just in case a test doesn't already exist)", "positive_passages": [{"docid": "doc-en-rust-1d0614006af5408906bbd9599bc43a6d02cbc1709f55d3645a95a9eab552018c", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
enum Foo { A = 1i64, //~^ ERROR mismatched types: expected `int` but found `i64` B = 2u8 //~^ ERROR mismatched types: expected `int` but found `u8` } fn main() {} ", "commid": "rust_pr_16568"}], "negative_passages": []} {"query_id": "q-en-rust-1b3f2877d42a50b0ff66faf0b8e86b25962d0426b8d7fb837c7f3ddc256c18e6", "query": "The error message doesn't say that the problem is mismatching mutability.\nLooks like this can be closed, this is now: Edit: though now the complaint might be that it doesn't single out 'other'\nFlagging as needstest (just in case a test doesn't already exist)", "positive_passages": [{"docid": "doc-en-rust-ce11e8c5b3f9367ba0a00118503b636f9634a58d31db41d59d2d05688efc82e1", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { if true { return } match () { () => { static MAGIC: uint = 0; } } } ", "commid": "rust_pr_16568"}], "negative_passages": []} {"query_id": "q-en-rust-cb99e41e3bac24f2f688db0a04a7d6eb7ec5b55a9d3d2599471e14a7eb2164f9", "query": "upstream/master:\nDoes that build include the fix in\nYes, include. 6:45 \"Huon Wilson\" notifications\nThanks for confirming. cc\nReproduced on revision . Note that I'm using win 8.1 with non-english locale which probably causes issues. Quick investigation: then convert the code into string. It seems that failed with .\nSo the issue is broken into two parts: 1. why rustdoc failed and called ? 2. why failed?", "positive_passages": [{"docid": "doc-en-rust-ba3fd74697ec461b1b165df6ed3e2972d15d35e42e039ebcb8399214636b1c32", "text": "buf.len() as DWORD, ptr::null()); if res == 0 { fail!(\"[{}] FormatMessage failure\", errno()); // Sometimes FormatMessageW can fail e.g. 
system doesn't like langId, let fm_err = errno(); return format!(\"OS Error {} (FormatMessageW() returned error {})\", err, fm_err); } str::from_utf16(str::truncate_utf16_at_nul(buf)) .expect(\"FormatMessageW returned invalid UTF-16\") let msg = str::from_utf16(str::truncate_utf16_at_nul(buf)); match msg { Some(msg) => format!(\"OS Error {}: {}\", err, msg), None => format!(\"OS Error {} (FormatMessageW() returned invalid UTF-16)\", err), } } }", "commid": "rust_pr_13078"}], "negative_passages": []} {"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. 
Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-dfb26e0c8f1f756a927307678888fe4ab54c89ac492fa8aecc13579a95f54671", "text": "} else { cmd = Command::new(&output_file); if doctest.is_multiple_tests { cmd.arg(\"*doctest-bin-path\"); cmd.arg(&output_file); cmd.env(\"RUSTDOC_DOCTEST_BIN_PATH\", &output_file); } } if let Some(run_directory) = &rustdoc_options.test_run_directory {", "commid": "rust_pr_131095"}], "negative_passages": []} {"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. 
Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-2ef5b5e2eb60692e1eab8ad0a38d69421c14112e6faef323a92f5fed30ddeff0", "text": "use std::path::PathBuf; pub static BINARY_PATH: OnceLock = OnceLock::new(); pub const RUN_OPTION: &str = \"*doctest-inner-test\"; pub const BIN_OPTION: &str = \"*doctest-bin-path\"; pub const RUN_OPTION: &str = \"RUSTDOC_DOCTEST_RUN_NB_TEST\"; #[allow(unused)] pub fn doctest_path() -> Option<&'static PathBuf> {{", "commid": "rust_pr_131095"}], "negative_passages": []} {"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. 
Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-8b35b59110368adb05880f53682f667dce5c05bad77fb02e2f4e102240e92e0c", "text": "#[allow(unused)] pub fn doctest_runner(bin: &std::path::Path, test_nb: usize) -> Result<(), String> {{ let out = std::process::Command::new(bin) .arg(self::RUN_OPTION) .arg(test_nb.to_string()) .env(self::RUN_OPTION, test_nb.to_string()) .args(std::env::args().skip(1).collect::>()) .output() .expect(\"failed to run command\"); if !out.status.success() {{", "commid": "rust_pr_131095"}], "negative_passages": []} {"query_id": "q-en-rust-f19e8c3b4cc91be92d062a245f376c4f67af088e4fe6f0297917e24bf214b064", "query": "The crater run for merged doctests () detected a large number of regressions due to tests examining the process arguments. (I don't have an exact number, but let's say ~50 projects.) Previously, there were no arguments, but now it gets arguments like which the user's CLI parsing code isn't expecting. For example, using something like will now fail. It seems to be fairly common to have objects whose default constructor will parse the arguments from the command line. I'm not sure if or how we should resolve that. One idea is to use a different mechanism for passing in the test information (like via an environment variable).\nI vaguely remember that using environment variables was discussed but can't find anything except mentioning it . Maybe it'd be a better approach indeed. 
Let me send a PR so we can make a crater run on it.", "positive_passages": [{"docid": "doc-en-rust-f7b7174b3675e3ed26cccc2d1f6f5eb8ea393dd9ff97d0ebf251b36d6d85f8de", "text": "#[rustc_main] fn main() -> std::process::ExitCode {{ const TESTS: [test::TestDescAndFn; {nb_tests}] = [{ids}]; let bin_marker = std::ffi::OsStr::new(__doctest_mod::BIN_OPTION); let test_marker = std::ffi::OsStr::new(__doctest_mod::RUN_OPTION); let test_args = &[{test_args}]; const ENV_BIN: &'static str = \"RUSTDOC_DOCTEST_BIN_PATH\"; let mut args = std::env::args_os().skip(1); while let Some(arg) = args.next() {{ if arg == bin_marker {{ let Some(binary) = args.next() else {{ panic!(\"missing argument after `{{}}`\", __doctest_mod::BIN_OPTION); }}; if crate::__doctest_mod::BINARY_PATH.set(binary.into()).is_err() {{ panic!(\"`{{}}` option was used more than once\", bin_marker.to_string_lossy()); }} return std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)); }} else if arg == test_marker {{ let Some(nb_test) = args.next() else {{ panic!(\"missing argument after `{{}}`\", __doctest_mod::RUN_OPTION); }}; if let Some(nb_test) = nb_test.to_str().and_then(|nb| nb.parse::().ok()) {{ if let Some(test) = TESTS.get(nb_test) {{ if let test::StaticTestFn(f) = test.testfn {{ return std::process::Termination::report(f()); }} if let Ok(binary) = std::env::var(ENV_BIN) {{ let _ = crate::__doctest_mod::BINARY_PATH.set(binary.into()); unsafe {{ std::env::remove_var(ENV_BIN); }} return std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)); }} else if let Ok(nb_test) = std::env::var(__doctest_mod::RUN_OPTION) {{ if let Ok(nb_test) = nb_test.parse::() {{ if let Some(test) = TESTS.get(nb_test) {{ if let test::StaticTestFn(f) = test.testfn {{ return std::process::Termination::report(f()); }} }} panic!(\"Unexpected value after `{{}}`\", __doctest_mod::RUN_OPTION); }} panic!(\"Unexpected value for `{{}}`\", __doctest_mod::RUN_OPTION); }} 
eprintln!(\"WARNING: No argument provided so doctests will be run in the same process\"); eprintln!(\"WARNING: No rustdoc doctest environment variable provided so doctests will be run in the same process\"); std::process::Termination::report(test::test_main(test_args, Vec::from(TESTS), None)) }}\", nb_tests = self.nb_tests,", "commid": "rust_pr_131095"}], "negative_passages": []} {"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In // Conditionally pass `-Zon-broken-pipe=kill` to underlying rustc. Not all binaries want // `-Zon-broken-pipe=kill`, which includes cargo itself. if env::var_os(\"FORCE_ON_BROKEN_PIPE_KILL\").is_some() { cmd.arg(\"-Z\").arg(\"on-broken-pipe=kill\"); } if target.is_some() { // The stage0 compiler has a special sysroot distinct from what we // actually downloaded, so we just always pass the `--sysroot` option,", "commid": "rust_pr_131155"}], "negative_passages": []} {"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In // If the rustc output is piped to e.g. `head -n1` we want the process to be // killed, rather than having an error bubble up and cause a panic. // If the rustc output is piped to e.g. `head -n1` we want the process to be killed, rather than // having an error bubble up and cause a panic. // // FIXME(jieyouxu): this flag is load-bearing for rustc to not ICE on broken pipes, because // rustc internally sometimes uses std `println!` -- but std `println!` by default will panic on // broken pipes, and uncaught panics will manifest as an ICE. The compiler *should* handle this // properly, but this flag is set in the meantime to paper over the I/O errors. // // See for details. // // Also see the discussion for properly handling I/O errors related to broken pipes, i.e. safe // variants of `println!` in // . 
cargo.rustflag(\"-Zon-broken-pipe=kill\"); if builder.config.llvm_enzyme {", "commid": "rust_pr_131155"}], "negative_passages": []} {"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In // `-Zon-broken-pipe=kill` breaks cargo tests // NOTE: The root cause of needing `-Zon-broken-pipe=kill` in the first place is because `rustc` // and `rustdoc` doesn't gracefully handle I/O errors due to usages of raw std `println!` macros // which panics upon encountering broken pipes. `-Zon-broken-pipe=kill` just papers over that // and stops rustc/rustdoc ICEing on e.g. `rustc --print=sysroot | false`. // // cargo explicitly does not want the `-Zon-broken-pipe=kill` paper because it does actually use // variants of `println!` that handles I/O errors gracefully. It's also a breaking change for a // spawn process not written in Rust, especially if the language default handler is not // `SIG_IGN`. Thankfully cargo tests will break if we do set the flag. // // For the cargo discussion, see // . // // For the rustc discussion, see // // for proper solutions. if !path.ends_with(\"cargo\") { // If the output is piped to e.g. `head -n1` we want the process to be killed, // rather than having an error bubble up and cause a panic. cargo.rustflag(\"-Zon-broken-pipe=kill\"); // Use an untracked env var `FORCE_ON_BROKEN_PIPE_KILL` here instead of `RUSTFLAGS`. // `RUSTFLAGS` is tracked by cargo. Conditionally omitting `-Zon-broken-pipe=kill` from // `RUSTFLAGS` causes unnecessary tool rebuilds due to cache invalidation from building e.g. // cargo *without* `-Zon-broken-pipe=kill` but then rustdoc *with* `-Zon-broken-pipe=kill`. cargo.env(\"FORCE_ON_BROKEN_PIPE_KILL\", \"-Zon-broken-pipe=kill\"); } cargo", "commid": "rust_pr_131155"}], "negative_passages": []} {"query_id": "q-en-rust-654bffdfcd4ae0374e4069afb71f236b1477555682e2524b014ef184d6eaf9c2", "query": "In //! 
Check that `rustc` and `rustdoc` does not ICE upon encountering a broken pipe due to unhandled //! panics from raw std `println!` usages. //! //! Regression test for . //@ ignore-cross-compile (needs to run test binary) #![feature(anonymous_pipe)] use std::io::Read; use std::process::{Command, Stdio}; use run_make_support::env_var; #[derive(Debug, PartialEq)] enum Binary { Rustc, Rustdoc, } fn check_broken_pipe_handled_gracefully(bin: Binary, mut cmd: Command) { let (reader, writer) = std::pipe::pipe().unwrap(); drop(reader); // close read-end cmd.stdout(writer).stderr(Stdio::piped()); let mut child = cmd.spawn().unwrap(); let mut stderr = String::new(); child.stderr.as_mut().unwrap().read_to_string(&mut stderr).unwrap(); let status = child.wait().unwrap(); assert!(!status.success(), \"{bin:?} unexpectedly succeeded\"); const PANIC_ICE_EXIT_CODE: i32 = 101; #[cfg(not(windows))] { // On non-Windows, rustc/rustdoc built with `-Zon-broken-pipe=kill` shouldn't have an exit // code of 101 because it should have an wait status that corresponds to SIGPIPE signal // number. assert_ne!(status.code(), Some(PANIC_ICE_EXIT_CODE), \"{bin:?}\"); // And the stderr should be empty because rustc/rustdoc should've gotten killed. assert!(stderr.is_empty(), \"{bin:?} stderr:n{}\", stderr); } #[cfg(windows)] { match bin { // On Windows, rustc has a paper that propagates the panic exit code of 101 but converts // broken pipe errors into fatal errors instead of ICEs. Binary::Rustc => { assert_eq!(status.code(), Some(PANIC_ICE_EXIT_CODE), \"{bin:?}\"); // But make sure it doesn't manifest as an ICE. assert!(!stderr.contains(\"internal compiler error\"), \"{bin:?} ICE'd\"); } // On Windows, rustdoc seems to cleanly exit with exit code of 1. 
Binary::Rustdoc => { assert_eq!(status.code(), Some(1), \"{bin:?}\"); assert!(!stderr.contains(\"panic\"), \"{bin:?} stderr contains panic\"); } } } } fn main() { let mut rustc = Command::new(env_var(\"RUSTC\")); rustc.arg(\"--print=sysroot\"); check_broken_pipe_handled_gracefully(Binary::Rustc, rustc); let mut rustdoc = Command::new(env_var(\"RUSTDOC\")); rustdoc.arg(\"--version\"); check_broken_pipe_handled_gracefully(Binary::Rustdoc, rustdoc); } ", "commid": "rust_pr_131155"}], "negative_passages": []} {"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old. 
But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compared to and", "positive_passages": [{"docid": "doc-en-rust-aa0565575c3e09821a90b0f886d35ac6d302b288a02af5c027526c9267158d1c", "text": "if !verify_rustfmt_version(build) { return Ok(None); } get_git_modified_files(&build.config.git_config(), Some(&build.config.src), &[\"rs\"]) }", "commid": "rust_pr_131331"}], "negative_passages": []} {"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old.
But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compared to and", "positive_passages": [{"docid": "doc-en-rust-805a301a99d8856df1ccad3e5fe982204acc9fb7939a0369052fd6c8aa7a65ee", "text": "use std::path::PathBuf; use std::{env, fs}; use build_helper::git::warn_old_master_branch; use crate::Build; #[cfg(not(feature = \"bootstrap-self-test\"))] use crate::builder::Builder;", "commid": "rust_pr_131331"}], "negative_passages": []} {"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old.
But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compare to and", "positive_passages": [{"docid": "doc-en-rust-7bec0cf1c2a0c0c784e1822de03ed9fdeef886e368f9afadd11fa2e2ae43668e", "text": "if let Some(ref s) = build.config.ccache { cmd_finder.must_have(s); } warn_old_master_branch(&build.config.git_config(), &build.config.src); }", "commid": "rust_pr_131331"}], "negative_passages": []} {"query_id": "q-en-rust-16c2411bfbebb5117e894176290cf661217a08a1a1574ac1ecbdebad0e504368", "query": "I am getting this on each invocation now, since very recently: This is in my main checkout, not in a worktree, so I am not quite sure what could even be so unusual about my setup that causes an error here. Cc\nA bisect points at That's I am surprised about the date of this PR, I think I would have noticed this earlier if this happened for a month... so probably some other factor is also involved. Cc\nIt seems to try to read which indeed is not a file that exists here. I also noticed this line in So maybe checking these files is just futile since git doesn't always use a file to track these refs?\nI think what happened is that has been run, and that indeed cleans up most of the files from . So the current approach in bootstrap of looking at these files is not reliable. IMO we should instead look at the commit date of the most recent commit in that branch. That also avoids having to directly mess with git's internal files.\nyeah, i'm gonna accept defeat on this one. is kinda slow on my device, but only if the relevant files aren't in the i/o cache, and building rust takes a good while anyways.\nis not the case anymore since . I am planning to revert and as they are not really required.\nSeems like it will still be the case if upstream was configured and too old. 
But that could be fixed in a much better way (e.g., we could ignore upstream commit and use merge commit in current branch) compare to and", "positive_passages": [{"docid": "doc-en-rust-7a884f9b37a698604f9eee69d853d2595d6ce4e7fb0cc6ae553bcdd6de3b2b5e", "text": ".collect(); Ok(Some(files)) } /// Print a warning if the branch returned from `updated_master_branch` is old /// /// For certain configurations of git repository, this remote will not be /// updated when running `git pull`. /// /// This can result in formatting thousands of files instead of a dozen, /// so we should warn the user something is wrong. pub fn warn_old_master_branch(config: &GitConfig<'_>, git_dir: &Path) { if crate::ci::CiEnv::is_ci() { // this warning is useless in CI, // and CI probably won't have the right branches anyway. return; } // this will be overwritten by the actual name, if possible let mut updated_master = \"the upstream master branch\".to_string(); match warn_old_master_branch_(config, git_dir, &mut updated_master) { Ok(branch_is_old) => { if !branch_is_old { return; } // otherwise fall through and print the rest of the warning } Err(err) => { eprintln!(\"warning: unable to check if {updated_master} is old due to error: {err}\") } } eprintln!( \"warning: {updated_master} is used to determine if files have been modifiedn warning: if it is not updated, this may cause files to be needlessly reformatted\" ); } pub fn warn_old_master_branch_( config: &GitConfig<'_>, git_dir: &Path, updated_master: &mut String, ) -> Result> { use std::time::Duration; *updated_master = updated_master_branch(config, Some(git_dir))?; let branch_path = git_dir.join(\".git/refs/remotes\").join(&updated_master); const WARN_AFTER: Duration = Duration::from_secs(60 * 60 * 24 * 10); let meta = match std::fs::metadata(&branch_path) { Ok(meta) => meta, Err(err) => { let gcd = git_common_dir(&git_dir)?; if branch_path.starts_with(&gcd) { return Err(Box::new(err)); } 
std::fs::metadata(Path::new(&gcd).join(\"refs/remotes\").join(&updated_master))? } }; if meta.modified()?.elapsed()? > WARN_AFTER { eprintln!(\"warning: {updated_master} has not been updated in 10 days\"); Ok(true) } else { Ok(false) } } fn git_common_dir(dir: &Path) -> Result { output_result(Command::new(\"git\").arg(\"-C\").arg(dir).arg(\"rev-parse\").arg(\"--git-common-dir\")) .map(|x| x.trim().to_string()) } ", "commid": "rust_pr_131331"}], "negative_passages": []} {"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-a5ba39b7bcf27c282d841d940fcb3e140f6de1687e3700749e0a3f5ed56524e0", "text": "}, Primitive::F32 => types::F32, Primitive::F64 => types::F64, Primitive::Pointer => pointer_ty(tcx), // FIXME(erikdesjardins): handle non-default addrspace ptr sizes Primitive::Pointer(_) => pointer_ty(tcx), } }", "commid": "rust_pr_107843"}], "negative_passages": []} {"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-5aae347605bb4660140896e7509a3d620ae7490dc4ab5527751813c1d58b58a0", "text": "pub(crate) use cpuid::codegen_cpuid_call; pub(crate) use llvm::codegen_llvm_intrinsic_call; use rustc_middle::ty::layout::HasParamEnv; use rustc_middle::ty::print::with_no_trimmed_paths; use rustc_middle::ty::subst::SubstsRef; use rustc_span::symbol::{kw, sym, Symbol};", "commid": "rust_pr_107843"}], "negative_passages": []} {"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the 
function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-528d627c63f00b8fcaddc0425a6cc6641358718e683d5ec9eca2f3441a3ab4e8", "text": "return; } if intrinsic == sym::assert_zero_valid && !fx.tcx.permits_zero_init(layout) { if intrinsic == sym::assert_zero_valid && !fx.tcx.permits_zero_init(fx.param_env().and(layout)) { with_no_trimmed_paths!({ crate::base::codegen_panic( fx,", "commid": "rust_pr_107843"}], "negative_passages": []} {"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-1bce2b20d247fab559d4607d9cf57493df300b13b0bc5b81ccd0b4208efe195e", "text": "} if intrinsic == sym::assert_mem_uninitialized_valid && !fx.tcx.permits_uninit_init(layout) && !fx.tcx.permits_uninit_init(fx.param_env().and(layout)) { with_no_trimmed_paths!({ crate::base::codegen_panic(", "commid": "rust_pr_107843"}], "negative_passages": []} {"query_id": "q-en-rust-8c602d6d427338702ea29b613b9c1cb115f3120e9c6b612b73e96076292b0f25", "query": "It seems that the prefix is used more often than , so we should swap the function names before 0.1.\nWe're probably going the other direction -\nDone.", "positive_passages": [{"docid": "doc-en-rust-8659227658fbfb5aa409e123898d07bb1956bfea9c6d555bca9f36a6e51bcaa6", "text": "is_main_fn: bool, sigpipe: u8, ) { let main_ret_ty = tcx.fn_sig(rust_main_def_id).output(); let main_ret_ty = tcx.bound_fn_sig(rust_main_def_id).subst_identity().output(); // Given that `main()` has no arguments, // then its return type cannot have // late-bound regions, since late-bound", "commid": "rust_pr_107843"}], "negative_passages": []} {"query_id": "q-en-rust-1d0a44fcb334d632818a76b77ed647afef0351f397f3ea10fa55c8952e10b234", "query": "Yes, really\nIt looks like this is 
2640 bytes of Lovecraft quotes in each executable (out of , 0.56%). Perhaps it's not so bad?\n2k is pretty bad, because a hello world binary should be ~10k at most. It's not the lowest hanging fruit right now but it will become a bigger fish as we fix the other problems. I think it's an indication of a larger issue rather than being a problem itself though...\nI think there is no defensible reason to have that text in every executable.\nAnd thus poetry was killed...\nNot particularly new - and\n2K? So that's what 2014 is about?\nIf you want Rust to be taken seriously as a systems language, you have to worry about that. It's not acceptable in many embedded environments for binaries to be this large. It makes the language unusable without writing the entire standard library again. It's fine for it to be in the standard libraries... just not every binary linking against them.\nShould be fine until other sources of bloat are removed.\nThe problem here isn't the quotes, it's that the code containing the quotes is used or considered used at all. It's the canary in the coal mine. I'm sure we can cut down the runtime overhead to something much smaller, including getting rid of this.\nIt's not unreasonable for it to be considered used, considering it's part of the abort code, so even a trivial program might trigger it (e.g. on OOM, stack corruption, etc). Personally I'd add a no_horrors cfg option or something to disable it in the rare cases where 2k is considered important, unless the potential for user confusion is considered too great to keep the behaviour at all.\nIt's not rare for people to compare the size of the binaries to C and decide against using Rust for their use case due to it being very unfavourable. You shouldn't have to compile Rust again to disable debugging features useful only to the developers. This is only the runtime abort code, it's not the generic which isn't even a function call.
Runtime errors should report real errors, and logic errors should be dealt with by improving the code so redundant checks aren't necessary in the normal release builds. It can remain around in a build that's not for regular end users... the code paths calling this just need some work to remove the need for debugging stuff.\nYeah but I love it though :( Hehe.\nI also love it, but it's more important to be professional and chuck it out. Let's not make this Rust's equivalent to TPAAMAYIMNEKUDOTAYIM. Easter eggs are fun, but they have to have literally zero impact on users.\nThe key question isn't \"is 2K big or small\". The key question is \"what value does this add\".\nKeeping these as default isn't very professional for many reasons, but I think value would be lost if lightheartedness was removed from Rust as a continuing policy. I've been amazed at how many people become interested in the language after I link them the abort in question. A human touch to the compiler output goes a long way. How about a '-fun' flag that enables easter eggs being compiled in?\nSounds a bit ridiculous tbh. The point of an easter egg is to have it everywhere, not to compile it purposefully.\nPerhaps, but I'd argue the main point of an easter egg is to be entertainingly surprising. A flag enabling them wouldn't give away when they happen, just discourage unpleasant surprises. for a compiler. Another option would be to strip the messages when optimizations are enabled. -O equals -nofun.\nThat's funny, wordy and mawkish. \"what value does this add\"? Nothing. I'd vote +1 to remove them entirely.\nBesides Lovecraft, I see tons of text obviously related to assertion failures and references to my build directory. I assume there is a lot of debugging code left in the compiler and libraries that will eventually get turned off in a production compiler, yes?\nAm I the only one that's going to bring up copyright issues? 
I work for a company that thoroughly reviews code with lawyers that will ban any and all uses of software that are even questionable in terms of their license or copyright issues. You might claim embedding these quotes (and without attribution) is fair use etc, but all it takes is for one of my company's lawyers to disagree with your conclusion and suddenly Rust is banned from use in the company. I understand and appreciate the value of having some fun with a project, but these quotes: unnecessary overhead to executable size and memory (and yes, in real life I work on embedded systems where a few k can make a difference). potential users because a zealous lawyer has concerns regarding the quotes and bans Rust \"just to be safe\".\nLovecraft's works are basically public domain, it's not a problem.\nTo be fair, pretty sure copyright is the wrong case to make. Lovecraft's works are in the public domain. Edit: Dang, Steve beat me to it.\nWon't the embedded environments you speak of use libcore?\nTrue, but (but I probably should have been more specific because this issue specifically mentions Lovecraft's quotes and doesn't mention Zelda quotes). I know I'm being a bit paranoid here when I bring up copyright stuff, but I'm not kidding when I mention zealous lawyers laying down strict rules...\nThis is only the opinion of a user on the outside looking in, but I'm not sure there's any sensible reason for this being in-- at best it's a waste of space and code, and at worst having Lovecraft spouted during an could be seen by the user on the receiving end as being in poor taste. (Speaking from experience a month or two ago, when a build of ed after ages of trying to compile it)\nI think that it should remain in. It sort of reminds people who stumble upon this that has indeed been built by real people :-) I also don't think that it is somehow \"unprofessional\", every program I ever came across has some sort of an easter egg in it, including Git, (starting with its name).
Having this compile only with a certain opt in flag kind of defeats the purpose, but maybe a flag that can explicitly opt out of this kind of thing is worth having for embedded developers. There's no sensible reason apart from having some fun, I think that's enough of a reason, but opinions will differ.\nIt's not part of , it's part of the runtime and is included in every Rust program without . An easter egg like this in is different than putting it in every Rust program.\nI suggest its removal simply be the christening commit of 1.0-stable.\nyesssss\nTo those of you saying \"2k, in 2014?\" I will point out that there are a lot of platforms out there where 2k is still a big deal. If Rust is going to work for embedded systems work, you can't pull tricks like this.\nthose would not be using Rust's runtime anyway, and so would not have this in it.\nWhy should we have poetry in a binary executable generated by a supposedly serious, non-esoteric programming language? One would think this wouldn't even need to be a discussion...........\nthat is not necessarily true. As I understand, the runtime is getting slimmer and slimmer with every RFC.\naye. To my tastes, this sort of easter egg belongs in the compiler/toolchain, not a statically linked stdlib.\nTrue, it probably shouldn't be part of the runtime.\nAn embedded gag that only comes up when the program is frustrating the user with a catastrophic error?\nHonestly, as someone outside the Rust community, seeing this puts me off using the language. I'm a Lovecraft fan, and if this was an easter egg in the toolchain I would love it, but embedding this statically into all the binaries I ship to end users is a step too far. There's a lot of negative interpretations a snooping end-user could get from finding these quotes talking about things dying and gods in the binaries (and running on a binary isn't exactly difficult). 
That negative interpretation would fall on me, who they see as the developer, not the Rust team, who they probably don't even know exists. Rust is a tool. If my table saw has a tiny inscription on the underside with the name of the engineer who designed it - that's neat, a fun homage, and overall great. If my table saw contains a tiny router that engraves that engineer's name onto every piece of wood I cut with it - not so great. It degrades the quality of the tool. While I appreciate the entertainment value of the quotes, I think moving them into a developer tool like the Rust compiler would maintain that entertainment while also not degrading the quality of Rust by inserting unwanted data in generated binaries.\nThat's an incredibly fitting analogy. Here are a complete outsider's 2 cents: why not change instead to display a funny (or lighthearted) quote when something goes wrong, to sort of alleviate the pain of having your compilation fail? Of course, it shouldn't get in the way of you solving the issue (or it would be even worse), but if it manages to have some \"decorative\" value, I think it would be a perfect middle-ground between getting rid of this easter egg entirely or moving it to a more sensible place.\nI am very much in favour of removing the quotes. Put the quotes somewhere where humans will read them, not machines", "positive_passages": [{"docid": "doc-en-rust-8a8d324a17b540a90c83e611e7ba00643e6b0d5c28b061e8598443ce13d6eba1", "text": "let _ = write!(&mut w, \"{}\", args); let msg = str::from_utf8(&w.buf[0..w.pos]).unwrap_or(\"aborted\"); let msg = if msg.is_empty() {\"aborted\"} else {msg}; // Give some context to the message let hash = msg.bytes().fold(0, |accum, val| accum + (val as uint) ); let quote = match hash % 10 { 0 => \" It was from the artists and poets that the pertinent answers came, and I know that panic would have broken loose had they been able to compare notes. 
As it was, lacking their original letters, I half suspected the compiler of having asked leading questions, or of having edited the correspondence in corroboration of what he had latently resolved to see.\", 1 => \" There are not many persons who know what wonders are opened to them in the stories and visions of their youth; for when as children we listen and dream, we think but half-formed thoughts, and when as men we try to remember, we are dulled and prosaic with the poison of life. But some of us awake in the night with strange phantasms of enchanted hills and gardens, of fountains that sing in the sun, of golden cliffs overhanging murmuring seas, of plains that stretch down to sleeping cities of bronze and stone, and of shadowy companies of heroes that ride caparisoned white horses along the edges of thick forests; and then we know that we have looked back through the ivory gates into that world of wonder which was ours before we were wise and unhappy.\", 2 => \" Instead of the poems I had hoped for, there came only a shuddering blackness and ineffable loneliness; and I saw at last a fearful truth which no one had ever dared to breathe before \u2014 the unwhisperable secret of secrets \u2014 The fact that this city of stone and stridor is not a sentient perpetuation of Old New York as London is of Old London and Paris of Old Paris, but that it is in fact quite dead, its sprawling body imperfectly embalmed and infested with queer animate things which have nothing to do with it as it was in life.\", 3 => \" The ocean ate the last of the land and poured into the smoking gulf, thereby giving up all it had ever conquered. From the new-flooded lands it flowed again, uncovering death and decay; and from its ancient and immemorial bed it trickled loathsomely, uncovering nighted secrets of the years when Time was young and the gods unborn. Above the waves rose weedy remembered spires. 
The moon laid pale lilies of light on dead London, and Paris stood up from its damp grave to be sanctified with star-dust. Then rose spires and monoliths that were weedy but not remembered; terrible spires and monoliths of lands that men never knew were lands...\", 4 => \" There was a night when winds from unknown spaces whirled us irresistibly into limitless vacuum beyond all thought and entity. Perceptions of the most maddeningly untransmissible sort thronged upon us; perceptions of infinity which at the time convulsed us with joy, yet which are now partly lost to my memory and partly incapable of presentation to others.\", _ => \"You've met with a terrible fate, haven't you?\" }; rterrln!(\"{}\", \"\"); rterrln!(\"{}\", quote); rterrln!(\"{}\", \"\"); rterrln!(\"fatal runtime error: {}\", msg); unsafe { intrinsics::abort(); } }", "commid": "rust_pr_20944"}], "negative_passages": []} {"query_id": "q-en-rust-75f48a827d240fd7005de6013e817df4841e8ab17224bcb8f5ab5f05ec71ea5e", "query": "See I propose to change the license of Non trivial contributors: OK OK OK (MoCo employee) Contributors appearing in git blame that are trivial (trivial update due to language changes): OK (MoCo employee) OK OK (MoCo employee) OK OK OK OK OK OK (sent agreement to by email) Contributors with removed and trivial contributions are not considered. For the persons that are not in \"OK\" state, please respond to this issue saying: Then, I'll propose the corresponding PR. Thanks.\nI agree to relicense any previous contributions to according to the term of the computer Language Benchmarks Game license (\nWe don't need statements from Graydon or Marijn as they both contributed as employees. On Jun 7, 2014 5:50 AM, \"TeXitoi\" wrote:\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOp updated. could you please respond?\nCiting in a private email: can I consider the contribution of negligible? the same for renaming? 
If yes I can propose a PR for this and for\nguh, I suppose so, though I'm disappointed that didn't cooperate.\nMy change is obviously trivial and I have zero copyright over anything to do with the shootout benchmarks, but if you must, then: I agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOP updated. Thanks. I'll wait some days to see if respond to my email.\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()", "positive_passages": [{"docid": "doc-en-rust-8edeae1330276a8770077e003ad1cbba64a554bbc70d490031becfe19c09fbc5", "text": "\"libstd/sync/spsc_queue.rs\", # BSD \"libstd/sync/mpmc_bounded_queue.rs\", # BSD \"libsync/mpsc_intrusive.rs\", # BSD \"test/bench/shootout-binarytrees.rs\", # BSD \"test/bench/shootout-fannkuch-redux.rs\", # BSD \"test/bench/shootout-meteor.rs\", # BSD \"test/bench/shootout-regex-dna.rs\", # BSD", "commid": "rust_pr_14855"}], "negative_passages": []} {"query_id": "q-en-rust-75f48a827d240fd7005de6013e817df4841e8ab17224bcb8f5ab5f05ec71ea5e", "query": "See I propose to change the license of Non trivial contributors: OK OK OK (MoCo employee) Contributors appearing in git blame that are trivial (trivial update due to language changes): OK (MoCo employee) OK OK (MoCo employee) OK OK OK OK OK OK (sent agreement to by email) Contributors with removed and trivial contributions are not considered. For the persons that are not in \"OK\" state, please respond to this issue saying: Then, I'll propose the corresponding PR. Thanks.\nI agree to relicense any previous contributions to according to the term of the computer Language Benchmarks Game license (\nWe don't need statements from Graydon or Marijn as they both contributed as employees. 
On Jun 7, 2014 5:50 AM, \"TeXitoi\" wrote:\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOp updated. could you please respond?\nCiting in a private email: can I consider the contribution of negligible? the same for renaming? If yes I can propose a PR for this and for\nguh, I suppose so, though I'm disappointed that didn't cooperate.\nMy change is obviously trivial and I have zero copyright over anything to do with the shootout benchmarks, but if you must, then: I agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()\nOP updated. Thanks. I'll wait some days to see if respond to my email.\nI agree to relicense any previous contributions to according to the term of the Computer Language Benchmarks Game license ()", "positive_passages": [{"docid": "doc-en-rust-ee0db9f6886de7a6b863a18f98de10abd6e39a9f794c5d6478793c9ca7fa21a7", "text": " // Copyright 2012-2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // The Computer Language Benchmarks Game // http://benchmarksgame.alioth.debian.org/ // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // contributed by the Rust Project Developers // Copyright (c) 2012-2014 The Rust Project Developers // // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions // are met: // // - Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// // - Redistributions in binary form must reproduce the above copyright // notice, this list of conditions and the following disclaimer in // the documentation and/or other materials provided with the // distribution. // // - Neither the name of \"The Computer Language Benchmarks Game\" nor // the name of \"The Computer Language Shootout Benchmarks\" nor the // names of its contributors may be used to endorse or promote // products derived from this software without specific prior // written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS // FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE // COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, // INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES // (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR // SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) // HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, // STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) // ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED // OF THE POSSIBILITY OF SUCH DAMAGE. extern crate sync; extern crate arena;", "commid": "rust_pr_14855"}], "negative_passages": []} {"query_id": "q-en-rust-6f04471aa769edcf2ac894521aed5ab2886f2f87af17ee8420798ac5cf721630", "query": "Currently, pointers stored on the stack aren't word-aligned. 
This is a critical issue because it causes stack growth to fail most of the time.\nCommit disables task growth for now until this is fixed.\nWONTFIX (moot point given disabled stack growth)", "positive_passages": [{"docid": "doc-en-rust-f30a4a313f877b3bbcc8c6bf08f0e468ccaa46514db69c115a3d06715268e1fc", "text": "rustup toolchain install --profile minimal nightly-${TOOLCHAIN} # Sanity check to see if the nightly exists echo nightly-${TOOLCHAIN} > rust-toolchain echo \"=> Uninstalling all old nighlies\" echo \"=> Uninstalling all old nightlies\" for nightly in $(rustup toolchain list | grep nightly | grep -v $TOOLCHAIN | grep -v nightly-x86_64); do rustup toolchain uninstall $nightly done", "commid": "rust_pr_97887"}], "negative_passages": []} {"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-0de57ad815da55c60f29375af01abc3adcabb776ac8c583d7f2953eaa398b8f2", "text": "} } (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p)) if a_p.idx == b_p.idx => { (&ty::ty_param(ref a_p), &ty::ty_param(ref b_p)) if a_p.idx == b_p.idx && a_p.space == b_p.space => { Ok(a) }", "commid": "rust_pr_15356"}], "negative_passages": []} {"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . 
edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-287f506bfb853e1eed85a3bfdd7ab8ae4fee680699f45b24adbf865c20957ec7", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::num::Num; trait BrokenAdd: Num { fn broken_add(&self, rhs: T) -> Self { *self + rhs //~ ERROR mismatched types } } impl BrokenAdd for T {} pub fn main() { let foo: u8 = 0u8; let x: u8 = foo.broken_add(\"hello darkness my old friend\".to_string()); println!(\"{}\", x); } ", "commid": "rust_pr_15356"}], "negative_passages": []} {"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-a4cabb6a8b263dc1e4200bc826348e49b587680790314673a673e689e56688b8", "text": " // Copyright 2014 The Rust Project Developers. 
See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. trait Tr { fn op(T) -> Self; } // these compile as if Self: Tr, even tho only Self: Tr trait A: Tr { fn test(u: U) -> Self { Tr::op(u) //~ ERROR expected Tr, but found Tr } } trait B: Tr { fn test(u: U) -> Self { Tr::op(u) //~ ERROR expected Tr, but found Tr } } impl Tr for T { fn op(t: T) -> T { t } } impl A for T {} fn main() { std::io::println(A::test((&7306634593706211700, 8))); } ", "commid": "rust_pr_15356"}], "negative_passages": []} {"query_id": "q-en-rust-e7a9e1f777d5a32db801c9b16916b12385516018cdf0d625ce8b9cd38a3ad308", "query": "(sorry about the bad title but I really have no clue what's going on here!) Compiles and prints . I think it's getting at the of the inside a . edit: As one might expect, does not compile.\nSeems rather bad. cc\nNominating.\nThis is a fun bug, it's by far not limited to the operator. I tried to reduce the test case a bit, starting with a new non-operator trait, changed it to a static method that just returns for inference, and...\ncc me\nAssigning 1.0, P-backcompat-lang.\nEven smaller:", "positive_passages": [{"docid": "doc-en-rust-46f13758ec037b88a92608c7df547fa8b30aeb4309f8aec7bf9d30fa3d117d83", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
trait Tr { fn test(u: X) -> Self { u //~ ERROR mismatched types } } fn main() {} ", "commid": "rust_pr_15356"}], "negative_passages": []} {"query_id": "q-en-rust-e7d5c4cff4e3956d56aa6d2f4d95d22c14f61bd4f17ed05e5d4264aafe72e806", "query": "At this point, and both exist. Just in case it matters: my prefix is .\nFallout from , perhaps? This isn\u2019t the first time we\u2019ve had completely broken (two months ago it happened, ). Why don\u2019t we add it to the tests done on the buildbot?\nIndeed, this was predicted as a known consequence of . It may be the case that asking people to set is too much. We might need/want wrapper scripts to drive that set up such environment variables, much like other projects have.\nFixed by .\n(see comment , since it is relevant to this issue.)", "positive_passages": [{"docid": "doc-en-rust-c28011ded6fdb631ff16a2538f863a9097f82eb569d5e0633f28eae24beb28eb", "text": "CFG_LIBDIR_RELATIVE=bin fi if [ \"$CFG_OSTYPE\" = \"pc-mingw32\" ] || [ \"$CFG_OSTYPE\" = \"w64-mingw32\" ] then CFG_LD_PATH_VAR=PATH CFG_OLD_LD_PATH_VAR=$PATH elif [ \"$CFG_OSTYPE\" = \"Darwin\" ] then CFG_LD_PATH_VAR=DYLD_LIBRARY_PATH CFG_OLD_LD_PATH_VAR=$DYLD_LIBRARY_PATH else CFG_LD_PATH_VAR=LD_LIBRARY_PATH CFG_OLD_LD_PATH_VAR=$LD_LIBRARY_PATH fi flag uninstall \"only uninstall from the installation prefix\" opt verify 1 \"verify that the installed binaries run correctly\" valopt prefix \"/usr/local\" \"set installation prefix\"", "commid": "rust_pr_15550"}], "negative_passages": []} {"query_id": "q-en-rust-e7d5c4cff4e3956d56aa6d2f4d95d22c14f61bd4f17ed05e5d4264aafe72e806", "query": "At this point, and both exist. Just in case it matters: my prefix is .\nFallout from , perhaps? This isn\u2019t the first time we\u2019ve had completely broken (two months ago it happened, ). Why don\u2019t we add it to the tests done on the buildbot?\nIndeed, this was predicted as a known consequence of . It may be the case that asking people to set is too much. 
We might need/want wrapper scripts to drive that set up such environment variables, much like other projects have.\nFixed by .\n(see comment , since it is relevant to this issue.)", "positive_passages": [{"docid": "doc-en-rust-e4bddfae5b49a910515cc937faafebc2e46779fadf6899e21fc809bf14e95cae", "text": "if [ -z \"${CFG_UNINSTALL}\" ] then msg \"verifying platform can run binaries\" export $CFG_LD_PATH_VAR=\"${CFG_SRC_DIR}/lib\":$CFG_OLD_LD_PATH_VAR \"${CFG_SRC_DIR}/bin/rustc\" --version > /dev/null if [ $? -ne 0 ] then err \"can't execute rustc binary on this platform\" fi export $CFG_LD_PATH_VAR=$CFG_OLD_LD_PATH_VAR fi fi", "commid": "rust_pr_15550"}], "negative_passages": []} {"query_id": "q-en-rust-e7d5c4cff4e3956d56aa6d2f4d95d22c14f61bd4f17ed05e5d4264aafe72e806", "query": "At this point, and both exist. Just in case it matters: my prefix is .\nFallout from , perhaps? This isn\u2019t the first time we\u2019ve had completely broken (two months ago it happened, ). Why don\u2019t we add it to the tests done on the buildbot?\nIndeed, this was predicted as a known consequence of . It may be the case that asking people to set is too much. We might need/want wrapper scripts to drive that set up such environment variables, much like other projects have.\nFixed by .\n(see comment , since it is relevant to this issue.)", "positive_passages": [{"docid": "doc-en-rust-aedd2125ed847a6ea0d9abb9b1990718cd8c897d779a57f5d700b9ce170f8561", "text": "done < \"${CFG_SRC_DIR}/${CFG_LIBDIR_RELATIVE}/rustlib/manifest.in\" # Sanity check: can we run the installed binaries? # # As with the verification above, make sure the right LD_LIBRARY_PATH-equivalent # is in place. Try first without this variable, and if that fails try again with # the variable. If the second time tries, print a hopefully helpful message to # add something to the appropriate environment variable. 
if [ -z \"${CFG_DISABLE_VERIFY}\" ] then msg \"verifying installed binaries are executable\" \"${CFG_PREFIX}/bin/rustc\" --version > /dev/null \"${CFG_PREFIX}/bin/rustc\" --version 2> /dev/null 1> /dev/null if [ $? -ne 0 ] then ERR=\"can't execute installed rustc binary. \" ERR=\"${ERR}installation may be broken. \" ERR=\"${ERR}if this is expected then rerun install.sh with `--disable-verify` \" ERR=\"${ERR}or `make install` with `--disable-verify-install`\" err \"${ERR}\" export $CFG_LD_PATH_VAR=\"${CFG_PREFIX}/lib\":$CFG_OLD_LD_PATH_VAR \"${CFG_PREFIX}/bin/rustc\" --version > /dev/null if [ $? -ne 0 ] then ERR=\"can't execute installed rustc binary. \" ERR=\"${ERR}installation may be broken. \" ERR=\"${ERR}if this is expected then rerun install.sh with `--disable-verify` \" ERR=\"${ERR}or `make install` with `--disable-verify-install`\" err \"${ERR}\" else echo echo \" please ensure '${CFG_PREFIX}/lib' is added to ${CFG_LD_PATH_VAR}\" echo fi fi fi", "commid": "rust_pr_15550"}], "negative_passages": []} {"query_id": "q-en-rust-e7d5c4cff4e3956d56aa6d2f4d95d22c14f61bd4f17ed05e5d4264aafe72e806", "query": "At this point, and both exist. Just in case it matters: my prefix is .\nFallout from , perhaps? This isn\u2019t the first time we\u2019ve had completely broken (two months ago it happened, ). Why don\u2019t we add it to the tests done on the buildbot?\nIndeed, this was predicted as a known consequence of . It may be the case that asking people to set is too much. 
We might need/want wrapper scripts to drive that set up such environment variables, much like other projects have.\nFixed by .\n(see comment , since it is relevant to this issue.)", "positive_passages": [{"docid": "doc-en-rust-c23d4e10ab6c0b51097190df93aa74ad9bfe719c59eb2a5ff34a3500fe1f1f28", "text": "TTNonterminal(Span, Ident) } /// Matchers are nodes defined-by and recognized-by the main rust parser and /// language, but they're only ever found inside syntax-extension invocations; /// indeed, the only thing that ever _activates_ the rules in the rust parser /// for parsing a matcher is a matcher looking for the 'matchers' nonterminal /// itself. Matchers represent a small sub-language for pattern-matching /// token-trees, and are thus primarily used by the macro-defining extension /// itself. /// /// MatchTok /// -------- /// /// A matcher that matches a single token, denoted by the token itself. So /// long as there's no $ involved. /// /// /// MatchSeq /// -------- /// /// A matcher that matches a sequence of sub-matchers, denoted various /// possible ways: /// /// $(M)* zero or more Ms /// $(M)+ one or more Ms /// $(M),+ one or more comma-separated Ms /// $(A B C);* zero or more semi-separated 'A B C' seqs /// /// /// MatchNonterminal /// ----------------- /// /// A matcher that matches one of a few interesting named rust /// nonterminals, such as types, expressions, items, or raw token-trees. A /// black-box matcher on expr, for example, binds an expr to a given ident, /// and that ident can re-occur as an interpolation in the RHS of a /// macro-by-example rule. For example: /// /// $foo:expr => 1 + $foo // interpolate an expr /// $foo:tt => $foo // interpolate a token-tree /// $foo:tt => bar! $foo // only other valid interpolation /// // is in arg position for another /// // macro /// /// As a final, horrifying aside, note that macro-by-example's input is /// also matched by one of these matchers. Holy self-referential! 
It is matched /// by a MatchSeq, specifically this one: /// /// $( $lhs:matchers => $rhs:tt );+ /// /// If you understand that, you have closed the loop and understand the whole /// macro system. Congratulations. // Matchers are nodes defined-by and recognized-by the main rust parser and // language, but they're only ever found inside syntax-extension invocations; // indeed, the only thing that ever _activates_ the rules in the rust parser // for parsing a matcher is a matcher looking for the 'matchers' nonterminal // itself. Matchers represent a small sub-language for pattern-matching // token-trees, and are thus primarily used by the macro-defining extension // itself. // // MatchTok // -------- // // A matcher that matches a single token, denoted by the token itself. So // long as there's no $ involved. // // // MatchSeq // -------- // // A matcher that matches a sequence of sub-matchers, denoted various // possible ways: // // $(M)* zero or more Ms // $(M)+ one or more Ms // $(M),+ one or more comma-separated Ms // $(A B C);* zero or more semi-separated 'A B C' seqs // // // MatchNonterminal // ----------------- // // A matcher that matches one of a few interesting named rust // nonterminals, such as types, expressions, items, or raw token-trees. A // black-box matcher on expr, for example, binds an expr to a given ident, // and that ident can re-occur as an interpolation in the RHS of a // macro-by-example rule. For example: // // $foo:expr => 1 + $foo // interpolate an expr // $foo:tt => $foo // interpolate a token-tree // $foo:tt => bar! $foo // only other valid interpolation // // is in arg position for another // // macro // // As a final, horrifying aside, note that macro-by-example's input is // also matched by one of these matchers. Holy self-referential! It is matched // by a MatchSeq, specifically this one: // // $( $lhs:matchers => $rhs:tt );+ // // If you understand that, you have closed the loop and understand the whole // macro system. 
Congratulations. pub type Matcher = Spanned; #[deriving(Clone, PartialEq, Eq, Encodable, Decodable, Hash)]", "commid": "rust_pr_15550"}], "negative_passages": []} {"query_id": "q-en-rust-b0830a8a78d11d019a1b8b64e3c214b752a82adb8681f6449cc5f6a6ecae1c54", "query": "A fixed-length array expression with a repeat count with the high bit set and no suffix is incorrectly treated as a negative count. On a 64-bit machine: This results in\nCan be easily fixed after landed\nSeems that it is not an issue any more.\nI'm thinking that if it behave differently on different platforms, is it making it an undefined behavior? And I think designating an exact behavior for UB cannot possibly be a breaking change?\nThis does not make it undefined behavior. It's very much defined behavior. If it were undefined behavior, then any or literal would be undefined behavior as well, because the range of valid values is architecture-dependent.", "positive_passages": [{"docid": "doc-en-rust-5c5a28a08afa9e9a1bca597f5da3fea72c5beea09a322d41a6ab0cc6b389b37d", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// error-pattern: too big for the current architecture #[cfg(target_pointer_width = \"32\")] fn main() { let x = [0usize; 0xffff_ffff]; } #[cfg(target_pointer_width = \"64\")] fn main() { let x = [0usize; 0xffff_ffff_ffff_ffff]; } ", "commid": "rust_pr_27133"}], "negative_passages": []} {"query_id": "q-en-rust-0eddf3ef6312ad285853d06e101e89bfbf34369cadb01b50dccf9d0be8c60066", "query": "We should record the names that users give complex types and use them when we can when reporting diagnostics.\nWONTFIX (not require for bootstrapping; works correctly in rustc already).", "positive_passages": [{"docid": "doc-en-rust-f30a4a313f877b3bbcc8c6bf08f0e468ccaa46514db69c115a3d06715268e1fc", "text": "rustup toolchain install --profile minimal nightly-${TOOLCHAIN} # Sanity check to see if the nightly exists echo nightly-${TOOLCHAIN} > rust-toolchain echo \"=> Uninstalling all old nighlies\" echo \"=> Uninstalling all old nightlies\" for nightly in $(rustup toolchain list | grep nightly | grep -v $TOOLCHAIN | grep -v nightly-x86_64); do rustup toolchain uninstall $nightly done", "commid": "rust_pr_97887"}], "negative_passages": []} {"query_id": "q-en-rust-2bef1c848f9e8900962cfff266d5341880ab73627fd8865536de623b9538e9a8", "query": "fails with vector v.\nWONTFIX (not required for bootstrapping, reopen if somehow this re-emerges in rustc)", "positive_passages": [{"docid": "doc-en-rust-f30a4a313f877b3bbcc8c6bf08f0e468ccaa46514db69c115a3d06715268e1fc", "text": "rustup toolchain install --profile minimal nightly-${TOOLCHAIN} # Sanity check to see if the nightly exists echo nightly-${TOOLCHAIN} > rust-toolchain echo \"=> Uninstalling all old nighlies\" echo \"=> Uninstalling all old nightlies\" for nightly in $(rustup toolchain list | grep nightly | grep -v $TOOLCHAIN | grep -v nightly-x86_64); do rustup toolchain uninstall $nightly done", "commid": "rust_pr_97887"}], "negative_passages": []} {"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", 
"query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-d36d32fd3d4345a3bf226c229abe0dae9016f24f81ce217e95a40d851aa2b4d6", "text": " Subproject commit 2751bdcef125468ea2ee006c11992cd1405aebe5 Subproject commit 34fca48ed284525b2f124bf93c51af36d6685492 ", "commid": "rust_pr_115761"}], "negative_passages": []} {"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. 
Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-4486230c5649bfb99f8b8b2a73149a6e84bc6ed0ad65fcb881fca7e59c882606", "text": " Subproject commit 388750b081c0893c275044d37203f97709e058ba Subproject commit e3f3af69dce71cd37a785bccb7e58449197d940c ", "commid": "rust_pr_115761"}], "negative_passages": []} {"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. 
Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-fc90d79d15ff6d2cf104695d5e4e15cfbf996a2a921b483b76249444bbd3e0ce", "text": " Subproject commit d43038932adeb16ada80e206d4c073d851298101 Subproject commit ee7c676fd6e287459cb407337652412c990686c0 ", "commid": "rust_pr_115761"}], "negative_passages": []} {"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. 
Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-b2a11c6984f4036649711b8c163467eea27f9c2d2d043354ba5e884f779a2a20", "text": " Subproject commit 07e0df2f006e59d171c6bf3cafa9d61dbeb520d8 Subproject commit c954202c1e1720cba5628f99543cc01188c7d6fc ", "commid": "rust_pr_115761"}], "negative_passages": []} {"query_id": "q-en-rust-72d4c72ff7db52a00a503bf5c52b2971949231687f668c77eb806ffbe5f15c17", "query": "On the pull request for (), had a good suggestion for what I think would make a good additional function: \"a version of frombytes which doesn't fail on invalid input but rather offers some sort of error recovery options. Like maybe it can return the partly converted string with an index to the invalid position. Or perhaps, a callback function to handle the error, etc.\" At a glance the easy way to do this would be with a callback to, e.g., drop, replace, or coerce offending sequences into UTF-8 or else return a ... But is this a recipe for complex callbacks? Perhaps providing sensible callback functions like , , , and so on would make this simple enough to use. Maybe those should just be simple functions for whole byte vector conversion, anyways. What do you think?\nI don't have a strong opinion about it, but I guess I would prefer to wait until somebody actually needs something like this.\nNot RFC-level proposal. Just a library wishlist item. Might accept a patch; but it seems a bit overwrought. Anyone need such a thing yet?\nI think this is subsumed in ; we might well adopt a dynamic-handler mechanism while working through that bug. 
This would be a special case.", "positive_passages": [{"docid": "doc-en-rust-a5f5f68f24cab9f177c848c1dd6b9756981899e73e2c1635bc805746cfffb84b", "text": " Subproject commit b123ab4754127d822ffb38349ce0fbf561f1b2fd Subproject commit 08bb147d51e815b96e8db7ba4cf870f201c11ff8 ", "commid": "rust_pr_115761"}], "negative_passages": []} {"query_id": "q-en-rust-43ccaabceba2b19f3a93776a48d06f56c6a694c2a2eaa20060ed54d6293da90e", "query": "The std docs should list: The version they were built with The build date They could be at the top or bottom of every page, or on a stand-alone page like About or Help. This may not be important on rust- where I think they are rebuilt with every push but it seems like a good idea. It might be more useful on other sites where it isn't rebuilt for every push.\nI don't think this is a problem with std itself, since the version is in the URL (or it's master, which was built in the past 24 hours).\nNot sure where you std docs URL doesn't list a version. I don't think it will be a problem for Rust long-term. Once stable, the address will probably be something like with version varying between versions.\nPartly a dupe of\nYes, this is basically a dup of /", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The <> section contains details on configuration for some of the editors. In general `rust-analyzer` is configured via LSP messages, which means that it's up to the editor to decide on the exact format and location of configuration files. Some clients, such as <> or <> provide `rust-analyzer` specific configuration UIs. 
Others may require you to know a bit more about the interaction with `rust-analyzer`.", "commid": "rust_pr_128490"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-c179906366701996c3e746a1184066cf25db4b271bde26e80c510772e515a2f8", "text": "let stability_index = time(time_passes, \"stability index\", (), |_| stability::Index::build(krate)); time(time_passes, \"static item recursion checking\", (), |_| middle::check_static_recursion::check_crate(&sess, krate, &def_map, &ast_map)); let ty_cx = ty::mk_ctxt(sess, type_arena, def_map,", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. 
This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-b1909a43e0a7b2681695308637fae53ed4409eccbde35c82eb12e83cf5afae64", "text": "pub mod borrowck; pub mod cfg; pub mod check_const; pub mod check_static_recursion; pub mod check_loop; pub mod check_match; pub mod check_rvalues;", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. 
This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-533565ffca50e2317793e19db02767a8cda696ed4b6ac871d9176d4c67cba88c", "text": "// except according to those terms. use driver::session::Session; use middle::def::*; use middle::resolve; use middle::ty; use middle::typeck; use util::ppaux; use syntax::ast::*; use syntax::{ast_util, ast_map}; use syntax::ast_util; use syntax::visit::Visitor; use syntax::visit;", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. 
I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-c58d041ecdb9c4a91b6fac4dd2886bbb4f36dda7349357790bc41f12973abacb", "text": "match it.node { ItemStatic(_, _, ref ex) => { v.inside_const(|v| v.visit_expr(&**ex)); check_item_recursion(&v.tcx.sess, &v.tcx.map, &v.tcx.def_map, it); } ItemEnum(ref enum_definition, _) => { for var in (*enum_definition).variants.iter() {", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. 
I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-849c04e400f26dedabc4a8dff4e33f6d7b742b5ef7bdeac33c33e4a76c5d0447", "text": "} visit::walk_expr(v, e); } struct CheckItemRecursionVisitor<'a, 'ast: 'a> { root_it: &'a Item, sess: &'a Session, ast_map: &'a ast_map::Map<'ast>, def_map: &'a resolve::DefMap, idstack: Vec } // Make sure a const item doesn't recursively refer to itself // FIXME: Should use the dependency graph when it's available (#1356) pub fn check_item_recursion<'a>(sess: &'a Session, ast_map: &'a ast_map::Map, def_map: &'a resolve::DefMap, it: &'a Item) { let mut visitor = CheckItemRecursionVisitor { root_it: it, sess: sess, ast_map: ast_map, def_map: def_map, idstack: Vec::new() }; visitor.visit_item(it); } impl<'a, 'ast, 'v> Visitor<'v> for CheckItemRecursionVisitor<'a, 'ast> { fn visit_item(&mut self, it: &Item) { if self.idstack.iter().any(|x| x == &(it.id)) { self.sess.span_fatal(self.root_it.span, \"recursive constant\"); } self.idstack.push(it.id); visit::walk_item(self, it); self.idstack.pop(); } fn visit_expr(&mut self, e: &Expr) { match e.node { ExprPath(..) => { match self.def_map.borrow().find(&e.id) { Some(&DefStatic(def_id, _)) if ast_util::is_local(def_id) => { self.visit_item(&*self.ast_map.expect_item(def_id.node)); } _ => () } }, _ => () } visit::walk_expr(self, e); } } ", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. 
It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-9775302ddf9ce19807147945c99da99c41a4b5621f377d8220fc29ceda073f38", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // This compiler pass detects static items that refer to themselves // recursively. 
use driver::session::Session; use middle::resolve; use middle::def::DefStatic; use syntax::ast::{Crate, Expr, ExprPath, Item, ItemStatic, NodeId}; use syntax::{ast_util, ast_map}; use syntax::visit::Visitor; use syntax::visit; struct CheckCrateVisitor<'a, 'ast: 'a> { sess: &'a Session, def_map: &'a resolve::DefMap, ast_map: &'a ast_map::Map<'ast> } impl<'v, 'a, 'ast> Visitor<'v> for CheckCrateVisitor<'a, 'ast> { fn visit_item(&mut self, i: &Item) { check_item(self, i); } } pub fn check_crate<'ast>(sess: &Session, krate: &Crate, def_map: &resolve::DefMap, ast_map: &ast_map::Map<'ast>) { let mut visitor = CheckCrateVisitor { sess: sess, def_map: def_map, ast_map: ast_map }; visit::walk_crate(&mut visitor, krate); sess.abort_if_errors(); } fn check_item(v: &mut CheckCrateVisitor, it: &Item) { match it.node { ItemStatic(_, _, ref ex) => { check_item_recursion(v.sess, v.ast_map, v.def_map, it); visit::walk_expr(v, &**ex) }, _ => visit::walk_item(v, it) } } struct CheckItemRecursionVisitor<'a, 'ast: 'a> { root_it: &'a Item, sess: &'a Session, ast_map: &'a ast_map::Map<'ast>, def_map: &'a resolve::DefMap, idstack: Vec } // Make sure a const item doesn't recursively refer to itself // FIXME: Should use the dependency graph when it's available (#1356) pub fn check_item_recursion<'a>(sess: &'a Session, ast_map: &'a ast_map::Map, def_map: &'a resolve::DefMap, it: &'a Item) { let mut visitor = CheckItemRecursionVisitor { root_it: it, sess: sess, ast_map: ast_map, def_map: def_map, idstack: Vec::new() }; visitor.visit_item(it); } impl<'a, 'ast, 'v> Visitor<'v> for CheckItemRecursionVisitor<'a, 'ast> { fn visit_item(&mut self, it: &Item) { if self.idstack.iter().any(|x| x == &(it.id)) { self.sess.span_err(self.root_it.span, \"recursive constant\"); return; } self.idstack.push(it.id); visit::walk_item(self, it); self.idstack.pop(); } fn visit_expr(&mut self, e: &Expr) { match e.node { ExprPath(..) 
=> { match self.def_map.borrow().find(&e.id) { Some(&DefStatic(def_id, _)) if ast_util::is_local(def_id) => { self.visit_item(&*self.ast_map.expect_item(def_id.node)); } _ => () } }, _ => () } visit::walk_expr(self, e); } } ", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-56e33a072d9aec3b63af4a16f4a25f8a93a155cfe3938f1a7297febf189c70df", "query": "Normally when a static variable has a recursive the definition, the compiler issues an error. However, if the static is used (and not just defined) rustc overflows its stack. This program causes a stack overflow: Stack trace:\nConstant checking -- which detects this recursion -- occurs after type checking in the phase order, but type checking needs to evaluate the constant expression in order to find the length of the array type. It looks like strategically inserting a call to in the appropriate place would fix this at the cost of potentially performing the check many times.\nAfter some chatting on IRC, it looks like the best way to fix this is to break the item recursion check out into a separate pass that comes before type checking. This is my first time looking at this part of the compiler, but I'll give it a shot.\nThis approach looks promising. I need to do a full build/test run and then I can submit a PR.\nFixed.", "positive_passages": [{"docid": "doc-en-rust-4fd3b5fe5f73fb4ed6aabb3590e086c22b03f0ce2f037e91f636912e10b82c25", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
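The `CheckItemRecursionVisitor` shown in the passage above detects recursive constants by keeping a stack of item ids (`idstack`): it pushes the current id, walks the item's references, and reports "recursive constant" when an id reappears on the stack. A standalone sketch of that same idstack technique, written against a toy edge list rather than the compiler's AST (all names are illustrative, not rustc's actual code):

```rust
// Detect a cycle the way the visitor does: push the current id,
// recurse into everything it refers to, and report a cycle if an
// id is already on the stack.
fn has_cycle(edges: &[(u32, u32)], start: u32, stack: &mut Vec<u32>) -> bool {
    if stack.contains(&start) {
        return true; // analogous to the "recursive constant" error
    }
    stack.push(start);
    let found = edges
        .iter()
        .filter(|&&(from, _)| from == start)
        .any(|&(_, to)| has_cycle(edges, to, stack));
    stack.pop();
    found
}

fn main() {
    // Item 0 refers to itself, like `static FOO: uint = FOO;`.
    assert!(has_cycle(&[(0, 0)], 0, &mut Vec::new()));
    // 0 -> 1 with no back edge: no cycle.
    assert!(!has_cycle(&[(0, 1)], 0, &mut Vec::new()));
}
```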
static FOO: uint = FOO; //~ ERROR recursive constant fn main() { let _x: [u8, ..FOO]; // caused stack overflow prior to fix let _y: uint = 1 + { static BAR: uint = BAR; //~ ERROR recursive constant let _z: [u8, ..BAR]; // caused stack overflow prior to fix 1 }; } ", "commid": "rust_pr_17264"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-1d9ed2d556f6f3723a9c495ddfb4083726dde618afa0c3fa68ac6ec3b2ee1e82", "text": "euv::AutoRef(..) | euv::ClosureInvocation(..) | euv::ForLoop(..) | euv::RefBinding(..) => { euv::RefBinding(..) | euv::MatchDiscriminant(..) => { format!(\"previous borrow of `{}` occurs here\", self.bccx.loan_path_to_string(&*old_loan.loan_path)) }", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 
0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-8fb78beaf826eae51154114cba362d288503e49fa4e78f695f6c5266d31fbb7b", "text": "euv::AddrOf | euv::RefBinding | euv::AutoRef | euv::ForLoop => { euv::ForLoop | euv::MatchDiscriminant => { format!(\"cannot borrow {} as mutable\", descr) } euv::ClosureInvocation => {", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-016685476a62622310251de040610fe56e03ebee04acd84b80532bca1cbca340", "text": "BorrowViolation(euv::OverloadedOperator) | BorrowViolation(euv::AddrOf) | BorrowViolation(euv::AutoRef) | BorrowViolation(euv::RefBinding) => { BorrowViolation(euv::RefBinding) | BorrowViolation(euv::MatchDiscriminant) => { \"cannot borrow data mutably\" }", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). 
If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-e5f0f286a6044cd03fb2a2515ecf70804946a3cf79394990387079810ffd1680", "text": "OverloadedOperator, ClosureInvocation, ForLoop, MatchDiscriminant } #[deriving(PartialEq,Show)]", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-fc09eab11a1b6be20355e489ae2f307b01b9ed4d8003aac0361cbf38e0025f42", "text": "} ast::ExprMatch(ref discr, ref arms) => { // treatment of the discriminant is handled while // walking the arms: self.walk_expr(&**discr); let discr_cmt = return_if_err!(self.mc.cat_expr(&**discr)); self.borrow_expr(&**discr, ty::ReEmpty, ty::ImmBorrow, MatchDiscriminant); // treatment of the discriminant is handled while walking the arms. for arm in arms.iter() { self.walk_arm(discr_cmt.clone(), arm); }", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. 
The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-fd37ac97447eb5eeac81e0070892e0989f6c5d87857a2e1f303e5365166f9853", "text": "ref s => { tcx.sess.span_bug( span, format!(\"ty_region() invoked on in appropriate ty: {:?}\", format!(\"ty_region() invoked on an inappropriate ty: {:?}\", s).as_slice()); } }", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-82268035238162cf5088440a8a821f1f3c0861f2faaefd58bb2094295a4d3388", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
fn foo(_s: i16) { } fn bar(_s: u32) { } fn main() { foo(1*(1 as int)); //~^ ERROR: mismatched types: expected `i16`, found `int` (expected `i16`, found `int`) bar(1*(1 as uint)); //~^ ERROR: mismatched types: expected `u32`, found `uint` (expected `u32`, found `uint`) } ", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-d845f28c0e6fb3aafbf9b0cc8d6338405098b18e9e3365bca1f066397dd6837c", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
struct X(int); enum Enum { Variant1, Variant2 } impl Drop for X { fn drop(&mut self) {} } impl Drop for Enum { fn drop(&mut self) {} } fn main() { let foo = X(1i); drop(foo); match foo { //~ ERROR use of moved value X(1i) => (), _ => unreachable!() } let e = Variant2; drop(e); match e { //~ ERROR use of moved value Variant1 => unreachable!(), Variant2 => () } } ", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-1a12006d024279fa8475b1aa755ca045b88e351de146fd634f30bc96d4b7f187", "query": "Sorry for not providing smaller case but I hope this is better than nothing: Both versions works (compile and start), but runtime behavior changes. The code that follows the above part: will match no matter what really is, if the value is partially moved (so mouse events are updating my game). If I use the version with , the code works as intended and game is updated only on keyboard presses.\nMinimal: This shouldn't compile at all.\nNominating.\nLooks like a regression. 0.10 rejects this code.", "positive_passages": [{"docid": "doc-en-rust-c135a622274e276af9b78d0a702556dbd164b96bcf9f891b2e979a1c4795359b", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn foo(_s: i16) { } fn bar(_s: u32) { } fn main() { foo(1*(1 as int)); //~^ ERROR: mismatched types: expected `i16`, found `int` (expected `i16`, found `int`) bar(1*(1 as uint)); //~^ ERROR: mismatched types: expected `u32`, found `uint` (expected `u32`, found `uint`) } ", "commid": "rust_pr_17413"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-f9b93a71773c25d0181f68e5cbca4b40914917a202ba1d03bf7416703b9bc8bc", "text": "{ let parent = self.get_parent(id); let parent = match self.find_entry(id) { Some(EntryForeignItem(..)) | Some(EntryVariant(..)) => { // Anonymous extern items, enum variants and struct ctors // go in the parent scope. Some(EntryForeignItem(..)) => { // Anonymous extern items go in the parent scope. self.get_parent(parent) } // But tuple struct ctors don't have names, so use the path of its", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-ebcd74389ede4290e4c5af07f2ed630e582f9ea4841a1e24fa3cb10e21e867f0", "text": "prim_ty_to_ty(tcx, base_segments, prim_ty) } _ => { let node = def.def_id().node; span_err!(tcx.sess, span, E0248, \"found value name used as a type: {:?}\", *def); \"found value `{}` used as a type\", tcx.map.path_to_string(node)); return this.tcx().types.err; } }", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-811b351450a0343ce7380c32cc93caa1cd1687e0fd4976fbf56f70b8ef1bb520", "text": "Bar } fn foo(x: Foo::Bar) {} //~ERROR found value name used as a type fn foo(x: Foo::Bar) {} //~ERROR found value `Foo::Bar` used as a type fn main() {}", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-95485d66c01546882055e9a56fa31bbf266fc4f7536ce706a455c968636d3b71", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use foo::MyEnum::Result; use foo::NoResult; // Through a re-export mod foo { pub use self::MyEnum::NoResult; enum MyEnum { Result, NoResult } fn new() -> NoResult { //~^ ERROR: found value `foo::MyEnum::NoResult` used as a type unimplemented!() } } mod bar { use foo::MyEnum::Result; use foo; fn new() -> Result { //~^ ERROR: found value `foo::MyEnum::Result` used as a type unimplemented!() } } fn new() -> Result { //~^ ERROR: found value `foo::MyEnum::Result` used as a type unimplemented!() } fn newer() -> NoResult { //~^ ERROR: found value `foo::MyEnum::NoResult` used as a type unimplemented!() } fn main() { let _ = new(); } ", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
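The diagnostic discussed here ("found value `...` used as a type") arises because enum variants are values (constructors), while only the enum itself names a type. A minimal sketch of the distinction in modern Rust, mirroring the shape of the test cases in these records (module and function names are illustrative):

```rust
mod foo {
    // `MyEnum::Result` and `MyEnum::NoResult` are values, not types;
    // only `MyEnum` can appear in a type position.
    pub enum MyEnum {
        Result,
        NoResult,
    }
}

// Writing `-> foo::MyEnum::NoResult` here would trigger the error the
// records discuss; the return type must be the enum itself.
fn new() -> foo::MyEnum {
    foo::MyEnum::NoResult
}

fn main() {
    assert!(matches!(new(), foo::MyEnum::NoResult));
}
```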
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-8e6c69b38bf2824f4613d8b8144de439e9c049a521c69587a57bd9cacf66ec9b", "text": "#[rustc_move_fragments] pub fn test_match_partial(p: Lonely) { //~^ ERROR parent_of_fragments: `$(local p)` //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` match p { Zero(..) => {} _ => {}", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-b37d649e1458a97ca15d9db1ce2444c5522bd71d252718f893a08553101c3647", "text": "#[rustc_move_fragments] pub fn test_match_full(p: Lonely) { //~^ ERROR parent_of_fragments: `$(local p)` //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as One)` //~| ERROR assigned_leaf_path: `($(local p) as Two)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::One)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Two)` match p { Zero(..) => {} One(..) 
=> {}", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-e571967450a62871ffebeeeafab3b647ccbd345a34489715c4ece4999cc2b778", "text": "#[rustc_move_fragments] pub fn test_match_bind_one(p: Lonely) { //~^ ERROR parent_of_fragments: `$(local p)` //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR parent_of_fragments: `($(local p) as One)` //~| ERROR moved_leaf_path: `($(local p) as One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Two)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::One)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Two)` //~| ERROR assigned_leaf_path: `$(local data)` match p { Zero(..) => {}", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-3055714d0a0076ed06eb06970e8c5a2fefad109c00b2fa737b9b13c45f77e277", "text": "#[rustc_move_fragments] pub fn test_match_bind_many(p: Lonely) { //~^ ERROR parent_of_fragments: `$(local p)` //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR parent_of_fragments: `($(local p) as One)` //~| ERROR moved_leaf_path: `($(local p) as One).#0` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::One)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::One).#0` //~| ERROR assigned_leaf_path: `$(local data)` //~| ERROR parent_of_fragments: `($(local p) as Two)` //~| ERROR moved_leaf_path: `($(local p) as Two).#0` //~| ERROR moved_leaf_path: `($(local p) as Two).#1` //~| ERROR parent_of_fragments: `($(local p) as Lonely::Two)` //~| ERROR moved_leaf_path: `($(local p) as Lonely::Two).#0` //~| ERROR moved_leaf_path: `($(local p) as Lonely::Two).#1` //~| ERROR assigned_leaf_path: `$(local left)` //~| ERROR assigned_leaf_path: `$(local right)` match p {", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. 
The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-d2ad1581699f1bf6f4b31bb4c3edda02cf70003bb64b97852be0570474e26fa5", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // Copyright 2014-2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. //", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-4d672fa79ea5dc4ca0b03f6e9c158143c4fef169bb3527001c83c30347914a32", "query": "Couldn't find another issue for this. The error message for this is poor and should suggest qualifying the type, or at least clearly saying that a value from cannot be used as a type, and the span should then help the user understand (it is currently fine): Error message comes from astconv, so we could probably catch this in resolve?\n(I'll update the example so that the bug is still demonstrable.)", "positive_passages": [{"docid": "doc-en-rust-072931f0a08736b1d98904e8786be145a1e189dd6632aa1e277cc87101b84593", "text": "#[rustc_move_fragments] pub fn test_match_bind_and_underscore(p: Lonely) { //~^ ERROR parent_of_fragments: `$(local p)` //~| ERROR assigned_leaf_path: `($(local p) as Zero)` //~| ERROR assigned_leaf_path: `($(local p) as One)` //~| ERROR parent_of_fragments: `($(local p) as Two)` //~| ERROR moved_leaf_path: `($(local p) as Two).#0` //~| ERROR unmoved_fragment: `($(local p) as Two).#1` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::Zero)` //~| ERROR assigned_leaf_path: `($(local p) as Lonely::One)` //~| ERROR parent_of_fragments: `($(local p) as Lonely::Two)` //~| ERROR 
moved_leaf_path: `($(local p) as Lonely::Two).#0` //~| ERROR unmoved_fragment: `($(local p) as Lonely::Two).#1` //~| ERROR assigned_leaf_path: `$(local left)` match p {", "commid": "rust_pr_27085"}], "negative_passages": []} {"query_id": "q-en-rust-6452d80eb6b79925fe19a83cddf32060a56d592ce05d1e5097093835a315393c", "query": "Hi, I've been giving rust a try and hit my first compiler bug today. I was playing around with traits and ran into an error with the following code: In particular it seems it's the last line in that breaks things, if I comment that out it compiles (and runs) without issue. Here's output from rustc: I am using on OS X 10.9.4\nLooke like this no longer ICE-s: Now produces: Which seems like the right error message. Please reopen if you are still having trouble. Please reopen if you are still", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The <> section contains details on configuration for some of the editors. In general `rust-analyzer` is configured via LSP messages, which means that it's up to the editor to decide on the exact format and location of configuration files. Some clients, such as <> or <> provide `rust-analyzer` specific configuration UIs. Others may require you to know a bit more about the interaction with `rust-analyzer`.", "commid": "rust_pr_128490"}], "negative_passages": []} {"query_id": "q-en-rust-b8cb3c322e214a7d9f962a08fcd9706e15978f67485b7e8b997d16b45bd432b6", "query": "Some of core methods using macros. Like and of and . I found two primary things about this: () (, ) Seems like some of fail (panic) methods will be preserved (like for a ). But using of such methods is highly error-prone. 
To prevent mass usage of them I suggest to mark such methods as . For example:\nis used for operations that are memory unsafe. marking (supposed) code smells as would only pollute when or when not to use it. notice memory unsafe is a very clear definition, unsafe does not mean this code might be broken, per se. (as i understand this)\nis for drawing a boundary between memory unsafe and memory safe code It's not used to discourage the usage of safe APIs as that would break down the boundary and make it harder to identify memory safety issues. Usage of and is common in correct code, because there are invariants that are not encoded into the type system. Changing the meaning of would be a fundamental change to the design of the language / libraries so it would need to go through the RFC process (but it wouldn't be accepted).\n(now at )", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The <> section contains details on configuration for some of the editors. In general `rust-analyzer` is configured via LSP messages, which means that it's up to the editor to decide on the exact format and location of configuration files. Some clients, such as <> or <> provide `rust-analyzer` specific configuration UIs. Others may require you to know a bit more about the interaction with `rust-analyzer`.", "commid": "rust_pr_128490"}], "negative_passages": []} {"query_id": "q-en-rust-45b0dfa6ced3cec22a828f4dd65a3b8cbbb7417e7f308a97c1a0be78a3e6b164", "query": "Let's say that you have a JSON document similar to this: And the value of can be either a number or a string. There is currently no way to write a generic implementation of that can handle this. 
Attempting to read the value of will remove it and throw it away, even if the read triggers an error. You can't try to read a number and then read a string if it fails. The element is returned in the , so it should be possible to write an implementation of specifically for Json, but if you do so you can't put the object within another object that has .\nCan it not map to the enum?\nThe enum doesn't implement . It's probably for this reason.\nDuplicate of Should I close this?", "positive_passages": [{"docid": "doc-en-rust-19132abfb27af4f49ca30d8e2d0e18e3a621078a8576851670f89aa7943f27a1", "text": "**Source:** https://github.com/rust-lang/rust-analyzer/blob/master/crates/rust-analyzer/src/config.rs[config.rs] The <<_installation,Installation>> section contains details on configuration for some of the editors. The <> section contains details on configuration for some of the editors. In general `rust-analyzer` is configured via LSP messages, which means that it's up to the editor to decide on the exact format and location of configuration files. Some clients, such as <> or <> provide `rust-analyzer` specific configuration UIs. Others may require you to know a bit more about the interaction with `rust-analyzer`.", "commid": "rust_pr_128490"}], "negative_passages": []} {"query_id": "q-en-rust-5563e044fe48208a17201490824e1b4cfb81aae9011d2df189630cc85ecc978f", "query": "Code: Compile, run, result is Sorry, cannot find smaller example to reproduce the problem. 
Similar code, internal compiler error:\nSmaller:\nLooks like a libstd bug: could you file a separate issue for the internal compiler error?", "positive_passages": [{"docid": "doc-en-rust-24e10520017e04927b4bc99996d854b5a18f60900296f26c214e6596e7c44988", "text": "impl<'a> Writer for &'a mut Writer+'a { #[inline] fn write(&mut self, buf: &[u8]) -> IoResult<()> { self.write(buf) } fn write(&mut self, buf: &[u8]) -> IoResult<()> { (**self).write(buf) } #[inline] fn flush(&mut self) -> IoResult<()> { self.flush() } fn flush(&mut self) -> IoResult<()> { (**self).flush() } } /// A `RefWriter` is a struct implementing `Writer` which contains a reference", "commid": "rust_pr_17772"}], "negative_passages": []} {"query_id": "q-en-rust-ab800df65a92713515962d1913ca9ea6d1199b4c294c6be9c5f5547a986df8da", "query": "Noticed in . This is either a really odd way to ask for the version, or a bit of advice that's no longer relevant. What should we say instead? is a bit too_ verbose \u2014 I killed it after it wrote 8.1 GB of logs compiling .\nI think the intent was indeed to get the version output. I think the phrasing is only odd in that it does not make that intent clear. (It is possible I am wrong about the intent here.) But there is an additional problem: Sometime between 0.10 and 0.11, we revised the / output to ensure it would be only one line when you do not supply an argument to . This means we lose potentially relevant information, namely the host system type (e.g. or ). We could fix both of these problems by revising the instructions to say something like:\nClosing since was merged.", "positive_passages": [{"docid": "doc-en-rust-f47b1748fd17d8d4077a5d4299b23180667da74da10daaf668f7be18b7c6ea1e", "text": "It generally helps our diagnosis to include your specific OS (for example: Mac OS X 10.8.3, Windows 7, Ubuntu 12.04) and your hardware architecture (for example: i686, x86_64). 
It's also helpful to copy/paste the output of re-running the erroneous rustc command with the `-v` flag. Finally, if you can run the offending command under gdb, pasting a stack trace can be useful; to do so, you will need to set a breakpoint on `rust_fail`. It's also helpful to provide the exact version and host by copying the output of re-running the erroneous rustc command with the `--version=verbose` flag, which will produce something like this: ```{ignore} rustc 0.12.0 (ba4081a5a 2014-10-07 13:44:41 -0700) binary: rustc commit-hash: ba4081a5a8573875fed17545846f6f6902c8ba8d commit-date: 2014-10-07 13:44:41 -0700 host: i686-apple-darwin release: 0.12.0 ``` Finally, if you can run the offending command under gdb, pasting a stack trace can be useful; to do so, you will need to set a breakpoint on `rust_fail`. # I submitted a bug, but nobody has commented on it!", "commid": "rust_pr_18217"}], "negative_passages": []} {"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during bootstrap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... 
my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point it's probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe it's more a problem with how we check the state of our snapshots?)\nAt least building seems reasonable! Running the tests may end up just making a long cycle time longer, unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. (odd that I could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that GitHub closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-ecab79986a5f5b15d76efb8b078cd8104d835fe14f2319271e612c58a99a54c7", "text": "# Some less critical tests that are not prone to breakage. 
# Not run as part of the normal test suite, but tested by bors on checkin. check-secondary: check-lexer check-pretty check-secondary: check-build-compiletest check-lexer check-pretty # check + check-secondary. check-all: check check-secondary # # Issue #17883: build check-secondary first so hidden dependencies in # e.g. building compiletest are exercised (resolve those by adding # deps to rules that need them; not by putting `check` first here). check-all: check-secondary check # Pretty-printing tests. check-pretty: check-stage2-T-$(CFG_BUILD)-H-$(CFG_BUILD)-pretty-exec define DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE check-stage$(1)-build-compiletest: \t$$(HBIN$(1)_H_$(CFG_BUILD))/compiletest$$(X_$(CFG_BUILD)) endef $(foreach stage,$(STAGES), $(eval $(call DEF_CHECK_BUILD_COMPILETEST_FOR_STAGE,$(stage)))) check-build-compiletest: check-stage1-build-compiletest check-stage2-build-compiletest .PHONY: cleantmptestlogs cleantestlibs cleantmptestlogs:", "commid": "rust_pr_18012"}], "negative_passages": []} {"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during boot strap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... 
my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point it's probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe it's more a problem with how we check the state of our snapshots?)\nAt least building seems reasonable! Running the tests may end up just making a long cycle time longer, unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. (odd that I could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that GitHub closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. 
by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-29310bb14b15d5beb5024aa6f9b12246d5e25e23b7009e52024a78850fc668c5", "text": "PRETTY_DEPS_pretty-rfail = $(RFAIL_TESTS) PRETTY_DEPS_pretty-bench = $(BENCH_TESTS) PRETTY_DEPS_pretty-pretty = $(PRETTY_TESTS) # The stage- and host-specific dependencies are for e.g. macro_crate_test which pulls in # external crates. PRETTY_DEPS$(1)_H_$(3)_pretty-rpass = PRETTY_DEPS$(1)_H_$(3)_pretty-rpass-full = $$(HLIB$(1)_H_$(3))/stamp.syntax $$(HLIB$(1)_H_$(3))/stamp.rustc PRETTY_DEPS$(1)_H_$(3)_pretty-rfail = PRETTY_DEPS$(1)_H_$(3)_pretty-bench = PRETTY_DEPS$(1)_H_$(3)_pretty-pretty = PRETTY_DIRNAME_pretty-rpass = run-pass PRETTY_DIRNAME_pretty-rpass-full = run-pass-fulldeps PRETTY_DIRNAME_pretty-rfail = run-fail", "commid": "rust_pr_18012"}], "negative_passages": []} {"query_id": "q-en-rust-ec25fd14d88d508f5ac3114434115077f45a6cdb8cd231156e128750ad9f21e6", "query": "When trying to do development on the stage1 compiler, I've been hitting a link error. I suspect something has gone wrong in our make dependencies because I think the problem goes away if you first do a full build before doing (I think, though I have not double-checked that scenario from scratch in a clean build dir yet).\nAlso, needs to be written much like itself, in that it needs to be able to build under a snapshot compiler. I have been encountering issues with its use of slicing syntax due to how the associated feature gate has come and gone.\nI know there is well-founded opposition to gating a PR on passing, but wouldn't just adding to the set of required build products at each stage during boot strap prevent errors like this in the future?\n(sigh; the issue may be isolated to the snapshot compiler ... 
my attempts to reproduce with have not managed to duplicate the problem here...)\nSo at this point it's probably not feasible to fix the issue as described here, since I am pretty sure it is isolated to a problem in the snapshot compiler. But we can and should still try to increase the coverage of bors to handle at least building from a snapshot, if possible. (Maybe it's more a problem with how we check the state of our snapshots?)\nAt least building seems reasonable! Running the tests may end up just making a long cycle time longer, unfortunately :(. (but compiletest is fast to build)\nIn case anyone is curious, the problem also occurs on Linux. I would be curious to know if taking a new snapshot would fix this. I'm not exactly sure how best to test that theory; my attempts to use to test this theory have been somewhat flummoxed by different problems.\nIt does indeed seem like if I first do , after the but before the , then things proceed just fine. This is evidence for my original hypothesis that something has gone wrong in our make dependencies. (odd that I could not reproduce the problem via another build though. Then again, there is still some weird rpath-ish stuff in our OS X builds of that I imagine could easily mask a problem like this.\n(PR only fixes the build issue; it does not attempt to add to the bors cycle.)\n(PR also does not actually fix the problem I mentioned in comment: . That can be resolved separately; but I will revise the PR comment to not say that it \"fixes\" this issue.)\n(I edited the PR description but forgot that GitHub closes issues based on the original commit message.) I have a follow-on patch that resolves this in more fundamental ways, e.g. 
by both fixing the build itself, and also adding rules that will force bors to gate on continuing to build.", "positive_passages": [{"docid": "doc-en-rust-a5611babb5a32dcc78fe69805b515f2412dc6a1727c35003c1159f83aeef126e", "text": "$$(call TEST_OK_FILE,$(1),$(2),$(3),$(4)): $$(TEST_SREQ$(1)_T_$(2)_H_$(3)) $$(PRETTY_DEPS_$(4)) $$(PRETTY_DEPS_$(4)) $$(PRETTY_DEPS$(1)_H_$(3)_$(4)) @$$(call E, run pretty-rpass [$(2)]: $$<) $$(Q)$$(call CFG_RUN_CTEST_$(2),$(1),$$<,$(3)) $$(PRETTY_ARGS$(1)-T-$(2)-H-$(3)-$(4)) ", "commid": "rust_pr_18012"}], "negative_passages": []} {"query_id": "q-en-rust-2bd08bac4593184fa1555a33ac9920af931f01de868bc8dc2f6eca460a1ed72a", "query": "Technically speaking the negative duration is not the valid ISO 8601, but we need to print it anyway. This code: Produces: Should be , in fact Should be , in fact\nCc", "positive_passages": [{"docid": "doc-en-rust-b366ef045ef1049cdf63d4d8f45bb12c9863cd4306c56a94b3af612efa8efa2a", "text": "impl fmt::Show for Duration { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let days = self.num_days(); let secs = self.secs - days * SECS_PER_DAY; // technically speaking, negative duration is not valid ISO 8601, // but we need to print it anyway. let (abs, sign) = if self.secs < 0 { (-self, \"-\") } else { (*self, \"\") }; let days = abs.secs / SECS_PER_DAY; let secs = abs.secs - days * SECS_PER_DAY; let hasdate = days != 0; let hastime = (secs != 0 || self.nanos != 0) || !hasdate; let hastime = (secs != 0 || abs.nanos != 0) || !hasdate; try!(write!(f, \"{}P\", sign)); try!(write!(f, \"P\")); if hasdate { // technically speaking the negative part is not the valid ISO 8601, // but we need to print it anyway. 
try!(write!(f, \"{}D\", days)); } if hastime { if self.nanos == 0 { if abs.nanos == 0 { try!(write!(f, \"T{}S\", secs)); } else if self.nanos % NANOS_PER_MILLI == 0 { try!(write!(f, \"T{}.{:03}S\", secs, self.nanos / NANOS_PER_MILLI)); } else if self.nanos % NANOS_PER_MICRO == 0 { try!(write!(f, \"T{}.{:06}S\", secs, self.nanos / NANOS_PER_MICRO)); } else if abs.nanos % NANOS_PER_MILLI == 0 { try!(write!(f, \"T{}.{:03}S\", secs, abs.nanos / NANOS_PER_MILLI)); } else if abs.nanos % NANOS_PER_MICRO == 0 { try!(write!(f, \"T{}.{:06}S\", secs, abs.nanos / NANOS_PER_MICRO)); } else { try!(write!(f, \"T{}.{:09}S\", secs, self.nanos)); try!(write!(f, \"T{}.{:09}S\", secs, abs.nanos)); } } Ok(())", "commid": "rust_pr_18359"}], "negative_passages": []} {"query_id": "q-en-rust-2bd08bac4593184fa1555a33ac9920af931f01de868bc8dc2f6eca460a1ed72a", "query": "Technically speaking the negative duration is not the valid ISO 8601, but we need to print it anyway. This code: Produces: Should be , in fact Should be , in fact\nCc", "positive_passages": [{"docid": "doc-en-rust-d69a06d5cf88439ab664c2a05f9d905ae5bc70a4ea96bd64850c9782d9b56be5", "text": "let d: Duration = Zero::zero(); assert_eq!(d.to_string(), \"PT0S\".to_string()); assert_eq!(Duration::days(42).to_string(), \"P42D\".to_string()); assert_eq!(Duration::days(-42).to_string(), \"P-42D\".to_string()); assert_eq!(Duration::days(-42).to_string(), \"-P42D\".to_string()); assert_eq!(Duration::seconds(42).to_string(), \"PT42S\".to_string()); assert_eq!(Duration::milliseconds(42).to_string(), \"PT0.042S\".to_string()); assert_eq!(Duration::microseconds(42).to_string(), \"PT0.000042S\".to_string()); assert_eq!(Duration::nanoseconds(42).to_string(), \"PT0.000000042S\".to_string()); assert_eq!((Duration::days(7) + Duration::milliseconds(6543)).to_string(), \"P7DT6.543S\".to_string()); assert_eq!(Duration::seconds(-86401).to_string(), \"-P1DT1S\".to_string()); assert_eq!(Duration::nanoseconds(-1).to_string(), 
\"-PT0.000000001S\".to_string()); // the format specifier should have no effect on `Duration` assert_eq!(format!(\"{:30}\", Duration::days(1) + Duration::milliseconds(2345)),", "commid": "rust_pr_18359"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? 
It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about the linker. GCC may emit references to library symbols that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). 
gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-5a64775c8133565cc5ef4a9d94cd7af013ee6fd714f1702d23e068ee505df233", "text": "return found def make_win_dist(dist_root, target_triple): # Ask gcc where it keeps its' stuff # Ask gcc where it keeps its stuff gcc_out = subprocess.check_output([\"gcc.exe\", \"-print-search-dirs\"]) bin_path = os.environ[\"PATH\"].split(os.pathsep) lib_path = []", "commid": "rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. 
The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). 
I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-7352202efc5f8c2b688f1c17f135a04122fba4f87c2675c2eed1b068e1714fde", "text": "else: rustc_dlls.append(\"libgcc_s_seh-1.dll\") target_libs = [\"crtbegin.o\", \"crtend.o\", \"crt2.o\", \"dllcrt2.o\", \"libadvapi32.a\", \"libcrypt32.a\", \"libgcc.a\", \"libgcc_eh.a\", \"libgcc_s.a\", \"libimagehlp.a\", \"libiphlpapi.a\", \"libkernel32.a\", \"libm.a\", \"libmingw32.a\", \"libmingwex.a\", \"libmsvcrt.a\", \"libpsapi.a\", \"libshell32.a\", \"libstdc++.a\", \"libuser32.a\", \"libws2_32.a\", \"libiconv.a\", \"libmoldname.a\"] target_libs = [ # MinGW libs \"crtbegin.o\", \"crtend.o\", \"crt2.o\", \"dllcrt2.o\", \"libgcc.a\", \"libgcc_eh.a\", \"libgcc_s.a\", \"libm.a\", \"libmingw32.a\", \"libmingwex.a\", \"libstdc++.a\", \"libiconv.a\", \"libmoldname.a\", # Windows import libs \"libadvapi32.a\", \"libbcrypt.a\", \"libcomctl32.a\", \"libcomdlg32.a\", \"libcrypt32.a\", \"libctl3d32.a\", \"libgdi32.a\", \"libimagehlp.a\", \"libiphlpapi.a\", \"libkernel32.a\", \"libmsvcrt.a\", \"libodbc32.a\", \"libole32.a\", \"liboleaut32.a\", \"libopengl32.a\", \"libpsapi.a\", \"librpcrt4.a\", \"libsetupapi.a\", \"libshell32.a\", \"libuser32.a\", \"libuuid.a\", \"libwinhttp.a\", \"libwinmm.a\", \"libwinspool.a\", \"libws2_32.a\", \"libwsock32.a\", ] # Find mingw artifacts we want to bundle target_tools = find_files(target_tools, bin_path)", "commid": "rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. 
The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). 
I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-it, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user want to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of of beginners' issues. , if user really wants to use their own linker, user need to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). 
Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-f3d1b2ee3c688634debb3fdfc86ec3275bb56afec99cd371a51fffade30f8cc5", "text": "shutil.copy(src, dist_bin_dir) # Copy platform tools to platform-specific bin directory target_bin_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"gcc\", \"bin\") target_bin_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"bin\") if not os.path.exists(target_bin_dir): os.makedirs(target_bin_dir) for src in target_tools: shutil.copy(src, target_bin_dir) # Copy platform libs to platform-spcific lib directory target_lib_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"gcc\", \"lib\") target_lib_dir = os.path.join(dist_root, \"bin\", \"rustlib\", target_triple, \"lib\") if not os.path.exists(target_lib_dir): os.makedirs(target_lib_dir) for src in target_libs:", "commid": 
"rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using WIndows installer. The solution it to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user wants to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of beginners' issues. , if user really wants to use their own linker, user needs to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-edfcbb80f35184875d93829291391d692c0720b88643782c304fe5053aecbfc3", "text": "cmd.arg(obj_filename.with_extension(\"metadata.o\")); } // Rust does its own LTO cmd.arg(\"-fno-lto\"); if t.options.is_like_osx { // The dead_strip option to the linker specifies that functions and data // unreachable by the entry point will be removed. 
This is quite useful", "commid": "rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using Windows installer. The solution is to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? 
I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user wants to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? 
is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of beginners' issues. , if user really wants to use their own linker, user needs to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. 
Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-29d8a8c9b4820cddb7e2682607c23d70ffd9d905c299846a9497236e348c69ad", "text": "trans: &CrateTranslation, outputs: &OutputFilenames) { let old_path = os::getenv(\"PATH\").unwrap_or_else(||String::new()); let mut new_path = os::split_paths(old_path.as_slice()); new_path.extend(sess.host_filesearch().get_tools_search_paths().into_iter()); let mut new_path = sess.host_filesearch().get_tools_search_paths(); new_path.extend(os::split_paths(old_path.as_slice()).into_iter()); os::setenv(\"PATH\", os::join_paths(new_path.as_slice()).unwrap()); time(sess.time_passes(), \"linking\", (), |_|", "commid": "rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-28bc114395ce3c204b6ca391330a1753c30a2c95379e09e92a3c36c37a321a3c", "query": "Linking may fail when an incompatible installation of MinGW is on the PATH, because rustc ends up running external linker instead of the bundled one. I keep seeing people running into this problem when using Windows installer. The solution is to run rustc in a console with stripped-down PATH, but this workaround is very nonobvious. I am afraid this will negatively affect perception of quality of our Windows distribution because many early adopters will have some version of mingw already installed on their machines. I think the earlier decision to make rustc prefer bundled binaries () was the correct one. We should un-revert , and fix cargo build instead.\ncc , , rust-lang/cargo\nRegarding cargo, I think we can take one of the following approaches: Use the snapshot compiler. Specify linker path explicitly via . Delete/rename directory after installing. Add a new command line flag to rustc for controlling this behavior. Perhaps something like ? 
cc\nIssues like are definitely quite worrying, but I would see this as more than just a problem with Cargo but also extending to any project relying on libs installed on windows by some compiler toolchain. The bundled gcc doesn't know anything about system lookup paths, and I see this as akin to saying we should run gcc on unix with no default lookup paths (which would likely be disastrous). I suspect that there are other projects which rely on libs coming from a MinGW, and this seems similar to relying on randomly installed libraries on unix systems, so I'm not sure if it's necessarily a bad thing. Do we know why the error in is occurring? It looks like there's an incompatibility with the MinGW installation, but do we know precisely what it is? I was under the impression that linkers were intended to interoperate well...\nI don't really want gcc to be able to find arbitrary mingw libraries because I don't want Rust users on Windows to depend on mingw (imagine cargo packages that expect mingw libs but only a fraction of windows users actually have it installed). I've misunderstood the implications of this issue before, but my inclination is to do the revert and make the cargo build do the extra work to get at the libs it needs.\nNominating since this issue gets hit pretty often and is the last major usability problem on windows.\nNot quite: mingw gcc has a default library search path built-in, and it is expressed relative to the installation point. So the one we bundle is going to look for libraries only under (unless we supply more via , of course). This is mostly about library compatibility, not about linker. GCC may emit references to library symbols, that may not exist/be buggy in older versions of mingw. 
This particular one probably happened because the developer had installed mingw with sjlj exception handling, which defines UnwindSjLjResume instead of UnwindResume.\nAssigning P-high, 1.0.\nNot being able to access the full set of mingw libs means that people can't link to random GNU libs (which I'm in favor of). I'm unclear on whether it also prevents people from linking to various windows system libraries (I have this vague idea that mingw includes artifacts that make the linker able to link to e.g. ). Is there any concern there?\ngcc needs to recognize system library (), so picks up . We ship , but we don't ship or so , rust-lang/cargo, and occurs. So, what's the role of bundle? I thought bundle is for providing \"minimal\" linking environment so that beginners can build hello-world without installing mingw-w64. In this scenario, if user wants to do more than bundle can provide (e.g. gdi32), user must install mingw-w64. At least, this is current status. Maybe we can bundle ? is 72M.\nPreferring bundle means: if user has old mingw, will not use it. This solves lots of beginners' issues. , if user really wants to use their own linker, user needs to do something. This complicates build story. So my suggestion is 1) prefer bundle and 2) add a flag to not use bundle at all (rather than changing preference). Maybe or ?\non the other hand, there are a few libs that need to be matched to the version of gcc that Rust was built with. It's mostly stuff like and ; possibly also ,. So simply deferring to external linker probably isn't a good idea, unless we ensure that our bundled libs are being found first. I kinda like the idea of shipping the minimal set of import libraries that is sufficient for linking Rust's standard library. We could add a few more common ones like , but everything is a bit much, IMHO. For reference, here's the list of import libraries that D ships with: , , , , , ,, ,, , , , , , , , . 
Btw, mingw can also link to dlls directly, so in many cases adding to the command line is enough.\nld can also link to dlls directly? Is it possible to #link directly in them in source? #[link(path = \"%systemroot%system32\")] or something like that?\nThat D ships a set of system import libraries is encouraging. Though if mingw can link to dlls directly it's not clear why this is such a problem - if we were to just have rustc add the system32 path by default would this problem with windows import libraries go away completely?", "positive_passages": [{"docid": "doc-en-rust-7e07e3b4132822e8a2c7fa4748fc7d92a65fd3a478956874acae98ac3fae5de3", "text": "p.push(find_libdir(self.sysroot)); p.push(rustlibdir()); p.push(self.triple); let mut p1 = p.clone(); p1.push(\"bin\"); let mut p2 = p.clone(); p2.push(\"gcc\"); p2.push(\"bin\"); vec![p1, p2] p.push(\"bin\"); vec![p] } }", "commid": "rust_pr_18797"}], "negative_passages": []} {"query_id": "q-en-rust-02bc8dabc7d8997f6eccaac1b7159290b362b0fedce53008990455f3efd75625", "query": "The following causes an LLVM assertion failure: Removing the panic! prevents the assertion failure.\nI minimized it some more: The Option let retty = if ty.kind() == llvm::Integer { let retty = if ty.kind() == llvm::Function { ty.return_type() } else { ccx.int_type()", "commid": "rust_pr_18644"}], "negative_passages": []} {"query_id": "q-en-rust-02bc8dabc7d8997f6eccaac1b7159290b362b0fedce53008990455f3efd75625", "query": "The following causes an LLVM assertion failure: Removing the panic! prevents the assertion failure.\nI minimized it some more: The Option // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// error-pattern:stop // #18576 // Make sure that an calling extern function pointer in an unreachable // context doesn't cause an LLVM assertion #[allow(unreachable_code)] fn main() { panic!(\"stop\"); let pointer = other; pointer(); } extern fn other() {} ", "commid": "rust_pr_18644"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-5cee976ca2181387d2d6b2fa468ac944811ec96c8e33acd21528030fe840a4dc", "text": "use common::{CodegenCx, val_ty}; use declare; use monomorphize::Instance; use syntax_pos::Span; use syntax_pos::symbol::LocalInternedString; use type_::Type; use type_of::LayoutLlvmExt; use rustc::ty; use rustc::ty::{self, Ty}; use rustc::ty::layout::{Align, LayoutOf}; use rustc::hir::{self, CodegenFnAttrFlags}; use rustc::hir::{self, CodegenFnAttrs, CodegenFnAttrFlags}; use std::ffi::{CStr, CString};", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the 
irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-a2b792b6c02802e532211ebd80563bb5f432526afa16c814eaf6a391c1753aad", "text": "let ty = instance.ty(cx.tcx); let sym = cx.tcx.symbol_name(instance).as_str(); debug!(\"get_static: sym={} instance={:?}\", sym, instance); let g = if let Some(id) = cx.tcx.hir.as_local_node_id(def_id) { let llty = cx.layout_of(ty).llvm_type(cx);", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-318b3ac5f2ec07849df1c79698c379f480406bae59be166603870eea318c8274", "text": "hir_map::NodeForeignItem(&hir::ForeignItem { ref attrs, span, node: hir::ForeignItemKind::Static(..), .. }) => { let g = if let Some(linkage) = cx.tcx.codegen_fn_attrs(def_id).linkage { // If this is a static with a linkage specified, then we need to handle // it a little specially. The typesystem prevents things like &T and // extern \"C\" fn() from being non-null, so we can't just declare a // static and call it a day. Some linkages (like weak) will make it such // that the static actually has a null value. let llty2 = match ty.sty { ty::TyRawPtr(ref mt) => cx.layout_of(mt.ty).llvm_type(cx), _ => { cx.sess().span_fatal(span, \"must have type `*const T` or `*mut T`\"); } }; unsafe { // Declare a symbol `foo` with the desired linkage. 
let g1 = declare::declare_global(cx, &sym, llty2); llvm::LLVMRustSetLinkage(g1, base::linkage_to_llvm(linkage)); // Declare an internal global `extern_with_linkage_foo` which // is initialized with the address of `foo`. If `foo` is // discarded during linking (for example, if `foo` has weak // linkage and there are no definitions), then // `extern_with_linkage_foo` will instead be initialized to // zero. let mut real_name = \"_rust_extern_with_linkage_\".to_string(); real_name.push_str(&sym); let g2 = declare::define_global(cx, &real_name, llty).unwrap_or_else(||{ cx.sess().span_fatal(span, &format!(\"symbol `{}` is already defined\", &sym)) }); llvm::LLVMRustSetLinkage(g2, llvm::Linkage::InternalLinkage); llvm::LLVMSetInitializer(g2, g1); g2 } } else { // Generate an external declaration. declare::declare_global(cx, &sym, llty) }; (g, attrs) let fn_attrs = cx.tcx.codegen_fn_attrs(def_id); (check_and_apply_linkage(cx, &fn_attrs, ty, sym, Some(span)), attrs) } item => bug!(\"get_static: expected static, found {:?}\", item) }; debug!(\"get_static: sym={} attrs={:?}\", sym, attrs); for attr in attrs { if attr.check_name(\"thread_local\") { llvm::set_thread_local_mode(g, cx.tls_model);", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-68bc26f3d58da61ad193376712dd0cf98554e5aed8baa00b65890d6fffcb44df", "text": "g } else { // FIXME(nagisa): perhaps the 
map of externs could be offloaded to llvm somehow? // FIXME(nagisa): investigate whether it can be changed into define_global let g = declare::declare_global(cx, &sym, cx.layout_of(ty).llvm_type(cx)); debug!(\"get_static: sym={} item_attr={:?}\", sym, cx.tcx.item_attrs(def_id)); let attrs = cx.tcx.codegen_fn_attrs(def_id); let g = check_and_apply_linkage(cx, &attrs, ty, sym, None); // Thread-local statics in some other crate need to *always* be linked // against in a thread-local fashion, so we need to be sure to apply the // thread-local attribute locally if it was present remotely. If we // don't do this then linker errors can be generated where the linker // complains that one object files has a thread local version of the // symbol and another one doesn't. for attr in cx.tcx.get_attrs(def_id).iter() { if attr.check_name(\"thread_local\") { llvm::set_thread_local_mode(g, cx.tls_model); } if attrs.flags.contains(CodegenFnAttrFlags::THREAD_LOCAL) { llvm::set_thread_local_mode(g, cx.tls_model); } if cx.use_dll_storage_attrs && !cx.tcx.is_foreign_item(def_id) { // This item is external but not foreign, i.e. it originates from an external Rust // crate. 
Since we don't know whether this crate will be linked dynamically or", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-ffdea37f271aa115c8be581deff56497e651f233aaa307a965a4af1cf39ad956", "text": "g } fn check_and_apply_linkage<'tcx>( cx: &CodegenCx<'_, 'tcx>, attrs: &CodegenFnAttrs, ty: Ty<'tcx>, sym: LocalInternedString, span: Option ) -> ValueRef { let llty = cx.layout_of(ty).llvm_type(cx); if let Some(linkage) = attrs.linkage { debug!(\"get_static: sym={} linkage={:?}\", sym, linkage); // If this is a static with a linkage specified, then we need to handle // it a little specially. The typesystem prevents things like &T and // extern \"C\" fn() from being non-null, so we can't just declare a // static and call it a day. Some linkages (like weak) will make it such // that the static actually has a null value. let llty2 = match ty.sty { ty::TyRawPtr(ref mt) => cx.layout_of(mt.ty).llvm_type(cx), _ => { if span.is_some() { cx.sess().span_fatal(span.unwrap(), \"must have type `*const T` or `*mut T`\") } else { bug!(\"must have type `*const T` or `*mut T`\") } } }; unsafe { // Declare a symbol `foo` with the desired linkage. let g1 = declare::declare_global(cx, &sym, llty2); llvm::LLVMRustSetLinkage(g1, base::linkage_to_llvm(linkage)); // Declare an internal global `extern_with_linkage_foo` which // is initialized with the address of `foo`. 
If `foo` is // discarded during linking (for example, if `foo` has weak // linkage and there are no definitions), then // `extern_with_linkage_foo` will instead be initialized to // zero. let mut real_name = \"_rust_extern_with_linkage_\".to_string(); real_name.push_str(&sym); let g2 = declare::define_global(cx, &real_name, llty).unwrap_or_else(||{ if span.is_some() { cx.sess().span_fatal( span.unwrap(), &format!(\"symbol `{}` is already defined\", &sym) ) } else { bug!(\"symbol `{}` is already defined\", &sym) } }); llvm::LLVMRustSetLinkage(g2, llvm::Linkage::InternalLinkage); llvm::LLVMSetInitializer(g2, g1); g2 } } else { // Generate an external declaration. // FIXME(nagisa): investigate whether it can be changed into define_global declare::declare_global(cx, &sym, llty) } } pub fn codegen_static<'a, 'tcx>( cx: &CodegenCx<'a, 'tcx>, def_id: DefId,", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-567c2b5d02a723fbedef3eaa51665a4b6afc0ac1083bb3915a6354d86ce4cf4c", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![crate_type = \"rlib\"] #![feature(linkage)] pub fn foo() -> *const() { extern { #[linkage = \"extern_weak\"] static FOO: *const(); } unsafe { FOO } } ", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-84928dfd6fe4708e73d371a7088e17c27c31483dfa3506634d725091306ed23e", "query": "For example: This can be a little more clearly seen with the IR generated for main: Note the lack of anywhere on .\nTriage: this is still true today, though needs to be updated to , and there's now a warning about not being a valid foreign type, which is irrelevant to this particular issue.\nclaims it repros still.\nUpdated test code: Still fails to compile with the same errors mentioned in the top post along with the irrelevant invalid foreign type warning mentioned by", "positive_passages": [{"docid": "doc-en-rust-25301a74a2b188ce9e6063016a54016b5d19f38478938b159c5b1ea8a53e2d07", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// Test for issue #18804, #[linkage] does not propagate through generic // functions. Failure results in a linker error. // ignore-asmjs no weak symbol support // ignore-emscripten no weak symbol support // aux-build:lib.rs extern crate lib; fn main() { lib::foo::(); } ", "commid": "rust_pr_52635"}], "negative_passages": []} {"query_id": "q-en-rust-83215f4905eb1ba23588dd5bad0f466683bbdb606bdbed0cd36c49c4e6bd9c83", "query": "This was the smallest I could make it. Commenting out the lines for either or or making and the same type allows compilation to finish successfully. Error: What I don't understand is: why in the world is it even complaining about implementations when no one is using ? 
Is this some sort of weird interaction with trait object types? (rustc at commit , Linux)\nI've been seeing this too, any time I have a trait with two associated types that are different. Still happens as of the current nightly. Strangely, if you make them the same (i.e. \"type T0 = f32; type T1 = f32;\" in the impl), the error goes away.\nThis seems to be fixed. The example compiles on", "positive_passages": [{"docid": "doc-en-rust-9645b58d260d7a93b752bbf04f0b6be63696b6e0c01fc37299478989f063d3b1", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. trait Tup { type T0; type T1; } impl Tup for isize { type T0 = f32; type T1 = (); } fn main() {} ", "commid": "rust_pr_26460"}], "negative_passages": []} {"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! 
This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-278e618702f4a8813822af7d5e53c603ce3ae1e7c31f08b57e61ac63884915e0", "text": "clean::TypedefItem(ref t) => item_typedef(fmt, self.item, t), clean::MacroItem(ref m) => item_macro(fmt, self.item, m), clean::PrimitiveItem(ref p) => item_primitive(fmt, self.item, p), clean::StaticItem(ref i) => item_static(fmt, self.item, i), clean::ConstantItem(ref c) => item_constant(fmt, self.item, c), _ => Ok(()) } }", "commid": "rust_pr_19234"}], "negative_passages": []} {"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! 
This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-e7afe6d75972f759f592694ed4cd9b09ee1d2ddfb7bbf1c7fd41f8d5cde27bca", "text": "return s } fn blank<'a>(s: Option<&'a str>) -> &'a str { match s { Some(s) => s, None => \"\" } } fn shorter<'a>(s: Option<&'a str>) -> &'a str { match s { Some(s) => match s.find_str(\"nn\") {", "commid": "rust_pr_19234"}], "negative_passages": []} {"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! 
This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-fe44281601ef46cdad99ffe32b59b5f531ae4cea381950e28eacc3750478bb68", "text": "id = short, name = name)); } struct Initializer<'a>(&'a str, Item<'a>); impl<'a> fmt::Show for Initializer<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Initializer(s, item) = *self; if s.len() == 0 { return Ok(()); } try!(write!(f, \" = \")); if s.contains(\"n\") { match item.href() { Some(url) => { write!(f, \"[definition]\", url) } None => Ok(()), } } else { write!(f, \"{}\", s.as_slice()) } } } match myitem.inner { clean::StaticItem(ref s) | clean::ForeignStaticItem(ref s) => { try!(write!(w, \" {}{}static {}{}: {}{} {}  \", ConciseStability(&myitem.stability), VisSpace(myitem.visibility), MutableSpace(s.mutability), *myitem.name.as_ref().unwrap(), s.type_, Initializer(s.expr.as_slice(), Item { cx: cx, item: myitem }), Markdown(blank(myitem.doc_value())))); } clean::ConstantItem(ref s) => { try!(write!(w, \" {}{}const {}: {}{} {}  \", ConciseStability(&myitem.stability), VisSpace(myitem.visibility), *myitem.name.as_ref().unwrap(), s.type_, Initializer(s.expr.as_slice(), Item { cx: cx, item: myitem }), Markdown(blank(myitem.doc_value())))); } clean::ViewItemItem(ref item) => { match item.inner { clean::ExternCrate(ref name, ref src, _) => {", "commid": "rust_pr_19234"}], "negative_passages": []} {"query_id": "q-en-rust-b5fbe2780d1c918d212d3fb3594848d807c1288493fc3aa8bd051cfa2b2893e5", "query": "Here's what I think is wrong: keywords , and appear next to the name of the constant, when clearly they are redundant. the constants page through search brings the user to an empty page. value assigned to the constant appears along side its name. When reading the documentation of constants, the most important information is the name of the constant and its description. Having the declaration appear inline makes scanning for the name of harder. 
Better output would replace the declaration with the name of the constant followed by its type. Also it would move the declaration of the constant to the empty page mentioned in point 2 and have the identifier of the constant link to this page. All these points are relevant for statics as well. Example of ugly output: ! This page should show the definition and a description of the constant: !", "positive_passages": [{"docid": "doc-en-rust-a2df27e29da610839f330476dea2be54d786d608ea961e628767a93fccebb29e", "text": "write!(w, \"\") } struct Initializer<'a>(&'a str); impl<'a> fmt::Show for Initializer<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let Initializer(s) = *self; if s.len() == 0 { return Ok(()); } try!(write!(f, \" = \")); write!(f, \"{}\", s.as_slice()) } } fn item_constant(w: &mut fmt::Formatter, it: &clean::Item, c: &clean::Constant) -> fmt::Result { try!(write!(w, \"

{vis}const  {name}: {typ}{init}
\", vis = VisSpace(it.visibility), name = it.name.as_ref().unwrap().as_slice(), typ = c.type_, init = Initializer(c.expr.as_slice()))); document(w, it) } fn item_static(w: &mut fmt::Formatter, it: &clean::Item, s: &clean::Static) -> fmt::Result { try!(write!(w, \"
{vis}static {mutability} {name}: {typ}{init}
\", vis = VisSpace(it.visibility), mutability = MutableSpace(s.mutability), name = it.name.as_ref().unwrap().as_slice(), typ = s.type_, init = Initializer(s.expr.as_slice()))); document(w, it) } fn item_function(w: &mut fmt::Formatter, it: &clean::Item, f: &clean::Function) -> fmt::Result { try!(write!(w, \"
{vis}{fn_style}fn ", "commid": "rust_pr_19234"}], "negative_passages": []}
{"query_id": "q-en-rust-88465e69fce7874164bd433141c7fe472f9f27c2dade7cbf92125b5006492558", "query": "The documentation is misleading. The pointer is not allowed to be null.\nHow do you make an empty slice if the pointer is not allowed to be null?\nThe underlying representation is an implementation detail and shouldn't be documented beyond the ability to convert the raw parts obtained from a vector back into a vector.\nThe language and library documentation has a love affair with making far too many promises about the implementation. In the vector module, there are numerous errors when it comes to information about the vector's capacity too. It tends to guarantee that the capacity is exactly what was asked for rather than at least that much. It is allowed to set the capacity to a value provided by the allocator.\nDoes that mean you are required to special case the empty case and then use a different way to make an empty vector?\nNo, it means you can't use this method outside of the standard library beyond converting from an existing vector.\nIf what you're asking is how it does this internally, the answer is that empty vectors along with zero-size allocations in general are allowed to be entirely arbitrary pointers. They will never be dereferenced and the compiler / library code will never attempt to deallocate them.\nThe documentation is not supposed to cover these implementation details. It would make sense to cover it in comments (which it is) or internal design documentation.\nIn that case I suppose the docs should say: can only be constructed from components of an already existing vector.", "positive_passages": [{"docid": "doc-en-rust-4a95538432201e87ca988f2705cfd4f08e33471577528b2d1030d031abc4142c", "text": "} }  /// Creates a `Vec` directly from the raw constituents.   /// Creates a `Vec` directly from the raw components of another vector.  
///  /// This is highly unsafe: /// /// - if `ptr` is null, then `length` and `capacity` should be 0 /// - `ptr` must point to an allocation of size `capacity` /// - there must be `length` valid instances of type `T` at the ///   beginning of that allocation /// - `ptr` must be allocated by the default `Vec` allocator   /// This is highly unsafe, due to the number of invariants that aren't checked.  /// /// # Example ///", "commid": "rust_pr_19306"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-89777ca1a307a97e6d350cb970476e70632cc4f0634805ba04e43f9edbaa787d", "text": "use syntax::ast_util::{local_def}; use syntax::attr; use syntax::codemap::Span;  use syntax::parse::token;  use syntax::visit; use syntax::visit::Visitor;", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-456eecb7eec26a55daa5bcfafe37bb7ca7ca7bb974d611e8353c924e21bbd65a", "text": "} }  fn reject_shadowing_type_parameters<'tcx>(tcx: &ty::ctxt<'tcx>, span: Span, generics: &ty::Generics<'tcx>) { let impl_params = generics.types.get_slice(subst::TypeSpace).iter() .map(|tp| tp.name).collect::>(); for method_param in generics.types.get_slice(subst::FnSpace).iter() { if impl_params.contains(&method_param.name) { tcx.sess.span_err( span, &*format!(\"type parameter `{}` shadows another type parameter of the same name\", token::get_name(method_param.name))); } } }  impl<'ccx, 'tcx, 'v> Visitor<'v> for CheckTypeWellFormedVisitor<'ccx, 'tcx> { fn visit_item(&mut self, i: &ast::Item) { self.check_item_well_formed(i); visit::walk_item(self, i); }  fn visit_fn(&mut self, fk: visit::FnKind<'v>, fd: &'v ast::FnDecl, b: &'v ast::Block, span: Span, id: ast::NodeId) { match fk { visit::FkFnBlock | visit::FkItemFn(..) => {} visit::FkMethod(..) 
=> { match ty::impl_or_trait_item(self.ccx.tcx, local_def(id)) { ty::ImplOrTraitItem::MethodTraitItem(ty_method) => { reject_shadowing_type_parameters(self.ccx.tcx, span, &ty_method.generics) } _ => {} } } } visit::walk_fn(self, fk, fd, b, span) } fn visit_trait_item(&mut self, t: &'v ast::TraitItem) { match t { &ast::TraitItem::ProvidedMethod(_) | &ast::TraitItem::TypeTraitItem(_) => {}, &ast::TraitItem::RequiredMethod(ref method) => { match ty::impl_or_trait_item(self.ccx.tcx, local_def(method.id)) { ty::ImplOrTraitItem::MethodTraitItem(ty_method) => { reject_shadowing_type_parameters( self.ccx.tcx, method.span, &ty_method.generics) } _ => {} } } } visit::walk_trait_item(self, t) }  } pub struct BoundsChecker<'cx,'tcx:'cx> {", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-c463ddaa581880478f49f148527ab1669091e14e677b346cb4fd3b8724992c8d", "text": "self.ch == Some(c) }  fn error(&self, reason: ErrorCode) -> Result {   fn error(&self, reason: ErrorCode) -> Result {  Err(SyntaxError(reason, self.line, self.col)) }", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-5f5507094586d90ba2ca78024e2b029d385771dbc83fd987403c3f5f8c30699a", "query": "cc\nIt seems like the second part of this (type parameter shadowing) is backwards incompatible, but AFAICT, isn't implemented yet.\nI suggest we leave it and go straight for the lint (as we plan to do eventually with the lifetime version). Then no backwards compatibility issues. Also, since type shadowing is allowed in every other language it is going to really surprise people if it is forbidden in Rust. And, it is the shadowed lifetimes which were causing the motivating confusion.\njust for future visitors, type parameter shadowing (like the example in the RFC) does, in fact, produce a clear compiler error. The comments above and lack of linked \"type parameter shadowing\" PRs might otherwise suggest that type parameter shadowing was left as a lint rather than a compiler error.", "positive_passages": [{"docid": "doc-en-rust-e1f57f4a672b64e03b4d7a90973274aec453430b32451bbd1709d96c224a3617", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that shadowed lifetimes generate an error. struct Foo; impl Foo { fn shadow_in_method(&self) {} //~^ ERROR type parameter `T` shadows another type parameter fn not_shadow_in_item(&self) { struct Bar; // not a shadow, separate item fn foo() {} // same } } trait Bar { fn shadow_in_required(&self); //~^ ERROR type parameter `T` shadows another type parameter fn shadow_in_provided(&self) {} //~^ ERROR type parameter `T` shadows another type parameter fn not_shadow_in_required(&self); fn not_shadow_in_provided(&self) {} } fn main() {} ", "commid": "rust_pr_20728"}], "negative_passages": []}
{"query_id": "q-en-rust-bd2bf0a66de67d11143ebe7c3d079234a2d06a4d636f3f478e446e0539bea3c4", "query": "Tracking issue for this failure:\nIt appears that the compiler is producing corrupt metadata somehow. I have modified the test to ignore and only link to , and I've found an interesting phenomena. When running the test via the rlib for -2 may end up containing two blobs of metadata. One blob of metadata ends in four 0x0a bytes (corrupted), and the other blob is the exact same but without these trailing bytes. The compilation of the -3 crate is the one that's failing, and it will deterministically fail depending on the metadata of the -2 crate. I do not currently know where the extra bytes come from, still trying to track that down...", "positive_passages": [{"docid": "doc-en-rust-44fa3d3073952316c3d02b7d96f362eee4837a39678eddf1de2a691220ae8dc0", "text": "// aux-build:issue-13560-1.rs // aux-build:issue-13560-2.rs // aux-build:issue-13560-3.rs  // ignore-pretty FIXME #19501  // ignore-stage1 // Regression test for issue #13560, the test itself is all in the dependent", "commid": "rust_pr_19502"}], "negative_passages": []}
{"query_id": "q-en-rust-271e5254a2e8191ad58ca6872f59048573a6bb5d7abe050059f6847f5e72f7d8", "query": "Right now we don't normalize impl bounds during method probing, which means that the winnow stage may not work great when those bounds involve associated types. I think with the new setup we should be able to do this but didn't have time to test. Search for the FIXME in", "positive_passages": [{"docid": "doc-en-rust-b58c6d80a6167028b5034e1080dcba92e8c28849def2d53a103d836b1c930cac", "text": "match probe.kind { InherentImplCandidate(impl_def_id, ref substs) | ExtensionImplCandidate(impl_def_id, _, ref substs, _) => {  let selcx = &mut traits::SelectionContext::new(self.infcx(), self.fcx); let cause = traits::ObligationCause::misc(self.span, self.fcx.body_id);  // Check whether the impl imposes obligations we have to worry about. let impl_generics = ty::lookup_item_type(self.tcx(), impl_def_id).generics; let impl_bounds = impl_generics.to_bounds(self.tcx(), substs);  // FIXME(#20378) assoc type normalization here? // Erase any late-bound regions bound in the impl // which appear in the bounds. let impl_bounds = self.erase_late_bound_regions(&ty::Binder(impl_bounds));   let traits::Normalized { value: impl_bounds, obligations: norm_obligations } = traits::normalize(selcx, cause.clone(), &impl_bounds);  // Convert the bounds into obligations. let obligations =  traits::predicates_for_generics( self.tcx(), traits::ObligationCause::misc(self.span, self.fcx.body_id), &impl_bounds);   traits::predicates_for_generics(self.tcx(), cause.clone(), &impl_bounds);  debug!(\"impl_obligations={}\", obligations.repr(self.tcx())); // Evaluate those obligations to see if they might possibly hold.  let mut selcx = traits::SelectionContext::new(self.infcx(), self.fcx); obligations.all(|o| selcx.evaluate_obligation(o))   obligations.all(|o| selcx.evaluate_obligation(o)) && norm_obligations.iter().all(|o| selcx.evaluate_obligation(o))  } ObjectCandidate(..) 
|", "commid": "rust_pr_20608"}], "negative_passages": []}
{"query_id": "q-en-rust-271e5254a2e8191ad58ca6872f59048573a6bb5d7abe050059f6847f5e72f7d8", "query": "Right now we don't normalize impl bounds during method probing, which means that the winnow stage may not work great when those bounds involve associated types. I think with the new setup we should be able to do this but didn't have time to test. Search for the FIXME in", "positive_passages": [{"docid": "doc-en-rust-528025b5d28509a0972e8631772b2a319994794a7aa43e9734b4cdb77d7990cc", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that we handle projection types which wind up important for // resolving methods. This test was reduced from a larger example; the // call to `foo()` at the end was failing to resolve because the // winnowing stage of method resolution failed to handle an associated // type projection. #![feature(associated_types)] trait Hasher { type Output; fn finish(&self) -> Self::Output; } trait Hash { fn hash(&self, h: &mut H); } trait HashState { type Wut: Hasher; fn hasher(&self) -> Self::Wut; } struct SipHasher; impl Hasher for SipHasher { type Output = u64; fn finish(&self) -> u64 { 4 } } impl Hash for int { fn hash(&self, h: &mut SipHasher) {} } struct SipState; impl HashState for SipState { type Wut = SipHasher; fn hasher(&self) -> SipHasher { SipHasher } } struct Map { s: S, } impl Map where S: HashState, ::Wut: Hasher, { fn foo(&self, k: K) where K: Hash< ::Wut> {} } fn foo>(map: &Map) { map.foo(22); } fn main() {} ", "commid": "rust_pr_20608"}], "negative_passages": []}
{"query_id": "q-en-rust-e2b709f1eeee61ea3e7c9154ff7908d7acd7c273e263646c820f7136586c7266", "query": "I think I found a bug with non-power-of-2 SIMD vectors. They're randomly turned into zero values when I store them on Expected result: What we get: Tested with:\nAs you can see, this is a fault of assuming (while for this is 12 and 16 respectively). More accurately, for , every growth allocates bytes while it actually needs bytes; any element after will overflow the memory, and they will be omitted at the next reallocation.\nAck!\nCC\nI've got through and I guess there's something wrong with for and other non-power-of-2 SIMD types. $DIR/return-unsized-from-trait-method.rs:11:17 | LL |         let _ = f.foo(); |                 ^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0161`. ", "commid": "rust_pr_71541"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-849db4f23288620883d0e3cf5984d29b1640d4114a5547ab3c0332e2cfe20262", "text": "//! //! where `&.T` and `*T` are references of either mutability, //! and where unsize_kind(`T`) is the kind of the unsize info  //! in `T` - a vtable or a length (or `()` if `T: Sized`).   //! in `T` - the vtable for a trait definition (e.g. `fmt::Display` or //! `Iterator`, not `Iterator`) or a length (or `()` if `T: Sized`). //! //! Note that lengths are not adjusted when casting raw slices - //! `T: *const [u16] as *const [u8]` creates a slice that only includes //! half of the original memory.  //! //! Casting is not transitive, that is, even if `e as U1 as U2` is a valid //! expression, `e as U2` is not necessarily so (in fact it will only be valid if", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-2a2fdf3f7742792676600bef3d44bb18ed3f3aad8f45ceef8ed006c6564b2863", "text": "/// fat pointers if their unsize-infos have the same kind. #[derive(Copy, Clone, PartialEq, Eq)] enum UnsizeKind<'tcx> {  Vtable,   Vtable(ast::DefId),  Length, /// The unsize info of this projection OfProjection(&'tcx ty::ProjectionTy<'tcx>),", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-43d6176dfd60f671c5d92f18a135aa25484240480aa8f2f15fe3586a19084fe4", "text": "-> Option> { match t.sty { ty::TySlice(_) | ty::TyStr => Some(UnsizeKind::Length),  ty::TyTrait(_) => Some(UnsizeKind::Vtable),   ty::TyTrait(ref tty) => Some(UnsizeKind::Vtable(tty.principal_def_id())),  ty::TyStruct(did, substs) => { match ty::struct_fields(fcx.tcx(), did, substs).pop() { None => None,", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-3c9f20479e4729f2c52d0e6acfff91eba74fc7076bdd18c15458648cf7f46b46", "text": "trait Foo { fn foo(&self) {} } impl Foo for T {}  trait Bar { fn foo(&self) {} } impl Bar for T {}  enum E { A, B }", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-8ab0f4db7424069e14bdfea640119c8c0f4f910ae7d9f73a892eb5a29109256f", "text": "// check no error cascade let _ = main.f as *const u32; //~ ERROR attempted access of field  let cf: *const Foo = &0; let _ = cf as *const [u8]; //~ ERROR vtable kinds let _ = cf as *const Bar; //~ ERROR vtable kinds  }", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-a2825d467fffd45c7b54807b4955ee97440bf6e99c995153a06ea1623a15e8c2", "text": "impl Foo for u32 { fn foo(&self, _: u32) -> u32 { self+43 } } impl Bar for () {}  unsafe fn fool<'a>(t: *const (Foo+'a)) -> u32 { let bar : *const Bar = t as *const Bar;   unsafe fn round_trip_and_call<'a>(t: *const (Foo+'a)) -> u32 {  let foo_e : *const Foo = t as *const _; let r_1 = foo_e as *mut Foo;  (&*r_1).foo(0)*(&*(bar as *const Foo)).foo(0)   (&*r_1).foo(0)  } #[repr(C)]", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-1dea7b9f653c61585e6991495888bcfe2316a36e899e163db85e556d81eeb09d", "text": "fn main() { let x = 4u32; let y : &Foo = &x;  let fl = unsafe { fool(y as *const Foo) }; assert_eq!(fl, (43+4)*(43+4));   let fl = unsafe { round_trip_and_call(y as *const Foo) }; assert_eq!(fl, (43+4));  let s = FooS([0,1,2]); let u: &FooS<[u32]> = &s;", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-0a75249c4338e68e72e7b7575aad3651aee9dd3b64619f923fdce00d9fcff971", "text": "assert_eq!(u as *const u8, p as *const u8); assert_eq!(u as *const u16, p as *const u16);  // ptr-ptr-cast (both vk=Length)   // ptr-ptr-cast (Length vtables)  let mut l : [u8; 2] = [0,1]; let w: *mut [u16; 2] = &mut l as *mut [u8; 2] as *mut _; let w: *mut [u16] = unsafe {&mut *w};", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-d4a593b434e97fdf75553db892a4a97b6111c11da0bceb895600abdb30f17120", "query": "Much of the implementation work for rust-lang/rfcs has been done, but there remain some corner cases to fix: [ ] Casts between raw pointers to different traits are accepted but should not be (e.g. ) [ ] Double-check other cases, ensure adequate tests exist\ntriage: P-high -- I'm giving this high priority since accepting casts that we ought not to is a backwards compatibility risk, though it is mitigated by the fact that afaik the illegal casts yield an ICE (but I haven't done exhaustive testing)\nThe newly-illegal casts shouldn't be ICE-s (I mean, on nightly). Casts on stable are just randomly ICE-ey.", "positive_passages": [{"docid": "doc-en-rust-4504e8170ecd5fcc46caed139f9442f6496ed748643695d70f698bbbe173cbbc", "text": "let l_via_str = unsafe{&*(s as *const [u8])}; assert_eq!(&l, l_via_str);  // ptr-ptr-cast (Length vtables, check length is preserved) let l: [[u8; 3]; 2] = [[3, 2, 6], [4, 5, 1]]; let p: *const [[u8; 3]] = &l; let p: &[[u8; 2]] = unsafe {&*(p as *const [[u8; 2]])}; assert_eq!(p, [[3, 2], [6, 4]]);  // enum-cast assert_eq!(Simple::A as u8, 0); assert_eq!(Simple::B as u8, 1);", "commid": "rust_pr_26394"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-caec82172341739f7b261010d5823d336df96af99574a7495613d264c6f9d82c", "text": "let impl_m = &cm.mty;  if impl_m.fty.meta.purity != trait_m.fty.meta.purity { tcx.sess.span_err( cm.span, fmt!(\"method `%s`'s purity does  not match the trait method's  purity\", tcx.sess.str_of(impl_m.ident))); } // is this check right?   // FIXME(#2687)---this check is too strict.  For example, a trait // method with self type `&self` or `&mut self` should be // implementable by an `&const self` method (the impl assumes less // than the trait provides).  if impl_m.self_ty != trait_m.self_ty { tcx.sess.span_err( cm.span,", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-b902e8c01413639a602dfadb7f12f9af28e0c2733f74dc8724c2a7c761329465", "text": "return; }  // FIXME(#2687)---we should be checking that the bounds of the // trait imply the bounds of the subtype, but it appears // we are...not checking this.  for trait_m.tps.eachi() |i, trait_param_bounds| { // For each of the corresponding impl ty param's bounds... let impl_param_bounds = impl_m.tps[i];", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-0570e7ca245bccd66eab5eaa5b761bdcb1e3fc04b60fee1b55c8203304ef0740", "text": "debug!(\"trait_fty (pre-subst): %s\", ty_to_str(tcx, trait_fty)); ty::subst(tcx, &substs, trait_fty) };  debug!(\"trait_fty: %s\", ty_to_str(tcx, trait_fty)); require_same_types( tcx, None, false, cm.span, impl_fty, trait_fty, || fmt!(\"method `%s` has an incompatible type\", tcx.sess.str_of(trait_m.ident)));   let infcx = infer::new_infer_ctxt(tcx); match infer::mk_subty(infcx, false, cm.span, impl_fty, trait_fty) { result::Ok(()) => {} result::Err(ref terr) => { tcx.sess.span_err( cm.span, fmt!(\"method `%s` has an incompatible type: %s\", tcx.sess.str_of(trait_m.ident), ty::type_err_to_str(tcx, terr))); ty::note_and_explain_type_err(tcx, terr); } }  return; // Replaces bound references to the self region with `with_r`.", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-bf3dd5226517d75e056d0b6175d50ee7feea04bdd81f483f891899405c40de43", "text": " trait Mumbo { pure fn jumbo(&self, x: @uint) -> uint; fn jambo(&self, x: @const uint) -> uint; fn jbmbo(&self) -> @uint; } impl uint: Mumbo { // Cannot have a larger effect than the trait: fn jumbo(&self, x: @uint) { *self + *x; } //~^ ERROR expected pure fn but found impure fn // Cannot accept a narrower range of parameters: fn jambo(&self, x: @uint) { *self + *x; } //~^ ERROR values differ in mutability // Cannot return a wider range of values: fn jbmbo(&self) -> @const uint { @const 0 } //~^ ERROR values differ in mutability } fn main() {} ", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-03196d1b720ff7ae1cf9aa1cc66dd1f5ed7c199530aca635881095af0585155a", "query": "Right now, a fn implementing an iface method must match the purity of the iface method exactly. This is too strict. We should allow a pure fn to implement an impure fn, and both pure/impure to implement an unsafe fn. UPDATE: Updated title to reflect the real problem here. There is a FIXME in the code in the relevant area. If one declares a trait like: then in the impl, the method should take one type parameter with an bound. We don't really check this correctly though. As far as I can tell, we just check that has one type parameter with one bound, but not precisely what kind of bound it is.\nSimilarly, in , we require that the bounds on the method parameters be precisely equivalent, which is stronger than necessary. It is sufficient (I believe) that the bounds on the trait be stronger than the bounds on the implementation.\nIs this still an issue? It seems to be fixed now.\nStill an issue with respect to type parameter bounds.\nI don't believe this is backwards incompatible, renominating.\naccepted for feature-complete milestone\nThe following appears to work, but I assume that this is what was talking about:\ntriage bump. nothing to add.\nAccepted for P-backcompat-lang\nAdded UPDATE to main bug description above.\nI had a look at this issue and tried to create a test case. does test case covers the issue you mentioned? This test case fails because it seems like the parameter types are checked in\nwhat error does it fail with? In the you need to use a different type bound for in order to test this issue. To be more precise, the bounds of the implementation should be implied by the bounds of the trait. For example, this should fail:\nthanks for the hint, I guess your updated example should indeed test this issue. 
I'll start working on a patch.", "positive_passages": [{"docid": "doc-en-rust-500537c687a9e841c09ee5facf3fe1da66a6c3bd64b6b527245f1c0d52622760", "text": " trait Mumbo { fn jumbo(&self, x: @uint) -> uint; } impl uint: Mumbo { // Note: this method def is ok, it is more accepting and // less effecting than the trait method: pure fn jumbo(&self, x: @const uint) -> uint { *self + *x } } fn main() { let a = 3u; let b = a.jumbo(@mut 6); let x = @a as @Mumbo; let y = x.jumbo(@mut 6); //~ ERROR values differ in mutability let z = x.jumbo(@6); } ", "commid": "rust_pr_3873"}], "negative_passages": []}
{"query_id": "q-en-rust-dc29692e329c492975de930903284b0a8df2e4072e6472451b55244c5ec17708", "query": "Some : The Lifetimes chapter currently has: which does say \"declares our lifetimes\", so maybe further explanation isn't appropriate here, but it looks like the Functions chapter is sticking to the bare minimum. I'm not sure where this would fit, but I think the could use another sentence or two of explanation-- \"This is where you declare lifetime and generic type parameters that this function is going to use\" and link to the appropriate chapters for that perhaps?", "positive_passages": [{"docid": "doc-en-rust-f2d5d42807374dcc52259d67ec5852c9c214b55e039e3dd0a5c827b150c928d2", "text": "fn bar<'a>(...) ```  This part declares our lifetimes. This says that `bar` has one lifetime, `'a`. If we had two reference parameters, it would look like this:   We previously talked a little about [function syntax][functions], but we didn\u2019t discuss the `<>`s after a function\u2019s name. A function can have \u2018generic parameters\u2019 between the `<>`s, of which lifetimes are one kind. We\u2019ll discuss other kinds of generics [later in the book][generics], but for now, let\u2019s just focus on the lifetimes aspect. [functions]: functions.html [generics]: generics.html We use `<>` to declare our lifetimes. This says that `bar` has one lifetime, `'a`. If we had two reference parameters, it would look like this:  ```rust,ignore fn bar<'a, 'b>(...)", "commid": "rust_pr_27538"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-408b716fb47786f807f48dc315e134b4ac30c3ca7a7a01b28aaaa8c6d3ec3b34", "text": "let mut sign = None; if !is_positive { sign = Some('-'); width += 1;  } else if self.flags & (1 << (FlagV1::SignPlus as u32)) != 0 {   } else if self.sign_plus() {  sign = Some('+'); width += 1; } let mut prefixed = false;  if self.flags & (1 << (FlagV1::Alternate as u32)) != 0 {   if self.alternate() {  prefixed = true; width += prefix.char_len(); }", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-c9c1cf0acb0342c33397d0bc973020c4513ca10cc1f4f7f014a27ada71ac7c45", "text": "} // The sign and prefix goes before the padding if the fill character // is zero  Some(min) if self.flags & (1 << (FlagV1::SignAwareZeroPad as u32)) != 0 => {   Some(min) if self.sign_aware_zero_pad() => {  self.fill = '0'; try!(write_prefix(self)); self.with_padding(min - width, Alignment::Right, |f| {", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". 
Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. 
The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-d7734815f6ce1d7b906f0ab305877f7beb3d3478f6321805929325128f4a4c24", "text": "let mut formatted = formatted.clone(); let mut align = self.align; let old_fill = self.fill;  if self.flags & (1 << (FlagV1::SignAwareZeroPad as u32)) != 0 {   if self.sign_aware_zero_pad() {  // a sign always goes first let sign = unsafe { str::from_utf8_unchecked(formatted.sign) }; try!(self.buf.write_str(sign));", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-01f77f18243bddaeca35fc50314de03cb4b44453ab63b886b0c01531b02cd507", "text": "issue = \"27726\")] pub fn precision(&self) -> Option { self.precision }  /// Determines if the `+` flag was specified. #[unstable(feature = \"fmt_flags\", reason = \"method was just created\", issue = \"27726\")] pub fn sign_plus(&self) -> bool { self.flags & (1 << FlagV1::SignPlus as u32) != 0 } /// Determines if the `-` flag was specified. #[unstable(feature = \"fmt_flags\", reason = \"method was just created\", issue = \"27726\")] pub fn sign_minus(&self) -> bool { self.flags & (1 << FlagV1::SignMinus as u32) != 0 } /// Determines if the `#` flag was specified. #[unstable(feature = \"fmt_flags\", reason = \"method was just created\", issue = \"27726\")] pub fn alternate(&self) -> bool { self.flags & (1 << FlagV1::Alternate as u32) != 0 } /// Determines if the `0` flag was specified. #[unstable(feature = \"fmt_flags\", reason = \"method was just created\", issue = \"27726\")] pub fn sign_aware_zero_pad(&self) -> bool { self.flags & (1 << FlagV1::SignAwareZeroPad as u32) != 0 }  /// Creates a `DebugStruct` builder designed to assist with creation of /// `fmt::Debug` implementations for structs. ///", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-6d5ed29038552fbe1fd1c8574d6e6711f73f69ffe162c3c35a3803906aad6610", "text": "// it denotes whether to prefix with 0x. We use it to work out whether // or not to zero extend, and then unconditionally set it to get the // prefix.  if f.flags & 1 << (FlagV1::Alternate as u32) > 0 {   if f.alternate() {  f.flags |= 1 << (FlagV1::SignAwareZeroPad as u32); if let None = f.width {", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-4fd85a1b7ccc1b049c44ba911c40e8b40aa89b849d0dad479204e75d0ae89b19", "text": "fn float_to_decimal_common(fmt: &mut Formatter, num: &T, negative_zero: bool) -> Result where T: flt2dec::DecodableFloat {  let force_sign = fmt.flags & (1 << (FlagV1::SignPlus as u32)) != 0;   let force_sign = fmt.sign_plus();  let sign = match (force_sign, negative_zero) { (false, false) => flt2dec::Sign::Minus, (false, true)  => flt2dec::Sign::MinusRaw,", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-cb3c8dda2a453e2d7f4cc7937e7c8f83933dee49c111d7b8d5c3c15b4f3fa421", "query": "Tracking issue for the feature, which is  and the associated enum.\nYes, please. Why? Makes little sense.\nI disagree that it makes little sense to expose getters, it's possible to envision a situation where one formatting is composed of many others and you perhaps want some to be formatted to various widths or perhaps enable various flags (e.g. the \"alternate\" mode). It's certainly niche but it's somewhat necessary for completeness.\n, , and all seem straightforward and can be stabilized as-is. I would also very much like to add and stabilize boolean accessors for the various parts of : , , , and (we could call the last one but that seems like a genuinely excessive name). It's pretty unfortunate we stabilized the accessor in the first place IMO, but oh well. I'm a little less sure about , though. What is returned if no fill character was specified? Why doesn't it return an like the others? Is sufficient? Should it be a full grapheme instead?\nI'd be fine just deprecating the method (it's basically useless without constants anyway) and adding boolean accessors, I agree that it better matches design in today's Rust. I'd also be fine making return an and just specifying that it's always one unicode character (it's how the format string is parsed). Perhaps in theory a grapheme could be specified but that seems like something for an even fancier formatting system!\nThis issue is now entering its cycle-long FCP for stabilization in 1.5 The accessors being in will also be considered for stabilization.\nAh, and may be good to get your opinion on the grapheme-vs- situation here. Currently whenever something is formatted you have the ability to specify a \"fill\" character which for when padding is applied (normally this is just an ascii space). As points out this fill is currently just a , but it could in theory be a grapheme in terms of \"one thing on the screen\". Do you have an opinion either way in that regard? Curious to hear thoughts!\nUh. It\u2019s not just graphemes. The whole // thing in as currently written is based on some assumptions: \u2019re printing to something that (like most terminal emulators) align text on a grid . Unfortunately, as often with Unicode, it\u2019s complicated. Not only grapheme clusters can have more than one code point and still only use one slot (with combining code points), but most characters of some Asian languages and most emoji are \u201cfull width\u201d: they\u2019re twice the usual width in most monospace fonts. Control characters be displayed. Or they might be interpreted by the terminal to move the cursor around. has some more background. Here is an extract, about emoji flags: Our best bet for \u201dwhat is the width of this string?\u201d is probably That leaves dealing with if it\u2019s double width or a control character. If we want to do it, could be an arbitrary string rather than a single ? Or should be rejected?\nHm, those are very good points! I would be mostly tempted to just pare down everything and say it largely deals with ascii only (and maybe simple unicode?). I don't think it'd be too beneficial to start getting full-blown unicode-width support in libcore just to support a use case like this. Having some verification in the compiler, however, to make sure you're not doing crazy things seems reasonable? In theory, yes. I'd also be fine with this!\nThe libs team discussed this during triage today and the decision was to stabilize.\nTriage: looks like this was stabilized, closing!\nis still unstable, linking to this issue:\nI am a bit confused about the reason is still unstable. Is it because is only available from ? Is there anything I can help with to move this along?\nNominating for stabilization. I suspect that needs to be re-exported in std, but that seems like something trivial to solve while stabilizing. The only concern that I see left is that there appear to be 3 separate, but equivalent, definition of the enum in the various libraries. These should probably be de-duplicated, but again, since only one appears to be public, we can probably do this trivially as well during stabilization.\nI think we should remove and return an instead, but it seems reasonable to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nShouldn't this get merged then?\nYes, with FCP to stabilize completed, the next step is a stabilization PR.", "positive_passages": [{"docid": "doc-en-rust-56bedd0a125417bb071cd1f797e30aa1c8973f38445d3c1b46001a08b47deda7", "text": "fn float_to_exponential_common(fmt: &mut Formatter, num: &T, upper: bool) -> Result where T: flt2dec::DecodableFloat {  let force_sign = fmt.flags & (1 << (FlagV1::SignPlus as u32)) != 0;   let force_sign = fmt.sign_plus();  let sign = match force_sign { false => flt2dec::Sign::Minus, true  => flt2dec::Sign::MinusPlus,", "commid": "rust_pr_28615"}], "negative_passages": []}
{"query_id": "q-en-rust-8388d7b1b57e76a9f621a117dc1a6278da99e1855ca97db4d2310863427ae0b4", "query": "This compiles and runs without warnings: I at least expected the unused attribute lint to catch this\nThey are not ignored, it's just that attributes on item macros apply to the macro expansion rather than the resultant item, e.g this errors as expected as the macro isn't expanded.\nAh, and they're removed when the macro is expanded, which is why the lint doesn't pick them up.\nYeah as noted this is actually working as intended, so closing.\nI still feel uneasy about the lint missing these attributes, but I guess that cannot be fixed unless the attribute system is rewritten", "positive_passages": [{"docid": "doc-en-rust-d37399cb719695b7f94573c73bfaae9b8e85014e0d623d2dc52b4457a53b6837", "text": "// // Local Variables:  // mode: C++   // mode: rust  // fill-column: 78; // indent-tabs-mode: nil // c-basic-offset: 4", "commid": "rust_pr_398"}], "negative_passages": []}
{"query_id": "q-en-rust-11ab28a076dfabf628b922f54c3fbd59c5db2b0bf776c31cdd624cc3012cc60b", "query": "When you search for something on the online docs (which I assume are generated using cargo doc) a list of results comes up. When you clear the search bar/press the X button it probably should clear the results to allow you to view the original page. Anyone know how to fix this?\nI don't believe that we have anything set up for that. Is there an \"X\" button? I don't see it anywhere. Regardless, clearing the box or pressing escape or something should probably clear it, for sure.\n! And then after clicking the \"X\": !\nSearch field is and therefore it might have various user agent specific functionality. Simpler way to reproduce in a cross-UA way would be to follow these steps: a search query; the whole query at once by highlighting it and clicking backspace, delete or any other relevant button; Result list not changing. We could go back to the page that was open before the search field was used to fix this. Note that we do know what page it was, because we simply append to the URI of the page we start the search at.\nI took a shot at it and in the half an hour I dedicated to it, all I found that search code is a complete mess and this might be harder to fix correctly than it might appear at a first glance.", "positive_passages": [{"docid": "doc-en-rust-e7d7c73d404ba3abc87ef5bc3675006d1d04271ca22478af988066b238167d40", "text": "} function startSearch() {  $(\".search-input\").on(\"keyup\",function() { if ($(this).val().length === 0) { window.history.replaceState(\"\", \"std - Rust\", \"?search=\"); $('#main.content').removeClass('hidden'); $('#search.content').addClass('hidden'); } });  var keyUpTimeout; $('.do-search').on('click', search); $('.search-input').on('keyup', function() {", "commid": "rust_pr_28795"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-16655e1b66b04d4ea8eab2e01e78493177e16bbabf6080fec191548dc7724e81", "text": "E0495, // cannot infer an appropriate lifetime due to conflicting requirements E0496, // .. name `..` shadows a .. name that is already in scope E0498, // malformed plugin attribute  E0514, // metadata version mismatch  }", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-c482e36f801dd43bcbef4be3c51849d45e732064154bf16056292d54b5963a56", "text": "pub const tag_impl_coerce_unsized_kind: usize = 0xa5; pub const tag_items_data_item_constness: usize = 0xa6;  pub const tag_rustc_version: usize = 0x10f; pub fn rustc_version() -> String { format!( \"rustc {}\", option_env!(\"CFG_VERSION\").unwrap_or(\"unknown version\") ) } ", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-a0df95f6c0155d1f330f444e6806b5c0ea06795206b068bd61f0d3707a12ce86", "text": "use back::svh::Svh; use session::{config, Session}; use session::search_paths::PathKind;  use metadata::common::rustc_version;  use metadata::cstore; use metadata::cstore::{CStore, CrateSource, MetadataBlob}; use metadata::decoder;", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-dc7528e35ac4f58f9db16503ee8f69b940c94dea1ed697dc2701068de8957af7", "text": "return ret; }  fn verify_rustc_version(&self, name: &str, span: Span, metadata: &MetadataBlob) { let crate_rustc_version = decoder::crate_rustc_version(metadata.as_slice()); if crate_rustc_version != Some(rustc_version()) { span_err!(self.sess, span, E0514, \"the crate `{}` has been compiled with {}, which is  incompatible with this version of rustc\", name, crate_rustc_version .as_ref().map(|s|&**s) .unwrap_or(\"an old version of rustc\") ); self.sess.abort_if_errors(); } }  fn register_crate(&mut self, root: &Option, ident: &str,", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-657f2bb26cd7c265ac3d44371d515caf2e222196191149277b9645f13137f63f", "text": "explicitly_linked: bool) -> (ast::CrateNum, Rc, cstore::CrateSource) {  self.verify_rustc_version(name, span, &lib.metadata);  // Claim this crate number and cache it let cnum = self.next_crate_num; self.next_crate_num += 1;", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-7d4d287c064721db4d7c56513410fefb46550f0487660403746af45329096484", "text": "index::Index::from_buf(index.data, index.start, index.end) }  pub fn crate_rustc_version(data: &[u8]) -> Option { let doc = rbml::Doc::new(data); reader::maybe_get_doc(doc, tag_rustc_version).map(|s| s.as_str()) }  #[derive(Debug, PartialEq)] enum Family { ImmStatic,             // c", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-964c47af66daa0c54d4f44cf54a601c968f3052f4e9fa68c603933d15cfbb4fb", "text": "rbml_w.wr_tagged_str(tag_crate_hash, hash.as_str()); }  fn encode_rustc_version(rbml_w: &mut Encoder) { rbml_w.wr_tagged_str(tag_rustc_version, &rustc_version()); }  fn encode_crate_name(rbml_w: &mut Encoder, crate_name: &str) { rbml_w.wr_tagged_str(tag_crate_crate_name, crate_name); }", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-e89bd50a665ad879c1ebdbaaa02720641663c87b5ef3a88c5732b8d6b26813e6", "text": "let mut rbml_w = Encoder::new(wr);  encode_rustc_version(&mut rbml_w);  encode_crate_name(&mut rbml_w, &ecx.link_meta.crate_name); encode_crate_triple(&mut rbml_w, &tcx.sess.opts.target_triple); encode_hash(&mut rbml_w, &ecx.link_meta.crate_hash);", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-7b0faa7fa43ff1376cf740c0b3f25ffe61550d758db1d2105d1683211fe87f5a", "text": "// can just use the tcx as the typer. // // FIXME(stage0): the :'t here is probably only important for stage0  pub struct ExprUseVisitor<'d, 't, 'a: 't, 'tcx:'a+'d+'t> {   pub struct ExprUseVisitor<'d, 't, 'a: 't, 'tcx:'a+'d> {  typer: &'t infer::InferCtxt<'a, 'tcx>, mc: mc::MemCategorizationContext<'t, 'a, 'tcx>, delegate: &'d mut Delegate<'tcx>,", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-165546cc6d1664a37fe24db06bf255ae7f919e8684ae20907812cf93b20914b9", "query": "When trying to update uutils/coreutils to HEAD, I got this ICE: Version: code:\napparently, this happens as a side effect of the build process. i had built with stable, then tried to build with nightly, giving incorrect metadata", "positive_passages": [{"docid": "doc-en-rust-b536b3aeb7a32be04f8b83bfe28ca29f11bd9ebcc2dbf2951bea982df2c84d03", "text": "impl<'d,'t,'a,'tcx> ExprUseVisitor<'d,'t,'a,'tcx> { pub fn new(delegate: &'d mut Delegate<'tcx>, typer: &'t infer::InferCtxt<'a, 'tcx>)  -> ExprUseVisitor<'d,'t,'a,'tcx>   -> ExprUseVisitor<'d,'t,'a,'tcx> where 'tcx:'a  { ExprUseVisitor { typer: typer,", "commid": "rust_pr_28702"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-f879b28a420cb50f348a3c9897d52286e6156286ea222e37988a0687cabfbdfe", "text": "% Closures  Rust not only has named functions, but anonymous functions as well. Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment. Rust has a really great implementation of them, as we\u2019ll see.   Sometimes it is useful to wrap up a function and _free variables_ for better clarity and reuse. The free variables that can be used come from the enclosing scope and are \u2018closed over\u2019 when used in the function. From this, we get the name \u2018closures\u2019 and Rust provides a really great implementation of them, as we\u2019ll see.  # Syntax", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-b57792600f3fdfde7082944216655f0c6377540e26ac8e20d4b13cc27c2c38e8", "text": "``` You\u2019ll notice a few things about closures that are a bit different from regular  functions defined with `fn`. The first is that we did not need to   named functions defined with `fn`. The first is that we did not need to  annotate the types of arguments the closure takes or the values it returns. We can:", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-997f7ad35b3af6173a7160aa267f7be5b0809cd920770d5c32a5f8827224721f", "text": "assert_eq!(2, plus_one(1)); ```  But we don\u2019t have to. Why is this? Basically, it was chosen for ergonomic reasons. While specifying the full type for named functions is helpful with things like documentation and type inference, the types of closures are rarely documented since they\u2019re anonymous, and they don\u2019t cause the kinds of error-at-a-distance problems that inferring named function types can.   But we don\u2019t have to. Why is this? Basically, it was chosen for ergonomic reasons. While specifying the full type for named functions is helpful with things like documentation and type inference, the full type signatures of closures are rarely documented since they\u2019re anonymous, and they don\u2019t cause the kinds of error-at-a-distance problems that inferring named function types can.   The second is that the syntax is similar, but a bit different. I\u2019ve added spaces here for easier comparison:   The second is that the syntax is similar, but a bit different. I\u2019ve added spaces here for easier comparison:  ```rust fn  plus_one_v1   (x: i32) -> i32 { x + 1 }", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-32c942591c15ff961f71cac4329167bf81e26ff5c1fefedfd3368e65f5b6b47f", "text": "# Closures and their environment  Closures are called such because they \u2018close over their environment\u2019. It looks like this:   The environment for a closure can include bindings from its enclosing scope in addition to parameters and local bindings. It looks like this:  ```rust let num = 5;", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-0de7cf9b66e3be6b35ae7f518b8d1b2e76eeeb642e759902776596c4f016a2c9", "text": "it, while a `move` closure is self-contained. This means that you cannot generally return a non-`move` closure from a function, for example.  But before we talk about taking and returning closures, we should talk some more about the way that closures are implemented. As a systems language, Rust gives you tons of control over what your code does, and closures are no different.   But before we talk about taking and returning closures, we should talk some more about the way that closures are implemented. As a systems language, Rust gives you tons of control over what your code does, and closures are no different.  # Closure implementation", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-e336995a7a063325da7876e3c8a0d8c0c70be9613c42d087dfec827301d350d5", "text": "#   some_closure(1) } ```  Because `Fn` is a trait, we can bound our generic with it. In this case, our closure takes a `i32` as an argument and returns an `i32`, and so the generic bound we use is `Fn(i32) -> i32`.   Because `Fn` is a trait, we can bound our generic with it. In this case, our closure takes a `i32` as an argument and returns an `i32`, and so the generic bound we use is `Fn(i32) -> i32`.  There\u2019s one other key point here: because we\u2019re bounding a generic with a trait, this will get monomorphized, and therefore, we\u2019ll be doing static", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-dca5aa8713ccc482463c7daa7fec86aae54a04f4400b3fb9be93d0f6724c246d", "query": "The first section on can be difficult to understand for novice programmers, particularly the line \"Anonymous functions that have an associated environment are called \u2018closures\u2019, because they close over an environment.\" It's difficult from this context to understand what \"environment\" is referring to.\nYeah, this definition is kind of embarassingly self-referential :sweat:\nI'm going to try to clarify this.\nfixed this", "positive_passages": [{"docid": "doc-en-rust-3d62a91e9065e5d5b760b55ff466c2f1a66b3a2acc9ea5dc0053e1b19659c41d", "text": "The error also points out that the return type is expected to be a reference, but what we are trying to return is not. Further, we cannot directly assign a `'static` lifetime to an object. So we'll take a different approach and return  a \"trait object\" by `Box`ing up the `Fn`. This _almost_ works:   a \u2018trait object\u2019 by `Box`ing up the `Fn`. This _almost_ works:  ```rust,ignore fn factory() -> Box<Fn(i32) -> i32> {", "commid": "rust_pr_28856"}], "negative_passages": []}
{"query_id": "q-en-rust-968fef732252e27e02f1a9e229cf866b28cd98df6e3022e4ba5252562b3f4993", "query": "After some discussion about the libs subteam decided that iterator adaptors should always preserve the same semantics in terms of the convenience methods and such. This was not audited for when all the initial specializations landed, so we should take a look and make sure that everything adheres to this policy. Additionally, documentation should be to the trait methods indicating what form of guarantees you are given (e.g. calling is equivalent to exhausting the iterator). triage: I-nominated\nI've checked . should exhaust before but that's the only bug I found (and it wasn't my bug for once :tada:). Unless, of course, you care about drop order. I haven't tried to reason about that.\nThe slice iterators () are fine because they are side-effect free.\nand are fine because they are also side-effect free.\nI'd appreciate it if someone else double checked but I think that's all of them.\nThis is relevant as well, because implementations of Drop may also (and usually do, in form of memory deallocation) contain side effects. Can we make a [x] list [x] of iterators which were specialised [x] like this?\nI strongly disagree. Drop has side effects but programmers should never rely on drop order except stack drop order because drop order is often unspecified (e.g. within structs). 
core::iter::Chain::last core::iter::Peekable::last core::iter::Peekable::count (fixable) core::iter::Skip::last (fixable?)[x] core::iter::Chain [x] core::iter::Enumerate [x] core::iter::Peekable [x] core::iter::Skip [x] core::iter::Take [x] core::iter::Fuse [x] core::str::Bytes [x] core::slice::Iter [x] core::slice::IterMut [x] core::slice::Windows [x] core::slice::Chunks [x] core::slice::ChunksMut [x] collections::vec::IntoIter\nAh right, this will be decided by anyway.\nAwesome, thanks for the work here\nUpdated list of out-of-order drops (didn't notice ).\ntriage: P-medium\n(Need to make sure this behavior is documented/explicitly promised.)\nIs this done? (Everything in the list above is :whitecheckmark:'d)\nI believe this is left open as to-be-documented. Also, I don't see any decision as to whether out of order drop is ok.\nre-tagging as docs\nSo I believe that all of these are then documented in Iterator's methods, at least as far as I can tell. Can someone from libs double check me here and help provide any specifics as to what's not documented?\nLooks good to me, thanks", "positive_passages": [{"docid": "doc-en-rust-8417a8ce0044bd436d5b08aa057ad9197853b8cad0c983fb25b5ffbca00a42c4", "text": "#[inline] fn last(self) -> Option { match self.state {  ChainState::Both => self.b.last().or(self.a.last()),   ChainState::Both => { // Must exhaust a before b. let a_last = self.a.last(); let b_last = self.b.last(); b_last.or(a_last) },  ChainState::Front => self.a.last(), ChainState::Back => self.b.last() }", "commid": "rust_pr_28818"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158  Appearances found using typical 'Search in directory' feature in a text editor.  Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-321915c765e4a76eb93458930bff97ac962d3f629b4606b3bbf4a1af9f52baa1", "text": "talk about what you do want instead. There are three broad classes of things that are relevant here: iterators,  *iterator adapters*, and *consumers*. Here's some definitions:   *iterator adaptors*, and *consumers*. Here's some definitions:  * *iterators* give you a sequence of values.  * *iterator adapters* operate on an iterator, producing a new iterator with a   * *iterator adaptors* operate on an iterator, producing a new iterator with a  different output sequence. * *consumers* operate on an iterator, producing some final set of values.", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158  Appearances found using typical 'Search in directory' feature in a text editor.  Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-37ff3598971b4d3ff40cee4dcd2ced84dcf9daab85cebd24523f69aca805fea1", "text": "These two basic iterators should serve you well. There are some more advanced iterators, including ones that are infinite.  That's enough about iterators. Iterator adapters are the last concept   That's enough about iterators. Iterator adaptors are the last concept  we need to talk about with regards to iterators. Let's get to it!  ## Iterator adapters   ## Iterator adaptors   *Iterator adapters* take an iterator and modify it somehow, producing   *Iterator adaptors* take an iterator and modify it somehow, producing  a new iterator. The simplest one is called `map`: ```rust,ignore", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158  Appearances found using typical 'Search in directory' feature in a text editor.  Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-bedaa54d1a4ae3af6fe8ebd07cf2d514a0951c6223e3ee5a9d66bad0b9646af1", "text": "If you are trying to execute a closure on an iterator for its side effects, just use `for` instead.  There are tons of interesting iterator adapters. `take(n)` will return an   There are tons of interesting iterator adaptors. `take(n)` will return an  iterator over the next `n` elements of the original iterator. Let's try it out with an infinite iterator:", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-33eeb25f389e2ae0f4e0d32ecb7178698ae3fe7517eb6be1629dad80be522f67", "query": "After having first read about iterators the TRPL book, it called my attention the scarce number of matching results in the API docs for a query like \"iterator adapters\". Curiously enough, it looks like the term used there is \"adaptors\" instead of adapters. Appearances @ TRPL(): \"iterator adapters\" -7 \"iterator adaptors\" -1 (part of compilation output) Appearances @ API docs(): \"iterator adapters\" -1 (book) \"iterator adaptors\" -158  Appearances found using typical 'Search in directory' feature in a text editor.  Appearances found using the site search filter in Google, ie. \"iterator adaptors\" Assuming that there aren't semantics involved between these two words in English, I'd like to know what's your opinion on the next steps to take (so we can avoid the inconsistency and possible confusion). What about replacing 'adapters' to 'adaptors' everywhere in the book?\nI assume this was fixed by ?\nYes, this issue can be safely closed now.", "positive_passages": [{"docid": "doc-en-rust-a6e20417b37e108841264de8d2c07b9dc647699309a0db8701567912bc46ece9", "text": "This will give you a vector containing `6`, `12`, `18`, `24`, and `30`.  This is just a small taste of what iterators, iterator adapters, and consumers   This is just a small taste of what iterators, iterator adaptors, and consumers  can help you with. There are a number of really useful iterators, and you can write your own as well. Iterators provide a safe, efficient way to manipulate all kinds of lists. They're a little unusual at first, but if you play with", "commid": "rust_pr_29066"}], "negative_passages": []}
{"query_id": "q-en-rust-439dea9dabca6daedadddac0f779d53ba67564b2d695a9255e3aa7b9704a76d6", "query": "The following code compiles on stable and beta, but not nightly: For reference, the error produced by the above is: The original test case was reduced down to the above thanks to eddyb and bluss. The original problematic line was (clap \u2192 \u2192 \u2192 boom). Specifically, the issue was with the method, though the error pointed to an earlier temporary as not living long enough. If desired, I can provide a complete, in-context example of this going awry. bluss suggested that this was related to .\ncc\nHmm I wonder if the iterators involved need to have the escape hatch for dropck to them.\nHere is a variant (that has also regressed) that does not use the macro:\n(It looks like may need similar treatment too, but I am having trouble actually making a concrete test illustrating the necessity there -- it seems when I call instead of in the above test case, even stable and beta complain about lifetime issues. Maybe the old dropck is catching something I'm missing in my manual analysis of , so I'm not going to lump that in until I see evidence that it is warranted.) Update: Oh, duh, has a lifetime bound (and a impl), so of course the old dropck was going to reject it.", "positive_passages": [{"docid": "doc-en-rust-42d470b2795ab6d25736169ebd86a62c9a5eaa0f6fe2040c0992d02bfa01a165", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl<T> Drop for IntoIter<T> {  #[unsafe_destructor_blind_to_params]  fn drop(&mut self) { // destroy the remaining elements for _x in self {}", "commid": "rust_pr_29186"}], "negative_passages": []}
{"query_id": "q-en-rust-439dea9dabca6daedadddac0f779d53ba67564b2d695a9255e3aa7b9704a76d6", "query": "The following code compiles on stable and beta, but not nightly: For reference, the error produced by the above is: The original test case was reduced down to the above thanks to eddyb and bluss. The original problematic line was (clap \u2192 \u2192 \u2192 boom). Specifically, the issue was with the method, though the error pointed to an earlier temporary as not living long enough. If desired, I can provide a complete, in-context example of this going awry. bluss suggested that this was related to .\ncc\nHmm I wonder if the iterators involved need to have the escape hatch for dropck to them.\nHere is a variant (that has also regressed) that does not use the macro:\n(It looks like may need similar treatment too, but I am having trouble actually making a concrete test illustrating the necessity there -- it seems when I call instead of in the above test case, even stable and beta complain about lifetime issues. Maybe the old dropck is catching something I'm missing in my manual analysis of , so I'm not going to lump that in until I see evidence that it is warranted.) Update: Oh, duh, has a lifetime bound (and a impl), so of course the old dropck was going to reject it.", "positive_passages": [{"docid": "doc-en-rust-ca3bfd6146e26138af6ebba0ca152cd744ec275abeb99ebbe7c98404df59d644", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // This test ensures that vec.into_iter does not overconstrain element lifetime. 
pub fn main() { original_report(); revision_1(); revision_2(); } fn original_report() { drop(vec![&()].into_iter()) } fn revision_1() { // below is what above `vec!` expands into at time of this writing. drop(<[_]>::into_vec(::std::boxed::Box::new([&()])).into_iter()) } fn revision_2() { drop((match (Vec::new(), &()) { (mut v, b) => { v.push(b); v } }).into_iter()) } ", "commid": "rust_pr_29186"}], "negative_passages": []}
{"query_id": "q-en-rust-5ce93ae8d688c5b782c57540770a04fa08a2ca8b12bd3b1bec41e97e03d84ee1", "query": "One minor doc bug: first example should call write_ten_bytes_at_end in test case instead of write_ten_bytes. PS: Is this a good place for website/doc issues?\nYep, this is the right place!", "positive_passages": [{"docid": "doc-en-rust-f2d1476a859347c798cebed1968b169f31eacb8df99fc80bd871cbce1504a2f2", "text": "///     use std::io::Cursor; ///     let mut buff = Cursor::new(vec![0; 15]); ///  ///     write_ten_bytes(&mut buff).unwrap();   ///     write_ten_bytes_at_end(&mut buff).unwrap();  /// ///     assert_eq!(&buff.get_ref()[5..15], &[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]); /// }", "commid": "rust_pr_29224"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-24bfc2158eeb67a8db90acfeb56c9b90b5eacf29cf21f58770bfdfd7304be236", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms.  //! A Unicode scalar value   //! Unicode scalar values  //! //! This module provides the `CharExt` trait, as well as its //! implementation for the primitive `char` type, in order to allow", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-ba1183a7d196d1a851226e35d98ea31b611e8610d2a8c642f19c2b498acd3961", "text": "/// character, as `char`s. /// /// All characters are escaped with Rust syntax of the form `u{NNNN}`  /// where `NNNN` is the shortest hexadecimal representation of the code /// point.   /// where `NNNN` is the shortest hexadecimal representation.  /// /// # Examples ///  /// Basic usage: ///  /// ``` /// for c in '\u2764'.escape_unicode() { ///     print!(\"{}\", c);", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-7f21086aeaafa839fb37b115c7dfef64a7d99ba7b8578a026e4bb508e176d7e5", "text": "/// /// # Examples ///  /// Basic usage: ///  /// ``` /// let n = '\u00df'.len_utf16(); /// assert_eq!(n, 1);", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-0e029c1231faf4e20a4e9e07713290da95d4ceba44a0343a84df88343b621ac6", "text": "/// /// # Examples ///  /// Basic usage: ///  /// ``` /// #![feature(decode_utf16)] ///", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-1425108cf54cd35afc3036671a181f95917b9e61cd95d372698b5d37b7202714", "query": "Part of These have gotten some love lately, but should still be checked out.\nI am happy with this now.", "positive_passages": [{"docid": "doc-en-rust-70ff6a0999fad25b6f87651b9b24f6494e7226b97000f648f809918854311e0a", "text": "#[doc(primitive = \"char\")] //  /// A Unicode scalar value.   /// A character type.  ///  /// A `char` represents a /// *[Unicode scalar /// value](http://www.unicode.org/glossary/#unicode_scalar_value)*, as it can /// contain any Unicode code point except high-surrogate and low-surrogate code /// points.   /// The `char` type represents a single character. More specifically, since /// 'character' isn't a well-defined concept in Unicode, `char` is a '[Unicode /// scalar value]', which is similar to, but not the same as, a '[Unicode code /// point]'.  ///  /// As such, only values in the ranges [0x0,0xD7FF] and [0xE000,0x10FFFF] /// (inclusive) are allowed. A `char` can always be safely cast to a `u32`; /// however the converse is not always true due to the above range limits /// and, as such, should be performed via the `from_u32` function.   /// [Unicode scalar value]: http://www.unicode.org/glossary/#unicode_scalar_value /// [Unicode code point]: http://www.unicode.org/glossary/#code_point  ///  /// *[See also the `std::char` module](char/index.html).*   /// This documentation describes a number of methods and trait implementations on the /// `char` type. For technical reasons, there is additional, separate /// documentation in [the `std::char` module](char/index.html) as well.  ///  /// # Representation /// /// `char` is always four bytes in size. 
This is a different representation than /// a given character would have as part of a [`String`], for example: /// /// ``` /// let v = vec!['h', 'e', 'l', 'l', 'o']; /// /// // five elements times four bytes for each element /// assert_eq!(20, v.len() * std::mem::size_of::<char>()); /// /// let s = String::from(\"hello\"); /// /// // five elements times one byte per element /// assert_eq!(5, s.len() * std::mem::size_of::<u8>()); /// ``` /// /// [`String`]: string/struct.String.html /// /// As always, remember that a human intuition for 'character' may not map to /// Unicode's definitions. For example, emoji symbols such as '\u2764\ufe0f' are more than /// one byte; \u2764\ufe0f in particular is six: /// /// ``` /// let s = String::from(\"\u2764\ufe0f\"); /// /// // six bytes times one byte for each element /// assert_eq!(6, s.len() * std::mem::size_of::<u8>()); /// ``` /// /// This also means it won't fit into a `char`, and so trying to create a /// literal with `let heart = '\u2764\ufe0f';` gives an error: /// /// ```text /// error: character literal may only contain one codepoint: '\u2764 /// let heart = '\u2764\ufe0f'; ///             ^~ /// ``` /// /// Another implication of this is that if you want to do per-`char`acter /// processing, it can end up using a lot more memory: /// /// ``` /// let s = String::from(\"love: \u2764\ufe0f\"); /// let v: Vec<char> = s.chars().collect(); /// /// assert_eq!(12, s.len() * std::mem::size_of::<u8>()); /// assert_eq!(32, v.len() * std::mem::size_of::<char>()); /// ``` /// /// Or may give you results you may not expect: /// /// ``` /// let s = String::from(\"\u2764\ufe0f\"); /// /// let mut iter = s.chars(); /// /// // we get two chars out of a single \u2764\ufe0f /// assert_eq!(Some('\u{2764}'), iter.next()); /// assert_eq!(Some('\u{fe0f}'), iter.next()); /// assert_eq!(None, iter.next()); /// ```  mod prim_char { } #[doc(primitive = \"unit\")]", "commid": "rust_pr_30013"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-2a889f9fb345cd084727dd9535ee780deff33c73f884c3de20617d404e43116a", "text": "//! Like many traits, these are often used as bounds for generic functions, to //! support arguments of multiple types. //!  //! - Impl the `As*` traits for reference-to-reference conversions //! - Impl the `Into` trait when you want to consume the value in the conversion //! - The `From` trait is the most flexible, usefull for values _and_ references conversions //! //! As a library writer, you should prefer implementing `From` rather than //! `Into`, as `From` provides greater flexibility and offer the equivalent `Into` //! implementation for free, thanks to a blanket implementation in the standard library. //! //! **Note: these traits must not fail**. If the conversion can fail, you must use a dedicated //! method which return an `Option` or a `Result`. //! //! # Generic impl //! //! - `AsRef` and `AsMut` auto-dereference if the inner type is a reference //! - `From for T` implies `Into for U` //! - `From` and `Into` are reflexive, which means that all types can `into()` //!   themselves and `from()` themselves //!  //! See each trait for usage examples. 
#![stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-e034ff9ad93df0abfbb06f539fa111582da503d747b55979ce105d9a2d8d9967", "text": "/// /// [book]: ../../book/borrow-and-asref.html ///  /// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which /// return an `Option` or a `Result`. ///  /// # Examples /// /// Both `String` and `&str` implement `AsRef`:", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-66907cda6fa2032f8bf50711b123815003c51ab46befd00475c2ab384e2441d9", "text": "/// let s = \"hello\".to_string(); /// is_hello(s); /// ```  /// /// # Generic Impls /// /// - `AsRef` auto-dereference if the inner type is a reference or a mutable /// reference (eg: `foo.as_ref()` will work the same if `foo` has type `&mut Foo` or `&&mut Foo`) ///  #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait AsRef { /// Performs the conversion.", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-66156038a9f4dc98268003f2f6732c3be6413b0171eb80049bf37fa1e6282ace", "text": "} /// A cheap, mutable reference-to-mutable reference conversion.  /// /// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which /// return an `Option` or a `Result`. /// /// # Generic Impls /// /// - `AsMut` auto-dereference if the inner type is a reference or a mutable /// reference (eg: `foo.as_ref()` will work the same if `foo` has type `&mut Foo` or `&&mut Foo`) ///  #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait AsMut { /// Performs the conversion.", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-9b1ce6279b01d439468d7aa9e545b7dcd7ec9cc251fd4eaea490c1e20e2aca60", "text": "/// A conversion that consumes `self`, which may or may not be expensive. ///  /// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which /// return an `Option` or a `Result`. /// /// Library writer should not implement directly this trait, but should prefer the implementation /// of the `From` trait, which offer greater flexibility and provide the equivalent `Into` /// implementation for free, thanks to a blanket implementation in the standard library. ///  /// # Examples /// /// `String` implements `Into>`:", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-33969ce7768aa7378c9026f61c9733b5f1383d4e5942747a4f9794a70e87a1c3", "text": "/// let s = \"hello\".to_string(); /// is_hello(s); /// ```  /// /// # Generic Impls /// /// - `From for U` implies `Into for T` /// - `into()` is reflexive, which means that `Into for T` is implemented ///  #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait Into: Sized { /// Performs the conversion.", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-38449d79b50ac025ceb3ccf085184c4b17fa22e1a2439148784adabd3e85b150", "text": "/// Construct `Self` via a conversion. ///  /// **Note: this trait must not fail**. If the conversion can fail, use a dedicated method which /// return an `Option` or a `Result`. ///  /// # Examples /// /// `String` implements `From<&str>`:", "commid": "rust_pr_30901"}], "negative_passages": []}
{"query_id": "q-en-rust-6b11b9d47f36473127b87a54e86efa98d4c7f48b06ef973b43dafab417cc1299", "query": "Part of Here's what needs to be done to close out this issue: is great, but has a lot of abbreviations, so the wording could be cleaned up a bit. should compare itself to , and should show a sample implementation. needs far better docs, as it is one of the most misunderstood traits in Rust. Specifically it should talk about its relationship with . It also needs an example implementation. needs a lot of work; it should talk about its usage in , it should show an example implementation, and general improvement. is mostly fine, but \"generic impls\" should say \"implementations\" and it should be above examples.\nI am happy to mentor anyone who wants to tackle this issue.\nI would be keen to give this one a go I'm interested in understanding this section better.\nAwesome, ! Let me know if I can help you help me in any way\nHey , Just a couple of questions: the issue description for the trait you say to have the generic implementations above the examples, It turns out all of the traits have the examples above the generic implementations. Should I move the generic implementations above the examples for all of the traits? adding sample implementation do you like to make a generic example or use a common example from the std lib? I only ask because I have become familiar with through the usage within the package in creating paths and was thinking of using this as an example. What are your thoughts? relationship with is where I am having a bit of trouble explaining this concisely. Are we wanting to have a brief summarization of the section in the book and do we need to reciprocate or align with the documentation for the trait? I only ask as I feel to explain the difference I am describing the trait a bit and do not want to introduce a second differing description. Thanks :smile:\nHey ! Yes, that'd be great! 
Ideally, we'd have both: one extremely simple example for people to be able to copy/paste, and one more realistic example. Feel free to submit one or both yes it's tough! The second edition of the book does not have that chapter; synchronizing these API docs is the right way to go\nOK great :smile: I will hopefully have something in today to start picking apart. :+1:\nHey , I had a look at the [RFC] mentioned in the contribution guide and couldn't find anything on the guidelines for line length. I only ask as one of the Travis builds failed due to the line exceeding 100 chars, so was wondering what I should be aiming for. Thanks again! :smile: [RFC]:\nStandard library docs are wrapped to 80 characters. :)\nThis is now done!", "positive_passages": [{"docid": "doc-en-rust-5d7d3ac44d9820ee1934e98fcaae75bb2d9a11a437ca17b1770513b29b6e7dee", "text": "/// /// assert_eq!(string, other_string); /// ```  /// # Generic impls /// /// - `From for U` implies `Into for T` /// - `from()` is reflexive, which means that `From for T` is implemented ///  #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait From: Sized { /// Performs the conversion.", "commid": "rust_pr_30901"}], "negative_passages": []}
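The records above quote the rust_pr_30901 doc improvements for the conversion traits, whose central guideline is: implement `From`, and the standard library's blanket impl (`impl<T, U: From<T>> Into<U> for T`) provides the matching `Into` for free. A minimal sketch of that guideline — the types and values here are illustrative, not taken from the PR:

```rust
// Two simple newtype structs; implementing `From` in one direction
// automatically makes `.into()` available in that same direction.
struct Celsius(f64);
struct Fahrenheit(f64);

impl From<Celsius> for Fahrenheit {
    fn from(c: Celsius) -> Fahrenheit {
        Fahrenheit(c.0 * 9.0 / 5.0 + 32.0)
    }
}

fn main() {
    // `From::from` works directly...
    let boiling = Fahrenheit::from(Celsius(100.0));
    assert_eq!(boiling.0, 212.0);

    // ...and `Into::into` comes for free via the blanket impl,
    // which is why the docs tell library authors to implement `From`.
    let freezing: Fahrenheit = Celsius(0.0).into();
    assert_eq!(freezing.0, 32.0);
}
```

Note that the reverse (implementing `Into` by hand) would not provide a `From` impl, which is the flexibility argument the quoted docs make.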
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-8d5c08b056c28ae89b4083eb4d1333b20cfd0e26b23b3972cd5de1360eaf65c1", "text": "} }  pub fn get_static_val<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, did: DefId, ty: Ty<'tcx>) -> ValueRef { if let Some(node_id) = ccx.tcx().map.as_local_node_id(did) { base::get_item_val(ccx, node_id) } else { base::get_extern_const(ccx, did, ty) } } ", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-7ebbd33064fb9082db77853b665d2f15d24d552e771b9aa1949769dd3e000812", "text": "use middle::def_id::DefId; use trans::{adt, closure, debuginfo, expr, inline, machine}; use trans::base::{self, push_ctxt};  use trans::common::{self, type_is_sized, ExprOrMethodCall, node_id_substs, C_nil, const_get_elt};  use trans::common::{CrateContext, C_integral, C_floating, C_bool, C_str_slice, C_bytes, val_ty};  use trans::common::{type_is_sized, ExprOrMethodCall, node_id_substs, C_nil, const_get_elt};  use trans::common::{C_struct, C_undef, const_to_opt_int, const_to_opt_uint, VariantInfo, C_uint}; use trans::common::{type_is_fat_ptr, Field, C_vector, C_array, C_null, ExprId, MethodCallKey}; use trans::declare;", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-769c3e38081c07d4171195c44d723e3b24835ab406aa9f4e028ce12966c48d9b", "text": "} let opt_def = cx.tcx().def_map.borrow().get(&cur.id).map(|d| d.full_def()); if let Some(def::DefStatic(def_id, _)) = opt_def {  get_static_val(cx, def_id, ety)   common::get_static_val(cx, def_id, ety)  } else { // If this isn't the address of a static, then keep going through // normal constant evaluation.", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-2adc89f506d8365cec35c37ce8783c4ff9fc59b567ecc36fbfbd16fe412e60cd", "text": "Ok(g) } }  fn get_static_val<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, did: DefId, ty: Ty<'tcx>) -> ValueRef { if let Some(node_id) = ccx.tcx().map.as_local_node_id(did) { base::get_item_val(ccx, node_id) } else { base::trans_external_path(ccx, did, ty) } } ", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-feb5dded6f41bd92b336e5c9eebfff6173cbe72d3ff59f40d981f3dbb67c0e40", "text": "DatumBlock::new(bcx, datum.to_expr_datum()) } def::DefStatic(did, _) => {  // There are two things that may happen here: //  1) If the static item is defined in this crate, it will be //     translated using `get_item_val`, and we return a pointer to //     the result. //  2) If the static item is defined in another crate then we add //     (or reuse) a declaration of an external global, and return a //     pointer to that.  let const_ty = expr_ty(bcx, ref_expr);  // For external constants, we don't inline. let val = if let Some(node_id) = bcx.tcx().map.as_local_node_id(did) { // Case 1. // The LLVM global has the type of its initializer, // which may not be equal to the enum's type for // non-C-like enums. let val = base::get_item_val(bcx.ccx(), node_id); let pty = type_of::type_of(bcx.ccx(), const_ty).ptr_to(); PointerCast(bcx, val, pty) } else { // Case 2. base::get_extern_const(bcx.ccx(), did, const_ty) };   let val = get_static_val(bcx.ccx(), did, const_ty);  let lval = Lvalue::new(\"expr::trans_def\"); DatumBlock::new(bcx, Datum::new(val, const_ty, LvalueExpr(lval))) }", "commid": "rust_pr_29759"}], "negative_passages": []}
{"query_id": "q-en-rust-debca2743ad5a324e6cb531ac89145875900291930561b799afd49cbc4766892", "query": "We need to support references to static items in lvalues. This would be used for accessing a static lock or assigning to a static mut.", "positive_passages": [{"docid": "doc-en-rust-a871a9726838a6af1fa7e17538c18c790f74751a3f848cc6c1d74f04b246af2d", "text": "tcx.sess.bug(&format!(\"using operand temp {:?} as lvalue\", lvalue)), }, mir::Lvalue::Arg(index) => self.args[index as usize],  mir::Lvalue::Static(_def_id) => unimplemented!(),   mir::Lvalue::Static(def_id) => { let const_ty = self.mir.lvalue_ty(tcx, lvalue); LvalueRef::new(common::get_static_val(ccx, def_id, const_ty.to_ty(tcx)), const_ty) },  mir::Lvalue::ReturnPointer => { let return_ty = bcx.monomorphize(&self.mir.return_ty); let llval = fcx.get_ret_slot(bcx, return_ty, \"return\");", "commid": "rust_pr_29759"}], "negative_passages": []}
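The rust_pr_29759 records above add MIR trans support for statics in lvalue position. As a rough illustration (user-level code of the kind the change enables, not code from the PR itself), reading a `static` and assigning through a `static mut` are both lvalue uses:

```rust
// A plain static read in lvalue (place) position.
static GREETING: &str = "hello";
// Assigning to a `static mut` is also an lvalue use; it requires `unsafe`.
static mut COUNTER: u32 = 0;

fn main() {
    assert_eq!(GREETING.len(), 5);
    unsafe {
        COUNTER += 1;
        // Copy the value out rather than taking a reference to the
        // `static mut`, which newer compilers lint against.
        let current = COUNTER;
        assert_eq!(current, 1);
    }
}
```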
{"query_id": "q-en-rust-928cd822b2040ff1cc4c93f9d2fb8aa51ca8572bc50f30533405cf5c07731993", "query": "The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to the following error: I have a specific use case for form-feeds in source files. But I think in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors or IDEs such as vim, Emacs or XCode provide convenience features to display these form-feeds in aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's just a simple convenience feature, but it would really be nice to support it.\n+1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.\nLooks like a fairly simple change could be made to the lexer so it uses instead of limiting to . The only think I can think of is that the function in has been around since before we had a better function and nobody has changed it since then.\n/cc , do we want to accept all kinds of whitespace?\nI believe we should, yes.", "positive_passages": [{"docid": "doc-en-rust-cb37be509f1015ad3a82b3608f8bc9a75babb2110990b82cd117036d1aa2a613", "text": "[rust]: https://www.rust-lang.org  \u201cThe Rust Programming Language\u201d is split into eight sections. This introduction   \u201cThe Rust Programming Language\u201d is split into sections. This introduction  is the first. After this: * [Getting started][gs] - Set up your computer for Rust development.  
* [Learn Rust][lr] - Learn Rust programming through small projects. * [Effective Rust][er] - Higher-level concepts for writing excellent Rust code.   * [Tutorial: Guessing Game][gg] - Learn some Rust with a small project.  * [Syntax and Semantics][ss] - Each bit of Rust, broken down into small chunks.  * [Effective Rust][er] - Higher-level concepts for writing excellent Rust code.  * [Nightly Rust][nr] - Cutting-edge features that aren\u2019t in stable builds yet. * [Glossary][gl] - A reference of terms used in the book. * [Bibliography][bi] - Background on Rust's influences, papers about Rust. [gs]: getting-started.html  [lr]: learn-rust.html   [gg]: guessing-game.html  [er]: effective-rust.html [ss]: syntax-and-semantics.html [nr]: nightly-rust.html [gl]: glossary.html [bi]: bibliography.html  After reading this introduction, you\u2019ll want to dive into either \u2018Learn Rust\u2019 or \u2018Syntax and Semantics\u2019, depending on your preference: \u2018Learn Rust\u2019 if you want to dive in with a project, or \u2018Syntax and Semantics\u2019 if you prefer to start small, and learn a single concept thoroughly before moving onto the next. Copious cross-linking connects these parts together.  ### Contributing The source files from which this book is generated can be found on", "commid": "rust_pr_30595"}], "negative_passages": []}
The presence in a Rust source file of unusual but useful kinds of whitespace, such as ASCII 0x0C (form feed), leads to a lexer error. I have a specific use case for form feeds in source files, but in general it is nice to ignore the same whitespace that every other programming language and file format ignores; it lessens confusion for people coming from other languages and backgrounds. My specific use case is the long-standing, but somewhat uncommon, use of the form-feed character (which semantically is a separator between pages of text) as a way to group together especially closely related functions or blocks in a file of source code. Text editors and IDEs such as Vim, Emacs, and Xcode provide convenience features to display these form feeds in an aesthetically pleasing way, move between form-feed-delimited pages, and restrict editing to one form-feed-delimited page at a time. It's a simple convenience feature, but it would really be nice to support it.

> +1, I also use this feature and was disappointed when I found Rust didn't treat it as whitespace.

> Looks like a fairly simple change could be made to the lexer so that it uses a broader whitespace predicate instead of limiting itself to a hard-coded set. The only thing I can think of is that the function has been around since before we had a better one, and nobody has changed it since then.

> /cc — do we want to accept all kinds of whitespace?

> I believe we should, yes.
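For reference, Unicode's White_Space property (exposed in Rust as `char::is_whitespace`) does include the form feed, so a lexer built on that predicate would accept it. A quick check:

```rust
fn main() {
    // U+000C FORM FEED carries the Unicode White_Space property,
    // so `char::is_whitespace` returns true for it, just as it
    // does for space, tab, newline, and carriage return.
    let form_feed = '\u{000C}';
    assert!(form_feed.is_whitespace());

    for c in [' ', '\t', '\n', '\r', '\u{000C}'] {
        assert!(c.is_whitespace());
    }

    println!("form feed is whitespace: {}", form_feed.is_whitespace());
}
```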
# Summary

* [Getting Started](getting-started.md)
* [Tutorial: Guessing Game](guessing-game.md)
* [Syntax and Semantics](syntax-and-semantics.md)
    * [Variable Bindings](variable-bindings.md)
    * [Functions](functions.md)
[paper]: http://www.usingcsp.com/cspbook.pdf > In ancient times, a wealthy philanthropist endowed a College to accommodate > five eminent philosophers. Each philosopher had a room in which they could > engage in their professional activity of thinking; there was also a common > dining room, furnished with a circular table, surrounded by five chairs, each > labelled by the name of the philosopher who was to sit in it. They sat > anticlockwise around the table. To the left of each philosopher there was > laid a golden fork, and in the center stood a large bowl of spaghetti, which > was constantly replenished. A philosopher was expected to spend most of > their time thinking; but when they felt hungry, they went to the dining > room, sat down in their own chair, picked up their own fork on their left, > and plunged it into the spaghetti. But such is the tangled nature of > spaghetti that a second fork is required to carry it to the mouth. The > philosopher therefore had also to pick up the fork on their right. When > they were finished they would put down both their forks, get up from their > chair, and continue thinking. Of course, a fork can be used by only one > philosopher at a time. If the other philosopher wants it, they just have > to wait until the fork is available again. This classic problem shows off a few different elements of concurrency. The reason is that it's actually slightly tricky to implement: a simple implementation can deadlock. For example, let's consider a simple algorithm that would solve this problem: 1. A philosopher picks up the fork on their left. 2. They then pick up the fork on their right. 3. They eat. 4. They return the forks. Now, let\u2019s imagine this sequence of events: 1. Philosopher 1 begins the algorithm, picking up the fork on their left. 2. Philosopher 2 begins the algorithm, picking up the fork on their left. 3. Philosopher 3 begins the algorithm, picking up the fork on their left. 4. 
Philosopher 4 begins the algorithm, picking up the fork on their left. 5. Philosopher 5 begins the algorithm, picking up the fork on their left. 6. ... ? All the forks are taken, but nobody can eat! There are different ways to solve this problem. We\u2019ll get to our solution in the tutorial itself. For now, let\u2019s get started and create a new project with `cargo`: ```bash $ cd ~/projects $ cargo new dining_philosophers --bin $ cd dining_philosophers ``` Now we can start modeling the problem itself. We\u2019ll start with the philosophers in `src/main.rs`: ```rust struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } } fn main() { let p1 = Philosopher::new(\"Judith Butler\"); let p2 = Philosopher::new(\"Gilles Deleuze\"); let p3 = Philosopher::new(\"Karl Marx\"); let p4 = Philosopher::new(\"Emma Goldman\"); let p5 = Philosopher::new(\"Michel Foucault\"); } ``` Here, we make a [`struct`][struct] to represent a philosopher. For now, a name is all we need. We choose the [`String`][string] type for the name, rather than `&str`. Generally speaking, working with a type which owns its data is easier than working with one that uses references. [struct]: structs.html [string]: strings.html Let\u2019s continue: ```rust # struct Philosopher { #     name: String, # } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } } ``` This `impl` block lets us define things on `Philosopher` structs. In this case, we define an \u2018associated function\u2019 called `new`. The first line looks like this: ```rust # struct Philosopher { #     name: String, # } # impl Philosopher { fn new(name: &str) -> Philosopher { #         Philosopher { #             name: name.to_string(), #         } #     } # } ``` We take one argument, a `name`, of type `&str`. This is a reference to another string. It returns an instance of our `Philosopher` struct. 
```rust # struct Philosopher { #     name: String, # } # impl Philosopher { #    fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } #     } # } ``` This creates a new `Philosopher`, and sets its `name` to our `name` argument. Not just the argument itself, though, as we call `.to_string()` on it. This will create a copy of the string that our `&str` points to, and give us a new `String`, which is the type of the `name` field of `Philosopher`. Why not accept a `String` directly? It\u2019s nicer to call. If we took a `String`, but our caller had a `&str`, they\u2019d have to call this method themselves. The downside of this flexibility is that we _always_ make a copy. For this small program, that\u2019s not particularly important, as we know we\u2019ll just be using short strings anyway. One last thing you\u2019ll notice: we just define a `Philosopher`, and seemingly don\u2019t do anything with it. Rust is an \u2018expression based\u2019 language, which means that almost everything in Rust is an expression which returns a value. This is true of functions as well \u2014 the last expression is automatically returned. Since we create a new `Philosopher` as the last expression of this function, we end up returning it. This name, `new()`, isn\u2019t anything special to Rust, but it is a convention for functions that create new instances of structs. Before we talk about why, let\u2019s look at `main()` again: ```rust # struct Philosopher { #     name: String, # } # # impl Philosopher { #     fn new(name: &str) -> Philosopher { #         Philosopher { #             name: name.to_string(), #         } #     } # } # fn main() { let p1 = Philosopher::new(\"Judith Butler\"); let p2 = Philosopher::new(\"Gilles Deleuze\"); let p3 = Philosopher::new(\"Karl Marx\"); let p4 = Philosopher::new(\"Emma Goldman\"); let p5 = Philosopher::new(\"Michel Foucault\"); } ``` Here, we create five variable bindings with five new philosophers. 
If we _didn\u2019t_ define that `new()` function, it would look like this: ```rust # struct Philosopher { #     name: String, # } fn main() { let p1 = Philosopher { name: \"Judith Butler\".to_string() }; let p2 = Philosopher { name: \"Gilles Deleuze\".to_string() }; let p3 = Philosopher { name: \"Karl Marx\".to_string() }; let p4 = Philosopher { name: \"Emma Goldman\".to_string() }; let p5 = Philosopher { name: \"Michel Foucault\".to_string() }; } ``` That\u2019s much noisier. Using `new` has other advantages too, but even in this simple case, it ends up being nicer to use. Now that we\u2019ve got the basics in place, there\u2019s a number of ways that we can tackle the broader problem here. I like to start from the end first: let\u2019s set up a way for each philosopher to finish eating. As a tiny step, let\u2019s make a method, and then loop through all the philosophers, calling it: ```rust struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } fn eat(&self) { println!(\"{} is done eating.\", self.name); } } fn main() { let philosophers = vec![ Philosopher::new(\"Judith Butler\"), Philosopher::new(\"Gilles Deleuze\"), Philosopher::new(\"Karl Marx\"), Philosopher::new(\"Emma Goldman\"), Philosopher::new(\"Michel Foucault\"), ]; for p in &philosophers { p.eat(); } } ``` Let\u2019s look at `main()` first. Rather than have five individual variable bindings for our philosophers, we make a `Vec` of them instead. `Vec` is also called a \u2018vector\u2019, and it\u2019s a growable array type. We then use a [`for`][for] loop to iterate through the vector, getting a reference to each philosopher in turn. [for]: loops.html#for In the body of the loop, we call `p.eat()`, which is defined above: ```rust,ignore fn eat(&self) { println!(\"{} is done eating.\", self.name); } ``` In Rust, methods take an explicit `self` parameter. 
That\u2019s why `eat()` is a method, but `new` is an associated function: `new()` has no `self`. For our first version of `eat()`, we just print out the name of the philosopher, and mention they\u2019re done eating. Running this program should give you the following output: ```text Judith Butler is done eating. Gilles Deleuze is done eating. Karl Marx is done eating. Emma Goldman is done eating. Michel Foucault is done eating. ``` Easy enough, they\u2019re all done! We haven\u2019t actually implemented the real problem yet, though, so we\u2019re not done yet! Next, we want to make our philosophers not just finish eating, but actually eat. Here\u2019s the next version: ```rust use std::thread; use std::time::Duration; struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } fn eat(&self) { println!(\"{} is eating.\", self.name); thread::sleep(Duration::from_millis(1000)); println!(\"{} is done eating.\", self.name); } } fn main() { let philosophers = vec![ Philosopher::new(\"Judith Butler\"), Philosopher::new(\"Gilles Deleuze\"), Philosopher::new(\"Karl Marx\"), Philosopher::new(\"Emma Goldman\"), Philosopher::new(\"Michel Foucault\"), ]; for p in &philosophers { p.eat(); } } ``` Just a few changes. Let\u2019s break it down. ```rust,ignore use std::thread; ``` `use` brings names into scope. We\u2019re going to start using the `thread` module from the standard library, and so we need to `use` it. ```rust,ignore fn eat(&self) { println!(\"{} is eating.\", self.name); thread::sleep(Duration::from_millis(1000)); println!(\"{} is done eating.\", self.name); } ``` We now print out two messages, with a `sleep` in the middle. This will simulate the time it takes a philosopher to eat. If you run this program, you should see each philosopher eat in turn: ```text Judith Butler is eating. Judith Butler is done eating. Gilles Deleuze is eating. Gilles Deleuze is done eating. Karl Marx is eating. 
Karl Marx is done eating. Emma Goldman is eating. Emma Goldman is done eating. Michel Foucault is eating. Michel Foucault is done eating. ``` Excellent! We\u2019re getting there. There\u2019s just one problem: we aren\u2019t actually operating in a concurrent fashion, which is a core part of the problem! To make our philosophers eat concurrently, we need to make a small change. Here\u2019s the next iteration: ```rust use std::thread; use std::time::Duration; struct Philosopher { name: String, } impl Philosopher { fn new(name: &str) -> Philosopher { Philosopher { name: name.to_string(), } } fn eat(&self) { println!(\"{} is eating.\", self.name); thread::sleep(Duration::from_millis(1000)); println!(\"{} is done eating.\", self.name); } } fn main() { let philosophers = vec![ Philosopher::new(\"Judith Butler\"), Philosopher::new(\"Gilles Deleuze\"), Philosopher::new(\"Karl Marx\"), Philosopher::new(\"Emma Goldman\"), Philosopher::new(\"Michel Foucault\"), ]; let handles: Vec<_> = philosophers.into_iter().map(|p| { thread::spawn(move || { p.eat(); }) }).collect(); for h in handles { h.join().unwrap(); } } ``` All we\u2019ve done is change the loop in `main()`, and added a second one! Here\u2019s the first change: ```rust,ignore let handles: Vec<_> = philosophers.into_iter().map(|p| { thread::spawn(move || { p.eat(); }) }).collect(); ``` While this is only five lines, they\u2019re a dense five. Let\u2019s break it down. ```rust,ignore let handles: Vec<_> = ``` We introduce a new binding, called `handles`. We\u2019ve given it this name because we are going to make some new threads, and that will return some handles to those threads that let us control their operation. We need to explicitly annotate the type here, though, due to an issue we\u2019ll talk about later. The `_` is a type placeholder. 
We\u2019re saying \u201c`handles` is a vector of something, but you can figure out what that something is, Rust.\u201d ```rust,ignore philosophers.into_iter().map(|p| { ``` We take our list of philosophers and call `into_iter()` on it. This creates an iterator that takes ownership of each philosopher. We need to do this to pass them to our threads. We take that iterator and call `map` on it, which takes a closure as an argument and calls that closure on each element in turn. ```rust,ignore thread::spawn(move || { p.eat(); }) ``` Here\u2019s where the concurrency happens. The `thread::spawn` function takes a closure as an argument and executes that closure in a new thread. This closure needs an extra annotation, `move`, to indicate that the closure is going to take ownership of the values it\u2019s capturing. In this case, it's the `p` variable of the `map` function. Inside the thread, all we do is call `eat()` on `p`. Also note that the call to `thread::spawn` lacks a trailing semicolon, making this an expression. This distinction is important, yielding the correct return value. For more details, read [Expressions vs. Statements][es]. [es]: functions.html#expressions-vs-statements ```rust,ignore }).collect(); ``` Finally, we take the result of all those `map` calls and collect them up. `collect()` will make them into a collection of some kind, which is why we needed to annotate the return type: we want a `Vec`. The elements are the return values of the `thread::spawn` calls, which are handles to those threads. Whew! ```rust,ignore for h in handles { h.join().unwrap(); } ``` At the end of `main()`, we loop through the handles and call `join()` on them, which blocks execution until the thread has completed execution. This ensures that the threads complete their work before the program exits. If you run this program, you\u2019ll see that the philosophers eat out of order! We have multi-threading! ```text Judith Butler is eating. Gilles Deleuze is eating. 
Karl Marx is eating.
Emma Goldman is eating.
Michel Foucault is eating.
Judith Butler is done eating.
Gilles Deleuze is done eating.
Karl Marx is done eating.
Emma Goldman is done eating.
Michel Foucault is done eating.
```

But what about the forks? We haven’t modeled them at all yet. To do that, let’s make a new `struct`:

```rust
use std::sync::Mutex;

struct Table {
    forks: Vec<Mutex<()>>,
}
```

This `Table` has a vector of `Mutex`es. A mutex is a way to control concurrency: only one thread can access the contents at once. This is exactly the property we need with our forks. We use an empty tuple, `()`, inside the mutex, since we’re not actually going to use the value, just hold onto it.

Let’s modify the program to use the `Table`:

```rust
use std::thread;
use std::time::Duration;
use std::sync::{Mutex, Arc};

struct Philosopher {
    name: String,
    left: usize,
    right: usize,
}

impl Philosopher {
    fn new(name: &str, left: usize, right: usize) -> Philosopher {
        Philosopher {
            name: name.to_string(),
            left: left,
            right: right,
        }
    }

    fn eat(&self, table: &Table) {
        let _left = table.forks[self.left].lock().unwrap();
        thread::sleep(Duration::from_millis(150));
        let _right = table.forks[self.right].lock().unwrap();

        println!("{} is eating.", self.name);

        thread::sleep(Duration::from_millis(1000));

        println!("{} is done eating.", self.name);
    }
}

struct Table {
    forks: Vec<Mutex<()>>,
}

fn main() {
    let table = Arc::new(Table { forks: vec![
        Mutex::new(()),
        Mutex::new(()),
        Mutex::new(()),
        Mutex::new(()),
        Mutex::new(()),
    ]});

    let philosophers = vec![
        Philosopher::new("Judith Butler", 0, 1),
        Philosopher::new("Gilles Deleuze", 1, 2),
        Philosopher::new("Karl Marx", 2, 3),
        Philosopher::new("Emma Goldman", 3, 4),
        Philosopher::new("Michel Foucault", 0, 4),
    ];

    let handles: Vec<_> = philosophers.into_iter().map(|p| {
        let table = table.clone();

        thread::spawn(move || {
            p.eat(&table);
        })
    }).collect();

    for h in handles {
        h.join().unwrap();
    }
}
```

Lots of changes!
However, with this iteration, we\u2019ve got a working program. Let\u2019s go over the details: ```rust,ignore use std::sync::{Mutex, Arc}; ``` We\u2019re going to use another structure from the `std::sync` package: `Arc`. We\u2019ll talk more about it when we use it. ```rust,ignore struct Philosopher { name: String, left: usize, right: usize, } ``` We need to add two more fields to our `Philosopher`. Each philosopher is going to have two forks: the one on their left, and the one on their right. We\u2019ll use the `usize` type to indicate them, as it\u2019s the type that you index vectors with. These two values will be the indexes into the `forks` our `Table` has. ```rust,ignore fn new(name: &str, left: usize, right: usize) -> Philosopher { Philosopher { name: name.to_string(), left: left, right: right, } } ``` We now need to construct those `left` and `right` values, so we add them to `new()`. ```rust,ignore fn eat(&self, table: &Table) { let _left = table.forks[self.left].lock().unwrap(); thread::sleep(Duration::from_millis(150)); let _right = table.forks[self.right].lock().unwrap(); println!(\"{} is eating.\", self.name); thread::sleep(Duration::from_millis(1000)); println!(\"{} is done eating.\", self.name); } ``` We have three new lines. We\u2019ve added an argument, `table`. We access the `Table`\u2019s list of forks, and then use `self.left` and `self.right` to access the fork at that particular index. That gives us access to the `Mutex` at that index, and we call `lock()` on it. If the mutex is currently being accessed by someone else, we\u2019ll block until it becomes available. We have also a call to `thread::sleep` between the moment the first fork is picked and the moment the second forked is picked, as the process of picking up the fork is not immediate. The call to `lock()` might fail, and if it does, we want to crash. 
In this case, the error that could happen is that the mutex is [\u2018poisoned\u2019][poison], which is what happens when the thread panics while the lock is held. Since this shouldn\u2019t happen, we just use `unwrap()`. [poison]: ../std/sync/struct.Mutex.html#poisoning One other odd thing about these lines: we\u2019ve named the results `_left` and `_right`. What\u2019s up with that underscore? Well, we aren\u2019t planning on _using_ the value inside the lock. We just want to acquire it. As such, Rust will warn us that we never use the value. By using the underscore, we tell Rust that this is what we intended, and it won\u2019t throw a warning. What about releasing the lock? Well, that will happen when `_left` and `_right` go out of scope, automatically. ```rust,ignore let table = Arc::new(Table { forks: vec![ Mutex::new(()), Mutex::new(()), Mutex::new(()), Mutex::new(()), Mutex::new(()), ]}); ``` Next, in `main()`, we make a new `Table` and wrap it in an `Arc`. \u2018arc\u2019 stands for \u2018atomic reference count\u2019, and we need that to share our `Table` across multiple threads. As we share it, the reference count will go up, and when each thread ends, it will go back down. ```rust,ignore let philosophers = vec![ Philosopher::new(\"Judith Butler\", 0, 1), Philosopher::new(\"Gilles Deleuze\", 1, 2), Philosopher::new(\"Karl Marx\", 2, 3), Philosopher::new(\"Emma Goldman\", 3, 4), Philosopher::new(\"Michel Foucault\", 0, 4), ]; ``` We need to pass in our `left` and `right` values to the constructors for our `Philosopher`s. But there\u2019s one more detail here, and it\u2019s _very_ important. If you look at the pattern, it\u2019s all consistent until the very end. Monsieur Foucault should have `4, 0` as arguments, but instead, has `0, 4`. This is what prevents deadlock, actually: one of our philosophers is left handed! This is one way to solve the problem, and in my opinion, it\u2019s the simplest. 
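The left-handed philosopher works because it breaks the circular wait: the forks are no longer all acquired in the same rotational order. A stripped-down sketch of the same idea with just two locks (illustrative code, not part of the chapter's program) shows that if every thread agrees on one global acquisition order, deadlock becomes impossible:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let fork_a = Arc::new(Mutex::new(()));
    let fork_b = Arc::new(Mutex::new(()));

    // Both threads agree on a global order: fork A, then fork B.
    // Neither thread ever holds B while waiting for A, so a
    // circular wait (and therefore a deadlock) cannot form.
    let (a, b) = (fork_a.clone(), fork_b.clone());
    let t = thread::spawn(move || {
        let _first = a.lock().unwrap();
        let _second = b.lock().unwrap();
    });

    {
        let _first = fork_a.lock().unwrap();
        let _second = fork_b.lock().unwrap();
    } // both guards dropped here, releasing the locks

    t.join().unwrap();
    println!("both threads locked A then B without deadlocking");
}
```

If one thread instead locked B first, the two threads could each grab one fork and wait forever on the other.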
If you change the order of the parameters, you will be able to observe the deadlock taking place.

```rust,ignore
let handles: Vec<_> = philosophers.into_iter().map(|p| {
    let table = table.clone();

    thread::spawn(move || {
        p.eat(&table);
    })
}).collect();
```

Finally, inside of our `map()`/`collect()` loop, we call `table.clone()`. The `clone()` method on `Arc<T>` is what bumps up the reference count, and when it goes out of scope, it decrements the count. This is needed so that we know how many references to `table` exist across our threads. If we didn’t have a count, we wouldn’t know how to deallocate it.

You’ll notice we can introduce a new binding to `table` here, and it will shadow the old one. This is often used so that you don’t need to come up with two unique names.

With this, our program works! Only two philosophers can eat at any one time, and so you’ll get some output like this:

```text
Gilles Deleuze is eating.
Emma Goldman is eating.
Emma Goldman is done eating.
Gilles Deleuze is done eating.
Judith Butler is eating.
Karl Marx is eating.
Judith Butler is done eating.
Michel Foucault is eating.
Karl Marx is done eating.
Michel Foucault is done eating.
```

Congrats! You’ve implemented a classic concurrency problem in Rust.
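As a final aside on `Arc`: you can watch the count that `clone()` bumps and `drop` decrements with `Arc::strong_count`. A small sketch, separate from the chapter's program:

```rust
use std::sync::Arc;

fn main() {
    let table = Arc::new(vec![1, 2, 3]);
    assert_eq!(Arc::strong_count(&table), 1);

    let handle = table.clone(); // cloning bumps the reference count
    assert_eq!(Arc::strong_count(&table), 2);

    drop(handle); // dropping the clone decrements it again
    assert_eq!(Arc::strong_count(&table), 1);

    println!("final count: {}", Arc::strong_count(&table));
}
```

When the count reaches zero, the inner value is deallocated, which is exactly the bookkeeping described above.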
% Guessing Game

Let’s learn some Rust! For our first project, we’ll implement a classic beginner programming problem: the guessing game. Here’s how it works: Our program will generate a random integer between one and a hundred. It will then prompt us to enter a guess. Upon entering our guess, it will tell us if we’re too low or too high. Once we guess correctly, it will congratulate us. Sounds good? Along the way, we’ll learn a little bit about Rust. The next section, ‘Syntax and Semantics’, will dive deeper into each part.

# Set up
% Rust Inside Other Languages

For our third project, we’re going to choose something that shows off one of Rust’s greatest strengths: a lack of a substantial runtime.

As organizations grow, they increasingly rely on a multitude of programming languages.
Different programming languages have different strengths and weaknesses, and a polyglot stack lets you use a particular language where its strengths make sense and a different one where it\u2019s weak. A very common area where many programming languages are weak is in runtime performance of programs. Often, using a language that is slower, but offers greater programmer productivity, is a worthwhile trade-off. To help mitigate this, they provide a way to write some of your system in C and then call that C code as though it were written in the higher-level language. This is called a \u2018foreign function interface\u2019, often shortened to \u2018FFI\u2019. Rust has support for FFI in both directions: it can call into C code easily, but crucially, it can also be called _into_ as easily as C. Combined with Rust\u2019s lack of a garbage collector and low runtime requirements, this makes Rust a great candidate to embed inside of other languages when you need that extra oomph. There is a whole [chapter devoted to FFI][ffi] and its specifics elsewhere in the book, but in this chapter, we\u2019ll examine this particular use-case of FFI, with examples in Ruby, Python, and JavaScript. [ffi]: ffi.html # The problem There are many different projects we could choose here, but we\u2019re going to pick an example where Rust has a clear advantage over many other languages: numeric computing and threading. Many languages, for the sake of consistency, place numbers on the heap, rather than on the stack. Especially in languages that focus on object-oriented programming and use garbage collection, heap allocation is the default. Sometimes optimizations can stack allocate particular numbers, but rather than relying on an optimizer to do its job, we may want to ensure that we\u2019re always using primitive number types rather than some sort of object type. Second, many languages have a \u2018global interpreter lock\u2019 (GIL), which limits concurrency in many situations. 
This is done in the name of safety, which is a positive effect, but it limits the amount of work that can be done at the same time, which is a big negative. To emphasize these two aspects, we\u2019re going to create a little project that uses these two aspects heavily. Since the focus of the example is to embed Rust into other languages, rather than the problem itself, we\u2019ll just use a toy example: > Start ten threads. Inside each thread, count from one to five million. After > all ten threads are finished, print out \u2018done!\u2019. I chose five million based on my particular computer. Here\u2019s an example of this code in Ruby: ```ruby threads = [] 10.times do threads << Thread.new do count = 0 5_000_000.times do count += 1 end count end end threads.each do |t| puts \"Thread finished with count=#{t.value}\" end puts \"done!\" ``` Try running this example, and choose a number that runs for a few seconds. Depending on your computer\u2019s hardware, you may have to increase or decrease the number. On my system, running this program takes `2.156` seconds. And, if I use some sort of process monitoring tool, like `top`, I can see that it only uses one core on my machine. That\u2019s the GIL kicking in. While it\u2019s true that this is a synthetic program, one can imagine many problems that are similar to this in the real world. For our purposes, spinning up a few busy threads represents some sort of parallel, expensive computation. # A Rust library Let\u2019s rewrite this problem in Rust. 
First, let\u2019s make a new project with Cargo: ```bash $ cargo new embed $ cd embed ``` This program is fairly easy to write in Rust: ```rust use std::thread; fn process() { let handles: Vec<_> = (0..10).map(|_| { thread::spawn(|| { let mut x = 0; for _ in 0..5_000_000 { x += 1 } x }) }).collect(); for h in handles { println!(\"Thread finished with count={}\", h.join().map_err(|_| \"Could not join a thread!\").unwrap()); } } ``` Some of this should look familiar from previous examples. We spin up ten threads, collecting them into a `handles` vector. Inside of each thread, we loop five million times, and add one to `x` each time. Finally, we join on each thread. Right now, however, this is a Rust library, and it doesn\u2019t expose anything that\u2019s callable from C. If we tried to hook this up to another language right now, it wouldn\u2019t work. We only need to make two small changes to fix this, though. The first is to modify the beginning of our code: ```rust,ignore #[no_mangle] pub extern fn process() { ``` We have to add a new attribute, `no_mangle`. When you create a Rust library, it changes the name of the function in the compiled output. The reasons for this are outside the scope of this tutorial, but in order for other languages to know how to call the function, we can\u2019t do that. This attribute turns that behavior off. The other change is the `pub extern`. The `pub` means that this function should be callable from outside of this module, and the `extern` says that it should be able to be called from C. That\u2019s it! Not a whole lot of change. The second thing we need to do is to change a setting in our `Cargo.toml`. Add this at the bottom: ```toml [lib] name = \"embed\" crate-type = [\"dylib\"] ``` This tells Rust that we want to compile our library into a standard dynamic library. By default, Rust compiles an \u2018rlib\u2019, a Rust-specific format. 
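As an aside, exported functions can also take and return C-compatible values across this boundary. A hedged sketch (the `add` function below is illustrative, not part of this chapter's project):

```rust
// `#[no_mangle]` keeps the symbol name `add` intact in the compiled
// library, and `extern "C"` gives the function the C calling
// convention, so foreign callers (C, Ruby's FFI gem, Python's
// ctypes) can invoke it directly once the crate is built as a
// dynamic library.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // On the Rust side it is still an ordinary function.
    assert_eq!(add(2, 3), 5);
    println!("add(2, 3) = {}", add(2, 3));
}
```

From Python, for example, such a symbol could be called as `lib.add(2, 3)` after loading the library with `ctypes`.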
Let’s build the project now:

```bash
$ cargo build --release
   Compiling embed v0.1.0 (file:///home/steve/src/embed)
```

We’ve chosen `cargo build --release`, which builds with optimizations on. We
want this to be as fast as possible! You can find the output of the library in
`target/release`:

```bash
$ ls target/release/
build  deps  examples  libembed.so  native
```

That `libembed.so` is our ‘shared object’ library. We can use this file just
like any shared object library written in C! As an aside, this may be
`embed.dll` (Microsoft Windows) or `libembed.dylib` (Mac OS X), depending on
your operating system.

Now that we’ve got our Rust library built, let’s use it from our Ruby.

# Ruby

Open up an `embed.rb` file inside of our project, and do this:

```ruby
require 'ffi'

module Hello
  extend FFI::Library
  ffi_lib 'target/release/libembed.so'
  attach_function :process, [], :void
end

Hello.process

puts 'done!'
```

Before we can run this, we need to install the `ffi` gem:

```bash
$ gem install ffi # this may need sudo
Fetching: ffi-1.9.8.gem (100%)
Building native extensions.  This could take a while...
Successfully installed ffi-1.9.8
Parsing documentation for ffi-1.9.8
Installing ri documentation for ffi-1.9.8
Done installing documentation for ffi after 0 seconds
1 gem installed
```

And finally, we can try running it:

```bash
$ ruby embed.rb
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
Thread finished with count=5000000
done!
done!
$
```

Whoa, that was fast! On my system, this took `0.086` seconds, rather than the
two seconds the pure Ruby version took. Let’s break down this Ruby code:

```ruby
require 'ffi'
```

We first need to require the `ffi` gem.
This lets us interface with our Rust library like a C library.

```ruby
module Hello
  extend FFI::Library
  ffi_lib 'target/release/libembed.so'
```

The `Hello` module is used to attach the native functions from the shared
library. Inside, we `extend` the necessary `FFI::Library` module and then call
`ffi_lib` to load up our shared object library. We just pass it the path that
our library is stored at, which, as we saw before, is
`target/release/libembed.so`.

```ruby
attach_function :process, [], :void
```

The `attach_function` method is provided by the FFI gem. It’s what connects
our `process()` function in Rust to a Ruby function of the same name. Since
`process()` takes no arguments, the second parameter is an empty array, and
since it returns nothing, we pass `:void` as the final argument.

```ruby
Hello.process
```

This is the actual call into Rust. The combination of our `module` and the
call to `attach_function` sets this all up. It looks like a Ruby function but
is actually Rust!

```ruby
puts 'done!'
```

Finally, as per our project’s requirements, we print out `done!`.

That’s it! As we’ve seen, bridging between the two languages is really easy,
and buys us a lot of performance.

Next, let’s try Python!

# Python

Create an `embed.py` file in this directory, and put this in it:

```python
from ctypes import cdll

lib = cdll.LoadLibrary("target/release/libembed.so")

lib.process()

print("done!")
```

Even easier! We use `cdll` from the `ctypes` module. A quick call to
`LoadLibrary` later, and we can call `process()`.

On my system, this takes `0.017` seconds. Speedy!

# Node.js

Node isn’t a language, but it’s currently the dominant implementation of
server-side JavaScript.
In order to do FFI with Node, we first need to install the library:

```bash
$ npm install ffi
```

After that installs, we can use it:

```javascript
var ffi = require('ffi');

var lib = ffi.Library('target/release/libembed', {
  'process': ['void', []]
});

lib.process();

console.log("done!");
```

It looks more like the Ruby example than the Python example. We use the `ffi`
module to get access to `ffi.Library()`, which loads up our shared object. We
need to annotate the return type and argument types of the function, which
are `void` for return and an empty array to signify no arguments. From there,
we just call it and print the result.

On my system, this takes a quick `0.092` seconds.

# Conclusion

As you can see, the basics of doing this are _very_ easy. Of course, there's a
lot more that we could do here. Check out the [FFI][ffi] chapter for more
details.
", "commid": "rust_pr_30595"}], "negative_passages": []}
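Before wiring the library into another language, the exported function can be exercised from Rust itself. A minimal, runnable sketch of the same thread-spawning logic (the loop count is reduced here so it finishes instantly, and the chapter's `#[no_mangle]` attribute is noted in a comment rather than applied, to keep the sketch self-contained):

```rust
use std::thread;

// Spawn `n` threads that each count up to `limit`, then sum what they return.
fn run_threads(n: u32, limit: u64) -> u64 {
    let handles: Vec<_> = (0..n)
        .map(|_| {
            thread::spawn(move || {
                let mut x: u64 = 0;
                for _ in 0..limit {
                    x += 1;
                }
                x
            })
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

// Same shape as the chapter's exported function; in the real library this
// would also carry `#[no_mangle]` so the symbol keeps the name `process`,
// and the `extern "C"` calling convention is what the Ruby/Python/Node
// bindings rely on.
pub extern "C" fn process() {
    // Reduced from 5_000_000 so the sketch runs quickly.
    run_threads(10, 5_000);
}

fn main() {
    process();
    println!("done!");
}
```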
{"query_id": "q-en-rust-b42b6ef93bee4d7180bb7d77309475fb36bc4cc78a287c17c9828feb5e3fa4cb", "query": "On current nightly, I get: () Yet the description for this error was merged weeks ago:\ncc , in the future when adding new diagnostics mods be sure to add a call", "positive_passages": [{"docid": "doc-en-rust-f4d6ec07215a1a32663b89912b6987b6ce425def3d306872ad55a6423dedac7f", "text": "``` ptr::read(&v as *const _ as *const SomeType) // `v` transmuted to `SomeType` ```  Note that this does not move `v` (unlike `transmute`), and may need a call to `mem::forget(v)` in case you want to avoid destructors being called.  \"##, E0152: r##\"", "commid": "rust_pr_29980"}], "negative_passages": []}
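The `ptr::read` idiom quoted in that passage can be demonstrated with a concrete pair of same-sized types; the `read_as_i32` helper is a hypothetical stand-in (not from the error index) for the generic `SomeType` cast:

```rust
use std::{mem, ptr};

// Reinterpret the bytes of a `u32` as an `i32` without moving the source —
// the `ptr::read` pattern described in the error-index text above.
fn read_as_i32(v: &u32) -> i32 {
    unsafe { ptr::read(v as *const u32 as *const i32) }
}

fn main() {
    let v: u32 = 42;
    let w = read_as_i32(&v);
    assert_eq!(w, 42);
    // `u32` has no destructor, so this `forget` is a no-op here, but for a
    // type with a `Drop` impl it is what prevents the original from being
    // dropped after its bytes were already read out by `ptr::read`.
    mem::forget(v);
    println!("{}", w);
}
```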
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-6636f990a39255ba74688b8fb175988ea52afdbab1647bcbf476ad7d73749fdf", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl Ord for Ipv4Addr { fn cmp(&self, other: &Ipv4Addr) -> Ordering {  self.inner.s_addr.cmp(&other.inner.s_addr)   self.octets().cmp(&other.octets())  } }", "commid": "rust_pr_29724"}], "negative_passages": []}
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-ae57fb92b8300ccdde4af82f61abb84f8d987019d221aa0f9515fbe51c9272bf", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] impl Ord for Ipv6Addr { fn cmp(&self, other: &Ipv6Addr) -> Ordering {  self.inner.s6_addr.cmp(&other.inner.s6_addr)   self.segments().cmp(&other.segments())  } }", "commid": "rust_pr_29724"}], "negative_passages": []}
{"query_id": "q-en-rust-e2a0c87d80579f0269081877ec7f7a2629dabf28513179d76c0e1d76683fc4cd", "query": "Sorting Ipv4Addrs results in unexpected ordering. Ordering is based on the internal representation of the Ipv4Addr without regard to \"network order\". I tried this code on little-endian architecture: I expected to see this happen: Instead, this happened: Rust playground demonstration:\nIpv6Addr has a similar issue: outputs:", "positive_passages": [{"docid": "doc-en-rust-21c02b5cc2cd4d8c1ba6360d77af994dac635dd907712c91e935a84f09769917", "text": "let a = Ipv4Addr::new(127, 0, 0, 1); assert_eq!(Ipv4Addr::from(2130706433), a); }  #[test] fn ord() { assert!(Ipv4Addr::new(100, 64, 3, 3) < Ipv4Addr::new(192, 0, 2, 2)); assert!(\"2001:db8:f00::1002\".parse::().unwrap() < \"2001:db8:f00::2001\".parse::().unwrap()); }  }", "commid": "rust_pr_29724"}], "negative_passages": []}
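The fix in these records switches `Ord` from comparing the host-order internal integer to comparing the big-endian `octets()`/`segments()` bytes, so sorting yields numeric (network-order) results on any endianness. A small check — the `sorted` helper is just illustrative scaffolding:

```rust
use std::net::Ipv4Addr;

// Sort a few addresses; with byte-wise (`octets`) comparison the result is
// numeric order regardless of the machine's endianness.
fn sorted(mut addrs: Vec<Ipv4Addr>) -> Vec<Ipv4Addr> {
    addrs.sort();
    addrs
}

fn main() {
    let addrs = sorted(vec![
        Ipv4Addr::new(192, 0, 2, 2),
        Ipv4Addr::new(100, 64, 3, 3),
        Ipv4Addr::new(192, 0, 2, 1),
    ]);
    // 100.64.3.3 sorts first, as the issue expected.
    assert_eq!(addrs[0], Ipv4Addr::new(100, 64, 3, 3));
    assert!(Ipv4Addr::new(100, 64, 3, 3) < Ipv4Addr::new(192, 0, 2, 2));
    println!("{:?}", addrs);
}
```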
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-35a7083761c329b8dd635ddd44271be2f6324524f952444696f07fffd2357c3d", "text": " // Regression test for #29988 // compile-flags: -C no-prepopulate-passes // only-x86_64 // ignore-windows #[repr(C)] struct S { f1: i32, f2: i32, f3: i32, } extern { fn foo(s: S); } fn main() { let s = S { f1: 1, f2: 2, f3: 3 }; unsafe { // CHECK: load { i64, i32 }, { i64, i32 }* {{.*}}, align 4 // CHECK: call void @foo({ i64, i32 } {{.*}}) foo(s); } } ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-698c3311aa3dbf2d1b8427d5e0c10ba4e04ac276670370dbd5aaae7dd31d41a9", "text": " enum Bug { Var = { let x: S = 0; //~ ERROR: mismatched types 0 }, } fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-c63951d94861c953072be302539c9e4563b619eb81d8e12ec352648e834e6a4f", "text": " error[E0308]: mismatched types --> $DIR/issue-67945-1.rs:3:20 | LL | enum Bug { |          - this type parameter LL |     Var = { LL |         let x: S = 0; |                -   ^ expected type parameter `S`, found integer |                | |                expected due to this | = note: expected type parameter `S` found type `{integer}` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-f652d675ec8f5fcfd73aa32554f64e24f3a2e41e5201ba93acb1cdb2977965e1", "text": " #![feature(type_ascription)] enum Bug { Var = 0: S, //~^ ERROR: mismatched types //~| ERROR: mismatched types } fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-2c693447b59b202cb8bcd7918ac10104329e775c2027f37598f5cd44b5935973", "text": " error[E0308]: mismatched types --> $DIR/issue-67945-2.rs:4:11 | LL | enum Bug { |          - this type parameter LL |     Var = 0: S, |           ^ expected type parameter `S`, found integer | = note: expected type parameter `S` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-67945-2.rs:4:11 | LL | enum Bug { |          - this type parameter LL |     Var = 0: S, |           ^^^^ expected `isize`, found type parameter `S` | = note:        expected type `isize` found type parameter `S` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-bb986ae4763f2ed124451eb7b166e2d5f84f7a021234a61a9b2038d442dab406", "text": " trait Foo {} impl<'a, T> Foo for &'a T {} struct Ctx<'a>(&'a ()) where &'a (): Foo, //~ ERROR: type annotations needed &'static (): Foo; fn main() {} ", "commid": "rust_pr_71952"}], "negative_passages": []}
{"query_id": "q-en-rust-c06e64e4379da7896ed0629280583c22a3a67c292ea07c43958847cb5d464ea3", "query": "Given this Rust code: A call to generates IR like this on x86_64 Linux: So we have a 12 byte alloca from which we then read 16 bytes.\nIsn't this subject to alignment by LLVM?\nwhat do you mean? The alloca is only 12 bytes large. That's also the amount of bytes we write in the , but then we ask to read data for a type that is 16 bytes large. I don't see how alignment is relevant here.\nhere's a demo: Build without optimizations on x86_64 Linux, and you should be able to observe in the C code, although it hasn't been passed as an argument. The mismatch in the struct type definitions is deliberate to show the problem.\nReading past the end of an alloca has well-defined behavior in LLVM IR. I guess we might end up with false positives if we ever support msan.\nare you implying that the generated IR is fine? If so, what is the defined behaviour / where can I read up on that? reads as if it is undefined. Without optimizations, this gives random output: With optimizations, SROA ignores the out-of-bounds read and emits a warning in debug mode, which doesn't seem like it is well-defined behaviour to me either.\nBasically, if a pointer points to valid memory, and is sufficiently aligned, you can load bytes up to the alignment... LLVM itself does this at etc. That said, looking a bit more closely, the given testcase performs an \"align 8\" load from an \"align 4\" pointer, which is more problematic.\nThanks! That looks like it just assumes that the load won't trap. Part of my concern here is that we might expose additional data this way, like observing in that C code example. Though I guess optimizations might eventually leave the upper 32bits undefined and expose data anyway, so this might be moot. The alignment problem is probably due to bad usage of . 
Ultimately, we should load and pass the struct elements individually anyway.\nThis has been fixed, we now only load 12 bytes: instead of . Marking as E-needstest.", "positive_passages": [{"docid": "doc-en-rust-ce30cd927de832b35372c5bdbb7413ecfee46e8e5e0e7e00fe7f022990f3cfb4", "text": " error[E0283]: type annotations needed --> $DIR/issue-34979.rs:6:13 | LL | trait Foo {} | --------- required by this bound in `Foo` ... LL |     &'a (): Foo, |             ^^^ cannot infer type for reference `&'a ()` | = note: cannot satisfy `&'a (): Foo` error: aborting due to previous error For more information about this error, try `rustc --explain E0283`. ", "commid": "rust_pr_71952"}], "negative_passages": []}
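To make the over-read in that issue concrete: the regression test's `S` occupies 12 bytes at 4-byte alignment, while the `{ i64, i32 }` value the old IR loaded needs 16, so the load ran 4 bytes past the alloca. The layouts are easy to verify; `Pair` here is a hypothetical stand-in for the IR's `{ i64, i32 }` aggregate:

```rust
use std::mem::{align_of, size_of};

// The struct from the regression test: three `i32` fields, C layout.
#[repr(C)]
struct S {
    f1: i32,
    f2: i32,
    f3: i32,
}

// The `{ i64, i32 }` shape the old IR loaded: 8 + 4 bytes of data,
// plus 4 bytes of tail padding to reach 8-byte alignment.
#[repr(C)]
struct Pair {
    a: i64,
    b: i32,
}

fn main() {
    // `S` is 12 bytes with 4-byte alignment...
    assert_eq!(size_of::<S>(), 12);
    assert_eq!(align_of::<S>(), 4);
    // ...while a 16-byte `{ i64, i32 }` load over-reads that alloca by 4.
    assert_eq!(size_of::<Pair>(), 16);
}
```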
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec // Non-resizing write implementation fn slice_write(pos_mut: &mut u64, slice: &mut [u8], buf: &[u8]) -> io::Result { let pos = cmp::min(*pos_mut, slice.len() as u64); let amt = (&mut slice[(pos as usize)..]).write(buf)?; *pos_mut += amt as u64; Ok(amt) } // Resizing write implementation fn vec_write(pos_mut: &mut u64, vec: &mut Vec, buf: &[u8]) -> io::Result { let pos: usize = (*pos_mut).try_into().map_err(|_| { Error::new(ErrorKind::InvalidInput, \"cursor position exceeds maximum possible vector length\") })?; // Make sure the internal buffer is as least as big as where we // currently are let len = vec.len(); if len < pos { // use `resize` so that the zero filling is as efficient as possible vec.resize(pos, 0); } // Figure out what bytes will be used to overwrite what's currently // there (left), and what will be appended on the end (right) { let space = vec.len() - pos; let (left, right) = buf.split_at(cmp::min(space, buf.len())); vec[pos..pos + left.len()].copy_from_slice(left); vec.extend_from_slice(right); } // Bump us forward *pos_mut = (pos + buf.len()) as u64; Ok(buf.len()) }  #[stable(feature = \"rust1\", since = \"1.0.0\")] impl<'a> Write for Cursor<&'a mut [u8]> { #[inline]  fn write(&mut self, data: &[u8]) -> io::Result { let pos = cmp::min(self.pos, self.inner.len() as u64); let amt = (&mut self.inner[(pos as usize)..]).write(data)?; self.pos += amt as u64; Ok(amt)   fn write(&mut self, buf: &[u8]) -> io::Result { slice_write(&mut self.pos, self.inner, buf) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } #[unstable(feature = \"cursor_mut_vec\", issue = \"30132\")] impl<'a> Write for Cursor<&'a mut Vec> { fn write(&mut self, buf: &[u8]) -> io::Result { vec_write(&mut self.pos, 
self.inner, buf)  } fn flush(&mut self) -> io::Result<()> { Ok(()) } }", "commid": "rust_pr_46830"}], "negative_passages": []}
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec> { fn write(&mut self, buf: &[u8]) -> io::Result {  let pos: usize = self.position().try_into().map_err(|_| { Error::new(ErrorKind::InvalidInput, \"cursor position exceeds maximum possible vector length\") })?; // Make sure the internal buffer is as least as big as where we // currently are let len = self.inner.len(); if len < pos { // use `resize` so that the zero filling is as efficient as possible self.inner.resize(pos, 0); } // Figure out what bytes will be used to overwrite what's currently // there (left), and what will be appended on the end (right) { let space = self.inner.len() - pos; let (left, right) = buf.split_at(cmp::min(space, buf.len())); self.inner[pos..pos + left.len()].copy_from_slice(left); self.inner.extend_from_slice(right); } // Bump us forward self.set_position((pos + buf.len()) as u64); Ok(buf.len())   vec_write(&mut self.pos, &mut self.inner, buf)  } fn flush(&mut self) -> io::Result<()> { Ok(()) } }", "commid": "rust_pr_46830"}], "negative_passages": []}
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec> { #[inline] fn write(&mut self, buf: &[u8]) -> io::Result {  let pos = cmp::min(self.pos, self.inner.len() as u64); let amt = (&mut self.inner[(pos as usize)..]).write(buf)?; self.pos += amt as u64; Ok(amt)   slice_write(&mut self.pos, &mut self.inner, buf)  } fn flush(&mut self) -> io::Result<()> { Ok(()) } }", "commid": "rust_pr_46830"}], "negative_passages": []}
{"query_id": "q-en-rust-a289d76f0e4e79e26fef39dbf074bc52ac4314129c2350795d7488bc7ed8c20e", "query": "It is currently impossible to use a std::io::Cursor over a Vec in a smart pointer (Mutex, RefCell, ...) in a sane way. Please implement Read/Write/... over Cursor<&'a mut Vec fn test_mem_mut_writer() { let mut vec = Vec::new(); let mut writer = Cursor::new(&mut vec); assert_eq!(writer.write(&[0]).unwrap(), 1); assert_eq!(writer.write(&[1, 2, 3]).unwrap(), 3); assert_eq!(writer.write(&[4, 5, 6, 7]).unwrap(), 4); let b: &[_] = &[0, 1, 2, 3, 4, 5, 6, 7]; assert_eq!(&writer.get_ref()[..], b); } #[test]  fn test_box_slice_writer() { let mut writer = Cursor::new(vec![0u8; 9].into_boxed_slice()); assert_eq!(writer.position(), 0);", "commid": "rust_pr_46830"}], "negative_passages": []}
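With the `vec_write` implementation from these records (since stabilized), a `Cursor<&mut Vec<u8>>` writes through the borrow, growing and zero-filling the vector as needed — which is exactly what makes a cursor usable behind a `Mutex` or `RefCell` without giving up ownership of the buffer. A standalone sketch:

```rust
use std::io::{Cursor, Seek, SeekFrom, Write};

// Write through a mutable borrow of a Vec; the cursor grows the vector and
// zero-fills any gap created by seeking past the end (the `vec_write` resize).
fn write_through_borrow() -> Vec<u8> {
    let mut vec = Vec::new();
    {
        let mut writer = Cursor::new(&mut vec);
        writer.write_all(&[0]).unwrap();
        writer.write_all(&[1, 2, 3]).unwrap();
        writer.seek(SeekFrom::Start(6)).unwrap();
        writer.write_all(&[7]).unwrap();
    } // borrow ends here; the caller still owns `vec`
    vec
}

fn main() {
    // Positions 4 and 5 were zero-filled by the seek past the end.
    assert_eq!(write_through_borrow(), vec![0, 1, 2, 3, 0, 0, 7]);
}
```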
{"query_id": "q-en-rust-81010324f12c35cfacd1e252e9c003ab5a26a7cb052a679d3eeca2e6142eda06", "query": "When a doctest like this is written: /// /// running the tests will produce an error like: since the test code is wrapped inside a for execution. This is imho not clear in this algorithm description\nThis seems like a bug somewhere in rustc/cargo, rather than a problem with the docs, because rust /// works fine, but (as was mentioned here) rust /// does not. This seems like wrong behaviour. Also, the suggested rust /// does not work either, I don't know if there is an RFC or issue open about attempting to resolve the names suggested under ... and only showing them if they exist. EDIT: as mentioned below, this is not a bug with rustc or cargo, it's just that there is no way to refer to the current function scope in a use statement if you import a crate into it.\nit's not wrong but it is very misleading. The key is that if does not appear in the doc test, then rustdoc encloses the entire test in a main function. This, coupled with the fact that does not really work inside a function, produces the bad situation in which we find ourselves.\nRight, it's the 'entire' part that's confusing. I myself find it to be so and I wrote the sentence!\nIt would be nice if rustdoc used similar heuristics to playbot, where stuff like extern crates and crate attributes get hoisted out of the generated main function.\nWhy is extern crate even allowed inside of functions? That seems like a horrible misfeature.\nIt's similar to anything else that brings a name into scope, you can do it in whatever scope you'd like.\nI guess the \"real\" problem is that has no way to refer to a function/expression scope. doesn't work, making the \"did you mean\" note less than helpful.", "positive_passages": [{"docid": "doc-en-rust-f9f432b28713f634fb92606023a0fb35fde84c10e4187821fada48fccaee8957", "text": "``` You'll notice that you don't need a `fn main()` or anything here. 
`rustdoc` will  automatically add a `main()` wrapper around your code, and in the right place. For example:   automatically add a `main()` wrapper around your code, using heuristics to attempt to put it in the right place. For example:  ```rust /// ```", "commid": "rust_pr_30153"}], "negative_passages": []}
{"query_id": "q-en-rust-81010324f12c35cfacd1e252e9c003ab5a26a7cb052a679d3eeca2e6142eda06", "query": "When a doctest like this is written: /// /// running the tests will produce an error like: since the test code is wrapped inside a for execution. This is imho not clear in this algorithm description\nThis seems like a bug somewhere in rustc/cargo, rather than a problem with the docs, because rust /// works fine, but (as was mentioned here) rust /// does not. This seems like wrong behaviour. Also, the suggested rust /// does not work either, I don't know if there is an RFC or issue open about attempting to resolve the names suggested under ... and only showing them if they exist. EDIT: as mentioned below, this is not a bug with rustc or cargo, it's just that there is no way to refer to the current function scope in a use statement if you import a crate into it.\nit's not wrong but it is very misleading. The key is that if does not appear in the doc test, then rustdoc encloses the entire test in a main function. This, coupled with the fact that does not really work inside a function, produces the bad situation in which we find ourselves.\nRight, it's the 'entire' part that's confusing. I myself find it to be so and I wrote the sentence!\nIt would be nice if rustdoc used similar heuristics to playbot, where stuff like extern crates and crate attributes get hoisted out of the generated main function.\nWhy is extern crate even allowed inside of functions? That seems like a horrible misfeature.\nIt's similar to anything else that brings a name into scope, you can do it in whatever scope you'd like.\nI guess the \"real\" problem is that has no way to refer to a function/expression scope. doesn't work, making the \"did you mean\" note less than helpful.", "positive_passages": [{"docid": "doc-en-rust-d6839759505fb3c5f6f84fa7073980019ebf094e7de9fdb99a97595c28048b2e", "text": "`unused_attributes`, and `dead_code`. Small examples often trigger these lints. 3. 
If the example does not contain `extern crate`, then `extern crate  ;` is inserted. 2. Finally, if the example does not contain `fn main`, the remainder of the text is wrapped in `fn main() { your_code }` Sometimes, this isn't enough, though. For example, all of these code samples   ;` is inserted (note the lack of `#[macro_use]`). 4. Finally, if the example does not contain `fn main`, the remainder of the text is wrapped in `fn main() { your_code }`. This generated `fn main` can be a problem! If you have `extern crate` or a `mod` statements in the example code that are referred to by `use` statements, they will fail to resolve unless you include at least `fn main() {}` to inhibit step 4. `#[macro_use] extern crate` also does not work except at the crate root, so when testing macros an explicit `main` is always required. It doesn't have to clutter up your docs, though -- keep reading! Sometimes this algorithm isn't enough, though. For example, all of these code samples  with `///` we've been talking about? The raw text: ```text", "commid": "rust_pr_30153"}], "negative_passages": []}
{"query_id": "q-en-rust-81010324f12c35cfacd1e252e9c003ab5a26a7cb052a679d3eeca2e6142eda06", "query": "When a doctest like this is written: /// /// running the tests will produce an error like: since the test code is wrapped inside a for execution. This is imho not clear in this algorithm description\nThis seems like a bug somewhere in rustc/cargo, rather than a problem with the docs, because rust /// works fine, but (as was mentioned here) rust /// does not. This seems like wrong behaviour. Also, the suggested rust /// does not work either, I don't know if there is an RFC or issue open about attempting to resolve the names suggested under ... and only showing them if they exist. EDIT: as mentioned below, this is not a bug with rustc or cargo, it's just that there is no way to refer to the current function scope in a use statement if you import a crate into it.\nit's not wrong but it is very misleading. The key is that if does not appear in the doc test, then rustdoc encloses the entire test in a main function. This, coupled with the fact that does not really work inside a function, produces the bad situation in which we find ourselves.\nRight, it's the 'entire' part that's confusing. I myself find it to be so and I wrote the sentence!\nIt would be nice if rustdoc used similar heuristics to playbot, where stuff like extern crates and crate attributes get hoisted out of the generated main function.\nWhy is extern crate even allowed inside of functions? That seems like a horrible misfeature.\nIt's similar to anything else that brings a name into scope, you can do it in whatever scope you'd like.\nI guess the \"real\" problem is that has no way to refer to a function/expression scope. 
doesn't work, making the \"did you mean\" note less than helpful.", "positive_passages": [{"docid": "doc-en-rust-203ec5b5c7ed82d71572d3ba01c1f5bb447afa29d9505d2583825b03a17fa6d0", "text": "You\u2019ll note three things: we need to add our own `extern crate` line, so that we can add the `#[macro_use]` attribute. Second, we\u2019ll need to add our own  `main()` as well. Finally, a judicious use of `#` to comment out those two things, so they don\u2019t show up in the output.   `main()` as well (for reasons discussed above). Finally, a judicious use of `#` to comment out those two things, so they don\u2019t show up in the output.  Another case where the use of `#` is handy is when you want to ignore error handling. Lets say you want the following,", "commid": "rust_pr_30153"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-f204fc99f2aa6379497b74ec81e4ca3d611d662dd977591485e5369e5d2cd5b0", "text": "// the merge has confused the heck out of josh in the past. // We pass `--no-verify` to avoid running git hooks like `./miri fmt` that could in turn // trigger auto-actions.  sh.write_file(\"rust-version\", &commit)?;   sh.write_file(\"rust-version\", format!(\"{commit}n\"))?;  const PREPARING_COMMIT_MESSAGE: &str = \"Preparing for merge from rustc\"; cmd!(sh, \"git commit rust-version --no-verify -m {PREPARING_COMMIT_MESSAGE}\") .run()", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-13547713196d3df676561f599d894c1cf65ff098e8d9f9e78b076c44caa880dd", "text": "// interleaving, but wether UB happens can depend on whether a write occurs in the // future... let is_write = new_perm.initial_state.is_active()  || (new_perm.initial_state.is_resrved() && new_perm.protector.is_some());   || (new_perm.initial_state.is_reserved() && new_perm.protector.is_some());  if is_write { // Need to get mutable access to alloc_extra. // (Cannot always do this as we can do read-only reborrowing on read-only allocations.)", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-3ae5c75c7e70e6f94fb2a14a00738d579ed341579feed3e3595ce2fe234510bd", "text": "matches!(self.inner, Active) }  pub fn is_resrved(self) -> bool {   pub fn is_reserved(self) -> bool {  matches!(self.inner, Reserved { .. }) }", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-7835a5cc299544380275eb1e9368af3a50a80176472ef27d66eb0c6872f787a1", "query": "As of , we have enabled the LiveIRVariables LLVM pass which is a part of ongoing GC work for . Unfortunately, something in test/bench/task-perf-word- makes the LiveIRVariables pass unhappy. It seems that some LLVM optimization pass is producing an irreducible control flow graph, and LLVM's LoopSimplify pass isn't sophisticated enough to undo the damage. As a temporary workaround, xfails the two tests which break the liveness pass. But we should figure out what LLVM pass is causing the trouble and try to resolve that and un-xfail those tests.\nFixed in , and workaround reverted in .", "positive_passages": [{"docid": "doc-en-rust-198fc16bb90497cfb332e19864f84c3ae04617279a7858caad47d20bc7526e5e", "text": "/// in an existing allocation, then returns Err containing the position /// where such allocation should be inserted fn find_offset(&self, offset: Size) -> Result {  // We do a binary search. let mut left = 0usize; // inclusive let mut right = self.v.len(); // exclusive loop { if left == right { // No element contains the given offset. But the // position is where such element should be placed at. return Err(left); } let candidate = left.checked_add(right).unwrap() / 2; let elem = &self.v[candidate];   self.v.binary_search_by(|elem| -> std::cmp::Ordering {  if offset < elem.range.start { // We are too far right (offset is further left).  debug_assert!(candidate < right); // we are making progress right = candidate;   std::cmp::Ordering::Greater  } else if offset >= elem.range.end() { // We are too far left (offset is further right).  debug_assert!(candidate >= left); // we are making progress left = candidate + 1;   std::cmp::Ordering::Less  } else { // This is it!  return Ok(candidate);   std::cmp::Ordering::Equal  }  }   })  } /// Determines whether a given access on `range` overlaps with", "commid": "rust_pr_114735"}], "negative_passages": []}
{"query_id": "q-en-rust-edf1e264a5c853d2fa7a2c0afc3db045448734e8ef7cb1e8b1d201c8a3bc9174", "query": "As a newbie, I made the mistake that can be witnessed with the field in . The way this happened is that I vaguely remembered that you can't use when specifying the fields of and to make sure, I looked up the struct chapter in the book. Sure enough, it said \"Mutability is a property of the binding, not of the structure itself.\" Now, as a newbie, I thought this forbade in field definitions altogether even though it only forbids to the left of the colon and you can still have as the type to the right of the colon. It would be nice if this was briefly clarified for the benefit of newbies looking stuff up piecemeal and not properly thinking about stuff in the context of what has previously been said in an earlier chapter.\n:+1:", "positive_passages": [{"docid": "doc-en-rust-761f50ba59787d1f702bc334dab2bdb5bd092cc344eac4320b858e0b7738412b", "text": "} ```  Your structure can still contain `&mut` pointers, which will let you do some kinds of mutation: ```rust struct Point { x: i32, y: i32, } struct PointRef<'a> { x: &'a mut i32, y: &'a mut i32, } fn main() { let mut point = Point { x: 0, y: 0 }; { let r = PointRef { x: &mut point.x, y: &mut point.y }; *r.x = 5; *r.y = 6; } assert_eq!(5, point.x); assert_eq!(6, point.y); } ```  # Update syntax A `struct` can include `..` to indicate that you want to use a copy of some", "commid": "rust_pr_30699"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  /// Configuration for the child process\u2019s standard input (stdin) handle. /// /// See [`std::process::Command::stdin`]. pub fn stdin>(&mut self, cfg: T) -> &mut Self { self.cmd.stdin(cfg); self } /// Configuration for the child process\u2019s standard output (stdout) handle. /// /// See [`std::process::Command::stdout`]. pub fn stdout>(&mut self, cfg: T) -> &mut Self { self.cmd.stdout(cfg); self } /// Configuration for the child process\u2019s standard error (stderr) handle. /// /// See [`std::process::Command::stderr`]. 
pub fn stderr<T: Into<Stdio>>(&mut self, cfg: T) -> &mut Self { self.cmd.stderr(cfg); self }  /// Inspect what the underlying [`Command`] is up to the /// current construction. pub fn inspect(&mut self, inspector: I) -> &mut Self", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  run-make/emit-to-stdout/Makefile  run-make/extern-fn-reachable/Makefile run-make/incr-add-rust-src-component/Makefile run-make/issue-84395-lto-embed-bitcode/Makefile", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  ui/consts/issue-77062-large-zst-array.rs  ui/consts/issue-78655.rs ui/consts/issue-79137-monomorphic.rs ui/consts/issue-79137-toogeneric.rs", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  
include ../tools.mk SRC=test.rs OUT=$(TMPDIR)/out all: asm llvm-ir dep-info mir llvm-bc obj metadata link multiple-types multiple-types-option-o asm: $(OUT) $(RUSTC) --emit asm=$(OUT)/$@ $(SRC) $(RUSTC) --emit asm=- $(SRC) | diff - $(OUT)/$@ llvm-ir: $(OUT) $(RUSTC) --emit llvm-ir=$(OUT)/$@ $(SRC) $(RUSTC) --emit llvm-ir=- $(SRC) | diff - $(OUT)/$@ dep-info: $(OUT) $(RUSTC) -Z dep-info-omit-d-target=yes --emit dep-info=$(OUT)/$@ $(SRC) $(RUSTC) --emit dep-info=- $(SRC) | diff - $(OUT)/$@ mir: $(OUT) $(RUSTC) --emit mir=$(OUT)/$@ $(SRC) $(RUSTC) --emit mir=- $(SRC) | diff - $(OUT)/$@ llvm-bc: $(OUT) $(RUSTC) --emit llvm-bc=- $(SRC) 1>/dev/ptmx 2>$(OUT)/$@ || true diff $(OUT)/$@ emit-llvm-bc.stderr obj: $(OUT) $(RUSTC) --emit obj=- $(SRC) 1>/dev/ptmx 2>$(OUT)/$@ || true diff $(OUT)/$@ emit-obj.stderr # For metadata output, a temporary directory will be created to hold the temporary # metadata file. But when output is stdout, the temporary directory will be located # in the same place as $(SRC), which is mounted as read-only in the tests. Thus as # a workaround, $(SRC) is copied to the test output directory $(OUT) and we compile # it there. metadata: $(OUT) cp $(SRC) $(OUT) (cd $(OUT); $(RUSTC) --emit metadata=- $(SRC) 1>/dev/ptmx 2>$(OUT)/$@ || true) diff $(OUT)/$@ emit-metadata.stderr link: $(OUT) $(RUSTC) --emit link=- $(SRC) 1>/dev/ptmx 2>$(OUT)/$@ || true diff $(OUT)/$@ emit-link.stderr multiple-types: $(OUT) $(RUSTC) --emit asm=- --emit llvm-ir=- --emit dep-info=- --emit mir=- $(SRC) 2>$(OUT)/$@ || true diff $(OUT)/$@ emit-multiple-types.stderr multiple-types-option-o: $(OUT) $(RUSTC) -o - --emit asm,llvm-ir,dep-info,mir $(SRC) 2>$(OUT)/$@ || true diff $(OUT)/$@ emit-multiple-types.stderr $(OUT): mkdir -p $(OUT) ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //! If `-o -` or `--emit KIND=-` is provided, output should be written to stdout //! instead. Binary output (`obj`, `llvm-bc`, `link` and `metadata`) //! being written this way will result in an error if stdout is a tty. //! Multiple output types going to stdout will trigger an error too, //! as they would all be mixed together. //! //! See . 
use std::fs::File; use run_make_support::{diff, run_in_tmpdir, rustc}; // Test emitting text outputs to stdout works correctly fn run_diff(name: &str, file_args: &[&str]) { rustc().emit(format!(\"{name}={name}\")).input(\"test.rs\").args(file_args).run(); let out = rustc().emit(format!(\"{name}=-\")).input(\"test.rs\").run().stdout_utf8(); diff().expected_file(name).actual_text(\"stdout\", &out).run(); } // Test that emitting binary formats to a terminal gives the correct error fn run_terminal_err_diff(name: &str) { #[cfg(not(windows))] let terminal = File::create(\"/dev/ptmx\").unwrap(); // FIXME: If this test fails and the compiler does print to the console, // then this will produce a lot of output. // We should spawn a new console instead to print stdout. #[cfg(windows)] let terminal = File::options().read(true).write(true).open(r\".CONOUT$\").unwrap(); let err = File::create(name).unwrap(); rustc().emit(format!(\"{name}=-\")).input(\"test.rs\").stdout(terminal).stderr(err).run_fail(); diff().expected_file(format!(\"emit-{name}.stderr\")).actual_file(name).run(); } fn main() { run_in_tmpdir(|| { run_diff(\"asm\", &[]); run_diff(\"llvm-ir\", &[]); run_diff(\"dep-info\", &[\"-Zdep-info-omit-d-target=yes\"]); run_diff(\"mir\", &[]); run_terminal_err_diff(\"llvm-bc\"); run_terminal_err_diff(\"obj\"); run_terminal_err_diff(\"metadata\"); run_terminal_err_diff(\"link\"); // Test error for emitting multiple types to stdout rustc() .input(\"test.rs\") .emit(\"asm=-\") .emit(\"llvm-ir=-\") .emit(\"dep-info=-\") .emit(\"mir=-\") .stderr(File::create(\"multiple-types\").unwrap()) .run_fail(); diff().expected_file(\"emit-multiple-types.stderr\").actual_file(\"multiple-types\").run(); // Same as above, but using `-o` rustc() .input(\"test.rs\") .output(\"-\") .emit(\"asm,llvm-ir,dep-info,mir\") .stderr(File::create(\"multiple-types-option-o\").unwrap()) .run_fail(); diff() .expected_file(\"emit-multiple-types.stderr\") .actual_file(\"multiple-types-option-o\") .run(); // 
Test that `-o -` redirected to a file works correctly (#26719) rustc().input(\"test.rs\").output(\"-\").stdout(File::create(\"out-stdout\").unwrap()).run(); }); } ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //@ build-pass fn main() { let _ = &[(); usize::MAX]; } ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //@ build-pass pub static FOO: [(); usize::MAX] = [(); usize::MAX]; fn main() { let _ = &[(); usize::MAX]; } ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //@ check-pass //! Tests that associated type projections normalize properly in the presence of HRTBs. //! Original issue:  pub trait MyFrom {} impl MyFrom for T {} pub trait MyInto {} impl MyInto for T where U: MyFrom {} pub trait A<'self_> { type T; } pub trait B: for<'self_> A<'self_> { // Originally caused the `type U = usize` example below to fail with a type mismatch error type U: for<'self_> MyFrom<>::T>; } pub struct M; impl<'self_> A<'self_> for M { type T = usize; } impl B for M { type U = usize; } fn main() {} ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //@ check-pass //! Tests that a HRTB + FnOnce bound involving an associated type don't prevent //! a function pointer from implementing `Fn` traits. //! Test for  trait LifetimeToType<'a> { type Out; } impl<'a> LifetimeToType<'a> for () { type Out = &'a (); } fn id<'a>(val: &'a ()) -> <() as LifetimeToType<'a>>::Out { val } fn assert_fn FnOnce(&'a ()) -> <() as LifetimeToType<'a>>::Out>(_func: F) { } fn main() { assert_fn(id); } ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-bfbc512e1a388512ed369460691dd47244d7efe2ff4e317e7da309e9b2454163", "query": "This could've been written with written as and similarly for /. It was written this way only to make explicit the necessity of one blanket impl for satisfying the other. I'm guessing that extensional equality is not judged, even though it is possible to make such a judgment and such a judgment must be made to compile code that compiles today. on line 12 ought to be equated to (which must be detected at some other point in time else line 13 wouldn't be able to compile). EDIT: Playing around a bit, it looks more and more like it's all about the HKL bounds...\nHRTBs (I've been using the wrong nomenclature the whole time!) make projections sad. Very likely a dupe of .\ncc\nKnown issue.\n(or Still a bit unclear on y'all's role separation) What's the expected overall approach to fixing this issue? Even if you don't expect a newcomer to handle it, I'd like to know anyway. :-D\nactually, I'm not entirely sure. There are a serious of refactorings that I have in mind for the type system / trait resolver, and I've been figuring that after those are done, I would come back to revisit some of these issues, if they still occur, but it may be worth digging into this example (or ) in detail. Honestly it's not all fresh in my mind. The refactorings I am thinking of are, first, lazy normalization (as described ), which may indeed help here, though I've not thought it through very deeply. Second, I'd like to generally rework how the trait checker is working to make it easier to extend the environment as you go -- the current setup, where the set of assumptions etc is semi-fixed when the inference context is created -- makes it hard to \"descend through\" a binder like and then do trait resolutions and normalizations within that context.\nSo, potentially w.r.t. 
lazy normalization, I've been playing with another bit of code (trying to shrink it a bit): From reading the debug logs, it seems that when checking the well-formed-ness of , upon normalizing to , and registering and ing on the implied , the compiler has no idea that and never normalizes to it either. Is this a situation in which the compiler might have assumed that everything was already normalized the best it could be and thus gave up? An aside: does lazy normalization have something to do with ?\nIs this It looks similar, but uncertain.\nWhat's the state of this now? I fairly frequently run into problems with associated types involved with HRTBs not projecting/unifying.\nTriage: last two comments asking for clarification, no replies. Given comments earlier in the thread, I imagine this issue is \"post-chalk\".\nThis is one of the things that the traits working group is kind of actively working towards. It's blocked to some extent on the universes work, see e.g. which tries to unblock that effort, plus lazy norm. I think we'll be making active progress though over the next few months.\nTriage: According to , this should get tagged .\nI think this issue is related to an error I encountered when trying to use the trait example . The trait is defined as follow: I implemented it for a vector of booleans: I believe this should compile, but it errors.  //@ check-pass //! Tests that HRTB impl selection covers type parameters not directly related //! to the trait. //! Test for  #![crate_type = \"lib\"] trait Unary {} impl U> Unary for F {} fn unary Unary<&'a T>, T>() {} pub fn test Fn(&'a i32) -> &'a i32>() { unary::() } ", "commid": "rust_pr_131355"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-62aae9ffca07801c4c02852e0118000188ff7580bc506d7813dcea9af032d209", "text": "// These items live in the type namespace. ItemTy(..) => {  let parent_link = ModuleParentLink(parent, name);  let def = Def::TyAlias(self.ast_map.local_def_id(item.id));  let module = self.new_module(parent_link, Some(def), false, is_public); self.define(parent, name, TypeNS, (module, sp));   self.define(parent, name, TypeNS, (def, sp, modifiers));  parent }", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-3d6ba1bf6cc5d650c1eb634cf0785a6a7d882ec5f01bb87e2f7d382c17d0ab5a", "text": "} match def {  Def::Mod(_) | Def::ForeignMod(_) | Def::Enum(..) | Def::TyAlias(..) => {   Def::Mod(_) | Def::ForeignMod(_) | Def::Enum(..) => {  debug!(\"(building reduced graph for external crate) building module {} {}\", final_ident, is_public);", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-e259e0eaf5d12d162cc288e89f7512658c20e42fb8a8b698c0e131ca455e015c", "text": "let module = self.new_module(parent_link, Some(def), true, is_public); self.try_define(new_parent, name, TypeNS, (module, DUMMY_SP)); }  Def::AssociatedTy(..) => {   Def::TyAlias(..) | Def::AssociatedTy(..) => {  debug!(\"(building reduced graph for external crate) building type {}\", final_ident); self.try_define(new_parent, name, TypeNS, (def, DUMMY_SP, modifiers));", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-39f798ae9849d5de4bfd079de52bcfe276f623f3d5122345f2effe36c70c4e24", "text": "target_module: Module<'b>, directive: &'b ImportDirective) -> ResolveResult<()> {  if let Some(Def::Trait(_)) = target_module.def { self.resolver.session.span_err(directive.span, \"items in traits are not importable.\"); }  if module_.def_id() == target_module.def_id() { // This means we are trying to glob import a module into itself, and it is a no-go let msg = \"Cannot glob-import a module into itself.\".into();", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-982d85e2418997eec91ec11587e1a75c5019587713ac915a830e36ac21b292b9", "query": "Code that compiles fine with rustc is causing an internal compiler error: unexpected panic with rustdoc. I tried this code (): The second line is essentially a typo, which can happen if used to be an Enum for instance. Again, the following works fine (although I'm not sure why, can you even add anything to the namespace?): However, if I run: I get the error: This actually happened on a version of a package already pushed to (hdf5-sys 0.3.0), so anything that depends on the erroneous version and runs will see this error. Run on . Backtrace:\nThis is easy to fix in rustdoc, but should this even compile?\nWhoa kinda crazy... cc (resolve weridness, longstanding though!)", "positive_passages": [{"docid": "doc-en-rust-8df80afe3cb494eccf9554fbb990ed8da5e8d86187ae8cfe81e23738f4d698c4", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. type Alias = (); use Alias::*; //~ ERROR Not a module use std::io::Result::*; //~ ERROR Not a module trait T {} use T::*; //~ ERROR items in traits are not importable fn main() {} ", "commid": "rust_pr_32134"}], "negative_passages": []}
{"query_id": "q-en-rust-74025332a011245f8052eb4b386ac42d81c77362ee866142c88fcade861e394e", "query": "For example, that the struct being dropped has not been deallocated when is called (it's implied by the existence of and Rust's general reluctance to dangle pointers, but still...), and that the struct members are recursively dropped after finishes, and what happens if panics.", "positive_passages": [{"docid": "doc-en-rust-1b7696abc3f419163f3184c80b01cc347eb9716073610e0d1d3eb594f45f34aa", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait Drop { /// A method called when the value goes out of scope.  /// /// When this method has been called, `self` has not yet been deallocated. /// If it were, `self` would be a dangling reference. /// /// After this function is over, the memory of `self` will be deallocated. /// /// # Panics /// /// Given that a `panic!` will call `drop()` as it unwinds, any `panic!` in /// a `drop()` implementation will likely abort.  #[stable(feature = \"rust1\", since = \"1.0.0\")] fn drop(&mut self); }", "commid": "rust_pr_30696"}], "negative_passages": []}
{"query_id": "q-en-rust-935edc8c2f69afe2cb6a69cbe3c19994de7fd5a4d4a71f36e30ee4b837d76280", "query": "It would be nice, if the would mention that the process is detached (and not joined) by default, when a object goes out of scope. Not having a impl is too subtile IMO.\nPart of\nI'd be happy to take this on. :smile:\nPlease do! :) On Jan 30, 2016, 11:59 -0500, Dirk , wrote:", "positive_passages": [{"docid": "doc-en-rust-3d1d613a403dde3b157ed18807721f478be8430bb13fb9f5955878261e23151d", "text": "/// /// assert!(ecode.success()); /// ```  /// /// # Note /// /// Take note that there is no implementation of /// [`Drop`](../../core/ops/trait.Drop.html) for child processes, so if you /// do not ensure the `Child` has exited then it will continue to run, even /// after the `Child` handle to the child process has gone out of scope. /// /// Calling `wait` (or other functions that wrap around it) will make the /// parent process wait until the child has actually exited before continuing.  #[stable(feature = \"process\", since = \"1.0.0\")] pub struct Child { handle: imp::Process,", "commid": "rust_pr_31327"}], "negative_passages": []}
{"query_id": "q-en-rust-0b0f6730fce7dffa2d0be2400db454ca790389aee6a7c42d8bd0b6bbe896a6e1", "query": "Exemple code: Error is: The compiler should detect that \"self\" is missing. Note this is an error I do all the time as a beginner.\nEven without doing an analysis of what could have likely been intended this error message could mention that paths in are absolute by default, unless they start with or .\nRan into this a few days ago, would love to the messaging here improved :+1:\nThe linked PR adds the information to the long diagnostics, at least. Note that if you just have an then the error message is markedly different: I wonder if some consolidation would be in order.\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-32b1cd5eb91134a3a026e6991cda0840c9372d508022048d00a1ca782ccf8f37", "text": "use something::Foo; // error: unresolved import `something::Foo`. ```  Please verify you didn't misspell the import name or the import does exist in the module from where you tried to import it. Example:   Paths in `use` statements are relative to the crate root. To import items relative to the current and parent modules, use the `self::` and `super::` prefixes, respectively. Also verify that you didn't misspell the import name and that the import exists in the module from where you tried to import it. Example:  ```ignore  use something::Foo; // ok!   use self::something::Foo; // ok!  mod something { pub struct Foo;", "commid": "rust_pr_33320"}], "negative_passages": []}
{"query_id": "q-en-rust-0b0f6730fce7dffa2d0be2400db454ca790389aee6a7c42d8bd0b6bbe896a6e1", "query": "Exemple code: Error is: The compiler should detect that \"self\" is missing. Note this is an error I do all the time as a beginner.\nEven without doing an analysis of what could have likely been intended this error message could mention that paths in are absolute by default, unless they start with or .\nRan into this a few days ago, would love to the messaging here improved :+1:\nThe linked PR adds the information to the long diagnostics, at least. Note that if you just have an then the error message is markedly different: I wonder if some consolidation would be in order.\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-409daa07628c5eab5a3d998d58512d2835a8c3c4c010b272486f4204209365b6", "text": "``` Or, if you tried to use a module from an external crate, you may have missed  the `extern crate` declaration:   the `extern crate` declaration (which is usually placed in the crate root):  ```ignore extern crate homura; // Required to use the `homura` crate", "commid": "rust_pr_33320"}], "negative_passages": []}
{"query_id": "q-en-rust-cbb83b0c5c6c3ca5f35bdae7e5f1890b43d36a4e87d6df41d4a8b50fc1aab6db", "query": "rustdoc source links end in as can be seen here at the moment (the [src] link) Source link is: (Which is a dead link) Also reproduced locally using rustc 1.9.0-nightly ( 2016-03-07)", "positive_passages": [{"docid": "doc-en-rust-a90caf70b1a527aca7bdfec9be38c26d62c24fbfa85afcd7f016789d00437e50", "text": "// has anchors for the line numbers that we're linking to. } else if self.item.def_id.is_local() { self.cx.local_sources.get(&PathBuf::from(&self.item.source.filename)).map(|path| {  format!(\"{root}src/{krate}/{path}.html#{href}\",   format!(\"{root}src/{krate}/{path}#{href}\",  root = self.cx.root_path, krate = self.cx.layout.krate, path = path,", "commid": "rust_pr_32117"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-0eb3a2ef6ae2ba2ceb051567d2e6d862d5fd173e3be2207f3db8d0c9ee7aebe0", "text": "} }  fn increment_outstanding_references(&mut self, is_public: bool) { self.outstanding_references += 1; if is_public { self.pub_outstanding_references += 1; } } fn decrement_outstanding_references(&mut self, is_public: bool) { let decrement_references = |count: &mut _| { assert!(*count > 0); *count -= 1; }; decrement_references(&mut self.outstanding_references); if is_public { decrement_references(&mut self.pub_outstanding_references); } }  fn report_conflicts(&self, mut report: F) { let binding = match self.binding { Some(binding) => binding,", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-53d423c7b93fbbc610416e1dcb06f8004f7bcac466dd8fe10fd8852f7f0c5c92", "text": "} pub fn increment_outstanding_references_for(&self, name: Name, ns: Namespace, is_public: bool) {  let mut resolutions = self.resolutions.borrow_mut(); let resolution = resolutions.entry((name, ns)).or_insert_with(Default::default); resolution.outstanding_references += 1; if is_public { resolution.pub_outstanding_references += 1; } } fn decrement_outstanding_references_for(&self, name: Name, ns: Namespace, is_public: bool) { let decrement_references = |count: &mut _| { assert!(*count > 0); *count -= 1; }; self.update_resolution(name, ns, |resolution| { decrement_references(&mut resolution.outstanding_references); if is_public { decrement_references(&mut resolution.pub_outstanding_references); } })   self.resolutions.borrow_mut().entry((name, ns)).or_insert_with(Default::default) 
.increment_outstanding_references(is_public);  } // Use `update` to mutate the resolution for the name.", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-efe07827c2a8a00221a654d844a57f0ec275463dd42e2a31555d5395223e30a1", "text": "// Temporarily count the directive as determined so that the resolution fails // (as opposed to being indeterminate) when it can only be defined by the directive. if !determined {  module_.decrement_outstanding_references_for(target, ns, directive.is_public)   module_.resolutions.borrow_mut().get_mut(&(target, ns)).unwrap() .decrement_outstanding_references(directive.is_public);  } let result = self.resolver.resolve_name_in_module(target_module, source, ns, false, true);", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-91ab42eb7faf4f6e47a2ec66426ba38022c680b7c4740582c9f9b07797b46009", "text": "self.report_conflict(target, ns, &directive.import(binding, None), old_binding); } }  module_.decrement_outstanding_references_for(target, ns, directive.is_public);   module_.update_resolution(target, ns, |resolution| { resolution.decrement_outstanding_references(directive.is_public); })  } match (&value_result, &type_result) {", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-adc16be03be73de6e0a1b3d713cfe38f6f55b14b6e38aa02975befa3d0479aaf", "query": "Unfortunately, I can't reproduce this at a smaller scale. This issue occurs with as of commit . Rust version is Currently the tests (run from the directory with ) pass. However, applying the following patch: causes the following compiler error: which is claiming that an import conflicts with itself. I have pushed up the branch with that change applied for easy reproduction. Again, I'm sorry that I cannot provide a report on a smaller scale, but this appears to be some edge case blowing up. I have no clue why adding a module with a few imports and nothing else would be causing this issue.\nBisecting to find the nightly which caused this\nI can confirm that this issue does not occur on . Unfortunately, there was a bug that prevented us from compiling on that wasn't fixed until , so I cannot provide anything more specific than that.\ncc\nThanks for the report! This bug was introduced in . I diagnosed the issue and will fix it ASAP.", "positive_passages": [{"docid": "doc-en-rust-95a5222989c78f3ff8f6367c46cd3a6b4d12e78e84148080a658042419da77b2", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(rustc_attrs)] #![allow(warnings)] mod foo { pub fn bar() {} } pub use foo::*; use b::bar; mod foobar { use super::*; } mod a { pub mod bar {} } mod b { pub use a::bar; } #[rustc_error] fn main() {} //~ ERROR compilation successful ", "commid": "rust_pr_32227"}], "negative_passages": []}
{"query_id": "q-en-rust-07a68ea41004a79066381221c34b88f5f933945d75fe5c981d48b34b7b3ebb7c", "query": "Doc comment says that it fails if the character is not a digit. In fact it returns option::none.", "positive_passages": [{"docid": "doc-en-rust-d5c7df3c254f81e9f3fa0bad5829000fe7d95425b2d041b45d3edd70f0da6f4b", "text": "* * # Safety note *  * This function fails if `c` is not a valid char   * This function returns none if `c` is not a valid char  * * # Return value *", "commid": "rust_pr_3251"}], "negative_passages": []}
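The corrected doc comment above describes behavior that is easy to demonstrate: `char::to_digit` does not fail on a non-digit input, it simply returns `None`:

```rust
fn main() {
    assert_eq!('7'.to_digit(10), Some(7));
    assert_eq!('f'.to_digit(16), Some(15));
    assert_eq!('x'.to_digit(10), None); // no failure, just None
    println!("ok");
}
```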
{"query_id": "q-en-rust-1da86149387dd4a1e3c80739e6a091dbc8d7b2c4791f06905f5e96bb8823b756", "query": "talks about what it uses to query time on the various platforms, but that is an implementation detail. The API documentation should define minimum precision guarantees for these APIs, so the user can make an informed decision whether using these APIs will provide sufficient precision for their use case on all supported platforms. For example, if a user wanted to use these APIs to limit framerate in a video game, second precision wouldn't cut it, but millisecond precision should be sufficient for most cases, so if these APIs guarantee at least millisecond precision, then they can be used for this purpose.\nShould we guarantee precision or document which system APIs these things use?\nI definitely agree this should be documented, at least for major platforms. If it's not documented, all you can do is test the precision, but then you have to make assumptions about future systems. If we guarantee a precision, that leaves open the question of what to do for a system that can't support such a precision. Panic?\nWe already can't guarantee any particular precision. Linux for example runs on processors with a wide variety of timing hardware, with all sorts of different precisions.\nPerhaps we need another API to query the precision.\nIf it's not possible to provide any guarantees, we should at least document the precision for major (tier-1) platforms, and explicitly state that there are no guarantees for other platforms.\nDoes \"tier-1 platform\" include guest Linux on VMWare? Its clock is known to be freaky. POSIX has which return the (finest) unit of time supported by the system, but they don't guarantee measurements are that accurate. When OSs don't guarantee anything, what can Rust do? Perhaps we can just add a wrapper of these query APIs if that is convenient to somebody.\nHrrm. 
If truly nothing else can be done, then at least let's document the platform APIs we use for at least the major platforms, so the user doesn't have to look them up in the RFCs. That's at least more information about the precision than nothing at all.\nDo we aim for at least some level of precision where possible?  /// /// # Underlying System calls /// Currently, the following system calls are being used to get the current time using `now()`: /// /// |  Platform |               System call                                            | /// |:---------:|:--------------------------------------------------------------------:| /// | Cloud ABI | [clock_time_get (Monotonic Clock)]                                   | /// | SGX       | [`insecure_time` usercall]. More information on [timekeeping in SGX] | /// | UNIX      | [clock_time_get (Monotonic Clock)]                                   | /// | Darwin    | [mach_absolute_time]                                                 | /// | VXWorks   | [clock_gettime (Monotonic Clock)]                                    | /// | WASI      | [__wasi_clock_time_get (Monotonic Clock)]                            | /// | Windows   | [QueryPerformanceCounter]                                            | /// /// [QueryPerformanceCounter]: https://docs.microsoft.com/en-us/windows/win32/api/profileapi/nf-profileapi-queryperformancecounter /// [`insecure_time` usercall]: https://edp.fortanix.com/docs/api/fortanix_sgx_abi/struct.Usercalls.html#method.insecure_time /// [timekeeping in SGX]: https://edp.fortanix.com/docs/concepts/rust-std/#codestdtimecode /// [__wasi_clock_time_get (Monotonic Clock)]: https://github.com/CraneStation/wasmtime/blob/master/docs/WASI-api.md#clock_time_get /// [clock_gettime (Monotonic Clock)]: https://linux.die.net/man/3/clock_gettime /// [mach_absolute_time]: https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/services/services.html /// [clock_time_get (Monotonic Clock)]: 
https://github.com/NuxiNL/cloudabi/blob/master/cloudabi.txt /// /// **Disclaimer:** These system calls might change over time. ///  #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] #[stable(feature = \"time2\", since = \"1.8.0\")] pub struct Instant(time::Instant);", "commid": "rust_pr_63846"}], "negative_passages": []}
{"query_id": "q-en-rust-1da86149387dd4a1e3c80739e6a091dbc8d7b2c4791f06905f5e96bb8823b756", "query": "talks about what it uses to query time on the various platforms, but that is an implementation detail. The API documentation should define minimum precision guarantees for these APIs, so the user can make an informed decision whether using these APIs will provide sufficient precision for their use case on all supported platforms. For example, if a user wanted to use these APIs to limit framerate in a video game, second precision wouldn't cut it, but millisecond precision should be sufficient for most cases, so if these APIs guarantee at least millisecond precision, then they can be used for this purpose.\nShould we guarantee precision or document which system APIs these things use?\nI definitely agree this should be documented, at least for major platforms. If it's not documented, all you can do is test the precision, but then you have to make assumptions about future systems. If we guarantee a precision, that leaves open the question of what to do for a system that can't support such a precision. Panic?\nWe already can't guarantee any particular precision. Linux for example runs on processors with a wide variety of timing hardware, with all sorts of different precisions.\nPerhaps we need another API to query the precision.\nIf it's not possible to provide any guarantees, we should at least document the precision for major (tier-1) platforms, and explicitly state that there are no guarantees for other platforms.\nDoes \"tier-1 platform\" include guest Linux on VMWare? Its clock is known to be freaky. POSIX has which return the (finest) unit of time supported by the system, but they don't guarantee measurements are that accurate. When OSs don't guarantee anything, what can Rust do? Perhaps we can just add a wrapper of these query APIs if that is convenient to somebody.\nHrrm. 
If truly nothing else can be done, then at least let's document the platform APIs we use for at least the major platforms, so the user doesn't have to look them up in the RFCs. That's at least more information about the precision than nothing at all.\nDo we aim for at least some level of precision where possible?  /// /// # Underlying System calls /// Currently, the following system calls are being used to get the current time using `now()`: /// /// |  Platform |               System call                                            | /// |:---------:|:--------------------------------------------------------------------:| /// | Cloud ABI | [clock_time_get (Realtime Clock)]                                    | /// | SGX       | [`insecure_time` usercall]. More information on [timekeeping in SGX] | /// | UNIX      | [clock_gettime (Realtime Clock)]                                     | /// | DARWIN    | [gettimeofday]                                                       | /// | VXWorks   | [clock_gettime (Realtime Clock)]                                     | /// | WASI      | [__wasi_clock_time_get (Realtime Clock)]                             | /// | Windows   | [GetSystemTimeAsFileTime]                                            | /// /// [clock_time_get (Realtime Clock)]: https://github.com/NuxiNL/cloudabi/blob/master/cloudabi.txt /// [gettimeofday]: http://man7.org/linux/man-pages/man2/gettimeofday.2.html /// [clock_gettime (Realtime Clock)]: https://linux.die.net/man/3/clock_gettime /// [__wasi_clock_time_get (Realtime Clock)]: https://github.com/CraneStation/wasmtime/blob/master/docs/WASI-api.md#clock_time_get /// [GetSystemTimeAsFileTime]: https://docs.microsoft.com/en-us/windows/win32/api/sysinfoapi/nf-sysinfoapi-getsystemtimeasfiletime /// /// **Disclaimer:** These system calls might change over time. 
///  #[derive(Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)] #[stable(feature = \"time2\", since = \"1.8.0\")] pub struct SystemTime(time::SystemTime);", "commid": "rust_pr_63846"}], "negative_passages": []}
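Since, as the discussion above concludes, no fixed precision can be guaranteed across platforms, one pragmatic option is to probe what the clock actually delivers at runtime. This is a small empirical sketch, not a guarantee of any kind:

```rust
use std::time::Instant;

// Spin until `elapsed` becomes non-zero; the first non-zero reading is a
// rough upper bound on the granularity the platform clock exposes here.
fn observed_granularity_ns() -> u128 {
    let start = Instant::now();
    loop {
        let ns = start.elapsed().as_nanos();
        if ns > 0 {
            return ns;
        }
    }
}

fn main() {
    let ns = observed_granularity_ns();
    assert!(ns > 0);
    println!("observed granularity: ~{ns} ns");
}
```

Note this measures only the machine it runs on; as the thread points out, the same OS on different hardware (or in a VM) can behave very differently.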
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-2e71df928ce7bf217c4c0dfd859b8cf69f6ce4ff771daede15c2dcc04b00ccdb", "text": "ret });  // Needs to go *after* expansion to be able to check the results // of macro expansion.  This runs before #[cfg] to try to catch as // much as possible (e.g. help the programmer avoid platform // specific differences) time(time_passes, \"complete gated feature checking 1\", || { sess.track_errors(|| { let features = syntax::feature_gate::check_crate(sess.codemap(), &sess.parse_sess.span_diagnostic, &krate, &attributes, sess.opts.unstable_features); *sess.features.borrow_mut() = features; }) })?;  // JBC: make CFG processing part of expansion to avoid this problem: // strip again, in case expansion added anything with a #[cfg].", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-c843c3bd41f3df79540d8bfc8b83aab0c94cd1a4de42d62524fc47e9740b6b76", "text": "\"checking for inline asm in case the target doesn't support it\", || no_asm::check_crate(sess, &krate));  // One final feature gating of the true AST that gets compiled // later, to make sure we've got everything (e.g. configuration // can insert new attributes via `cfg_attr`) time(time_passes, \"complete gated feature checking 2\", || {   // Needs to go *after* expansion to be able to check the results of macro expansion. time(time_passes, \"complete gated feature checking\", || {  sess.track_errors(|| { let features = syntax::feature_gate::check_crate(sess.codemap(), &sess.parse_sess.span_diagnostic,", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-4d783f03143fcfd8f54fe201aebdd2da979c4ec9d47c13616620d7f6f6efdbf9", "text": "// When we enter a module, record it, for the sake of `module!` pub fn expand_item(it: P, fld: &mut MacroExpander) -> SmallVector> {  let it = expand_item_multi_modifier(Annotatable::Item(it), fld); expand_annotatable(it, fld)   expand_annotatable(Annotatable::Item(it), fld)  .into_iter().map(|i| i.expect_item()).collect() }", "commid": "rust_pr_32846"}], "negative_passages": []}
{"query_id": "q-en-rust-a3e8c34e864973a8bfb1792cd3bd2f96489995c8593c9b9f71bcfc3164e1cf95", "query": "Macro-expanded unconfigured items are gated feature checked, but ordinary unconfigured items are not. For example, the following should compile, ... If we manually expand , it always compiles:\nIt looks like this was (introduced in ), but I don't think it's a good idea. cc cc\nThere are arguments on about why this is not a good idea: is typically used by crates that need to build against both stable and nightly and use nightly-specific features when available (the most important example that I have in mind is syntax extensions using plugins on nightly and syntex on stable). Checking for use of gated features inside this -guarded code will break those crates, and it's not a good thing, this use case is actually useful. This check pass should be removed.\ncc", "positive_passages": [{"docid": "doc-en-rust-8a1e84c9676fe716d00cfbeec35a44fc5a97ea42529bdc49e3ece5f9be4e7fcd", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(rustc_attrs)] macro_rules! mac { {} => { #[cfg(attr)] mod m { #[lang_item] fn f() {} } } } mac! {} #[rustc_error] fn main() {} //~ ERROR compilation successful ", "commid": "rust_pr_32846"}], "negative_passages": []}
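The behavior the records above settle on can be illustrated with a small sketch: items disabled by `#[cfg]` are stripped before most checking, so code behind an inactive cfg (including macro-expanded code) should not be rejected. The names inside the stripped module are deliberately undefined:

```rust
// `cfg(any())` with no predicates is always false, so the expanded module is
// removed entirely and its body is never name-resolved or feature-checked.
macro_rules! mac {
    () => {
        #[cfg(any())] // never active: the module below is stripped
        mod stripped {
            fn uses_missing_item() {
                undefined_function(); // would not compile if the cfg were active
            }
        }
    };
}
mac!();

fn main() {
    println!("compiled fine");
}
```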
{"query_id": "q-en-rust-26fa8c0b4175c102d370bb82ac6052bfde276b31c7945142aa5ca851a5cec412", "query": "The Unix implementation of in boils down to: The branch of the conditional doesn't properly handle the case where fails. For comparison's sake, the Windows implementation says:\nOh dear, seems bad! Out of curiosity, was this discovered from reading or from a bug? We could consider a backport to beta if that's necessary\nThis was discovered when reading through to write a custom memory allocator for Firefox. I suspect it's hard to make it trigger due to overcommit policies and the general lack of usage of over-aligned types.", "positive_passages": [{"docid": "doc-en-rust-664627b8b4b9a836a942d462cbb75ce8d3f5febbb70341d1956e5b98920e6545", "text": "libc::realloc(ptr as *mut libc::c_void, size as libc::size_t) as *mut u8 } else { let new_ptr = allocate(size, align);  ptr::copy(ptr, new_ptr, cmp::min(size, old_size)); deallocate(ptr, old_size, align);   if !new_ptr.is_null() { ptr::copy(ptr, new_ptr, cmp::min(size, old_size)); deallocate(ptr, old_size, align); }  new_ptr } }", "commid": "rust_pr_32997"}], "negative_passages": []}
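The fixed fallback path from the diff above can be sketched outside libstd. This is a minimal illustration using `std::alloc` rather than the original `libc` calls; the key point is that the copy and free only happen when the new allocation actually succeeded:

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr;

// Over-aligned reallocation sketch: allocate a new block, and only if that
// succeeds, copy the old contents and release the old block. On failure the
// old allocation is left intact and null is returned.
unsafe fn reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
    let new_ptr = alloc(Layout::from_size_align(size, align).unwrap());
    if !new_ptr.is_null() {
        ptr::copy(ptr, new_ptr, old_size.min(size));
        dealloc(ptr, Layout::from_size_align(old_size, align).unwrap());
    }
    new_ptr
}

fn main() {
    unsafe {
        let layout = Layout::from_size_align(4, 16).unwrap();
        let p = alloc(layout);
        p.write_bytes(0xAB, 4);
        let q = reallocate(p, 4, 8, 16);
        assert!(!q.is_null());
        assert_eq!(*q, 0xAB); // old contents survived the move
        dealloc(q, Layout::from_size_align(8, 16).unwrap());
    }
    println!("ok");
}
```

Without the null check, a failed allocation would have been followed by a copy into a null pointer, which is the bug the report describes.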
{"query_id": "q-en-rust-658d1fdf442652abfc80f22e8a4fb5bd036ed3231b41d09d848e4078368f8a7d", "query": "Before (on librustc): After (on same librustc): cc\nThe extra RAM usage seems to be because of the AST not getting freed after the early lint checks. Not sure if this is intentional.\nThe AST should definitely still get freed, nobody touched , so it's not intentional (unless something about the command line options has changed?)\nThe performance regression in looks like it was caused by . I'll work on a fix now. I'm not sure what caused the RAM increase.\nResolution used to take about 1 seconds a few months ago in winapi and now it takes 9 seconds.\nI fixed the performance regression in .", "positive_passages": [{"docid": "doc-en-rust-319465190e7d3737ab23f558710d6f58a83ceba5bcf35666661b4653bc84ec5a", "text": "glob_importers: RefCell, &'a ImportDirective<'a>)>>, globs: RefCell>>,  // Used to memoize the traits in this module for faster searches through all traits in scope. traits: RefCell]>>>,  // Whether this module is populated. If not populated, any attempt to // access the children must be preceded with a // `populate_module_if_necessary` call.", "commid": "rust_pr_33064"}], "negative_passages": []}
{"query_id": "q-en-rust-658d1fdf442652abfc80f22e8a4fb5bd036ed3231b41d09d848e4078368f8a7d", "query": "Before (on librustc): After (on same librustc): cc\nThe extra RAM usage seems to be because of the AST not getting freed after the early lint checks. Not sure if this is intentional.\nThe AST should definitely still get freed, nobody touched , so it's not intentional (unless something about the command line options has changed?)\nThe performance regression in looks like it was caused by . I'll work on a fix now. I'm not sure what caused the RAM increase.\nResolution used to take about 1 second a few months ago in winapi and now it takes 9 seconds.\nI fixed the performance regression in .", "positive_passages": [{"docid": "doc-en-rust-bc2707b437b7604bc1245aec0f714b49a4e02a735cc1aeccc7ac0677e0417816", "text": "prelude: RefCell::new(None), glob_importers: RefCell::new(Vec::new()), globs: RefCell::new((Vec::new())),  traits: RefCell::new(None),  populated: Cell::new(!external), arenas: arenas }", "commid": "rust_pr_33064"}], "negative_passages": []}
{"query_id": "q-en-rust-658d1fdf442652abfc80f22e8a4fb5bd036ed3231b41d09d848e4078368f8a7d", "query": "Before (on librustc): After (on same librustc): cc\nThe extra RAM usage seems to be because of the AST not getting freed after the early lint checks. Not sure if this is intentional.\nThe AST should definitely still get freed, nobody touched , so it's not intentional (unless something about the command line options has changed?)\nThe performance regression in looks like it was caused by . I'll work on a fix now. I'm not sure what caused the RAM increase.\nResolution used to take about 1 seconds a few months ago in winapi and now it takes 9 seconds.\nI fixed the performance regression in .", "positive_passages": [{"docid": "doc-en-rust-3dfd3a4162d2b24206e6157d74d64fef8fe3d85826d5c13e6e3e6b3da9138767", "text": "let mut search_module = self.current_module; loop { // Look for trait children.  let mut search_in_module = |module: Module<'a>| module.for_each_child(|_, ns, binding| { if ns != TypeNS { return } let trait_def_id = match binding.def() { Some(Def::Trait(trait_def_id)) => trait_def_id, Some(..) 
| None => return, }; if self.trait_item_map.contains_key(&(name, trait_def_id)) { add_trait_info(&mut found_traits, trait_def_id, name); let trait_name = self.get_trait_name(trait_def_id); self.record_use(trait_name, TypeNS, binding);   let mut search_in_module = |module: Module<'a>| { let mut traits = module.traits.borrow_mut(); if traits.is_none() { let mut collected_traits = Vec::new(); module.for_each_child(|_, ns, binding| { if ns != TypeNS { return } if let Some(Def::Trait(_)) = binding.def() { collected_traits.push(binding); } }); *traits = Some(collected_traits.into_boxed_slice());  }  });   for binding in traits.as_ref().unwrap().iter() { let trait_def_id = binding.def().unwrap().def_id(); if self.trait_item_map.contains_key(&(name, trait_def_id)) { add_trait_info(&mut found_traits, trait_def_id, name); let trait_name = self.get_trait_name(trait_def_id); self.record_use(trait_name, TypeNS, binding); } } };  search_in_module(search_module); match search_module.parent_link {", "commid": "rust_pr_33064"}], "negative_passages": []}
{"query_id": "q-en-rust-d627101c1998a69a9acfc4fc050d526fd9490d359fe680c8705f7fea7fa42de8", "query": "Should it expect a ? Full example code:\nIt should expect , but it's lost from the expected set. It's not an issue specific for , but a symptom of a more general problem with expected token sets. There's a long weekend ahead, I'll investigate.\nOh wait, this works on nightly, as expected (fixed in , more specifically). It means there are no problems with expected token sets.", "positive_passages": [{"docid": "doc-en-rust-98800dbc71b2c8918d6accba945d4cc965d3490141f5b662d51521d4e482a07a", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use foo.bar; //~ ERROR expected one of `::`, `;`, or `as`, found `.` ", "commid": "rust_pr_34460"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-c2390243834c7d0fc5dd136269eb46cf6648c311d675cd38d6e1b0470940e349", "text": "return os.path.join(self.bin_root(), '.cargo-stamp') def rustc_out_of_date(self):  if not os.path.exists(self.rustc_stamp()):   if not os.path.exists(self.rustc_stamp()) or self.clean:  return True with open(self.rustc_stamp(), 'r') as f: return self.stage0_rustc_date() != f.read() def cargo_out_of_date(self):  if not os.path.exists(self.cargo_stamp()):   if not os.path.exists(self.cargo_stamp()) or self.clean:  return True with open(self.cargo_stamp(), 'r') as f: return self.stage0_cargo_date() != f.read()", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-a7525ed96805665467b62d3f796c2ec735b3cf490f69c383cbf3cf683e4e4a9a", "text": "return '' def build_bootstrap(self):  build_dir = os.path.join(self.build_dir, \"bootstrap\") if self.clean and os.path.exists(build_dir): shutil.rmtree(build_dir)  env = os.environ.copy()  env[\"CARGO_TARGET_DIR\"] = os.path.join(self.build_dir, \"bootstrap\")   env[\"CARGO_TARGET_DIR\"] = build_dir  env[\"RUSTC\"] = self.rustc() env[\"LD_LIBRARY_PATH\"] = os.path.join(self.bin_root(), \"lib\") env[\"DYLD_LIBRARY_PATH\"] = os.path.join(self.bin_root(), \"lib\")", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-ad850e0c783007976584f6829b38980a717032bb71173a9aa28a3d5c8160348f", "text": "def main(): parser = argparse.ArgumentParser(description='Build rust') parser.add_argument('--config')  parser.add_argument('--clean', action='store_true')  parser.add_argument('-v', '--verbose', action='store_true') args = [a for a in sys.argv if a != '-h']", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-e8b63fbd3eef42ff24fbcc14cfedd13b401fb28c9bb6269f0f29605f837a2515", "query": "Errors like are happening on the bots, e.g.: This is under the covers, but rustbuild's command should blow away the build system directory ahead of time to be more resilient to bugs like this.", "positive_passages": [{"docid": "doc-en-rust-cc13989c36eb40f8b69377233f00b80e31c7c7c87f425459847e052fd5ad1ac6", "text": "rb.rust_root = os.path.abspath(os.path.join(__file__, '../../..')) rb.build_dir = os.path.join(os.getcwd(), \"build\") rb.verbose = args.verbose  rb.clean = args.clean  try: with open(args.config or 'config.toml') as config:", "commid": "rust_pr_33991"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-b1d54e385ea164094b8e04d30d72890eb838c01d0f373a06329a51f593ed0804", "text": "asm!(\"xor %eax, %eax\" : :  : \"{eax}\"   : \"eax\"  ); # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-727a0780e24d618962e673d38bc11f2f3c142b8ca1bb78d060ba280109f2716f", "text": "# #![feature(asm)] # #[cfg(any(target_arch = \"x86\", target_arch = \"x86_64\"))] # fn main() { unsafe {  asm!(\"xor %eax, %eax\" ::: \"{eax}\");   asm!(\"xor %eax, %eax\" ::: \"eax\");  # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-4d6e98f9e04985cfee3b87b307b5c34c306ebaf9c622f8e116984f6f2277e30f", "text": "# #[cfg(any(target_arch = \"x86\", target_arch = \"x86_64\"))] # fn main() { unsafe { // Put the value 0x200 in eax  asm!(\"mov $$0x200, %eax\" : /* no outputs */ : /* no inputs */ : \"{eax}\");   asm!(\"mov $$0x200, %eax\" : /* no outputs */ : /* no inputs */ : \"eax\");  # } } ```", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-1d604524d84e25804c5f1f9600ed0b1d966dc265c6de68a3638e1a15c8d96597", "text": "if OPTIONS.iter().any(|&opt| s == opt) { cx.span_warn(p.last_span, \"expected a clobber, found an option\");  } else if s.starts_with(\"{\") || s.ends_with(\"}\") { cx.span_err(p.last_span, \"clobber should not be surrounded by braces\");  } clobs.push(s); } }", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-b278a33a45c1e97c10100b3dad4ec24c2c8bb8eb8922d851b0ab693416a57ae4", "query": "The rust book states that asm! clobbers should be written as \"{rdx}\" but when you use them as such the clobbers will be silently ignored making for some nasty bugs. output Relevant part of objdump: Relevant LLVM IR: As we can seen in the emitted LLVM IR, the given clobbers are surrounded by {{}}. I'm not sure what effect this has, but it seems like LLVM just silently ignores them. A fix would either be to not send the extra {}'s to LLVM or to edit the documentation to reflect that clobbers should be given without surrounding {}'s. Either way, this should probably not generate invalid code silently.", "positive_passages": [{"docid": "doc-en-rust-9f65aaeb7a5953252377a300b771c2afe7af0a5cbcd410ce5968f23d729cabfb", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-android // ignore-arm // ignore-aarch64 #![feature(asm, rustc_attrs)] #[cfg(any(target_arch = \"x86\", target_arch = \"x86_64\"))] #[rustc_error] pub fn main() { unsafe { // clobber formatted as register input/output asm!(\"xor %eax, %eax\" : : : \"{eax}\"); //~^ ERROR clobber should not be surrounded by braces } } ", "commid": "rust_pr_34682"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen use rustc_middle::ty::{self, Binder, DefIdTree, IsSuggestable, Ty};   use rustc_middle::ty::{ self, suggest_constraining_type_params, Binder, DefIdTree, IsSuggestable, ToPredicate, Ty, };  use rustc_session::errors::ExprParenthesesNeeded; use rustc_span::symbol::sym; use rustc_span::Span;", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen && let trait_ref = ty::Binder::dummy(self.tcx.mk_trait_ref(clone_trait_did, [expected_ty]))  // And the expected type doesn't implement `Clone` && !self.predicate_must_hold_considering_regions(&traits::Obligation::new( self.tcx, traits::ObligationCause::dummy(), self.param_env,  ty::Binder::dummy(self.tcx.mk_trait_ref( clone_trait_did, [expected_ty], )),   trait_ref,  )) { diag.span_note(", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen let owner = self.tcx.hir().enclosing_body_owner(expr.hir_id); if let ty::Param(param) = expected_ty.kind() && let Some(generics) = self.tcx.hir().get_generics(owner) { suggest_constraining_type_params( self.tcx, generics, diag, vec![(param.name.as_str(), \"Clone\", Some(clone_trait_did))].into_iter(), ); } else { self.suggest_derive(diag, &[(trait_ref.to_predicate(self.tcx), None, None)]); }  } }", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen fn suggest_derive(   pub fn suggest_derive(  &self, err: &mut Diagnostic, unsatisfied_predicates: &[(", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen // run-rustfix fn wat(t: &T) -> T { t.clone() //~ ERROR E0308 } #[derive(Clone)] struct Foo; fn wut(t: &Foo) -> Foo { t.clone() //~ ERROR E0308 } fn main() { wat(&42); wut(&Foo); } ", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen // run-rustfix fn wat(t: &T) -> T { t.clone() //~ ERROR E0308 } struct Foo; fn wut(t: &Foo) -> Foo { t.clone() //~ ERROR E0308 } fn main() { wat(&42); wut(&Foo); } ", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen error[E0308]: mismatched types --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:3:5 | LL | fn wat(t: &T) -> T { |        -            - expected `T` because of return type |        | |        this type parameter LL |     t.clone() |     ^^^^^^^^^ expected type parameter `T`, found `&T` | = note: expected type parameter `T` found reference `&T` note: `T` does not implement `Clone`, so `&T` was cloned instead --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:3:5 | LL |     t.clone() |     ^ help: consider restricting type parameter `T` | LL | fn wat(t: &T) -> T { |         +++++++ error[E0308]: mismatched types --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:9:5 | LL | fn wut(t: &Foo) -> Foo { |                    --- expected `Foo` because of return type LL |     t.clone() |     ^^^^^^^^^ expected struct `Foo`, found `&Foo` | note: `Foo` does not implement `Clone`, so `&Foo` was cloned instead --> $DIR/clone-on-unconstrained-borrowed-type-param.rs:9:5 | LL |     t.clone() |     ^ help: consider annotating `Foo` with `#[derive(Clone)]` | LL | #[derive(Clone)] | error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-4fd363e3c2caaa69ef789bf01f16038e09b968a59dfd250a6cc32eb3d0bfb237", "query": "This gives the very unintuitive error: You expect to return , i.e. a . It mysteriously returns an instead. Because has a clone impl, it too, can be cloned (really, copied), returning a reference of the same lifetime. This is an operation you rarely want to do explicitly, except when dealing with generics. This error can crop up when you've forgotten to implement on , and have a reference (which could have been silently inserted using , as in the code above). However, it's not very obvious what's happening here. It can also crop up We should hint that itself isn't cloneable and thus we fell back to cloning here. This could be a lint on hitting the clone impl for references, but lints run too late. I'm not sure how this can be implemented, since we need to see the source of a value and that's a bit tricky.\ncc\nInteresting. This does seem like it would require some pretty special-case code to achieve, but I agree it'd be a nice case to capture.\nA relatively easy way to catch most instances of this is to Pick up the expression which has a type error like this If it is a method call, check if that method call is a clone() call resolving to If it is a variable, find the variable's def If the def is a local def, look at the init expr, check if it's a bad clone This will miss things, but I think the majority of such bugs will be due to this. As it stands this feels like an incredibly easy case to hit (we've all forgotten to decorate structs).\ncc\nCC\nAs reported in : This reports:\nCurrent output is even worse for the original report: The last comment's code is marginally different:\nCurrent output: The last case should suggest constraining . Edit: that was easier than I thought (we already had the suggestion machinery): \"Screen help: consider annotating `NotClone` with `#[derive(Clone)]` | LL | #[derive(Clone)] |  error: aborting due to previous error", "commid": "rust_pr_105679"}], "negative_passages": []}
{"query_id": "q-en-rust-afd99a09558cb6a3848844ef436e6494e1cbb1dbf88a969fa3f44e23fe09260f", "query": "Was just trying to read through std::fmt to understand how format values to a particular decimal point. I came across this set of examples, which don't do a good job of explaining the output of each, so it's difficult to visually pattern match what I type in to what comes out:\nSorry for the mess :sweat_smile: I guess it is clearer now.", "positive_passages": [{"docid": "doc-en-rust-b0a4fc36f2d2163a9b0cccaf614f538b3a8e828825e08a1606290e5ab2eb7200", "text": "//!    in this case, if one uses the format string `{:.*}`, then the `` part refers //!    to the *value* to print, and the `precision` must come in the input preceding ``. //!  //! For example, these:   //! For example, the following calls all print the same thing `Hello x is 0.01000`:  //! //! ```  //! // Hello {arg 0 (x)} is {arg 1 (0.01) with precision specified inline (5)}   //! // Hello {arg 0 (\"x\")} is {arg 1 (0.01) with precision specified inline (5)}  //! println!(\"Hello {0} is {1:.5}\", \"x\", 0.01); //!  //! // Hello {arg 1 (x)} is {arg 2 (0.01) with precision specified in arg 0 (5)}   //! // Hello {arg 1 (\"x\")} is {arg 2 (0.01) with precision specified in arg 0 (5)}  //! println!(\"Hello {1} is {2:.0$}\", 5, \"x\", 0.01); //!  //! // Hello {arg 0 (x)} is {arg 2 (0.01) with precision specified in arg 1 (5)}   //! // Hello {arg 0 (\"x\")} is {arg 2 (0.01) with precision specified in arg 1 (5)}  //! println!(\"Hello {0} is {2:.1$}\", \"x\", 5, 0.01); //!  //! // Hello {next arg (x)} is {second of next two args (0.01) with precision   //! // Hello {next arg (\"x\")} is {second of next two args (0.01) with precision  //! //                          specified in first of next two args (5)} //! println!(\"Hello {} is {:.*}\",    \"x\", 5, 0.01); //!  //! // Hello {next arg (x)} is {arg 2 (0.01) with precision   //! // Hello {next arg (\"x\")} is {arg 2 (0.01) with precision  //! //                          specified in its predecessor (5)} //! println!(\"Hello {} is {2:.*}\",   \"x\", 5, 0.01); //!  //! // Hello {next arg (x)} is {arg \"number\" (0.01) with precision specified   //! // Hello {next arg (\"x\")} is {arg \"number\" (0.01) with precision specified  //! //                          in arg \"prec\" (5)} //! println!(\"Hello {} is {number:.prec$}\", \"x\", prec = 5, number = 0.01); //! ``` //!  //! All print the same thing: //! //! ```text //! Hello x is 0.01000 //! ``` //!  //! While these: //! //! ```", "commid": "rust_pr_35050"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-16da9f2bf0a84a71964433a08b9ebd21b8bc6609c84744a2dd2d7684c7e44c34", "text": "/// ``` #[stable(feature = \"move_cell\", since = \"1.17.0\")] pub fn into_inner(self) -> T {  unsafe { self.value.into_inner() }   self.value.into_inner()  } }", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-c6c890d736bf918b58a99677d7ce4c57e82bba6dea0ed6ff4f5fd462f825975b", "text": "// compiler statically verifies that it is not currently borrowed. // Therefore the following assertion is just a `debug_assert!`. debug_assert!(self.borrow.get() == UNUSED);  unsafe { self.value.into_inner() }   self.value.into_inner()  } /// Replaces the wrapped value with a new one, returning the old value,", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-64b3f025f7044d1ddff8acb6b0ffe7dd386a6390fbf860aafe0566b7308134d7", "text": "/// Unwraps the value. ///  /// # Safety /// /// This function is unsafe because this thread or another thread may currently be /// inspecting the inner value. ///  /// # Examples /// /// ```", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. 
But it does work.", "positive_passages": [{"docid": "doc-en-rust-a1af952078c4e2b21fc389d658b8f11b36a5bc00d83a63e47d6787cc9bace151", "text": "/// /// let uc = UnsafeCell::new(5); ///  /// let five = unsafe { uc.into_inner() };   /// let five = uc.into_inner();  /// ``` #[inline] #[stable(feature = \"rust1\", since = \"1.0.0\")]  pub unsafe fn into_inner(self) -> T {   pub fn into_inner(self) -> T {  self.value } }", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-406d0fe7110cd57dd6e1e7a5c1ed7494da6b617a3616ca32e75f792f638aa7dc", "text": "#[inline] #[stable(feature = \"atomic_access\", since = \"1.15.0\")] pub fn into_inner(self) -> bool {  unsafe { self.v.into_inner() != 0 }   self.v.into_inner() != 0  } /// Loads a value from the bool.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-0eb161898bda2bdeaddd72afbb0217e5fada8d47dd64bd741d6c407e6a5b3c12", "text": "#[inline] #[stable(feature = \"atomic_access\", since = \"1.15.0\")] pub fn into_inner(self) -> *mut T {  unsafe { self.p.into_inner() }   self.p.into_inner()  } /// Loads a value from the pointer.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-0d1264126124504c28deff0fdd1f9f10ff7b0971b2cfc371ed677d6f5abcce94", "query": "It's been unsafe since 2014 when it was called . Nobody thought about it, I guess, but it shouldn't be unsafe.\nFor reference the PR which it is\nSomeone thought about it enough to : Has something changed to make this consideration unnecessary or do you have an explanation of why the reasoning is faulty?\nBecause the function takes so by definition it must be the owner of the value so there can be no other viewers, either from this thread or another thread.\nAlthough this isn't backwards-compatible there is some prior art of going unsafe -safe: (although, that was pre-1.0)\nThat's weird, it seems like you ought to be able to assign a safe fn to an unsafe fn pointer. , Nicole Mazzuca wrote:\nWe removed the subtyping relation at some point; it does seem like a coercion would be (probably) ok, but that would still not be (strictly) backwards compatible.\n(I tend to agree that could be safe, though, and I highly doubt any existing crates would actually break due to this change.)\nWe could always do a crater run just to be absolutely sure nobody is turning into a function pointer.\nHI ! Any news on this one ?\nNo change that I know of.\nFWIW the relevant coercion was in\nThis change is breaking my crate. If I add the then it will work for older compilers. If I remove the then it will work for newer compilers. Is there a conditional compilation attribute for compiler version? I can't find one.\nUse\nYeah, that's what I'm doing for now. It seems kind of lame, though. But it does work.", "positive_passages": [{"docid": "doc-en-rust-e7b470a8ba30c5c4d612a5cfe7f8c8c7ee16ba380f9e27de9c47ea975e2747fd", "text": "#[inline] #[$stable_access] pub fn into_inner(self) -> $int_type {  unsafe { self.v.into_inner() }   self.v.into_inner()  } /// Loads a value from the atomic integer.", "commid": "rust_pr_47204"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-4c63558cc9d3a7d49de2099dde6fed65293235d399582099d4ea5d910ad53337", "text": "let &(ref first_arm_pats, _) = &arms[0]; let first_pat = &first_arm_pats[0]; let span = first_pat.span;  span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\");   struct_span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\") .span_label(span, &format!(\"irrefutable pattern\")) .emit();  }, hir::MatchSource::ForLoopDesugar => {", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-8b08f0d27415a104fa48b934f0765a0355d43ca6fc67afd713dc36a5a957c834", "text": "tcx.sess.add_lint(lint::builtin::MATCH_OF_UNIT_VARIANT_VIA_PAREN_DOTDOT, pat.id, pat.span, msg); } else {  span_err!(tcx.sess, pat.span, E0164, \"{}\", msg);   struct_span_err!(tcx.sess, pat.span, E0164, \"{}\", msg) .span_label(pat.span, &format!(\"not a tuple variant or struct\")).emit();  on_error(); } };", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-ff8270d0215ddbada1cc300e82fad9e762a3ece3b4bbee04e4bcb0dce691dc6b", "text": ".emit(); } Err(CopyImplementationError::HasDestructor) => {  span_err!(tcx.sess, span, E0184,   struct_span_err!(tcx.sess, span, E0184,  \"the trait `Copy` may not be implemented for this type;   the type has a destructor\");   the type has a destructor\") .span_label(span, &format!(\"Copy not allowed on types with destructors\")) .emit();  } } });", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-89b81bc2ced1e642ad37d83220c291f95b6679f99542f755cb9e538c06b880f2", "text": "fn bar(foo: Foo) -> u32 { match foo { Foo::B(i) => i, //~ ERROR E0164  //~| NOTE not a tuple variant or struct  } }", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-5ac9244c3a6c15fbdae0e13a10f6fd45a8109f654a92720dd619ed7e268a3979", "text": "fn main() { let irr = Irrefutable(0); while let Irrefutable(x) = irr { //~ ERROR E0165  //~| irrefutable pattern  // ... } }", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-2248a5835c777256553e41bdc9de1a961cdda64c90ad7ae16cedfbc4a4a55a52", "query": "From: src/test/compile- Error E0165 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-2c8aa37c87b67ffc2fb3043a235cfa0e3690b6a6bb14675adbdd13e85870289f", "text": "// except according to those terms. #[derive(Copy)] //~ ERROR E0184  //~| NOTE Copy not allowed on types with destructors //~| NOTE in this expansion of #[derive(Copy)]  struct Foo; impl Drop for Foo {", "commid": "rust_pr_36125"}], "negative_passages": []}
{"query_id": "q-en-rust-84fe10bf056d6814d3eda74c00d09770737f23d8e448b3d0bf98856cb4433638", "query": "From: src/test/compile- E0263 needs a span_label, updating it from: To: Bonus: underline and label the previous declaration:\nI'm on it.", "positive_passages": [{"docid": "doc-en-rust-38a9cbb8d7784ac6e58b0f388834f185efbcbac0f357b13904c143e24fa94b01", "text": "let lifetime_j = &lifetimes[j]; if lifetime_i.lifetime.name == lifetime_j.lifetime.name {  span_err!(self.sess, lifetime_j.lifetime.span, E0263, \"lifetime name `{}` declared twice in  the same scope\", lifetime_j.lifetime.name);   struct_span_err!(self.sess, lifetime_j.lifetime.span, E0263, \"lifetime name `{}` declared twice in the same scope\", lifetime_j.lifetime.name) .span_label(lifetime_j.lifetime.span, &format!(\"declared twice\")) .span_label(lifetime_i.lifetime.span, &format!(\"previous declaration here\")) .emit();  } }", "commid": "rust_pr_35557"}], "negative_passages": []}
{"query_id": "q-en-rust-84fe10bf056d6814d3eda74c00d09770737f23d8e448b3d0bf98856cb4433638", "query": "From: src/test/compile- E0263 needs a span_label, updating it from: To: Bonus: underline and label the previous declaration:\nI'm on it.", "positive_passages": [{"docid": "doc-en-rust-c092baab073064214101ca07b352e059950a1f86d7a817d05cdddd9420fd0404", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms.  fn foo<'a, 'b, 'a>(x: &'a str, y: &'b str) { } //~ ERROR E0263   fn foo<'a, 'b, 'a>(x: &'a str, y: &'b str) { //~^ ERROR E0263 //~| NOTE declared twice //~| NOTE previous declaration here }  fn main() {}", "commid": "rust_pr_35557"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-3e310c59d818cf186ba206df2ff602e523016faba098a47a19c1fc182368190f", "text": "\"tempfile\", \"thorin-dwp\", \"tracing\",  \"windows 0.46.0\",  ] [[package]]", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-d75178a812a0c4446607096a6908b31cffdac187df1d605ac77b17af7f47a77a", "text": "version = \"0.30.1\" default-features = false features = [\"read_core\", \"elf\", \"macho\", \"pe\", \"unaligned\", \"archive\", \"write\"]  [target.'cfg(windows)'.dependencies.windows] version = \"0.46.0\" features = [\"Win32_Globalization\"] ", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-6051f9024d454ecd6c0122ea5d2a3ff370f4690246f59e55aeb952e2ecabf673", "text": "if !prog.status.success() { let mut output = prog.stderr.clone(); output.extend_from_slice(&prog.stdout);  let escaped_output = escape_string(&output);   let escaped_output = escape_linker_output(&output, flavor);  // FIXME: Add UI tests for this error. let err = errors::LinkingFailed { linker_path: &linker_path,", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-e31fad30eb1a03dc28c092f0cd0313b70028c639e3f1d04d78cf3128ef1f4854", "text": "} }  #[cfg(not(windows))] fn escape_linker_output(s: &[u8], _flavour: LinkerFlavor) -> String { escape_string(s) } /// If the output of the msvc linker is not UTF-8 and the host is Windows, /// then try to convert the string from the OEM encoding. #[cfg(windows)] fn escape_linker_output(s: &[u8], flavour: LinkerFlavor) -> String { // This only applies to the actual MSVC linker. 
if flavour != LinkerFlavor::Msvc(Lld::No) { return escape_string(s); } match str::from_utf8(s) { Ok(s) => return s.to_owned(), Err(_) => match win::locale_byte_str_to_string(s, win::oem_code_page()) { Some(s) => s, // The string is not UTF-8 and isn't valid for the OEM code page None => format!(\"Non-UTF-8 output: {}\", s.escape_ascii()), }, } } /// Wrappers around the Windows API. #[cfg(windows)] mod win { use windows::Win32::Globalization::{ GetLocaleInfoEx, MultiByteToWideChar, CP_OEMCP, LOCALE_IUSEUTF8LEGACYOEMCP, LOCALE_NAME_SYSTEM_DEFAULT, LOCALE_RETURN_NUMBER, MB_ERR_INVALID_CHARS, }; /// Get the Windows system OEM code page. This is most notably the code page /// used for link.exe's output. pub fn oem_code_page() -> u32 { unsafe { let mut cp: u32 = 0; // We're using the `LOCALE_RETURN_NUMBER` flag to return a u32. // But the API requires us to pass the data as though it's a [u16] string. let len = std::mem::size_of::() / std::mem::size_of::(); let data = std::slice::from_raw_parts_mut(&mut cp as *mut u32 as *mut u16, len); let len_written = GetLocaleInfoEx( LOCALE_NAME_SYSTEM_DEFAULT, LOCALE_IUSEUTF8LEGACYOEMCP | LOCALE_RETURN_NUMBER, Some(data), ); if len_written as usize == len { cp } else { CP_OEMCP } } } /// Try to convert a multi-byte string to a UTF-8 string using the given code page /// The string does not need to be null terminated. /// /// This is implemented as a wrapper around `MultiByteToWideChar`. /// See  /// /// It will fail if the multi-byte string is longer than `i32::MAX` or if it contains /// any invalid bytes for the expected encoding. pub fn locale_byte_str_to_string(s: &[u8], code_page: u32) -> Option { // `MultiByteToWideChar` requires a length to be a \"positive integer\". if s.len() > isize::MAX as usize { return None; } // Error if the string is not valid for the expected code page. let flags = MB_ERR_INVALID_CHARS; // Call MultiByteToWideChar twice. // First to calculate the length then to convert the string. 
let mut len = unsafe { MultiByteToWideChar(code_page, flags, s, None) }; if len > 0 { let mut utf16 = vec![0; len as usize]; len = unsafe { MultiByteToWideChar(code_page, flags, s, Some(&mut utf16)) }; if len > 0 { return utf16.get(..len as usize).map(String::from_utf16_lossy); } } None } }  fn add_sanitizer_libraries(sess: &Session, crate_type: CrateType, linker: &mut dyn Linker) { // On macOS the runtimes are distributed as dylibs which should be linked to // both executables and dynamic shared objects. Everywhere else the runtimes", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-b77d2be166e4d19e95bd6f7660bfd6f24dcc63df48dc19b669e2897eaf9c2698", "text": " // build-fail // compile-flags:-C link-arg=m\u00e4rchenhaft // only-msvc // error-pattern:= note: LINK : fatal error LNK1181: // normalize-stderr-test \"(s*|n)s*= note: .*n\" -> \"$1\" pub fn main() {} ", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-b423d86be32781bbde83dc4f37b13beb499a9a18f09b245824818eb20605c688", "text": " error: linking with `link.exe` failed: exit code: 1181 | = note: LINK : fatal error LNK1181: cannot open input file 'm\u00e4rchenhaft.obj' error: aborting due to previous error ", "commid": "rust_pr_110586"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). 
The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-6c7d29d8abc506094a9c058991bd31b2177ab4f9307642b2e58704a4abe635e8", "text": "linker_error.emit(); if sess.target.target.options.is_like_msvc && linker_not_found {  sess.note_without_error(\"the msvc targets depend on the msvc linker  but `link.exe` was not found\"); sess.note_without_error(\"please ensure that VS 2013, VS 2015 or VS 2017  was installed with the Visual C++ option\");   sess.note_without_error( \"the msvc targets depend on the msvc linker  but `link.exe` was not found\", ); sess.note_without_error( \"please ensure that VS 2013, VS 2015, VS 2017 or VS 2019  was installed with the Visual C++ option\", );  } sess.abort_if_errors(); }", "commid": "rust_pr_62021"}], "negative_passages": []}
{"query_id": "q-en-rust-65b77b7e06ebe16e3cdb656b584dc7bc041446f7fa3f3531ba29eb2e11ea7c50", "query": "A localized (e.g. German) MSVC can produce non-UTF-8 output, which can become almost unreadable in the way it's currently forwarded to the user. This can be seen in an (otherwise unrelated) issue comment: Note especially this line: That output might be a lot longer for multiple LNK errors (one line per error, but the lines are not properly separated in the output, because they are converted to ) and become really hard to read. If possible, the output should be converted to Unicode in this case. (previously reported as ) NOTE from Please install Visual Studio English Language Pack side by side with your favorite language pack for UI\nI\u2019m surprised we try to interpret any output in Windows as UTF-8 as opposed to UTF-16, like we should.\nI suspect that the MSVC compiler's raw output is encoded as Windows-1252 or Windows-1250 (for German) depending on the current console code page and not as UTF-16. Does solve the encoding problem?\nNo, neither in nor in (usually I'm working with the latter). It prints but the encoding problems persist.\nIs it just MSVC which is localized, or is your system codepage itself different? If we know that a certain program's output is the system codepage we could use with to convert it. Perhaps we could check whether the output is UTF-8, and attempt to do the codepage conversion if it isn't.\nI am using a German localized Windows. How can I find out the system codepage? The default CP for console windows is \"850 (OEM - Multilingual Lateinisch I)\".\nYou can call  with or .\nThat returns codepage numbers 1252 for and 850 for .\nIn which case the output from the linker appears to be , so now someone just has to add code to rustc which detects when the linker output isn't utf-8 on windows and use with instead.\nI ran into this problem as well (also on a German Windows 10). 
As a workaround it helps if you go to in Win 10 to Settings -Region and Language -Language and add English as a language and make it default ( you may have to log out and back in again). After that, programs like should output using a locale that works with rust as it is right now. It would still be great if this could be fixed in rust :)\nJust saw another instance of this in , except with ) (a common encoding in China), where got printed instead of the desired .\nUsing nightly 2019-07-04, we can confirm that this issue is fixed. (I'm using Chinese version of Visual Studio).\nFor me, it still does not work even with nightly build (1.38 x86_64-pc-windows-msvc, Russian version of VS). I try to compile a Hello world project in Intellij Idea but always get the exception:\nCould you execute and see what that outputs?\nThat's a little strange. Could you open \"x64 Native Tools Command Prompt for Visual Studio 2019\" from start menu, and execute the following command: And see whether the output is English? If the output's still not in English, would you mind open \"Visual Studio Installer\" from start menu, choose \"Modify\" for the corresponding Visual Studio edition, and in the Language packs tab, install the English language pack. And try the instructions above again?\nthanks! The hint about installing the English language pack has made the previous error disappear. But now I have another message while running in Idea: I haven't changed the path to the std library, it's looks like\nYes, this is the same error as before, it's because you didn't install windows sdk. Choose the newest one in the visual studio installer too.\nthank you again. Now it's compiling and running. 
I'm happy ;) Just out of curiosity: was I supposed to know in advance that I would need an English lang pack and a Windows SDK when installing Visual Studio?\nPlease feel free to point out places where Rust tells you to install Visual Studio but does not tell you to install the Windows SDK, so that we can fix those places. It might still be worth implementing the text encoding version of the fix for people who don't have the English language pack, though I really wish we could just get unicode output out of VS without having to specify a language.\nsorry, seems I was a bit inattentive. Now I double-checked: after starting rustup- the screen info says, I should assure that I have Windows SDK installed. So my fault, nothing to fix there :) Thanks.\nThis is not a fix anymore.\nrefer to\nPlease make sure you have installed Visual Studio English Language Pack side by side with your favorite language pack for UI for this to work.\nI'm not sure this problem is really fixed. The PR that closed this kinda works around the issue because English is usually ASCII which is the same in UTF-8. However, as noted above, this relies on the Visual Studio English language pack being installed which it isn't always. And that there's no non-ascii output (e.g. file paths, etc). The suggestion to use with seems like a better fix, imho.", "positive_passages": [{"docid": "doc-en-rust-110741c6b5677bb67eda59812bd0aafcb890919d102f39238e5c95969a5b756b", "text": "target_family: Some(\"windows\".to_string()), is_like_windows: true, is_like_msvc: true,  // set VSLANG to 1033 can prevent link.exe from using // language packs, and avoid generating Non-UTF-8 error // messages if a link error occurred. link_env: vec![(\"VSLANG\".to_string(), \"1033\".to_string())],  pre_link_args: args, crt_static_allows_dylibs: true, crt_static_respected: true,", "commid": "rust_pr_62021"}], "negative_passages": []}
{"query_id": "q-en-rust-0483b61e9654161151d5b42d5b803a6be5ccea3c7401c4a0380da8c5c3e28b1c", "query": "Full code: Given a smart pointer, which only works on types implementing the trait or on trait objects, such that is allowed to coerce to , and the corresponding implementation : Applying the operator on two will try to coerce them to , even though this isn't needed. This is an issue since coercion moves the original pointer. Removing the implementation lets the code build, which means that the coercion is not needed. As , if the is restricted to pointers with identical types (), then no coercion occurs.  $DIR/issue-42060.rs:13:23 | LL |     let other: typeof(thing) = thing; //~ ERROR attempt to use a non-constant value in a constant |                       ^^^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/issue-42060.rs:19:13 | LL |     ::N //~ ERROR attempt to use a non-constant value in a constant |             ^ non-constant value error[E0516]: `typeof` is a reserved keyword but unimplemented --> $DIR/issue-42060.rs:13:16 | LL |     let other: typeof(thing) = thing; //~ ERROR attempt to use a non-constant value in a constant |                ^^^^^^^^^^^^^ reserved keyword error[E0516]: `typeof` is a reserved keyword but unimplemented --> $DIR/issue-42060.rs:19:6 | LL |     ::N //~ ERROR attempt to use a non-constant value in a constant |      ^^^^^^^^^ reserved keyword error: aborting due to 4 previous errors Some errors occurred: E0435, E0516. For more information about an error, try `rustc --explain E0435`. ", "commid": "rust_pr_52558"}], "negative_passages": []}
{"query_id": "q-en-rust-6e29eac33e11002c376e1848d2b2dc3ec4f2d88657a91b0ce5d7784f66efd71d", "query": "Even more minimal test case: The parser for some reason is expecting a pattern by end of the second .\nThe ICE is fixed in nightly.", "positive_passages": [{"docid": "doc-en-rust-82e2e207ac72d350587cde4e89f093f9fe113ab7e404a81f99d64d535f673d48", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { | } //~^ ERROR expected `|`, found `}` | //~^ ERROR expected item, found `|` ", "commid": "rust_pr_52558"}], "negative_passages": []}
{"query_id": "q-en-rust-6e29eac33e11002c376e1848d2b2dc3ec4f2d88657a91b0ce5d7784f66efd71d", "query": "Even more minimal test case: The parser for some reason is expecting a pattern by end of the second .\nThe ICE is fixed in nightly.", "positive_passages": [{"docid": "doc-en-rust-4e71d00751d53e41329683d008a5b01fdee1f2904b1082996b4b6dc321f8d361", "text": " error: expected `|`, found `}` --> $DIR/issue-43196.rs:13:1 | LL |     | |      - expected `|` here LL | } | ^ unexpected token error: expected item, found `|` --> $DIR/issue-43196.rs:15:1 | LL | | | ^ expected item error: aborting due to 2 previous errors ", "commid": "rust_pr_52558"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-df3d362b11230240f9621aeb21ec0f7741651165610b261d778a83ba12efd6f2", "text": "use tvec; use value::Value;  use super::MirContext;   use super::{MirContext, LocalRef};  use super::constant::const_scalar_checked_binop; use super::operand::{OperandRef, OperandValue}; use super::lvalue::LvalueRef;", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-4f67c1c402ed652ffca2f3f4a25bcad630cbbe057da44b43bab5660935a8f894", "text": "} mir::Rvalue::Len(ref lvalue) => {  let tr_lvalue = self.trans_lvalue(&bcx, lvalue);   let size = self.evaluate_array_len(&bcx, lvalue);  let operand = OperandRef {  val: OperandValue::Immediate(tr_lvalue.len(bcx.ccx)),   val: OperandValue::Immediate(size),  ty: bcx.tcx().types.usize, }; (bcx, operand)", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-9e548218d033d975c449d10e00be7fa96561bdff1ac49c328e3a9338ac7a1df7", "text": "} }  fn evaluate_array_len(&mut self, bcx: &Builder<'a, 'tcx>, lvalue: &mir::Lvalue<'tcx>) -> ValueRef { // ZST are passed as operands and require special handling // because trans_lvalue() panics if Local is operand. if let mir::Lvalue::Local(index) = *lvalue { if let LocalRef::Operand(Some(op)) = self.locals[index] { if common::type_is_zero_size(bcx.ccx, op.ty) { if let ty::TyArray(_, n) = op.ty.sty { return common::C_uint(bcx.ccx, n); } } } } // use common size calculation for non zero-sized types let tr_value = self.trans_lvalue(&bcx, lvalue); return tr_value.len(bcx.ccx); }  pub fn trans_scalar_binop(&mut self, bcx: &Builder<'a, 'tcx>, op: mir::BinOp,", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-ff3d0d1f9967b6655cfb6db452de61f802dc0c70856137ca311fce2372a9328e", "query": "This code: gives this ICE: Doesn't happen if you use e.g. instead of , or use instead of .\nMinimized to:\nFixed in .", "positive_passages": [{"docid": "doc-en-rust-44ae3f81bea7de82a8daa5e2465e063bbbf62931b384b9b24526faed6a52a717", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { &&[()][0]; println!(\"{:?}\", &[(),()][1]); } ", "commid": "rust_pr_44060"}], "negative_passages": []}
{"query_id": "q-en-rust-d18dca641ad3f6c34ea7506f36f33953bcd4db554e591ab5bdb6eaaf6d992e43", "query": "The compiler ices with error \"Failed to unify obligation\" when I try to compile the following code: I get the following error: : Backtrace:\nAnother version:\ntriage: p-medium\nminimal version Note that if you pass in the closure to it doesn't ICE the compiler when I used , bud did ICE the compiler when I used /. i.e.\nnone of the programs provided seem to ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-b02af5748bf13d8fda87d55c42490e3e009db81d7644ac87785fcd286bac3643", "text": " pub trait Trait<'a> { type Assoc; } pub struct Type; impl<'a> Trait<'a> for Type { type Assoc = (); } pub fn break_me(f: F) where T: for<'b> Trait<'b>, F: for<'b> FnMut(>::Assoc) { break_me::; //~^ ERROR: type mismatch in function arguments //~| ERROR: type mismatch resolving } fn main() {} ", "commid": "rust_pr_63397"}], "negative_passages": []}
{"query_id": "q-en-rust-d18dca641ad3f6c34ea7506f36f33953bcd4db554e591ab5bdb6eaaf6d992e43", "query": "The compiler ices with error \"Failed to unify obligation\" when I try to compile the following code: I get the following error: : Backtrace:\nAnother version:\ntriage: p-medium\nminimal version Note that if you pass in the closure to it doesn't ICE the compiler when I used , bud did ICE the compiler when I used /. i.e.\nnone of the programs provided seem to ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-b54b6bf53ae90e09021510892c64a36aa72024b01be7bb5e5ffcaa19d33a9dd1", "text": " error[E0631]: type mismatch in function arguments --> $DIR/issue-43623.rs:14:5 | LL |     break_me::; |     ^^^^^^^^^^^^^^^^^^^^^^^ |     | |     expected signature of `for<'b> fn(>::Assoc) -> _` |     found signature of `fn(_) -> _` | note: required by `break_me` --> $DIR/issue-43623.rs:11:1 | LL | / pub fn break_me(f: F) LL | | where T: for<'b> Trait<'b>, LL | |       F: for<'b> FnMut(>::Assoc) { LL | |     break_me::; LL | | LL | | LL | | } | |_^ error[E0271]: type mismatch resolving `for<'b> >::Assoc,)>>::Output == ()` --> $DIR/issue-43623.rs:14:5 | LL |     break_me::; |     ^^^^^^^^^^^^^^^^^^^^^^^ expected bound lifetime parameter 'b, found concrete lifetime | note: required by `break_me` --> $DIR/issue-43623.rs:11:1 | LL | / pub fn break_me(f: F) LL | | where T: for<'b> Trait<'b>, LL | |       F: for<'b> FnMut(>::Assoc) { LL | |     break_me::; LL | | LL | | LL | | } | |_^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0271`. ", "commid": "rust_pr_63397"}], "negative_passages": []}
{"query_id": "q-en-rust-d18dca641ad3f6c34ea7506f36f33953bcd4db554e591ab5bdb6eaaf6d992e43", "query": "The compiler ices with error \"Failed to unify obligation\" when I try to compile the following code: I get the following error: : Backtrace:\nAnother version:\ntriage: p-medium\nminimal version Note that if you pass in the closure to it doesn't ICE the compiler when I used , bud did ICE the compiler when I used /. i.e.\nnone of the programs provided seem to ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-a287460175c349e076f5602d4e710532246364c78c7325281e23afc0f946d8ac", "text": " use std::ops::Index; struct Test; struct Container(Test); impl Test { fn test(&mut self) {} } impl<'a> Index<&'a bool> for Container { type Output = Test; fn index(&self, _index: &'a bool) -> &Test { &self.0 } } fn main() { let container = Container(Test); let mut val = true; container[&mut val].test(); //~ ERROR: cannot borrow data } ", "commid": "rust_pr_63397"}], "negative_passages": []}
{"query_id": "q-en-rust-d18dca641ad3f6c34ea7506f36f33953bcd4db554e591ab5bdb6eaaf6d992e43", "query": "The compiler ices with error \"Failed to unify obligation\" when I try to compile the following code: I get the following error: : Backtrace:\nAnother version:\ntriage: p-medium\nminimal version Note that if you pass in the closure to it doesn't ICE the compiler when I used , bud did ICE the compiler when I used /. i.e.\nnone of the programs provided seem to ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-00891e61eb315fe9da6470c946fb3e0a06bca19d1a2a7d68ccf42209b2049300", "text": " error[E0596]: cannot borrow data in an index of `Container` as mutable --> $DIR/issue-44405.rs:21:5 | LL |     container[&mut val].test(); |     ^^^^^^^^^^^^^^^^^^^ cannot borrow as mutable | = help: trait `IndexMut` is required to modify indexed content, but it is not implemented for `Container` error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. ", "commid": "rust_pr_63397"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-814a159519b9cbe4f71b58ec7e6a7f0cf847515c84cb0ee7f214ca4d9f565ded", "text": "// Building with a static libstdc++ is only supported on linux right now, // not for MSVC or macOS if build.config.llvm_static_stdcpp &&  !target.contains(\"freebsd\") &&  !target.contains(\"windows\") && !target.contains(\"apple\") { cargo.env(\"LLVM_STATIC_STDCPP\",", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-d430cfb99b119c7a7b1fd7ffbadba16efc1a45ed734662981620c96334f582f6", "text": "libssl-dev  pkg-config  COPY dist-i686-freebsd/build-toolchain.sh /tmp/ RUN /tmp/build-toolchain.sh i686   COPY scripts/freebsd-toolchain.sh /tmp/ RUN /tmp/freebsd-toolchain.sh i686  COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh ENV  AR_i686_unknown_freebsd=i686-unknown-freebsd10-ar   CC_i686_unknown_freebsd=i686-unknown-freebsd10-gcc  CXX_i686_unknown_freebsd=i686-unknown-freebsd10-g++   CC_i686_unknown_freebsd=i686-unknown-freebsd10-clang  CXX_i686_unknown_freebsd=i686-unknown-freebsd10-clang++  ENV HOSTS=i686-unknown-freebsd", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-f43be0dd38b3adfdd81d77e16569ad2dac730954e961aea7c03052de7a95768c", "text": "FROM ubuntu:16.04 RUN apt-get update && apt-get install -y --no-install-recommends   g++    clang   make  file  curl ", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-cb739ac321b2600af9dca769b083f2af653cc2705f15c2af8f01c11be1811825", "text": "libssl-dev  pkg-config  COPY dist-x86_64-freebsd/build-toolchain.sh /tmp/ RUN /tmp/build-toolchain.sh x86_64   COPY scripts/freebsd-toolchain.sh /tmp/ RUN /tmp/freebsd-toolchain.sh x86_64  COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh ENV  AR_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-ar   CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-gcc  CXX_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-g++   CC_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-clang  CXX_x86_64_unknown_freebsd=x86_64-unknown-freebsd10-clang++  ENV HOSTS=x86_64-unknown-freebsd", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-e458fc54eefaa2caed11e4bce9257a5a3063f2040aac4893266093929989f446", "text": " #!/usr/bin/env bash # Copyright 2016 The Rust Project Developers. See the COPYRIGHT # file at the top-level directory of this distribution and at # http://rust-lang.org/COPYRIGHT. # # Licensed under the Apache License, Version 2.0  or the MIT license # , at your # option. This file may not be copied, modified, or distributed # except according to those terms. set -ex ARCH=$1 BINUTILS=2.25.1 GCC=6.4.0 hide_output() { set +x on_err=\" echo ERROR: An error was encountered with the build. cat /tmp/build.log exit 1 \" trap \"$on_err\" ERR bash -c \"while true; do sleep 30; echo $(date) - building ...; done\" & PING_LOOP_PID=$! $@ &> /tmp/build.log trap - ERR kill $PING_LOOP_PID set -x } mkdir binutils cd binutils # First up, build binutils curl https://ftp.gnu.org/gnu/binutils/binutils-$BINUTILS.tar.bz2 | tar xjf - mkdir binutils-build cd binutils-build hide_output ../binutils-$BINUTILS/configure  --target=$ARCH-unknown-freebsd10 hide_output make -j10 hide_output make install cd ../.. rm -rf binutils # Next, download the FreeBSD libc and relevant header files mkdir freebsd case \"$ARCH\" in x86_64) URL=ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/10.2-RELEASE/base.txz ;; i686) URL=ftp://ftp.freebsd.org/pub/FreeBSD/releases/i386/10.2-RELEASE/base.txz ;; esac curl $URL | tar xJf - -C freebsd ./usr/include ./usr/lib ./lib dst=/usr/local/$ARCH-unknown-freebsd10 cp -r freebsd/usr/include $dst/ cp freebsd/usr/lib/crt1.o $dst/lib cp freebsd/usr/lib/Scrt1.o $dst/lib cp freebsd/usr/lib/crti.o $dst/lib cp freebsd/usr/lib/crtn.o $dst/lib cp freebsd/usr/lib/libc.a $dst/lib cp freebsd/usr/lib/libutil.a $dst/lib cp freebsd/usr/lib/libutil_p.a $dst/lib cp freebsd/usr/lib/libm.a $dst/lib cp freebsd/usr/lib/librt.so.1 $dst/lib cp freebsd/usr/lib/libexecinfo.so.1 $dst/lib cp freebsd/lib/libc.so.7 $dst/lib cp freebsd/lib/libm.so.5 $dst/lib cp freebsd/lib/libutil.so.9 $dst/lib cp freebsd/lib/libthr.so.3 $dst/lib/libpthread.so ln -s libc.so.7 $dst/lib/libc.so ln -s libm.so.5 $dst/lib/libm.so ln -s librt.so.1 $dst/lib/librt.so ln -s libutil.so.9 $dst/lib/libutil.so ln -s libexecinfo.so.1 $dst/lib/libexecinfo.so rm -rf freebsd # Finally, download and build gcc to target FreeBSD mkdir gcc cd gcc curl https://ftp.gnu.org/gnu/gcc/gcc-$GCC/gcc-$GCC.tar.gz | tar xzf - cd gcc-$GCC ./contrib/download_prerequisites mkdir ../gcc-build cd ../gcc-build hide_output ../gcc-$GCC/configure  --enable-languages=c,c++  --target=$ARCH-unknown-freebsd10  --disable-multilib  --disable-nls  --disable-libgomp  --disable-libquadmath  --disable-libssp  --disable-libvtv  --disable-libcilkrts  --disable-libada  --disable-libsanitizer  --disable-libquadmath-support  --disable-lto hide_output make -j10 hide_output make install cd ../.. rm -rf gcc ", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-7eb4064eb5f35b9e9996e162c41bd72e4ff71c09389c24d1437a5c67120c9db1", "text": " #!/bin/bash # Copyright 2016-2017 The Rust Project Developers. See the COPYRIGHT # file at the top-level directory of this distribution and at # http://rust-lang.org/COPYRIGHT. # # Licensed under the Apache License, Version 2.0  or the MIT license # , at your # option. This file may not be copied, modified, or distributed # except according to those terms. set -eux arch=$1 binutils_version=2.25.1 freebsd_version=10.3 triple=$arch-unknown-freebsd10 sysroot=/usr/local/$triple hide_output() { set +x local on_err=\" echo ERROR: An error was encountered with the build. cat /tmp/build.log exit 1 \" trap \"$on_err\" ERR bash -c \"while true; do sleep 30; echo $(date) - building ...; done\" & local ping_loop_pid=$! $@ &> /tmp/build.log trap - ERR kill $ping_loop_pid set -x } # First up, build binutils mkdir binutils cd binutils curl https://ftp.gnu.org/gnu/binutils/binutils-${binutils_version}.tar.bz2 | tar xjf - mkdir binutils-build cd binutils-build hide_output ../binutils-${binutils_version}/configure  --target=\"$triple\" --with-sysroot=\"$sysroot\" hide_output make -j\"$(getconf _NPROCESSORS_ONLN)\" hide_output make install cd ../.. rm -rf binutils # Next, download the FreeBSD libraries and header files mkdir -p \"$sysroot\" case $arch in (x86_64) freebsd_arch=amd64 ;; (i686) freebsd_arch=i386 ;; esac files_to_extract=( \"./usr/include\" \"./usr/lib/*crt*.o\" ) # Try to unpack only the libraries the build needs, to save space. for lib in c cxxrt gcc_s m thr util; do files_to_extract=(\"${files_to_extract[@]}\" \"./lib/lib${lib}.*\" \"./usr/lib/lib${lib}.*\") done for lib in c++ c_nonshared compiler_rt execinfo gcc pthread rt ssp_nonshared; do files_to_extract=(\"${files_to_extract[@]}\" \"./usr/lib/lib${lib}.*\") done URL=https://download.freebsd.org/ftp/releases/${freebsd_arch}/${freebsd_version}-RELEASE/base.txz curl \"$URL\" | tar xJf - -C \"$sysroot\" --wildcards \"${files_to_extract[@]}\" # Fix up absolute symlinks from the system image.  This can be removed # for FreeBSD 11.  (If there's an easy way to make them relative # symlinks instead, feel free to change this.) set +x find \"$sysroot\" -type l | while read symlink_path; do symlink_target=$(readlink \"$symlink_path\") case $symlink_target in (/*) echo \"Fixing symlink ${symlink_path} -> ${sysroot}${symlink_target}\" >&2 ln -nfs \"${sysroot}${symlink_target}\" \"${symlink_path}\" ;; esac done set -x # Clang can do cross-builds out of the box, if we give it the right # flags.  (The local binutils seem to work, but they set the ELF # header \"OS/ABI\" (EI_OSABI) field to SysV rather than FreeBSD, so # there might be other problems.) # # The --target option is last because the cross-build of LLVM uses # --target without an OS version (\"-freebsd\" vs. \"-freebsd10\").  This # makes Clang default to libstdc++ (which no longer exists), and also # controls other features, like GNU-style symbol table hashing and # anything predicated on the version number in the __FreeBSD__ # preprocessor macro. for tool in clang clang++; do tool_path=/usr/local/bin/${triple}-${tool} cat > \"$tool_path\" <", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-2779eafb2ac6e0a9082d7a8aae9add135f589d5d5b723f4540b45d54e7cd7aa0", "query": "Steps to reproduce, on my FreeBSD/amd64 11.0 system: , where is: It's not 100%, but it usually happens after a few tries at most. I'll skip all the debugging and source-diving and just say that it's the same basic problem as : threads A and B are both in , thread A gets to first and s a lambda, then thread B enters and blocks in with the global mutex held, then thread A calls reentrantly on a different and deadlocks. But there might be a nicer solution than for , because has this patch: , on all platforms where they use Clang (apparently x86, little-endian ARM, and PPC), and 10.x is now the . appears to have a self-contained that doesn't use , and emprically the Rust 1.18.0 package available via doesn't deadlock. So, it might be enough to upstream that patch and update the build environment to 10.x if it isn't already.\nLooks like we're using 10.2 on ci:\nThanks for the pointer. I took a (much) closer look at this, and I've gotten the build more or less working using Ubuntu's regular Clang package, which can cross-compile out of the box. I'll send a PR once I've gotten things cleaned up and tested a bit more.\nSo I'm kind of stuck because, when I try to test locally, I run into unless I remove , which isn't maximally useful because I don't have Cargo and therefore can't build Cargo. This happens even without my changes, but the same container running on Travis CI seems to work, and I'm at a loss as to what differences there could be that would affect (apparently?) the type checker. I suppose I could just test what I can and send a PR and hope for the best, but if possible I'd prefer to understand what my local setup is doing wrong.\nNoticed this on 12-CURRENT, compilers from rustup always lock up when compiling , but it usually succeeds after a couple tries. It's nearly always !\nHmm, the rust from ports (that doesn't have this problem IIRC??) is built with bundled llvm by default\u2026", "positive_passages": [{"docid": "doc-en-rust-abe1491706e2d09c344da08af90d26ccd45ba88c8c13e7f70128ee7f0c2f6b92", "text": "let stdcppname = if target.contains(\"openbsd\") { // llvm-config on OpenBSD doesn't mention stdlib=libc++ \"c++\"  } else if target.contains(\"freebsd\") { \"c++\"  } else if target.contains(\"netbsd\") && llvm_static_stdcpp.is_some() { // NetBSD uses a separate library when relocation is required \"stdc++_pic\"", "commid": "rust_pr_46941"}], "negative_passages": []}
{"query_id": "q-en-rust-4bff0daebed0f135abe9b5be5a2abf74b3f937adf7e8de7b8f3dd32a675a2915", "query": "\"Cousins\" of borrowed union sub-fields (and their further children) are not marked as borrowed.\nThis is , though I sort of thought we had tried to fix this.\nI'm tagging this as WG-compiler-nll; I think it's something we should fix there.\nFor the sake of potential testsuite additions: this also should not compile with a simultaneous borrow of and .\nI'll take a look at this one.\nI'm not really sure what the bug is here, but I can explain what I think should be happening. This function: is responsible for determining when two values overlap. I expect us to say that two fields from a union are always considered to potentially overlap. In this case, and would be considered to overlap. Specifically, comparing those two fields would return : This is the code that I think should be responsible for doing that: It already looks a little fishy to me; in particular, it is returning if is a union, but I guess it should be equally true if is a union. Anyway, something here seems to be going awry!\nSo it seems like I forgot about how smart the compiler is now. This version of the test does error, as expected:\nI think the policy is that we will keep this bug open, but file it under\n(Since it is only fixed when NLL is enabled)\nshould this be ?\nNote that to get an NLL test failure, you have to use after declaring :\nNLL (migrate mode) is enabled in all editions as of PR . The only policy question is that, under migrate mode, we only emit a warning on this (unsound) test case. Therefore, I am not 100% sure whether we should close this until that warning has been turned into a hard-error under our (still in development) future-compatibility lint policy.", "positive_passages": [{"docid": "doc-en-rust-600abfd11f44b28bb2ffc038cd6865efa9d2e2d38b12e2572ed76586561eb90c", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0  or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![allow(unused)] #![feature(nll)] #[derive(Clone, Copy, Default)] struct S { a: u8, b: u8, } #[derive(Clone, Copy, Default)] struct Z { c: u8, d: u8, } union U { s: S, z: Z, } fn main() { unsafe { let mut u = U { s: Default::default() }; let mref = &mut u.s.a; *mref = 22; let nref = &u.z.c; //~^ ERROR cannot borrow `u.z.c` as immutable because it is also borrowed as mutable [E0502] println!(\"{} {}\", mref, nref) //~^ ERROR cannot borrow `u.s.a` as mutable because it is also borrowed as immutable [E0502] } } ", "commid": "rust_pr_47689"}], "negative_passages": []}
{"query_id": "q-en-rust-4bff0daebed0f135abe9b5be5a2abf74b3f937adf7e8de7b8f3dd32a675a2915", "query": "\"Cousins\" of borrowed union sub-fields (and their further children) are not marked as borrowed.\nThis is , though I sort of thought we had tried to fix this.\nI'm tagging this as WG-compiler-nll; I think it's something we should fix there.\nFor the sake of potential testsuite additions: this also should not compile with a simultaneous borrow of and .\nI'll take a look at this one.\nI'm not really sure what the bug is here, but I can explain what I think should be happening. This function: is responsible for determining when two values overlap. I expect us to say that two fields from a union are always considered to potentially overlap. In this case, and would be considered to overlap. Specifically, comparing those two fields would return : This is the code that I think should be responsible for doing that: It already looks a little fishy to me; in particular, it is returning if is a union, but I guess it should be equally true if is a union. Anyway, something here seems to be going awry!\nSo it seems like I forgot about how smart the compiler is now. This version of the test does error, as expected:\nI think the policy is that we will keep this bug open, but file it under\n(Since it is only fixed when NLL is enabled)\nshould this be ?\nNote that to get an NLL test failure, you have to use after declaring :\nNLL (migrate mode) is enabled in all editions as of PR . The only policy question is that, under migrate mode, we only emit a warning on this (unsound) test case. 
Therefore, I am not 100% sure whether we should close this until that warning has been turned into a hard-error under our (still in development) future-compatibility lint policy.", "positive_passages": [{"docid": "doc-en-rust-f836d4a94463aa8ae031f0f88ae407d5cc60ebcb9e101a29ca9c04e8a6f384a7", "text": " error[E0502]: cannot borrow `u.z.c` as immutable because it is also borrowed as mutable --> $DIR/issue-45157.rs:37:20 | 34 |         let mref = &mut u.s.a; |                    ---------- mutable borrow occurs here ... 37 |         let nref = &u.z.c; |                    ^^^^^^ immutable borrow occurs here error[E0502]: cannot borrow `u.s.a` as mutable because it is also borrowed as immutable --> $DIR/issue-45157.rs:39:27 | 37 |         let nref = &u.z.c; |                    ------ immutable borrow occurs here 38 |         //~^ ERROR cannot borrow `u.z.c` as immutable because it is also borrowed as mutable [E0502] 39 |         println!(\"{} {}\", mref, nref) |                           ^^^^ mutable borrow occurs here error: aborting due to 2 previous errors ", "commid": "rust_pr_47689"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-87ac04833daa9327e95f76fac97f1e4487675e9db2da2f77d8b0c5ebaa7aff21", "text": "[[package]] name = \"bitflags\"  version = \"0.8.2\" source = \"registry+https://github.com/rust-lang/crates.io-index\" [[package]] name = \"bitflags\"  version = \"0.9.1\" source = \"registry+https://github.com/rust-lang/crates.io-index\"", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-dc8f317de13abe52e63288ce62a368e745646ac1f00b40845f83e1fc4f58338e", "text": "[[package]] name = \"pulldown-cmark\"  version = \"0.0.14\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"bitflags 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]] name = \"pulldown-cmark\"  version = \"0.0.15\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-0d2b9be350dbea0ffe71875b2643413cbd75b05cf8bc0fd3b2b3fd859782d124", "text": "\"env_logger 0.4.3 (registry+https://github.com/rust-lang/crates.io-index)\", \"html-diff 0.0.4 (registry+https://github.com/rust-lang/crates.io-index)\", \"log 0.3.8 (registry+https://github.com/rust-lang/crates.io-index)\",  \"pulldown-cmark 0.0.14 (registry+https://github.com/rust-lang/crates.io-index)\",   \"pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\",  ] [[package]]", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-64ee7aff51da1c035cec665b1ad6d61948954f7ab089de3bed35073bf0be7385", "text": "\"checksum backtrace 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"99f2ce94e22b8e664d95c57fff45b98a966c2252b60691d0b7aeeccd88d70983\" \"checksum backtrace-sys 0.1.14 (registry+https://github.com/rust-lang/crates.io-index)\" = \"c63ea141ef8fdb10409d0f5daf30ac51f84ef43bff66f16627773d2a292cd189\" \"checksum bitflags 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"aad18937a628ec6abcd26d1489012cc0e18c21798210f491af69ded9b881106d\"  \"checksum bitflags 0.8.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"1370e9fc2a6ae53aea8b7a5110edbd08836ed87c88736dfabccade1c2b44bff4\"  \"checksum bitflags 0.9.1 (registry+https://github.com/rust-lang/crates.io-index)\" = \"4efd02e230a02e18f92fc2735f44597385ed02ad8f831e7c1c1156ee5e1ab3a5\" \"checksum bitflags 1.0.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f5cde24d1b2e2216a726368b2363a273739c91f4e3eb4e0dd12d672d396ad989\" \"checksum bufstream 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f2f382711e76b9de6c744cc00d0497baba02fb00a787f088c879f01d09468e32\"", 
"commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-3efae0781d9bea1f58fd26275198b7fd5ede569569c5373638cad8895b714b48", "text": "\"checksum precomputed-hash 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"cdf1fc3616b3ef726a847f2cd2388c646ef6a1f1ba4835c2629004da48184150\" \"checksum procedural-masquerade 0.1.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"c93cdc1fb30af9ddf3debc4afbdb0f35126cbd99daa229dd76cdd5349b41d989\" \"checksum psapi-sys 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"abcd5d1a07d360e29727f757a9decb3ce8bc6e0efa8969cfaad669a8317a2478\"  \"checksum pulldown-cmark 0.0.14 (registry+https://github.com/rust-lang/crates.io-index)\" = \"d9ab1e588ef8efd702c7ed9d2bd774db5e6f4d878bb5a1a9f371828fbdff6973\"  \"checksum pulldown-cmark 0.0.15 (registry+https://github.com/rust-lang/crates.io-index)\" = \"378e941dbd392c101f2cb88097fa4d7167bc421d4b88de3ff7dbee503bc3233b\" \"checksum pulldown-cmark 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"a656fdb8b6848f896df5e478a0eb9083681663e37dcb77dd16981ff65329fe8b\" \"checksum quick-error 1.2.1 (registry+https://github.com/rust-lang/crates.io-index)\" = 
\"eda5fe9b71976e62bc81b781206aaa076401769b2143379d3eb2118388babac4\"", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-a243f1fb1d7282a33e98f2b9ba8242021d6edd4790a1108ddea1a68e6a4f366d", "text": "[dependencies] env_logger = { version = \"0.4\", default-features = false } log = \"0.3\"  pulldown-cmark = { version = \"0.0.14\", default-features = false }   pulldown-cmark = { version = \"0.1.0\", default-features = false }  html-diff = \"0.0.4\" [build-dependencies]", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-9bfbd62dd3e8c6443cbe0215b081fa95b7eb60020c3323f3286d56f2b3fcb23f", "text": "match self.inner.next() { Some(Event::FootnoteReference(ref reference)) => { let entry = self.get_entry(&reference);  let reference = format!(\"{0}   let reference = format!(\"{0}  \", (*entry).1); return Some(Event::Html(reference.into()));", "commid": "rust_pr_45421"}], "negative_passages": []}
{"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-dfca98d48cf15ae7508025d30e3a140507ba05b3db6ed288a04794ba2d17a018", "text": "v.sort_by(|a, b| a.1.cmp(&b.1)); let mut ret = String::from(\"

    \"); for (mut content, id) in v { write!(ret, \"
  1. \", id).unwrap(); write!(ret, \"
  2. \", id).unwrap(); let mut is_paragraph = false; if let Some(&Event::End(Tag::Paragraph)) = content.last() { content.pop();", "commid": "rust_pr_45421"}], "negative_passages": []} {"query_id": "q-en-rust-6b3140c4ac37a443418f17da0cb04077cfc140e10f76134da465372bc66b86a3", "query": "generates different HTML under and not. Not only does this trigger what's effectively a spurious warning, but it doesn't render correctly either. cc , marking this as p-high\nThis was fixed by google/pulldown-cmark rustdoc just needs to update its version of pulldown-cmark.\nThe thread in that issue says it was released in 0.0.15. we're pulling in 0.0.14 for rustdoc, but also 0.1.0 for mdbook. I'll see if i can update the dependency for rustdoc and pull it in.\nupdates Pulldown for rustdoc and fixes this.\nI am not sure this actually fixes the warnings, as they use different IDs and such. I'm testing this locally right now though; expect an update in ~20 minutes", "positive_passages": [{"docid": "doc-en-rust-06d59849faf213f3043585eb7c5500a45bdae9610b669b7a58b0ad8b3845fc84", "text": "} html::push_html(&mut ret, content.into_iter()); write!(ret, \" \u21a9\", \" \u21a9\", id).unwrap(); if is_paragraph { ret.push_str(\"

    \");", "commid": "rust_pr_45421"}], "negative_passages": []} {"query_id": "q-en-rust-de9df3815052fb4d4f0b86178e17f6faccc696328a590af5ce0248acf6328f62", "query": "Hey, I'm curious about the . From what I understand: this works because Path is a single member struct containing only the type it's being reinterpreted as. My original question on #rust IRC was: Is this always guaranteed to be correct (in Rust) in terms of memory layout? The discussion seemed to hint at that this is in fact not a guarantee, but that std is in a unique position being developed in concert with the language and any future breakage would be patched when it occurs. I would love some clarification if this usage is correct or to what degree it is not. I'd also suggest we add clarification around this case with comments or support functions to aid future spelunking into std.\nNo, strictly speaking, this is not safe without .\nexcellent, thank you! The seems to indicate things going slowly. Adding a comment might still be a good idea for now. Reading the RFC, it mentions at least ARM64 where there might be layout differences. But according to , std shows up as supported as a Tier 2 platform. This implies that at least for now, this pattern in this instance () works. Is this correct?\nThat ARM64 issue seems to be related to calling convention, not memory layout, so it's not an issue here.\nThis is my understanding as well. I believe it is safe with either repr(transparent) or repr(C), the latter of which is stable. The crate is a generalization of this safe cast.", "positive_passages": [{"docid": "doc-en-rust-6fb9f940d966b57cb742063c69d9a40c04806dd4416bdd27f9d73d16d5639ceb", "text": "} // See note at the top of this module to understand why these are used: // // These casts are safe as OsStr is internally a wrapper around [u8] on all // platforms. 
// // Note that currently this relies on the special knowledge that libstd has; // these types are single-element structs but are not marked repr(transparent) // or repr(C) which would make these casts allowable outside std. fn os_str_as_u8_slice(s: &OsStr) -> &[u8] { unsafe { &*(s as *const OsStr as *const [u8]) } }", "commid": "rust_pr_67635"}], "negative_passages": []} {"query_id": "q-en-rust-888581a47955a17237fe4d92405e251b297168b89dff51ba03028fb200bf9668", "query": "The benchmarks for LinearMap and TreeMap should just be using the Map trait, but I encountered a strange borrow checking issue: diff: error: This seems like a borrowck bug, but I'm not sure. The same issue occurs when trying to work around it with a trait like with a method.", "positive_passages": [{"docid": "doc-en-rust-460114894fb10599b720633abe9819bc613d1f42e13bd34517ad5a88d7f5faa7", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. //", "commid": "rust_pr_5476"}], "negative_passages": []} {"query_id": "q-en-rust-888581a47955a17237fe4d92405e251b297168b89dff51ba03028fb200bf9668", "query": "The benchmarks for LinearMap and TreeMap should just be using the Map trait, but I encountered a strange borrow checking issue: diff: error: This seems like a borrowck bug, but I'm not sure. The same issue occurs when trying to work around it with a trait like with a method.", "positive_passages": [{"docid": "doc-en-rust-d2b9a4fcf51ea22676a3f2c8bf063af47cb3cf4f7a58c04fd8000b8fc9201a5f", "text": "// except according to those terms. 
extern mod std; use std::oldmap; use std::treemap::TreeMap; use core::hashmap::linear::*; use core::io::WriterUtil; struct Results { sequential_ints: float, random_ints: float, delete_ints: float, sequential_strings: float, random_strings: float, delete_strings: float } fn timed(result: &mut float, op: &fn()) { let start = std::time::precise_time_s(); op(); let end = std::time::precise_time_s(); *result = (end - start); use core::io; use std::time; use std::treemap::TreeMap; use core::hashmap::linear::{LinearMap, LinearSet}; use core::trie::TrieMap; fn timed(label: &str, f: &fn()) { let start = time::precise_time_s(); f(); let end = time::precise_time_s(); io::println(fmt!(\" %s: %f\", label, end - start)); } fn old_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let map = oldmap::HashMap(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } fn ascending>(map: &mut M, n_keys: uint) { io::println(\" Ascending integers:\"); for uint::range(0, num_keys) |i| { fail_unless!(map.get(&i) == i+1); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(i, i + 1); } } { let map = oldmap::HashMap(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i + 1)); } } { let map = oldmap::HashMap(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut results.delete_ints) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&i)); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&i)); } } } fn old_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let map = oldmap::HashMap(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); map.insert(s, i); } fn descending>(map: &mut M, n_keys: uint) { io::println(\" Descending 
integers:\"); for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.get(&s) == i); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(i, i + 1); } } { let map = oldmap::HashMap(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i + 1)); } } { let map = oldmap::HashMap(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&i)); } } } fn linear_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = LinearMap::new(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } fn vector>(map: &mut M, n_keys: uint, dist: &[uint]) { for uint::range(0, num_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i+1)); } do timed(\"insert\") { for uint::range(0, n_keys) |i| { map.insert(dist[i], i + 1); } } { let mut map = LinearMap::new(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } do timed(\"search\") { for uint::range(0, n_keys) |i| { fail_unless!(map.find(&dist[i]).unwrap() == &(i + 1)); } } { let mut map = LinearMap::new(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut results.delete_ints) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&i)); } do timed(\"remove\") { for uint::range(0, n_keys) |i| { fail_unless!(map.remove(&dist[i])); } } } fn linear_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = LinearMap::new(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); 
map.insert(s, i); } for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.find(&s).unwrap() == &i); } fn main() { let args = os::args(); let n_keys = { if args.len() == 2 { uint::from_str(args[1]).get() } else { 1000000 } } }; { let mut map = LinearMap::new(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } } } let mut rand = vec::with_capacity(n_keys); { let mut map = LinearMap::new(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); let rng = core::rand::seeded_rng([1, 1, 1, 1, 1, 1, 1]); let mut set = LinearSet::new(); while set.len() != n_keys { let next = rng.next() as uint; if set.insert(next) { rand.push(next); } } } } fn tree_int_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = TreeMap::new(); do timed(&mut results.sequential_ints) { for uint::range(0, num_keys) |i| { map.insert(i, i+1); } io::println(fmt!(\"%? 
keys\", n_keys)); for uint::range(0, num_keys) |i| { fail_unless!(map.find(&i).unwrap() == &(i+1)); } } } io::println(\"nTreeMap:\"); { let mut map = TreeMap::new(); do timed(&mut results.random_ints) { for uint::range(0, num_keys) |i| { map.insert(rng.next() as uint, i); } } let mut map = TreeMap::new::(); ascending(&mut map, n_keys); } { let mut map = TreeMap::new(); for uint::range(0, num_keys) |i| { map.insert(i, i);; } do timed(&mut results.delete_ints) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&i)); } } let mut map = TreeMap::new::(); descending(&mut map, n_keys); } } fn tree_str_benchmarks(rng: @rand::Rng, num_keys: uint, results: &mut Results) { { let mut map = TreeMap::new(); do timed(&mut results.sequential_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(i); map.insert(s, i); } for uint::range(0, num_keys) |i| { let s = uint::to_str(i); fail_unless!(map.find(&s).unwrap() == &i); } } io::println(\" Random integers:\"); let mut map = TreeMap::new::(); vector(&mut map, n_keys, rand); } io::println(\"nLinearMap:\"); { let mut map = TreeMap::new(); do timed(&mut results.random_strings) { for uint::range(0, num_keys) |i| { let s = uint::to_str(rng.next() as uint); map.insert(s, i); } } let mut map = LinearMap::new::(); ascending(&mut map, n_keys); } { let mut map = TreeMap::new(); for uint::range(0, num_keys) |i| { map.insert(uint::to_str(i), i); } do timed(&mut results.delete_strings) { for uint::range(0, num_keys) |i| { fail_unless!(map.remove(&uint::to_str(i))); } } let mut map = LinearMap::new::(); descending(&mut map, n_keys); } } fn write_header(header: &str) { io::stdout().write_str(header); io::stdout().write_str(\"n\"); } fn write_row(label: &str, value: float) { io::stdout().write_str(fmt!(\"%30s %f sn\", label, value)); } fn write_results(label: &str, results: &Results) { write_header(label); write_row(\"sequential_ints\", results.sequential_ints); write_row(\"random_ints\", results.random_ints); 
write_row(\"delete_ints\", results.delete_ints); write_row(\"sequential_strings\", results.sequential_strings); write_row(\"random_strings\", results.random_strings); write_row(\"delete_strings\", results.delete_strings); } fn empty_results() -> Results { Results { sequential_ints: 0f, random_ints: 0f, delete_ints: 0f, sequential_strings: 0f, random_strings: 0f, delete_strings: 0f, { io::println(\" Random integers:\"); let mut map = LinearMap::new::(); vector(&mut map, n_keys, rand); } } fn main() { let args = os::args(); let num_keys = { if args.len() == 2 { uint::from_str(args[1]).get() } else { 100 // woefully inadequate for any real measurement } }; let seed = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; io::println(\"nTrieMap:\"); { let rng = rand::seeded_rng(seed); let mut results = empty_results(); old_int_benchmarks(rng, num_keys, &mut results); old_str_benchmarks(rng, num_keys, &mut results); write_results(\"std::oldmap::HashMap\", &results); let mut map = TrieMap::new::(); ascending(&mut map, n_keys); } { let rng = rand::seeded_rng(seed); let mut results = empty_results(); linear_int_benchmarks(rng, num_keys, &mut results); linear_str_benchmarks(rng, num_keys, &mut results); write_results(\"core::hashmap::linear::LinearMap\", &results); let mut map = TrieMap::new::(); descending(&mut map, n_keys); } { let rng = rand::seeded_rng(seed); let mut results = empty_results(); tree_int_benchmarks(rng, num_keys, &mut results); tree_str_benchmarks(rng, num_keys, &mut results); write_results(\"std::treemap::TreeMap\", &results); io::println(\" Random integers:\"); let mut map = TrieMap::new::(); vector(&mut map, n_keys, rand); } }", "commid": "rust_pr_5476"}], "negative_passages": []} {"query_id": "q-en-rust-048e8bc0a7141491aa38a90cc2a49477bac51c738aeed492218a9d2c99d11a32", "query": "Implemented in .\nDumping this link here for future reference:\nClosing since is stable in nightly and will make its way to stable in 1.26.0.\nI was confused why this was allowed to stabilize, but 
it appears that was fixed?\nThe mechanism doesn't work anymore at least. Should we escalate this to a new issue with P-high?\nNope, just leaving a historical note for anyone else who was confused. Although if anyone knows when this was fixed (and if a test was ?) that would be good to note.", "positive_passages": [{"docid": "doc-en-rust-b8175fa287fd65e467b4790edc5f04bb11396b43690cfce8beaa3b8046bcc696", "text": "/// Simple usage: /// /// ``` /// #![feature(box_leak)] /// /// fn main() { /// let x = Box::new(41); /// let static_ref: &'static mut usize = Box::leak(x);", "commid": "rust_pr_48110"}], "negative_passages": []} {"query_id": "q-en-rust-048e8bc0a7141491aa38a90cc2a49477bac51c738aeed492218a9d2c99d11a32", "query": "Implemented in .\nDumping this link here for future reference:\nClosing since is stable in nightly and will make its way to stable in 1.26.0.\nI was confused why this was allowed to stabilize, but it appears that was fixed?\nThe mechanism doesn't work anymore at least. Should we escalate this to a new issue with P-high?\nNope, just leaving a historical note for anyone else who was confused. Although if anyone knows when this was fixed (and if a test was ?) that would be good to note.", "positive_passages": [{"docid": "doc-en-rust-8249ce9d6c380064acf32b954380f5137c30349f016c3fdc5b8e433305b88ee7", "text": "/// Unsized data: /// /// ``` /// #![feature(box_leak)] /// /// fn main() { /// let x = vec![1, 2, 3].into_boxed_slice(); /// let static_ref = Box::leak(x);", "commid": "rust_pr_48110"}], "negative_passages": []} {"query_id": "q-en-rust-048e8bc0a7141491aa38a90cc2a49477bac51c738aeed492218a9d2c99d11a32", "query": "Implemented in .\nDumping this link here for future reference:\nClosing since is stable in nightly and will make its way to stable in 1.26.0.\nI was confused why this was allowed to stabilize, but it appears that was fixed?\nThe mechanism doesn't work anymore at least. 
Should we escalate this to a new issue with P-high?\nNope, just leaving a historical note for anyone else who was confused. Although if anyone knows when this was fixed (and if a test was ?) that would be good to note.", "positive_passages": [{"docid": "doc-en-rust-2d74c0eafb3963f8f8e1ee8446b8608ee2ec207c4dbb1dab916d500538b2bf32", "text": "/// assert_eq!(*static_ref, [4, 2, 3]); /// } /// ``` #[unstable(feature = \"box_leak\", reason = \"needs an FCP to stabilize\", issue = \"46179\")] #[stable(feature = \"box_leak\", since = \"1.26.0\")] #[inline] pub fn leak<'a>(b: Box) -> &'a mut T where", "commid": "rust_pr_48110"}], "negative_passages": []} {"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-51214220e5b31595de0c37197d6723d6af4f122c1a6f8eba888a9fed6edc945c", "text": "return; } let (_, line_hi, col_hi) = match ctx.byte_pos_to_line_and_col(span.hi) { Some(pos) => pos, None => { Hash::hash(&TAG_INVALID_SPAN, hasher); span.ctxt.hash_stable(ctx, hasher); return; } }; Hash::hash(&TAG_VALID_SPAN, hasher); // We truncate the stable ID hash and line and column numbers. The chances // of causing a collision this way should be minimal. 
Hash::hash(&(file_lo.name_hash as u64), hasher); let col = (col_lo.0 as u64) & 0xFF; let line = ((line_lo as u64) & 0xFF_FF_FF) << 8; let len = ((span.hi - span.lo).0 as u64) << 32; let line_col_len = col | line | len; Hash::hash(&line_col_len, hasher); // Hash both the length and the end location (line/column) of a span. If we // hash only the length, for example, then two otherwise equal spans with // different end locations will have the same hash. This can cause a problem // during incremental compilation wherein a previous result for a query that // depends on the end location of a span will be incorrectly reused when the // end location of the span it depends on has changed (see issue #74890). A // similar analysis applies if some query depends specifically on the length // of the span, but we only hash the end location. So hash both. let col_lo_trunc = (col_lo.0 as u64) & 0xFF; let line_lo_trunc = ((line_lo as u64) & 0xFF_FF_FF) << 8; let col_hi_trunc = (col_hi.0 as u64) & 0xFF << 32; let line_hi_trunc = ((line_hi as u64) & 0xFF_FF_FF) << 40; let col_line = col_lo_trunc | line_lo_trunc | col_hi_trunc | line_hi_trunc; let len = (span.hi - span.lo).0; Hash::hash(&col_line, hasher); Hash::hash(&len, hasher); span.ctxt.hash_stable(ctx, hasher); } }", "commid": "rust_pr_76256"}], "negative_passages": []} {"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. 
linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-44765efb3f3d67d33668bb98e690f398fa812059fa50e353eef1c76dc7e155a4", "text": " include ../../run-make-fulldeps/tools.mk # FIXME https://github.com/rust-lang/rust/issues/78911 # ignore-32bit wrong/no cross compiler and sometimes we pass wrong gcc args (-m64) # Tests that we don't ICE during incremental compilation after modifying a # function span such that its previous end line exceeds the number of lines # in the new file, but its start line/column and length remain the same. SRC=$(TMPDIR)/src INCR=$(TMPDIR)/incr all: mkdir $(SRC) mkdir $(INCR) cp a.rs $(SRC)/main.rs $(RUSTC) -C incremental=$(INCR) $(SRC)/main.rs cp b.rs $(SRC)/main.rs $(RUSTC) -C incremental=$(INCR) $(SRC)/main.rs ", "commid": "rust_pr_76256"}], "negative_passages": []} {"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-2dd9dafe076fbf295bc70c7f860aa6e3ed726228d3f4d85520467c716799d72e", "text": " fn main() { // foo must be used. foo(); } // For this test to operate correctly, foo's body must start on exactly the same // line and column and have the exact same length in bytes in a.rs and b.rs. In // a.rs, the body must end on a line number which does not exist in b.rs. // Basically, avoid modifying this file, including adding or removing whitespace! 
fn foo() { assert_eq!(1, 1); } ", "commid": "rust_pr_76256"}], "negative_passages": []} {"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-ca230d5e18bba74e60b1cdfff1714f362c7735739b7fce812a2052404d595c44", "text": " fn main() { // foo must be used. foo(); } // For this test to operate correctly, foo's body must start on exactly the same // line and column and have the exact same length in bytes in a.rs and b.rs. In // a.rs, the body must end on a line number which does not exist in b.rs. // Basically, avoid modifying this file, including adding or removing whitespace! fn foo() { assert_eq!(1, 1);//// } ", "commid": "rust_pr_76256"}], "negative_passages": []} {"query_id": "q-en-rust-eb3a5a04dc565f82620b41f30effb44b9a03962f5bb8b5c03e83f069f107b6f6", "query": "Index out of bounds ICE while compiling . I don't know enough about why this could happen to produce a minimal reproduction. Backtrace:\nhere's a test case. i got this ICE by random (un)luck and i can't remove any further code or it won't reproduce. at least it's a single file. cannot reproduce with rustc, must be called from cargo. Cargo toml has no dependencies tho\nat least i think its related, since its in CacheDecoder as well\ncannot reproduce after cargo clean, so attached is the entire project. 
linux x86_64 rustc 1.26.2 ( 2018-06-01)", "positive_passages": [{"docid": "doc-en-rust-fa86e930fb387826ec73795c6b563d7f243349db35a973d5ee234aa8d9960cda", "text": "include ../../run-make-fulldeps/tools.mk # FIXME https://github.com/rust-lang/rust/issues/78911 # ignore-32bit wrong/no cross compiler and sometimes we pass wrong gcc args (-m64) all: foo", "commid": "rust_pr_76256"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-d26cc4e78fb8a3c07b6c09839d3455856b5189a06f5206dd567e636412a55aa1", "text": "let (span, e) = self.interpolated_or_expr_span(e)?; (lo.to(span), ExprKind::Box(e)) } _ => return self.parse_dot_or_call_expr(Some(attrs)) token::Ident(..) if self.token.is_ident_named(\"not\") => { // `not` is just an ordinary identifier in Rust-the-language, // but as `rustc`-the-compiler, we can issue clever diagnostics // for confused users who really want to say `!` let token_cannot_continue_expr = |t: &token::Token| match *t { // These tokens can start an expression after `!`, but // can't continue an expression after an ident token::Ident(ident, is_raw) => token::ident_can_begin_expr(ident, is_raw), token::Literal(..) | token::Pound => true, token::Interpolated(ref nt) => match nt.0 { token::NtIdent(..) | token::NtExpr(..) | token::NtBlock(..) | token::NtPath(..) => true, _ => false, }, _ => false }; let cannot_continue_expr = self.look_ahead(1, token_cannot_continue_expr); if cannot_continue_expr { self.bump(); // Emit the error ... 
let mut err = self.diagnostic() .struct_span_err(self.span, &format!(\"unexpected {} after identifier\", self.this_token_descr())); // span the `not` plus trailing whitespace to avoid // trailing whitespace after the `!` in our suggestion let to_replace = self.sess.codemap() .span_until_non_whitespace(lo.to(self.span)); err.span_suggestion_short(to_replace, \"use `!` to perform logical negation\", \"!\".to_owned()); err.emit(); // \u2014and recover! (just as if we were in the block // for the `token::Not` arm) let e = self.parse_prefix_expr(None); let (span, e) = self.interpolated_or_expr_span(e)?; (lo.to(span), self.mk_unary(UnOp::Not, e)) } else { return self.parse_dot_or_call_expr(Some(attrs)); } } _ => { return self.parse_dot_or_call_expr(Some(attrs)); } }; return Ok(self.mk_expr(lo.to(hi), ex, attrs)); }", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-9624e351cf4bad0520ea726b7c62ecbe9bd282479f3cd7913a1c02a042f1910f", "text": "// Which is valid in other languages, but not Rust. 
match self.parse_stmt_without_recovery(false) { Ok(Some(stmt)) => { if self.look_ahead(1, |t| t == &token::OpenDelim(token::Brace)) { // if the next token is an open brace (e.g., `if a b {`), the place- // inside-a-block suggestion would be more likely wrong than right return Err(e); } let mut stmt_span = stmt.span; // expand the span to include the semicolon, if it exists if self.eat(&token::Semi) {", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-7afa98cf0c3fc68fbc297f2ddd7842c43f00a33bcc0b8688c2ab41f17953f0fe", "text": "} } fn ident_can_begin_expr(ident: ast::Ident, is_raw: bool) -> bool { pub(crate) fn ident_can_begin_expr(ident: ast::Ident, is_raw: bool) -> bool { let ident_token: Token = Ident(ident, is_raw); !ident_token.is_reserved_ident() ||", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-fa4c0741eadf067ab39d78425020e16d8ef5bb7d52ad34532a14b5eb5ebf9368", "text": "self.lifetime().is_some() } /// Returns `true` if the token is a identifier whose name is the given /// string slice. pub fn is_ident_named(&self, name: &str) -> bool { match self.ident() { Some((ident, _)) => ident.name.as_str() == name, None => false } } /// Returns `true` if the token is a documentation comment. 
pub fn is_doc_comment(&self) -> bool { match *self {", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? meta: `", "positive_passages": [{"docid": "doc-en-rust-c1cf2c57629d64b5a14d8d63f5653bd0435a0a3d7741462ff686862e428fd799", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn gratitude() { let for_you = false; if not for_you { //~^ ERROR unexpected `for_you` after identifier println!(\"I couldn't\"); } } fn qualification() { let the_worst = true; while not the_worst { //~^ ERROR unexpected `the_worst` after identifier println!(\"still pretty bad\"); } } fn should_we() { let not = true; if not // lack of braces is [sic] println!(\"Then when?\"); //~^ ERROR expected `{`, found `; //~| ERROR unexpected `println` after identifier } fn sleepy() { let resource = not 2; //~^ ERROR unexpected `2` after identifier } fn main() { let be_smothered_out_before = true; let young_souls = not be_smothered_out_before; //~^ ERROR unexpected `be_smothered_out_before` after identifier } ", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-66bd0557e3adcbd9bc2e0338b5bc767c87f0007e5e61dd72343927c908c77916", "query": "this code currently emmits Could we detect explicitly if there are 2 unchained (via && or ) \"variables\" after an \"if\" and if the first one is \"not\" and explicitly suggest using \"!\" instead of \"not\" instead? 
meta: `", "positive_passages": [{"docid": "doc-en-rust-d4cabfe3953a9d01ee42fc62554ace4cd85c0233cfe9817e35891a3835786572", "text": " error: unexpected `for_you` after identifier --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:13:12 | LL | if not for_you { | ----^^^^^^^ | | | help: use `!` to perform logical negation error: unexpected `the_worst` after identifier --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:21:15 | LL | while not the_worst { | ----^^^^^^^^^ | | | help: use `!` to perform logical negation error: unexpected `println` after identifier --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:30:9 | LL | if not // lack of braces is [sic] | ----- help: use `!` to perform logical negation LL | println!(\"Then when?\"); | ^^^^^^^ error: expected `{`, found `;` --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:30:31 | LL | if not // lack of braces is [sic] | -- this `if` statement has a condition, but no block LL | println!(\"Then when?\"); | ^ error: unexpected `2` after identifier --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:36:24 | LL | let resource = not 2; | ----^ | | | help: use `!` to perform logical negation error: unexpected `be_smothered_out_before` after identifier --> $DIR/issue-46836-identifier-not-instead-of-negation.rs:42:27 | LL | let young_souls = not be_smothered_out_before; | ----^^^^^^^^^^^^^^^^^^^^^^^ | | | help: use `!` to perform logical negation error: aborting due to 6 previous errors ", "commid": "rust_pr_49258"}], "negative_passages": []} {"query_id": "q-en-rust-b652b2986b58acc1b404a614070114b68db4134dd82737c46ecfa844323d3f96", "query": "This is an issue extracted from which is caused by an issue that any OSX compilation with debuginfo ends up being nondeterministic. Specifically (currently known at least) the source of nondeterminism is that an mtime for an object file winds up in the final binary. 
It turns out this isn't really our fault (unfortunately that makes it harder to fix!). This can be reproduced with just C and a linker: Here we're using the exact same object file (with two timestamps) and we're seeing different linked artifacts. This is a source of bugs in programs that expect rustc to be deterministic (aka as was originally stated) and is something that we as rustc should probably fix. Unfortunately I don't really know of a fix for this myself. I'd be tempted to take a big hammer to the problem and deterministically set all mtime fields for objects going into the linker to a known fixed value, but that unfortunately doesn't fix the determinism for C code (whose objects we don't control) and also is probably too big of a hammer (if a build system uses the mtime of the object to control rebuilds it'd get mixed up). We could also use something like and reach in to the specific field and remove the actual data. I found it in a symbol section with the type (described in various documents online too apparently). We may be able to postprocess all output artifacts on OSX to maybe just zero out these fields unconditionally (or set them to something like May 15, 2015), although I'm not actually sure if this would be easy to do.\ncc cc (you're probably interested in this for the sccache ramifications like is)\nIf we used LLD for linking (), it would be possible to fix this in the linker (by providing a flag to ignore mtime). This would also fix (this part of) deterministic compilation for other languages as well.\nSource code for the darwin linker , but I have no idea whether they take patches. Maybe LLVM develpers know more. Most likely, Apple will switch to LLD eventually.\nI wonder what experts on deterministic builds ( ) can say about this.\nHm. I wonder why we haven't noticed this for Firefox builds? Maybe the version of the linker we're using has a patch to work around this? We're using for our builds.\nthat is indeed surprising! 
The source code there , but that may be getting postprocessed somewhere else perhaps.\nWe discussed this on IRC, and I suspect the reason is that nobody has actually tried to do unstripped reproducible Firefox builds for macOS (although I'm not 100% sure). The info in question are STABS entries used by dsymutil to link the debug info from the object files into the dSYM. This isn't critical for sccache currently, since it doesn't cache linker outputs.\nRelated: noticed that static archives are not reproducible on macOS because Apple's tool puts timestamps in the archive ().\nRight, I think the impact on sccache is similar to What I am seeing is that the .dylib non-determinism is causing unexpected cache misses in sccache since the dylibs end up being passed to rustc. Here is an example for :\nAh, right, proc macro crates!\nThis should be fixable by setting the in the env of the linker (libtool and ld64). For more context, see determinism blog post and commit for Chromium. I'm not sure of the first Xcode version that supports this though.\nIt should be harmless to set it even if it's not supported, you just won't get a deterministic binary. in the most recent ld64 source available, FWIW.", "positive_passages": [{"docid": "doc-en-rust-b531e525b0c28830d790c19742bdd0d42b2f13dcf4e9b22363db7c00b2d9d683", "text": "has_elf_tls: version >= (10, 7), abi_return_struct_as_int: true, emit_debug_gdb_scripts: false, // This environment variable is pretty magical but is intended for // producing deterministic builds. This was first discovered to be used // by the `ar` tool as a way to control whether or not mtime entries in // the archive headers were set to zero or not. It appears that // eventually the linker got updated to do the same thing and now reads // this environment variable too in recent versions. 
// // For some more info see the commentary on #47086 link_env: vec![(\"ZERO_AR_DATE\".to_string(), \"1\".to_string())], ..Default::default() } }", "commid": "rust_pr_71931"}], "negative_passages": []} {"query_id": "q-en-rust-b652b2986b58acc1b404a614070114b68db4134dd82737c46ecfa844323d3f96", "query": "This is an issue extracted from which is caused by the fact that any OSX compilation with debuginfo ends up being nondeterministic. Specifically, the (currently known, at least) source of nondeterminism is that an mtime for an object file winds up in the final binary.
We may be able to postprocess all output artifacts on OSX to maybe just zero out these fields unconditionally (or set them to something like May 15, 2015), although I'm not actually sure if this would be easy to do.\ncc cc (you're probably interested in this for the sccache ramifications like is)\nIf we used LLD for linking (), it would be possible to fix this in the linker (by providing a flag to ignore mtime). This would also fix (this part of) deterministic compilation for other languages as well.\nSource code for the darwin linker , but I have no idea whether they take patches. Maybe LLVM develpers know more. Most likely, Apple will switch to LLD eventually.\nI wonder what experts on deterministic builds ( ) can say about this.\nHm. I wonder why we haven't noticed this for Firefox builds? Maybe the version of the linker we're using has a patch to work around this? We're using for our builds.\nthat is indeed surprising! The source code there , but that may be getting postprocessed somewhere else perhaps.\nWe discussed this on IRC, and I suspect the reason is that nobody has actually tried to do unstripped reproducible Firefox builds for macOS (although I'm not 100% sure). The info in question are STABS entries used by dsymutil to link the debug info from the object files into the dSYM. This isn't critical for sccache currently, since it doesn't cache linker outputs.\nRelated: noticed that static archives are not reproducible on macOS because Apple's tool puts timestamps in the archive ().\nRight, I think the impact on sccache is similar to What I am seeing is that the .dylib non-determinism is causing unexpected cache misses in sccache since the dylibs end up being passed to rustc. Here is an example for :\nAh, right, proc macro crates!\nThis should be fixable by setting the in the env of the linker (libtool and ld64). For more context, see determinism blog post and commit for Chromium. 
I'm not sure of the first Xcode version that supports this though.\nIt should be harmless to set it even if it's not supported, you just won't get a deterministic binary. in the most recent ld64 source available, FWIW.", "positive_passages": [{"docid": "doc-en-rust-8be9023bfece88e4bc01d760106669b6a8a6366c9170301de0c7ec5f26e508de", "text": "# ignore-musl # ignore-windows # ignore-macos (rust-lang/rust#66568) # Objects are reproducible but their path is not. all: ", "commid": "rust_pr_71931"}], "negative_passages": []} {"query_id": "q-en-rust-b652b2986b58acc1b404a614070114b68db4134dd82737c46ecfa844323d3f96", "query": "This is an issue extracted from which is caused by an issue that any OSX compilation with debuginfo ends up being nondeterministic. Specifically (currently known at least) the source of nondeterminism is that an mtime for an object file winds up in the final binary. It turns out this isn't really our fault (unfortunately that makes it harder to fix!). This can be reproduced with just C and a linker: Here we're using the exact same object file (with two timestamps) and we're seeing different linked artifacts. This is a source of bugs in programs that expect rustc to be deterministic (aka as was originally stated) and is something that we as rustc should probably fix. Unfortunately I don't really know of a fix for this myself. I'd be tempted to take a big hammer to the problem and deterministically set all mtime fields for objects going into the linker to a known fixed value, but that unfortunately doesn't fix the determinism for C code (whose objects we don't control) and also is probably too big of a hammer (if a build system uses the mtime of the object to control rebuilds it'd get mixed up). We could also use something like and reach in to the specific field and remove the actual data. I found it in a symbol section with the type (described in various documents online too apparently). 
We may be able to postprocess all output artifacts on OSX to maybe just zero out these fields unconditionally (or set them to something like May 15, 2015), although I'm not actually sure if this would be easy to do.\ncc cc (you're probably interested in this for the sccache ramifications like is)\nIf we used LLD for linking (), it would be possible to fix this in the linker (by providing a flag to ignore mtime). This would also fix (this part of) deterministic compilation for other languages as well.\nSource code for the darwin linker , but I have no idea whether they take patches. Maybe LLVM develpers know more. Most likely, Apple will switch to LLD eventually.\nI wonder what experts on deterministic builds ( ) can say about this.\nHm. I wonder why we haven't noticed this for Firefox builds? Maybe the version of the linker we're using has a patch to work around this? We're using for our builds.\nthat is indeed surprising! The source code there , but that may be getting postprocessed somewhere else perhaps.\nWe discussed this on IRC, and I suspect the reason is that nobody has actually tried to do unstripped reproducible Firefox builds for macOS (although I'm not 100% sure). The info in question are STABS entries used by dsymutil to link the debug info from the object files into the dSYM. This isn't critical for sccache currently, since it doesn't cache linker outputs.\nRelated: noticed that static archives are not reproducible on macOS because Apple's tool puts timestamps in the archive ().\nRight, I think the impact on sccache is similar to What I am seeing is that the .dylib non-determinism is causing unexpected cache misses in sccache since the dylibs end up being passed to rustc. Here is an example for :\nAh, right, proc macro crates!\nThis should be fixable by setting the in the env of the linker (libtool and ld64). For more context, see determinism blog post and commit for Chromium. 
I'm not sure of the first Xcode version that supports this though.\nIt should be harmless to set it even if it's not supported, you just won't get a deterministic binary. in the most recent ld64 source available, FWIW.", "positive_passages": [{"docid": "doc-en-rust-82e04b7a5a03d5180740161a60dacaa1cb036288f719ec2c57f0e9f5952973b0", "text": "rm -rf $(TMPDIR) && mkdir $(TMPDIR) $(RUSTC) reproducible-build-aux.rs $(RUSTC) reproducible-build.rs --crate-type rlib --sysroot $(shell $(RUSTC) --print sysroot) --remap-path-prefix=$(shell $(RUSTC) --print sysroot)=/sysroot cp -r $(shell $(RUSTC) --print sysroot) $(TMPDIR)/sysroot cp -R $(shell $(RUSTC) --print sysroot) $(TMPDIR)/sysroot cp $(TMPDIR)/libreproducible_build.rlib $(TMPDIR)/libfoo.rlib $(RUSTC) reproducible-build.rs --crate-type rlib --sysroot $(TMPDIR)/sysroot --remap-path-prefix=$(TMPDIR)/sysroot=/sysroot cmp \"$(TMPDIR)/libreproducible_build.rlib\" \"$(TMPDIR)/libfoo.rlib\" || exit 1", "commid": "rust_pr_71931"}], "negative_passages": []} {"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! 
Fixed in .", "positive_passages": [{"docid": "doc-en-rust-e1bac876801e44f8f11ade170aa42df1269990b4eed86124b80bfdc0a70eeee1", "text": "// A tuple index may not have a suffix self.expect_no_suffix(sp, \"tuple index\", suf); let dot_span = self.prev_span; hi = self.span; let idx_span = self.span; self.bump(); let invalid_msg = \"invalid tuple or struct index\";", "commid": "rust_pr_48083"}], "negative_passages": []} {"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! Fixed in .", "positive_passages": [{"docid": "doc-en-rust-081ea6a2bd57e3b98e77a3c39181b981b1e25f22fa48473718c6fd3de07d75bc", "text": "n.to_string()); err.emit(); } let id = respan(dot_span.to(hi), n); let field = self.mk_tup_field(e, id); e = self.mk_expr(lo.to(hi), field, ThinVec::new()); let field = self.mk_tup_field(e, respan(idx_span, n)); e = self.mk_expr(lo.to(idx_span), field, ThinVec::new()); } None => { let prev_span = self.prev_span;", "commid": "rust_pr_48083"}], "negative_passages": []} {"query_id": "q-en-rust-7588c1e3095f0508576ed42a62374fae3979416b2680ee03b763f2cb86ed355a", "query": "If a procedural macro generates an access to a named struct field like then the may be either defsite or callsite and it works either way. But if accessing an unnamed tuple struct field like then it only works if is call_site. I believe it should work either way in either case.\nSorry for the delay! Fixed in .", "positive_passages": [{"docid": "doc-en-rust-5220983bc183be2aa1cf1b5245052153651e550fc14c338fb92f401c6054f982", "text": " // Copyright 2017 The Rust Project Developers. 
See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-pretty pretty-printing is unhygienic #![feature(decl_macro)] #![allow(unused)] mod foo { pub macro m($s:tt, $i:tt) { $s.$i } } mod bar { struct S(i32); fn f() { let s = S(0); ::foo::m!(s, 0); } } fn main() {} ", "commid": "rust_pr_48083"}], "negative_passages": []} {"query_id": "q-en-rust-f6f45c329194f28dd2333fc1b6893cd181117e4a9188c96da9a46fd20eed02cb", "query": "I'd like to work on this.\nThis appears to be a regular error, not an ICE.", "positive_passages": [{"docid": "doc-en-rust-a515099f01efd599dbbe5e02f074cee54a764cdc328d3229606764c9410af7ca", "text": "use rustc::mir::{BasicBlock, Location, Mir}; use rustc::mir::Local; use rustc::ty::{self, Ty, TyCtxt, TypeFoldable}; use rustc::traits; use rustc::infer::InferOk; use rustc::util::common::ErrorReported; use borrow_check::nll::type_check::AtLocation; use rustc_data_structures::fx::FxHashSet; use syntax::codemap::DUMMY_SP; use util::liveness::LivenessResults;", "commid": "rust_pr_47920"}], "negative_passages": []} {"query_id": "q-en-rust-f6f45c329194f28dd2333fc1b6893cd181117e4a9188c96da9a46fd20eed02cb", "query": "I'd like to work on this.\nThis appears to be a regular error, not an ICE.", "positive_passages": [{"docid": "doc-en-rust-d70f31622a4edde7bd5fbae2707d3681a548577f692b857dbbe4dfa87f91e35b", "text": "location ); let tcx = self.cx.infcx.tcx; let mut types = vec![(dropped_ty, 0)]; let mut known = FxHashSet(); while let Some((ty, depth)) = types.pop() { let span = DUMMY_SP; // FIXME let result = match tcx.dtorck_constraint_for_ty(span, dropped_ty, depth, ty) { Ok(result) => result, Err(ErrorReported) => { continue; } }; let ty::DtorckConstraint { outlives, dtorck_types, } = result; // All things in the 
`outlives` array may be touched by // the destructor and must be live at this point. for outlive in outlives { let cause = Cause::DropVar(dropped_local, location); self.push_type_live_constraint(outlive, location, cause); } // If we end visiting the same type twice (usually due to a cycle involving // associated types), we need to ensure that its region types match up with the type // we added to the 'known' map the first time around. For this reason, we need // our infcx to hold onto its calculated region constraints after each call // to dtorck_constraint_for_ty. Otherwise, normalizing the corresponding associated // type will end up instantiating the type with a new set of inference variables // Since this new type will never be in 'known', we end up looping forever. // // For this reason, we avoid calling TypeChecker.normalize, instead doing all normalization // ourselves in one large 'fully_perform_op' callback. let (type_constraints, kind_constraints) = self.cx.fully_perform_op(location.at_self(), |cx| { let tcx = cx.infcx.tcx; let mut selcx = traits::SelectionContext::new(cx.infcx); let cause = cx.misc(cx.last_span); let mut types = vec![(dropped_ty, 0)]; let mut final_obligations = Vec::new(); let mut type_constraints = Vec::new(); let mut kind_constraints = Vec::new(); // However, there may also be some types that // `dtorck_constraint_for_ty` could not resolve (e.g., // associated types and parameters). We need to normalize // associated types here and possibly recursively process. for ty in dtorck_types { let ty = self.cx.normalize(&ty, location); let ty = self.cx.infcx.resolve_type_and_region_vars_if_possible(&ty); match ty.sty { ty::TyParam(..) | ty::TyProjection(..) | ty::TyAnon(..) 
=> { let cause = Cause::DropVar(dropped_local, location); self.push_type_live_constraint(ty, location, cause); let mut known = FxHashSet(); while let Some((ty, depth)) = types.pop() { let span = DUMMY_SP; // FIXME let result = match tcx.dtorck_constraint_for_ty(span, dropped_ty, depth, ty) { Ok(result) => result, Err(ErrorReported) => { continue; } }; let ty::DtorckConstraint { outlives, dtorck_types, } = result; // All things in the `outlives` array may be touched by // the destructor and must be live at this point. for outlive in outlives { let cause = Cause::DropVar(dropped_local, location); kind_constraints.push((outlive, location, cause)); } _ => if known.insert(ty) { types.push((ty, depth + 1)); }, // However, there may also be some types that // `dtorck_constraint_for_ty` could not resolve (e.g., // associated types and parameters). We need to normalize // associated types here and possibly recursively process. for ty in dtorck_types { let traits::Normalized { value: ty, obligations } = traits::normalize(&mut selcx, cx.param_env, cause.clone(), &ty); final_obligations.extend(obligations); let ty = cx.infcx.resolve_type_and_region_vars_if_possible(&ty); match ty.sty { ty::TyParam(..) | ty::TyProjection(..) | ty::TyAnon(..) 
=> { let cause = Cause::DropVar(dropped_local, location); type_constraints.push((ty, location, cause)); } _ => if known.insert(ty) { types.push((ty, depth + 1)); }, } } } Ok(InferOk { value: (type_constraints, kind_constraints), obligations: final_obligations }) }).unwrap(); for (ty, location, cause) in type_constraints { self.push_type_live_constraint(ty, location, cause); } for (kind, location, cause) in kind_constraints { self.push_type_live_constraint(kind, location, cause); } } }", "commid": "rust_pr_47920"}], "negative_passages": []} {"query_id": "q-en-rust-f6f45c329194f28dd2333fc1b6893cd181117e4a9188c96da9a46fd20eed02cb", "query": "I'd like to work on this.\nThis appears to be a regular error, not an ICE.", "positive_passages": [{"docid": "doc-en-rust-9334535e2f7df9a81d012b03120511124fb3646a69d18f6806248531d859a1b6", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(nll)] pub struct DescriptorSet<'a> { pub slots: Vec> } pub trait ResourcesTrait<'r>: Sized { type DescriptorSet: 'r; } pub struct Resources; impl<'a> ResourcesTrait<'a> for Resources { type DescriptorSet = DescriptorSet<'a>; } pub enum AttachInfo<'a, R: ResourcesTrait<'a>> { NextDescriptorSet(Box) } fn main() { let _x = DescriptorSet {slots: Vec::new()}; } ", "commid": "rust_pr_47920"}], "negative_passages": []} {"query_id": "q-en-rust-07f86595f0e2e014fad92b19da52fe07bb661abc6a9e0c04354b4e8782bb814e", "query": "When building the documentation (or running the doctests) of a few crates, such as libsecp256k1, in beta or nightly, rustdoc crashes with this message (while it works fine on stable, even if with some pulldown warnings): Crate regressed from stable to beta (). 
Crate regressed from stable to beta (). cc\nCrate is also affected, cc\nCan someone investigate and see if we can fix this?\nI'll give it a look.\nBoth and worked just fine for me...\nI ran on on my own system without updating my nightly and hit the ICE:\nSo maybe we need a backport to beta? That may have already happened, this is a relatively old beta run.\nFirst we'd need to find what's crashing...\nI've tried bisecting in the range ... and the result is... ? Needs to reverify later. Test script: Command: Edit: Verified, in the doc test passes and in the doc test ICEs. cc\nI just tried documenting with that exact nightly and it crashed for me? // Emit all buffered lints from early on in the session now that we've // calculated the lint levels for all AST nodes. for (_id, lints) in cx.buffered.map { for early_lint in lints { span_bug!(early_lint.span, \"failed to process buffered lint here\"); // All of the buffered lints should have been emitted at this point. // If not, that means that we somehow buffered a lint for a node id // that was not lint-checked (perhaps it doesn't exist?). This is a bug. // // Rustdoc runs everybody-loops before the early lints and removes // function bodies, so it's totally possible for linted // node ids to not exist (e.g. macros defined within functions for the // unused_macro lint) anymore. So we only run this check // when we're not in rustdoc mode. 
(see issue #47639) if !sess.opts.actually_rustdoc { for (_id, lints) in cx.buffered.map { for early_lint in lints { span_bug!(early_lint.span, \"failed to process buffered lint here\"); } } } }", "commid": "rust_pr_47959"}], "negative_passages": []} {"query_id": "q-en-rust-07f86595f0e2e014fad92b19da52fe07bb661abc6a9e0c04354b4e8782bb814e", "query": "When building the documentation (or running the doctests) of a few crates, such as libsecp256k1, in beta or nightly, rustdoc crashes with this message (while it works fine on stable, even if with some pulldown warnings): Crate regressed from stable to beta (). Crate regressed from stable to beta (). cc\nCrate is also affected, cc\nCan someone investigate and see if we can fix this?\nI'll give it a look.\nBoth and worked just fine for me...\nI ran on on my own system without updating my nightly and hit the ICE:\nSo maybe we need a backport to beta? That may have already happened, this is a relatively old beta run.\nFirst we'd need to find what's crashing...\nI've tried bisecting in the range ... and the result is... ? Needs to reverify later. Test script: Command: Edit: Verified, in the doc test passes and in the doc test ICEs. cc\nI just tried documenting with that exact nightly and it crashed for me? // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // This should not ICE pub fn test() { macro_rules! 
foo { () => () } } ", "commid": "rust_pr_47959"}], "negative_passages": []} {"query_id": "q-en-rust-535540629c679811d8d504acc5a06b7392f2345682a2b4727bf356513c9fb850", "query": "/ and / do the same thing.\nSo, should registertask/unregistertask then be exposed in rust_builtin so it can be called from Rust?\nYes, though I suggest renaming it, since it will no longer just be used for 'registering', nor does it strictly have anything to do with tasks or weak tasks from the kernel's perspective - this count is only for determining when to shutdown the runtime. Maybe for the method name and for the builtin. It's not great either, but is less wed to implementation details.", "positive_passages": [{"docid": "doc-en-rust-0b90683f7be94eb7ed44bc993377633f678d7303fc0b66d0b55e1e6759cd023c", "text": "let task = get_task_id(); // Expect the weak task service to be alive assert service.try_send(RegisterWeakTask(task, shutdown_chan)); unsafe { rust_inc_kernel_live_count(); } unsafe { rust_dec_kernel_live_count(); } do fn&() { let shutdown_port = swap_unwrap(&mut *shutdown_port); f(shutdown_port) }.finally || { unsafe { rust_dec_kernel_live_count(); } unsafe { rust_inc_kernel_live_count(); } // Service my have already exited service.send(UnregisterWeakTask(task)); }", "commid": "rust_pr_4858"}], "negative_passages": []} {"query_id": "q-en-rust-535540629c679811d8d504acc5a06b7392f2345682a2b4727bf356513c9fb850", "query": "/ and / do the same thing.\nSo, should registertask/unregistertask then be exposed in rust_builtin so it can be called from Rust?\nYes, though I suggest renaming it, since it will no longer just be used for 'registering', nor does it strictly have anything to do with tasks or weak tasks from the kernel's perspective - this count is only for determining when to shutdown the runtime. Maybe for the method name and for the builtin. 
It's not great either, but is less wed to implementation details.", "positive_passages": [{"docid": "doc-en-rust-9fea85231ddd20a247ef2af9b8985073e5d7c24d60a875b948c4cb4a7ad6ac54", "text": "let port = swap_unwrap(&mut *port); // The weak task service is itself a weak task debug!(\"weakening the weak service task\"); unsafe { rust_inc_kernel_live_count(); } unsafe { rust_dec_kernel_live_count(); } run_weak_task_service(port); }.finally { debug!(\"unweakening the weak service task\"); unsafe { rust_dec_kernel_live_count(); } unsafe { rust_inc_kernel_live_count(); } } }", "commid": "rust_pr_4858"}], "negative_passages": []} {"query_id": "q-en-rust-c9272f22dbff022b76cc904a2e597f2aa2588cb65afe57d99a14b694adccd85a", "query": "Do you have a testcase that reproduces this problem? Also, what version of rustc are you using?\nI'm sorry late for your comment. $ rustc this is bad my code. : : rustc 0.6 ( 2013-01-20 20:35:24 -0800)\nFrom my cursory glance, multibyte characters (Hangul in this case) seem to cause this bug.\nNot critical for 0.6; removing milestone\nSeems to be fixed. Checking in a test case.\nDone pending\nTest case in , closing", "positive_passages": [{"docid": "doc-en-rust-7c79db9fd3ee1476cadff9b8510e982305de351b4d19e872415fec81b21b110d", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that multibyte characters don't crash the compiler fn main() { io::println(\"\ub9c8\uc774\ub108\uc2a4 \uc0ac\uc778\uc774 \uc5c6\uc73c\uba74\"); } ", "commid": "rust_pr_6662"}], "negative_passages": []} {"query_id": "q-en-rust-399c0a0a4e91b30587e3c8d8c9b5a8afdd092fb236f97233806047af4b955338", "query": "Would you like to fix this bug? If so, . 
The documentation currently states that : However, this is not what 's of that method does. It will only fill the buffer if no data is currently buffered. If data is already present in the buffer, no filling takes place: I would argue that the current behavior of 's implementation is usually what users would want to do, but it does not match the documentation of, nor the name of, . I don't know what the right solution here is, but I suspect the only safe thing to do would be to update 's implementation to match the docs (i.e., to make it actually fill its buffer), and then to add another method to (perhaps ?) that gives the currently buffered contents, regardless of the amount of buffered data, and without reading from the underlying .\nAdding a method to would be a breaking change unless you have a suitable default implementation in mind. I would love it if we can make \"every time this is called, the implementation should fetch more data\" part of the API contract because it's easy to get burned when you make that assumption with . The only one impl I know of where this guarantee isn't really feasible is which can only return buffers from one source at a time.\nWell, even that is tricky, right? What if the buffer is already full? I also think there's value in being able to get access to the bytes currently in the buffer without doing a syscall (something like ).\nThe issue is that you can't really do anything to the structure of the trait at this point without breaking changes, though this is now feasible with epochs. Adding extra functionality directly to is quite a bit more reasonable though. I have that I couldn't be bothered to champion and instead just .\nI guess one question is where we decide to not expand and say that external crates should provide features instead. I'd advocate for this being something should support (though as you say, possibly not ).\nI also ran into this 'faulty' documentation issue. 
I have a program which needed pushing back data on the reader and by reading the documentation I thought I could use fill_buf with consume for that, but found out the hard way that fill_buf isn't actually filling the buffer but returning the existing buffer if something is still in there. Hardly appreciated if the fill_buf acts as documented.\nThe proposed behavior is fundamentally broken. Imagine a being used to parse some line-oriented protocol. The BufReader's buffer capacity is 4096 bytes, and the peer sends a single TCP packet with 2 lines in 100 bytes. It then expects our end to respond, so it's not doing any more writes. You call , which calls into . Since this is a new , the buffer is empty so it calls down into the and reads all 100 bytes into the buffer. then returns the first line, and there's the one remaining line in the buffer. Now you call again to read the last line the peer sent over. With the current behavior, the 's buffer is nonempty, so immediately returns it. returns the last line our end responds to the peer, and everything is happy. With the proposed behavior, calls back down into the since its buffer isn't full and the protocol deadlocks.\nJust to clarify, is the idea here to update the docs from: Fills the internal buffer of this object, returning the buffer contents. something like this?\nAh, commented while I was writing my comment and answered a couple of the questions I had
Here's how: The line in question is here: This should be changed to: I'm happy to answer any additional questions!\nPerhaps we should also include a reference to the newly ()? Though I suppose that is (sadly) only available for and not all .", "positive_passages": [{"docid": "doc-en-rust-377db672e80a18128623bee7f065ea2881bb8c4e68a42a355d4ac07f2384ad52", "text": "/// #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait BufRead: Read { /// Fills the internal buffer of this object, returning the buffer contents. /// Returns the contents of the internal buffer, filling it with more data /// from the inner reader if it is empty. /// /// This function is a lower-level call. It needs to be paired with the /// [`consume`] method to function properly. When calling this", "commid": "rust_pr_54046"}], "negative_passages": []} {"query_id": "q-en-rust-cccb20dd5e45ba2c8a0dcee70480ba8cadef6bbaee0f670525e5aa1d3789ef5d", "query": "This is a tracking issue for the RFC \"Type privacy and private-in-public lints \" (rust-lang/rfcs). Steps: [ ] Implement the RFC (cc ) [ ] Adjust documentation ([see instructions on forge][doc-guide]) [ ] Stabilization PR ([see instructions on forge][stabilization-guide]) [stabilization-guide]: [doc-guide]: Unresolved questions: [ ] It's not fully clear if the restriction for associated type definitions required for type privacy soundness, or it's just a workaround for a technical difficulty. [ ] Interactions between macros 2.0 and the notions of reachability / effective visibility used for the lints are unclear. 
$DIR/feature-gate-type_privacy_lints.rs:3:1 | LL | #![warn(unnameable_types)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the `unnameable_types` lint is unstable = note: see issue #48054 for more information = help: add `#![feature(type_privacy_lints)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date = note: `#[warn(unknown_lints)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_120144"}], "negative_passages": []} {"query_id": "q-en-rust-cccb20dd5e45ba2c8a0dcee70480ba8cadef6bbaee0f670525e5aa1d3789ef5d", "query": "This is a tracking issue for the RFC \"Type privacy and private-in-public lints \" (rust-lang/rfcs). Steps: [ ] Implement the RFC (cc ) [ ] Adjust documentation ([see instructions on forge][doc-guide]) [ ] Stabilization PR ([see instructions on forge][stabilization-guide]) [stabilization-guide]: [doc-guide]: Unresolved questions: [ ] It's not fully clear if the restriction for associated type definitions required for type privacy soundness, or it's just a workaround for a technical difficulty. [ ] Interactions between macros 2.0 and the notions of reachability / effective visibility used for the lints are unclear. 
$DIR/unnameable_types.rs:5:5 --> $DIR/unnameable_types.rs:4:5 | LL | pub struct PubStruct(pub i32); | ^^^^^^^^^^^^^^^^^^^^ reachable at visibility `pub`, but can only be named at visibility `pub(crate)` | note: the lint level is defined here --> $DIR/unnameable_types.rs:2:9 --> $DIR/unnameable_types.rs:1:9 | LL | #![deny(unnameable_types)] | ^^^^^^^^^^^^^^^^ error: enum `PubE` is reachable but cannot be named --> $DIR/unnameable_types.rs:7:5 --> $DIR/unnameable_types.rs:6:5 | LL | pub enum PubE { | ^^^^^^^^^^^^^ reachable at visibility `pub`, but can only be named at visibility `pub(crate)` error: trait `PubTr` is reachable but cannot be named --> $DIR/unnameable_types.rs:11:5 --> $DIR/unnameable_types.rs:10:5 | LL | pub trait PubTr { | ^^^^^^^^^^^^^^^ reachable at visibility `pub`, but can only be named at visibility `pub(crate)`", "commid": "rust_pr_120144"}], "negative_passages": []} {"query_id": "q-en-rust-2021d9dc7731cc45fb4f5f2c4a0ed5ada1f4422c3a3e4de3f3d2bafb3259d1d0", "query": "Document of and for unsigned integers take 2^4 ( is not xor but power) as examples. Because 2^4 = 4^2, these examples are not very helpful for those unfamiliar with math words in English and thus rely on example codes. We should take another example, e.g. 2^5 (which is not equal to 5^2).", "positive_passages": [{"docid": "doc-en-rust-a7ce838a7d130dc685d99b9059a299d91f8a580262377e39f18016badcf00305", "text": "``` \", $Feature, \"let x: \", stringify!($SelfT), \" = 2; // or any other integer type assert_eq!(x.pow(4), 16);\", assert_eq!(x.pow(5), 32);\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_48397"}], "negative_passages": []} {"query_id": "q-en-rust-2021d9dc7731cc45fb4f5f2c4a0ed5ada1f4422c3a3e4de3f3d2bafb3259d1d0", "query": "Document of and for unsigned integers take 2^4 ( is not xor but power) as examples. 
Because 2^4 = 4^2, these examples are not very helpful for those unfamiliar with math words in English and thus rely on example codes. We should take another example, e.g. 2^5 (which is not equal to 5^2).", "positive_passages": [{"docid": "doc-en-rust-5cacde338521445d339ec3b94f430427e114ccce8e6a651c5b0d6a5cc3c6a68c", "text": "Basic usage: ``` \", $Feature, \"assert_eq!(2\", stringify!($SelfT), \".pow(4), 16);\", $EndFeature, \" \", $Feature, \"assert_eq!(2\", stringify!($SelfT), \".pow(5), 32);\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")] #[inline]", "commid": "rust_pr_48397"}], "negative_passages": []} {"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompilerbuildtins and liblibc alone takes 30 seconds. 
While building bootstrap, compiling serde_derive, serde_json and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustc_errors takes at least 2 minutes, only one codegen-unit is used Compiling syntax_ext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustc_llvm takes 1,5 minutes, one CGU During stage1, rustc_errors and syntax_ext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. 
Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-f6e6ae8e17afafac5d9c10044c92753fce1f0ff74a599a2e7442ad4a9ed1b67e", "text": "- checkout: self fetchDepth: 2 # Spawn a background process to collect CPU usage statistics which we'll upload # at the end of the build. See the comments in the script here for more # information. - bash: python src/ci/cpu-usage-over-time.py &> cpu-usage.csv & displayName: \"Collect CPU-usage statistics in the background\" - bash: printenv | sort displayName: Show environment variables", "commid": "rust_pr_61632"}], "negative_passages": []} {"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompilerbuildtins and liblibc alone takes 30 seconds. 
While building bootstrap, compiling serde_derive, serde_json and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustc_errors takes at least 2 minutes, only one codegen-unit is used Compiling syntax_ext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustc_llvm takes 1,5 minutes, one CGU During stage1, rustc_errors and syntax_ext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. 
Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-cf5e7ac7ec96d14e6e0c19a2169041e34ae8e7847a421f893faffa08161eb55c", "text": "AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY) condition: and(succeeded(), or(eq(variables.DEPLOY, '1'), eq(variables.DEPLOY_ALT, '1'))) displayName: Upload artifacts # Upload CPU usage statistics that we've been gathering this whole time. Always # execute this step in case we want to inspect failed builds, but don't let # errors here ever fail the build since this is just informational. - bash: aws s3 cp --acl public-read cpu-usage.csv s3://$DEPLOY_BUCKET/rustc-builds/$BUILD_SOURCEVERSION/cpu-$SYSTEM_JOBNAME.csv env: AWS_SECRET_ACCESS_KEY: $(AWS_SECRET_ACCESS_KEY) condition: contains(variables, 'AWS_SECRET_ACCESS_KEY') continueOnError: true displayName: Upload CPU usage statistics ", "commid": "rust_pr_61632"}], "negative_passages": []} {"query_id": "q-en-rust-2f71a7a4df60a8ef522a8c924484ad7d4061c88f4933572142d5c21fcf38cc97", "query": "One of the easiest ways to make CI faster is to make things parallel and simply use the hardware we have available to us. Unfortunately though we don't have a lot of data about how parallel our build is. Are there steps we think are parallel but actually aren't? Are we pegged to one core for long durations when there's other work we could be doing? The general idea here is that we'd spin up a daemon at the very start of the build which would sample CPU utilization every so often. This daemon would then update a file that's either displayed or uploaded at the end of the build. Hopefully we could then use these logs to get a better view into how the builders are working during the build, diagnose non-parallel portions of the build, and implement fixes to use all the cpus we've got. cc\nOn Windows this can be done by taking advantage of job objects. 
If the entire build is wrapped in a job object then we can call with to get a bunch of useful data.\nI made script that will print output into the travis log every 30 seconds , . Some findings: Cloning submodules jemalloc, libcompiler_builtins and liblibc alone takes 30 seconds. While building bootstrap, compiling serde_derive, serde_json and bootstrap crates seems to take 30 seconds (total build time: 47 seconds). stage0: Compiling tidy crate seems to take around 30 seconds. Compiling rustc_errors takes at least 2 minutes, only one codegen-unit is used Compiling syntax_ext takes 9 minutes, only one CGU used stage0 codegen artifacts: Compiling rustc_llvm takes 1,5 minutes, one CGU During stage1, rustc_errors and syntax_ext builds are approximately as slow as during stage0, rustc_plugins 2 minutes, one CGU. stage2: rustdoc took 2 minutes to build, one CGU compiletest suite=run-make mode=run-make: It looks like there is a single test that takes around 3 minutes to complete and has no parallelization. Testing alloc stage1: building liballoc takes around a minute Testing syntax stage1: building syntax takes 1.5 minutes, one CGU Notes: When the load average dropped towards 1, I assumed only one codegen unit was active. The script was only applied to the default pullrequest travis-ci configuration.\nAs shown in , the CPUs assigned to each job may have some performance difference: Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz The clock-rate 2.4 GHz vs 2.5 GHz shouldn't make any noticeable difference though (this would at most slow down by 7.2 minutes out of 3 hours if everything is CPU-bound). It is not enough to explain the timeout in .\nI was working on recently for this where it periodically prints out the CPU usage as a percentage for the whole system (aka 1 core on a 4 core machine is 25%). I only got Linux/OSX working though and was unable to figure out a good way to do it on Windows. 
My thinking for how we'd do this is probably download a binary near the beginning of the build (or set up some script). We'd then run just before we run . That way we could correlate the two timestamps of each log (the main log and the to similar moments in time. Initially I was also thinking we'd just at the end of the build and scrape it later if need be.", "positive_passages": [{"docid": "doc-en-rust-8ab9e0a27c1da409968a249131796085dc211fe1dd96aaa82ab6267f3719955b", "text": " #!/usr/bin/env python # ignore-tidy-linelength # This is a small script that we use on CI to collect CPU usage statistics of # our builders. By seeing graphs of CPU usage over time we hope to correlate # that with possible improvements to Rust's own build system, ideally diagnosing # that either builders are always fully using their CPU resources or they're # idle for long stretches of time. # # This script is relatively simple, but it's platform specific. Each platform # (OSX/Windows/Linux) has a different way of calculating the current state of # CPU at a point in time. We then compare two captured states to determine the # percentage of time spent in one state versus another. The state capturing is # all platform-specific but the loop at the bottom is the cross platform part # that executes everywhere. # # # Viewing statistics # # All builders will upload their CPU statistics as CSV files to our S3 buckets. # These URLS look like: # # https://$bucket.s3.amazonaws.com/rustc-builds/$commit/cpu-$builder.csv # # for example # # https://rust-lang-ci2.s3.amazonaws.com/rustc-builds/68baada19cd5340f05f0db15a3e16d6671609bcc/cpu-x86_64-apple.csv # # Each CSV file has two columns. The first is the timestamp of the measurement # and the second column is the % of idle cpu time in that time slice. Ideally # the second column is always zero. # # Once you've downloaded a file there's various ways to plot it and visualize # it. 
For command line usage you can use a script like so: # # set timefmt '%Y-%m-%dT%H:%M:%S' # set xdata time # set ylabel \"Idle CPU %\" # set xlabel \"Time\" # set datafile sep ',' # set term png # set output \"printme.png\" # set grid # builder = \"i686-apple\" # plot \"cpu-\".builder.\".csv\" using 1:2 with lines title builder # # Executed as `gnuplot < ./foo.plot` it will generate a graph called # `printme.png` which you can then open up. If you know how to improve this # script or the viewing process that would be much appreciated :) (or even if # you know how to automate it!) import datetime import sys import time if sys.platform == 'linux2': class State: def __init__(self): with open('/proc/stat', 'r') as file: data = file.readline().split() if data[0] != 'cpu': raise Exception('did not start with \"cpu\"') self.user = int(data[1]) self.nice = int(data[2]) self.system = int(data[3]) self.idle = int(data[4]) self.iowait = int(data[5]) self.irq = int(data[6]) self.softirq = int(data[7]) self.steal = int(data[8]) self.guest = int(data[9]) self.guest_nice = int(data[10]) def idle_since(self, prev): user = self.user - prev.user nice = self.nice - prev.nice system = self.system - prev.system idle = self.idle - prev.idle iowait = self.iowait - prev.iowait irq = self.irq - prev.irq softirq = self.softirq - prev.softirq steal = self.steal - prev.steal guest = self.guest - prev.guest guest_nice = self.guest_nice - prev.guest_nice total = user + nice + system + idle + iowait + irq + softirq + steal + guest + guest_nice return float(idle) / float(total) * 100 elif sys.platform == 'win32': from ctypes.wintypes import DWORD from ctypes import Structure, windll, WinError, GetLastError, byref class FILETIME(Structure): _fields_ = [ (\"dwLowDateTime\", DWORD), (\"dwHighDateTime\", DWORD), ] class State: def __init__(self): idle, kernel, user = FILETIME(), FILETIME(), FILETIME() success = windll.kernel32.GetSystemTimes( byref(idle), byref(kernel), byref(user), ) assert success, 
WinError(GetLastError())[1] self.idle = (idle.dwHighDateTime << 32) | idle.dwLowDateTime self.kernel = (kernel.dwHighDateTime << 32) | kernel.dwLowDateTime self.user = (user.dwHighDateTime << 32) | user.dwLowDateTime def idle_since(self, prev): idle = self.idle - prev.idle user = self.user - prev.user kernel = self.kernel - prev.kernel return float(idle) / float(user + kernel) * 100 elif sys.platform == 'darwin': from ctypes import * libc = cdll.LoadLibrary('/usr/lib/libc.dylib') PROESSOR_CPU_LOAD_INFO = c_int(2) CPU_STATE_USER = 0 CPU_STATE_SYSTEM = 1 CPU_STATE_IDLE = 2 CPU_STATE_NICE = 3 c_int_p = POINTER(c_int) class State: def __init__(self): num_cpus_u = c_uint(0) cpu_info = c_int_p() cpu_info_cnt = c_int(0) err = libc.host_processor_info( libc.mach_host_self(), PROESSOR_CPU_LOAD_INFO, byref(num_cpus_u), byref(cpu_info), byref(cpu_info_cnt), ) assert err == 0 self.user = 0 self.system = 0 self.idle = 0 self.nice = 0 cur = 0 while cur < cpu_info_cnt.value: self.user += cpu_info[cur + CPU_STATE_USER] self.system += cpu_info[cur + CPU_STATE_SYSTEM] self.idle += cpu_info[cur + CPU_STATE_IDLE] self.nice += cpu_info[cur + CPU_STATE_NICE] cur += num_cpus_u.value def idle_since(self, prev): user = self.user - prev.user system = self.system - prev.system idle = self.idle - prev.idle nice = self.nice - prev.nice return float(idle) / float(user + system + idle + nice) * 100.0 else: print('unknown platform', sys.platform) sys.exit(1) cur_state = State(); print(\"Time,Idle\") while True: time.sleep(1); next_state = State(); now = datetime.datetime.utcnow().isoformat() idle = next_state.idle_since(cur_state) print(\"%s,%s\" % (now, idle)) sys.stdout.flush() cur_state = next_state ", "commid": "rust_pr_61632"}], "negative_passages": []} {"query_id": "q-en-rust-a0777ab808493aa30d4546d94b8d6f838d0529d50b0d181e0b50b951924fd1cd", "query": "Updated to via rustup. I have a and additional which specify additional rustc arguments. 
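The per-platform `idle_since` methods in the monitoring script above all reduce to the same arithmetic: take two cumulative counter samples and compute (Δidle / Δtotal) × 100. A minimal Rust sketch of that calculation (the `CpuSample` type and its field set are illustrative, not part of the script):

```rust
// Hypothetical analogue of the script's State.idle_since: given two
// cumulative CPU-time samples, the idle share of the elapsed interval is
// (delta idle) / (delta total) * 100.
#[derive(Clone, Copy)]
struct CpuSample {
    user: u64,
    nice: u64,
    system: u64,
    idle: u64,
}

impl CpuSample {
    fn total(&self) -> u64 {
        self.user + self.nice + self.system + self.idle
    }

    /// Percentage of time spent idle between `prev` and `self`.
    fn idle_since(&self, prev: &CpuSample) -> f64 {
        let idle = (self.idle - prev.idle) as f64;
        let total = (self.total() - prev.total()) as f64;
        idle / total * 100.0
    }
}

fn main() {
    let prev = CpuSample { user: 100, nice: 0, system: 50, idle: 850 };
    let next = CpuSample { user: 120, nice: 0, system: 60, idle: 920 };
    // 70 idle ticks out of 100 elapsed ticks, i.e. roughly 70%
    println!("{}", next.idle_since(&prev));
}
```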
Now if I run The build will succeed but then fail to run qemu because Instead, there's a file named in the directory where xargo was run from. To fix this I had to change the target json file with these options: I'm afraid this could subtly break my builds elsewhere, as the linker is, in fact, GNU ld. I believe this is related to the latest -pie/-no-pie patches which somehow interfere with custom cargo linker arguments specifications (where ends up landing instead of output file name).\nSeems like is likely to be related?\nThat's my suspicion\nhmm my change shouldn't have impacted when the -pie flag was sent, only the -no-pie flag. what version were you using before?\nCould you please build with and post the output related to the link step (after the line)\ntriage: P-high Not assigning this to anyone immediately, since seems to be on it -- if you could take a shot at supplying that debug output it would be very helpful, thanks. =)\ndoes that contain the info you needed?\nIt does look weird as I don't see any standalone in the command line.\nYes, that helps. I think this is what is happening: Even though you have set in your target file to produce position-independent-executables (pass the \"-pie\" flag), the code overrides this because it sees \"-static\" on the linker's command line and decides not to generate PIE. My PR made it pass \"-no-pie\" in this case, and fall back to removing -no-pie if the link fails For some reason this seems to be confusing ld instead of causing it to generate an error Can you confirm this by manually running: (same command line without -no-pie) to see if it works correctly, and also a ? I need to look more into ld to see if -no-pie is even necessary for it, I think maybe the default pie stuff might only be applicable with the gcc front-end. If so, I think the fix would be to change the code not to generate -no-pie if linking with ld.\nnm, I think I was able to reproduce this. 
I don't see -no-pie documented for ld so I think that is the problem. I'll work on a fix.\nThanks, I will test this asap!\nWorks! Thank you!", "positive_passages": [{"docid": "doc-en-rust-4f04670d5bf66f58d0b6ad0f1cd3d0720b9158978c20aab7875293b778b7f0d7", "text": "// linking executables as pie. Different versions of gcc seem to use // different quotes in the error message so don't check for them. if sess.target.target.options.linker_is_gnu && sess.linker_flavor() != LinkerFlavor::Ld && (out.contains(\"unrecognized command line option\") || out.contains(\"unknown argument\")) && out.contains(\"-no-pie\") &&", "commid": "rust_pr_49329"}], "negative_passages": []} {"query_id": "q-en-rust-a0777ab808493aa30d4546d94b8d6f838d0529d50b0d181e0b50b951924fd1cd", "query": "Updated to via rustup. I have a and additional which specify additional rustc arguments. Now if I run The build will succeed but then fail to run qemu because Instead, there's a file named in the directory where xargo was run from. To fix this I had to change the target json file with these options: I'm afraid this could subtly break my builds elsewhere, as the linker is, in fact, GNU ld. I believe this is related to the latest -pie/-no-pie patches which somehow interfere with custom cargo linker arguments specifications (where ends up landing instead of output file name).\nSeems like is likely to be related?\nThat's my suspicion\nhmm my change shouldn't have impacted when the -pie flag was sent, only the -no-pie flag. what version were you using before?\nCould you please build with and post the output related to the link step (after the line)\ntriage: P-high Not assigning this to anyone immediately, since seems to be on it -- if you could take a shot at supplying that debug output it would be very helpful, thanks. =)\ndoes that contain the info you needed?\nIt does look weird as I don't see any standalone in the command line.\nYes, that helps. 
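The patch above gates `-no-pie` on `sess.linker_flavor() != LinkerFlavor::Ld`, since only the gcc front-end understands that flag. A stripped-down sketch of that condition (the `wants_no_pie` helper is hypothetical; the real code lives in rustc's link step):

```rust
#[derive(PartialEq)]
enum LinkerFlavor {
    Gcc,
    Ld,
}

// Sketch of the fix's logic: emit -no-pie only when linking through a GNU
// toolchain's gcc front-end, never when invoking plain GNU ld directly.
fn wants_no_pie(linker_is_gnu: bool, flavor: LinkerFlavor) -> bool {
    linker_is_gnu && flavor != LinkerFlavor::Ld
}

fn main() {
    assert!(wants_no_pie(true, LinkerFlavor::Gcc));
    assert!(!wants_no_pie(true, LinkerFlavor::Ld));
    assert!(!wants_no_pie(false, LinkerFlavor::Gcc));
    println!("ok");
}
```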
I think this is what is happening: Even though you have set in your target file to produce position-independent-executables (pass the \"-pie\" flag), the code overrides this because it sees \"-static\" on the linker's command line and decides not to generate PIE. My PR made it pass \"-no-pie\" in this case, and fall back to removing -no-pie if the link fails For some reason this seems to be confusing ld instead of causing it to generate an error Can you confirm this by manually running: (same command line without -no-pie) to see if it works correctly, and also a ? I need to look more into ld to see if -no-pie is even necessary for it, I think maybe the default pie stuff might only be applicable with the gcc front-end. If so, I think the fix would be to change the code not to generate -no-pie if linking with ld.\nnm, I think I was able to reproduce this. I don't see -no-pie documented for ld so I think that is the problem. I'll work on a fix.\nThanks, I will test this asap!\nWorks! Thank you!", "positive_passages": [{"docid": "doc-en-rust-e8c472858b8da5e909abb968dd72d8abd6ce3dfafdb55879a0e8a4190248ba80", "text": "} else { // recent versions of gcc can be configured to generate position // independent executables by default. We have to pass -no-pie to // explicitly turn that off. if sess.target.target.options.linker_is_gnu { // explicitly turn that off. Not applicable to ld. if sess.target.target.options.linker_is_gnu && sess.linker_flavor() != LinkerFlavor::Ld { cmd.no_position_independent_executable(); } }", "commid": "rust_pr_49329"}], "negative_passages": []} {"query_id": "q-en-rust-4490d0a807cebe407192754fbd723cc8ce3ceceb078d8316cc44a80dbc0b3018", "query": "Servo doesn\u2019t build in today\u2019s Nightly: Since the type is stable I think this is a breaking change. CC for\nThe type is stable, but that impl is . Edit: just realised that's not the impl in question, just wanted to say that I don't think it's a breaking change with regard to stable.\nOops. 
Indeed, that's a regression.\nEgad sorry about that! I've posted to revert this change", "positive_passages": [{"docid": "doc-en-rust-5e3e96aaa7e42a126e0f703fe5643d3293d219692e9df72ced5554211f0d9686", "text": "#[unstable(feature = \"proc_macro\", issue = \"38356\")] impl iter::FromIterator for TokenStream { fn from_iter>(trees: I) -> Self { trees.into_iter().map(TokenStream::from).collect() } } #[unstable(feature = \"proc_macro\", issue = \"38356\")] impl iter::FromIterator for TokenStream { fn from_iter>(streams: I) -> Self { let mut builder = tokenstream::TokenStreamBuilder::new(); for tree in trees { builder.push(tree.to_internal()); for stream in streams { builder.push(stream.0); } TokenStream(builder.build()) }", "commid": "rust_pr_49734"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-47939c0e492d3e1ea12fd762792dccddce09168a51075f06fa7512f488643a56", "text": "assigned_map: FxHashMap, FxHashSet>, /// Locations which activate borrows. /// NOTE: A given location may activate more than one borrow in the future /// when more general two-phase borrow support is introduced, but for now we /// only need to store one borrow index activation_map: FxHashMap, activation_map: FxHashMap>, /// Every borrow has a region; this maps each such regions back to /// its borrow-indexes.", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. 
But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-1a1713cf9c2e35b0f29c922a4b96dc08d9ac13e6a03221bbef88bae4bfcbb631", "text": "idx_vec: IndexVec>, location_map: FxHashMap, assigned_map: FxHashMap, FxHashSet>, activation_map: FxHashMap, activation_map: FxHashMap>, region_map: FxHashMap, FxHashSet>, local_map: FxHashMap>, region_span_map: FxHashMap,", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-724584acba2194cf29c111c6fda6849c8ade487f3cdb899c91dae3c76dd35617", "text": "let idx = self.idx_vec.push(borrow); self.location_map.insert(location, idx); // This assert is a good sanity check until more general 2-phase borrow // support is introduced. See NOTE on the activation_map field for more assert!(!self.activation_map.contains_key(&activate_location), \"More than one activation introduced at the same location.\"); self.activation_map.insert(activate_location, idx); insert(&mut self.activation_map, &activate_location, idx); insert(&mut self.assigned_map, assigned_place, idx); insert(&mut self.region_map, ®ion, idx); if let Some(local) = root_local(borrowed_place) {", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! 
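The diffs above replace `activation_map: FxHashMap<Location, BorrowIndex>` with a map to a `Vec`, dropping the "more than one activation" assert. The underlying change is a plain multimap pattern, sketched here with standard types (`u32` stands in for `Location`):

```rust
use std::collections::HashMap;

// Sketch of the multimap pattern the fix switches to: a location can now
// activate several borrows (e.g. `foo.method(&mut foo)`), so each key maps
// to a Vec of borrow indexes instead of a single one.
fn insert(map: &mut HashMap<u32, Vec<usize>>, location: u32, idx: usize) {
    map.entry(location).or_insert_with(Vec::new).push(idx);
}

fn main() {
    let mut activation_map: HashMap<u32, Vec<usize>> = HashMap::new();
    // two borrows activated at the same location
    insert(&mut activation_map, 7, 0);
    insert(&mut activation_map, 7, 1);
    assert_eq!(activation_map[&7], vec![0, 1]);
    println!("ok");
}
```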
cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-b8098b4610d9b99304f3d159cc585ec983833f3d60d122c34882818f0f0e9990", "text": "location: Location) { // Handle activations match self.activation_map.get(&location) { Some(&activated) => { debug!(\"activating borrow {:?}\", activated); sets.gen(&ReserveOrActivateIndex::active(activated)) Some(activations) => { for activated in activations { debug!(\"activating borrow {:?}\", activated); sets.gen(&ReserveOrActivateIndex::active(*activated)) } } None => {} }", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-3c925445703941c9515139f54fe48cbf849517ec23b4b925dc8a9c057a66537b", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(nll)] struct Foo { } impl Foo { fn method(&mut self, foo: &mut Foo) { } } fn main() { let mut foo = Foo { }; foo.method(&mut foo); //~^ cannot borrow `foo` as mutable more than once at a time //~^^ cannot borrow `foo` as mutable more than once at a time } ", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. 
I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-688a6e8f90fc820b3dd35ccdebdd2bf8162f1469a5eb81df8c1bdb7d8589e130", "text": " error[E0499]: cannot borrow `foo` as mutable more than once at a time --> $DIR/two-phase-multi-mut.rs:23:16 | LL | foo.method(&mut foo); | -----------^^^^^^^^- | | | | | second mutable borrow occurs here | first mutable borrow occurs here | borrow later used here error[E0499]: cannot borrow `foo` as mutable more than once at a time --> $DIR/two-phase-multi-mut.rs:23:5 | LL | foo.method(&mut foo); | ^^^^^^^^^^^--------^ | | | | | first mutable borrow occurs here | second mutable borrow occurs here | borrow later used here error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0499`. ", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-295feeb296bf0f0df30ae3c5f7dad54e6805758ec4af569b0b0b10673d1bea5c", "query": "This program : See also , which I believe to be relevant. I'm refactoring some of this code, trying to explore the new NLL approach, so I will probably wind up fixing this. But obviously we should have this as a test! cc\nIs this similar to this ICE?", "positive_passages": [{"docid": "doc-en-rust-2493e2649d6c53ea3dbfee92d86c7059b3a88975d5898198ec9ad03e3b7a2349", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// revisions: lxl nll //[lxl]compile-flags: -Z borrowck=mir -Z two-phase-borrows //[nll]compile-flags: -Z borrowck=mir -Z two-phase-borrows -Z nll // run-pass use std::io::Result; struct Foo {} pub trait FakeRead { fn read_to_end(&mut self, buf: &mut Vec) -> Result; } impl FakeRead for Foo { fn read_to_end(&mut self, buf: &mut Vec) -> Result { Ok(4) } } fn main() { let mut a = Foo {}; let mut v = Vec::new(); a.read_to_end(&mut v); } ", "commid": "rust_pr_49678"}], "negative_passages": []} {"query_id": "q-en-rust-5fc4c867e436d3cb0c5228de357ead96eaafbfea3522ea35eea7b7e3ae56cc0f", "query": "I've just got very surprised to find out that doesn't compile, but does. After some investigation, I found that is missing. Similar impl is present for though. I believe it'd be nice to implement it.\nis to as is to , so check out , this implementation might meet your needs.\nI've solved my problem immediately. 
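The `os_str_str_ref_eq` impls discussed here (stable since Rust 1.28) make the requested comparison work in both directions. A self-contained example:

```rust
use std::ffi::OsString;

fn main() {
    // With PartialEq<&str> for OsString and PartialEq<OsString> for &str,
    // an OsString compares against a string literal directly:
    let os = OsString::from("Hello Rust!");
    assert_eq!(os, "Hello Rust!");
    assert_eq!("Hello Rust!", os);
    println!("ok");
}
```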
I filed issue because I think we could help others by implementing the trait.", "positive_passages": [{"docid": "doc-en-rust-936ff5d089e753b4d7338e0e9d361efbbecf3ebe04c19b02e5d4e89d466546b2", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::ffi::OsString; fn main() { let os_str = OsString::from(\"Hello Rust!\"); assert_eq!(os_str, \"Hello Rust!\"); assert_eq!(\"Hello Rust!\", os_str); } ", "commid": "rust_pr_51178"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! 
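The bounds-checking and length computation being asked for here is exactly the bound-pair-to-`Range` conversion in the libcore patch. A sketch adapted from that patch's `into_range`, runnable standalone:

```rust
use std::ops::{Bound, Range};

// Adapted from the patch's `into_range`: turn a (start, end) pair of Bounds
// into a half-open Range, returning None if an index computation overflows.
// (Range validity against `len` is left to the Range indexing impls.)
fn into_range(len: usize, (start, end): (Bound<usize>, Bound<usize>)) -> Option<Range<usize>> {
    let start = match start {
        Bound::Included(i) => i,
        Bound::Excluded(i) => i.checked_add(1)?,
        Bound::Unbounded => 0,
    };
    let end = match end {
        Bound::Included(i) => i.checked_add(1)?,
        Bound::Excluded(i) => i,
        Bound::Unbounded => len,
    };
    Some(start..end)
}

fn main() {
    assert_eq!(into_range(6, (Bound::Excluded(1), Bound::Included(4))), Some(2..5));
    assert_eq!(into_range(6, (Bound::Excluded(usize::MAX), Bound::Unbounded)), None);
    println!("ok");
}
```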
Feel free to take the reins. (looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-376c528e2ce2d4d87a48e4fcb56a484e52ee096435590591eed46685c8624ddf", "text": "impl Sealed for ops::RangeInclusive {} #[stable(feature = \"slice_get_slice\", since = \"1.28.0\")] impl Sealed for ops::RangeToInclusive {} #[stable(feature = \"slice_index_with_ops_bound_pair\", since = \"1.53.0\")] impl Sealed for (ops::Bound, ops::Bound) {} } /// A helper trait used for indexing operations.", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! Feel free to take the reins. 
(looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-7d551a989a48d205980ef1a00b53d1a395122770361b9cdaf8da34f5d67f6699", "text": "ops::Range { start, end } } /// Convert pair of `ops::Bound`s into `ops::Range` without performing any bounds checking and (in debug) overflow checking fn into_range_unchecked( len: usize, (start, end): (ops::Bound, ops::Bound), ) -> ops::Range { use ops::Bound; let start = match start { Bound::Included(i) => i, Bound::Excluded(i) => i + 1, Bound::Unbounded => 0, }; let end = match end { Bound::Included(i) => i + 1, Bound::Excluded(i) => i, Bound::Unbounded => len, }; start..end } /// Convert pair of `ops::Bound`s into `ops::Range`. /// Returns `None` on overflowing indices. fn into_range( len: usize, (start, end): (ops::Bound, ops::Bound), ) -> Option> { use ops::Bound; let start = match start { Bound::Included(start) => start, Bound::Excluded(start) => start.checked_add(1)?, Bound::Unbounded => 0, }; let end = match end { Bound::Included(end) => end.checked_add(1)?, Bound::Excluded(end) => end, Bound::Unbounded => len, }; // Don't bother with checking `start < end` and `end <= len` // since these checks are handled by `Range` impls Some(start..end) } /// Convert pair of `ops::Bound`s into `ops::Range`. /// Panics on overflowing indices. 
fn into_slice_range( len: usize, (start, end): (ops::Bound, ops::Bound), ) -> ops::Range { use ops::Bound; let start = match start { Bound::Included(start) => start, Bound::Excluded(start) => { start.checked_add(1).unwrap_or_else(|| slice_start_index_overflow_fail()) } Bound::Unbounded => 0, }; let end = match end { Bound::Included(end) => { end.checked_add(1).unwrap_or_else(|| slice_end_index_overflow_fail()) } Bound::Excluded(end) => end, Bound::Unbounded => len, }; // Don't bother with checking `start < end` and `end <= len` // since these checks are handled by `Range` impls start..end } #[stable(feature = \"slice_index_with_ops_bound_pair\", since = \"1.53.0\")] unsafe impl SliceIndex<[T]> for (ops::Bound, ops::Bound) { type Output = [T]; #[inline] fn get(self, slice: &[T]) -> Option<&Self::Output> { into_range(slice.len(), self)?.get(slice) } #[inline] fn get_mut(self, slice: &mut [T]) -> Option<&mut Self::Output> { into_range(slice.len(), self)?.get_mut(slice) } #[inline] unsafe fn get_unchecked(self, slice: *const [T]) -> *const Self::Output { // SAFETY: the caller has to uphold the safety contract for `get_unchecked`. unsafe { into_range_unchecked(slice.len(), self).get_unchecked(slice) } } #[inline] unsafe fn get_unchecked_mut(self, slice: *mut [T]) -> *mut Self::Output { // SAFETY: the caller has to uphold the safety contract for `get_unchecked_mut`. 
unsafe { into_range_unchecked(slice.len(), self).get_unchecked_mut(slice) } } #[inline] fn index(self, slice: &[T]) -> &Self::Output { into_slice_range(slice.len(), self).index(slice) } #[inline] fn index_mut(self, slice: &mut [T]) -> &mut Self::Output { into_slice_range(slice.len(), self).index_mut(slice) } } ", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! Feel free to take the reins. 
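The `SliceIndex` impl above was stabilized as `slice_index_with_ops_bound_pair` in Rust 1.53, so a pair of `Bound`s can index a slice directly. A short usage example:

```rust
use std::ops::Bound;

fn main() {
    let data = [0, 1, 2, 3, 4, 5];
    // A (Bound<usize>, Bound<usize>) tuple works anywhere a range would:
    assert_eq!(&data[(Bound::Included(2), Bound::Excluded(4))], &[2, 3]);
    assert_eq!(&data[(Bound::Unbounded, Bound::Included(3))], &[0, 1, 2, 3]);
    // The checked variant returns None instead of panicking on bad bounds:
    assert_eq!(data.get((Bound::Included(4), Bound::Excluded(10))), None);
    println!("ok");
}
```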
(looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-7f9f4b9e13e7c735c92ea25306530f54e4053bee03214748288aeffbd72d21e6", "text": "} )*) => {$( mod $case_name { #[allow(unused_imports)] use core::ops::Bound; #[test] fn pass() { let mut v = $data;", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! Feel free to take the reins. 
(looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-0ff9bfc39d077bb52392091ef2f89eed99f1c3f569c6729929dfe7d67448879b", "text": "bad: data[7..=6]; message: \"out of range\"; } in mod boundpair_len { data: [0, 1, 2, 3, 4, 5]; good: data[(Bound::Included(6), Bound::Unbounded)] == []; good: data[(Bound::Unbounded, Bound::Included(5))] == [0, 1, 2, 3, 4, 5]; good: data[(Bound::Unbounded, Bound::Excluded(6))] == [0, 1, 2, 3, 4, 5]; good: data[(Bound::Included(0), Bound::Included(5))] == [0, 1, 2, 3, 4, 5]; good: data[(Bound::Included(0), Bound::Excluded(6))] == [0, 1, 2, 3, 4, 5]; good: data[(Bound::Included(2), Bound::Excluded(4))] == [2, 3]; good: data[(Bound::Excluded(1), Bound::Included(4))] == [2, 3, 4]; good: data[(Bound::Excluded(5), Bound::Excluded(6))] == []; good: data[(Bound::Included(6), Bound::Excluded(6))] == []; good: data[(Bound::Excluded(5), Bound::Included(5))] == []; good: data[(Bound::Included(6), Bound::Included(5))] == []; bad: data[(Bound::Unbounded, Bound::Included(6))]; message: \"out of range\"; } } panic_cases! {", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. 
But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! Feel free to take the reins. (looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-f1cc330f4d190c15bda9ebdb10b5c2042f2454af1f91bfbede6c0234998c5f74", "text": "bad: data[4..=2]; message: \"but ends at\"; } in mod boundpair_neg_width { data: [0, 1, 2, 3, 4, 5]; good: data[(Bound::Included(4), Bound::Excluded(4))] == []; bad: data[(Bound::Included(4), Bound::Excluded(3))]; message: \"but ends at\"; } } panic_cases! {", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-e5de0d99a31d54341c3c0ccb627dae365750acd68eed9816a3f3a45571978b64", "query": "These feel like an omission to me, given how is implemented for this type: And if we really want to go for feature parity, then also has impls for , but I think that's mostly for the sake of methods like , and it really doesn't make sense here. Edit: Then again, considering that RangeArgument is going to have methods that return , maybe is worth it. (secretly, I just want the standard library to implement my bounds-checking and length-computation for me in my n-dimensional array, so that I can just write and trust that it's correct. 
But shhhhhh, don't tell anyone)\nOn second thought, after seeing how meager libcore's test suite is for the existing indexing operations, I have decided that I should stop trusting the standard library and start contributing to it myself. Working on a PR. Edit: awww, looks like there's actually a good reason why there's no tests with or . (the first takes forever on debug, the second sets forth The End of All Times) Edit 2: How on earth does not appear a single time in the tests for slice or str!?!\ndo you still have plans to make a PR for this feature?\nNope, totally forgot about this! Feel free to take the reins. (looking back, it seems I went ahead with the test suite PR (), but that took long enough to merge that by the time it was done I must have moved on to other things.)", "positive_passages": [{"docid": "doc-en-rust-5d558175e4ba1cb0425d5760098cecabc70304eeede42dd42714ff5201671339", "text": "bad: data[..= usize::MAX]; message: \"maximum usize\"; } in mod boundpair_overflow_end { data: [0; 1]; bad: data[(Bound::Unbounded, Bound::Included(usize::MAX))]; message: \"maximum usize\"; } in mod boundpair_overflow_start { data: [0; 1]; bad: data[(Bound::Excluded(usize::MAX), Bound::Unbounded)]; message: \"maximum usize\"; } } // panic_cases! }", "commid": "rust_pr_77704"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. 
This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIR inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at it is to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it's bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes which is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-fcfce7f65b02de96cc1532f1b1e9574dd8fed392252759318775e8d8164886a0", "text": "//! which do not. use rustc_data_structures::bitvec::BitVector; use rustc_data_structures::control_flow_graph::dominators::Dominators; use rustc_data_structures::indexed_vec::{Idx, IndexVec}; use rustc::mir::{self, Location, TerminatorKind}; use rustc::mir::visit::{Visitor, PlaceContext};", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. 
This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-02e5f62691a9d4c07ba873a0d74e011272bdaf28c296603ce51b764e1977830e", "text": "use type_of::LayoutLlvmExt; use super::FunctionCx; pub fn memory_locals<'a, 'tcx>(fx: &FunctionCx<'a, 'tcx>) -> BitVector { pub fn non_ssa_locals<'a, 'tcx>(fx: &FunctionCx<'a, 'tcx>) -> BitVector { let mir = fx.mir; let mut analyzer = LocalAnalyzer::new(fx);", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. 
This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-21fd2ae3a8b7f55b78e548a4b17a3f408dbb7a8c099893d8a325aecd38e09a07", "text": "// (e.g. structs) into an alloca unconditionally, just so // that we don't have to deal with having two pathways // (gep vs extractvalue etc). analyzer.mark_as_memory(mir::Local::new(index)); analyzer.not_ssa(mir::Local::new(index)); } } analyzer.memory_locals analyzer.non_ssa_locals } struct LocalAnalyzer<'mir, 'a: 'mir, 'tcx: 'a> { fx: &'mir FunctionCx<'a, 'tcx>, memory_locals: BitVector, seen_assigned: BitVector dominators: Dominators, non_ssa_locals: BitVector, // The location of the first visited direct assignment to each // local, or an invalid location (out of bounds `block` index). 
first_assignment: IndexVec } impl<'mir, 'a, 'tcx> LocalAnalyzer<'mir, 'a, 'tcx> { fn new(fx: &'mir FunctionCx<'a, 'tcx>) -> LocalAnalyzer<'mir, 'a, 'tcx> { let invalid_location = mir::BasicBlock::new(fx.mir.basic_blocks().len()).start_location(); let mut analyzer = LocalAnalyzer { fx, memory_locals: BitVector::new(fx.mir.local_decls.len()), seen_assigned: BitVector::new(fx.mir.local_decls.len()) dominators: fx.mir.dominators(), non_ssa_locals: BitVector::new(fx.mir.local_decls.len()), first_assignment: IndexVec::from_elem(invalid_location, &fx.mir.local_decls) }; // Arguments get assigned to by means of the function being called for idx in 0..fx.mir.arg_count { analyzer.seen_assigned.insert(idx + 1); for arg in fx.mir.args_iter() { analyzer.first_assignment[arg] = mir::START_BLOCK.start_location(); } analyzer } fn mark_as_memory(&mut self, local: mir::Local) { debug!(\"marking {:?} as memory\", local); self.memory_locals.insert(local.index()); fn first_assignment(&self, local: mir::Local) -> Option { let location = self.first_assignment[local]; if location.block.index() < self.fx.mir.basic_blocks().len() { Some(location) } else { None } } fn mark_assigned(&mut self, local: mir::Local) { if !self.seen_assigned.insert(local.index()) { self.mark_as_memory(local); fn not_ssa(&mut self, local: mir::Local) { debug!(\"marking {:?} as non-SSA\", local); self.non_ssa_locals.insert(local.index()); } fn assign(&mut self, local: mir::Local, location: Location) { if self.first_assignment(local).is_some() { self.not_ssa(local); } else { self.first_assignment[local] = location; } } }", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing 
the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-fd37b77010d9fae078b1c5f6c388f85f5f53889c8ff7eb67ce3019a3818d9ade", "text": "debug!(\"visit_assign(block={:?}, place={:?}, rvalue={:?})\", block, place, rvalue); if let mir::Place::Local(index) = *place { self.mark_assigned(index); self.assign(index, location); if !self.fx.rvalue_creates_operand(rvalue) { self.mark_as_memory(index); self.not_ssa(index); } } else { self.visit_place(place, PlaceContext::Store, location);", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. 
This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-d192cc8bb2369d941821ea7e0062a0dbd1646f02290dd9e361d6b24ccafcba66", "text": "if layout.is_llvm_immediate() || layout.is_llvm_scalar_pair() { // Recurse with the same context, instead of `Projection`, // potentially stopping at non-operand projections, // which would trigger `mark_as_memory` on locals. // which would trigger `not_ssa` on locals. self.visit_place(&proj.base, context, location); return; }", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. 
This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. I was wondering if I did something significantly different from what the MIT inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. 
I guess one way to look at is it to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes whiich is weird... where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-6904f1290897186eb4fde172750a8aa3ab54f24f4629feea1f79ea1715c4453e", "text": "} fn visit_local(&mut self, &index: &mir::Local, &local: &mir::Local, context: PlaceContext<'tcx>, _: Location) { location: Location) { match context { PlaceContext::Call => { self.mark_assigned(index); self.assign(local, location); } PlaceContext::StorageLive | PlaceContext::StorageDead | PlaceContext::Validate | PlaceContext::Validate => {} PlaceContext::Copy | PlaceContext::Move => {} PlaceContext::Move => { // Reads from uninitialized variables (e.g. in dead code, after // optimizations) require locals to be in (uninitialized) memory. // NB: there can be uninitialized reads of a local visited after // an assignment to that local, if they happen on disjoint paths. let ssa_read = match self.first_assignment(local) { Some(assignment_location) => { assignment_location.dominates(location, &self.dominators) } None => false }; if !ssa_read { self.not_ssa(local); } } PlaceContext::Inspect | PlaceContext::Store | PlaceContext::AsmOutput | PlaceContext::Borrow { .. } | PlaceContext::Projection(..) 
=> { self.mark_as_memory(index); self.not_ssa(local); } PlaceContext::Drop => { let ty = mir::Place::Local(index).ty(self.fx.mir, self.fx.cx.tcx); let ty = mir::Place::Local(local).ty(self.fx.mir, self.fx.cx.tcx); let ty = self.fx.monomorphize(&ty.to_ty(self.fx.cx.tcx)); // Only need the place if we're actually dropping it. if self.fx.cx.type_needs_drop(ty) { self.mark_as_memory(index); self.not_ssa(local); } } }", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-0a2ebeaca1b1a7e642446457b0ba8fba4cc1fe8df33138bc1266a136790039ec", "query": "Context: In order to experiment with an -aware , I was preparing to modify the lang item signature, and my first step was to ensure MIR passes the right type to begin with, as noted in Legitimately, I assumed that the MIR inline code was doing the right thing, so I mimicked it in . The resulting code actually works fine, as far as completing the 3 stages of rustc bootstrapping is involved. But before going the route for the -aware , I first tried removing the special handling of 's , trying to leave it to , shortcutting the lang item. This didn't go well, and produced a stage 1 compiler that would crash on a bad in libsyntax's . From which I derived a small test case that would exhibit the problem with my code. Anyways, I was going well over my head with this approach, thus switched to the signature change. So, what's the deal with this issue, will you ask? Well, it turns out that my MIR changes, essentially copied from MIR inlining, while they worked to produce an apparently working compiler, failed to compile that reduced testcase with an LLVM ERROR. 
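The `ssa_read` check in the hunk above hinges on dominators: a read of a local may be treated as an SSA value only if the block of its single assignment dominates the read, so every path to the read passes through the assignment first. rustc computes dominators with a fast algorithm in rustc_data_structures; purely as an illustration (not rustc's code), a naive fixpoint dominator computation on a toy CFG looks like this:

```rust
use std::collections::BTreeSet;

// Toy CFG as adjacency lists of successors; node 0 is the entry block.
// Dom(entry) = {entry}; Dom(n) = {n} ∪ ⋂ over preds p of Dom(p),
// iterated to a fixpoint.
fn dominators(succs: &[Vec<usize>]) -> Vec<BTreeSet<usize>> {
    let n = succs.len();
    let mut preds = vec![Vec::new(); n];
    for (b, ss) in succs.iter().enumerate() {
        for &s in ss {
            preds[s].push(b);
        }
    }
    let all: BTreeSet<usize> = (0..n).collect();
    let mut dom = vec![all; n];
    dom[0] = BTreeSet::from([0]);
    let mut changed = true;
    while changed {
        changed = false;
        for b in 1..n {
            let mut new: BTreeSet<usize> = preds[b]
                .iter()
                .map(|&p| dom[p].clone())
                .reduce(|a, d| a.intersection(&d).cloned().collect())
                .unwrap_or_default();
            new.insert(b);
            if new != dom[b] {
                dom[b] = new;
                changed = true;
            }
        }
    }
    dom
}

fn main() {
    // Diamond: 0 -> {1, 2}, 1 -> 3, 2 -> 3.
    let cfg = vec![vec![1, 2], vec![3], vec![3], vec![]];
    let dom = dominators(&cfg);
    // Block 1 does NOT dominate block 3: an assignment in 1 read in 3
    // is exactly the disjoint-paths case the analyzer flags as not-SSA.
    assert!(!dom[3].contains(&1));
    assert!(dom[3].contains(&0));
}
```

On the diamond, a local assigned only in block 1 but read in block 3 fails the check, which is why such a local must be spilled to memory rather than kept as an SSA value.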
I was wondering if I did something significantly different from what the MIR inlining pass was doing, so I tried to trigger it manually (since it's not enabled by default), and after some trial and error, got it to happen on a plain nightly compiler, with the following reduced testcase: Building with yields: (Note the is only there to force MIR inlining to happen without having to go over the required threshold ; liballoc's has ; similarly, the on avoids inlining of dealloc, to limit the effects on the MIR) Interestingly enough, and fail with the same error. The former outputs a truncated MIR (truncated at the entry of the first basic block), and the latter outputs nothing.\nThis is what from a run with looks like: Edit: and what the llvm-ir, as disassembled from output, looks like:\nThe problem is that the code path bb1-bb9-bb21-bb20 in IR, corresponding to bb0-bb1-bb6-bb5 in MIR, is impossible, yet exists. And in that code path, in MIR, in IR, is never initialized, and that's what LLVM doesn't like. I guess one way to look at it is to wonder why in MIR has a switch in the first place.\nAnd the answer to that last question is false edges, according to the following MIR from the mir_map pass:\nActually, the path bb0-bb7-bb4-bb10-bb12 has without ever being initialized, but that's pre-elaborate-drops, and I hear drop is different then, so I don't know if it's bad or not.\nThe weird switch that eventually causes the LLVM problem is by the elaborate-drops pass. The MIR after that phase looks like the following:\nI think the most relevant fact is this difference between MIR inlining being enabled or not: with MIR inlining disabled, this basic block in MIR: becomes the following IR: with MIR inlining enabled, this basic block in MIR: becomes which is weird... 
where did the extra code go?\nThe changes from fix this issue.", "positive_passages": [{"docid": "doc-en-rust-60b5fa2bfdb77d8d67a1b948d31c2c9dc477ab531bd33074b9a6c2164d71cfff", "text": "}, }; let memory_locals = analyze::memory_locals(&fx); let memory_locals = analyze::non_ssa_locals(&fx); // Allocate variable and temp allocas fx.locals = {", "commid": "rust_pr_50048"}], "negative_passages": []} {"query_id": "q-en-rust-ba68e05e233633fd548f84c5230d643f7efab0ada1389ab2ed74950ffb7335fe", "query": "When I run the test suite on campus, I get a failure in : The test says "this IP is unroutable, so connections should always time out," but evidently there's something about the setup at this location that was not anticipated. The results are the same whether I am connected to WiFi or through ethernet. Both connections go through the same router (a Linksys WRT120N I have physical access to, which doesn't appear to save any sort of logs), and I don't know where it goes after that. I can work around it by disconnecting from the internet entirely, and do not have issues when running the tests at home (same laptop, different network).\nis your subnet 10.255.255.0/24 or 10.255.0.0/16 or 10.0.0.0/8 or anything similar? You can check by running . Either way, the test is ill-formed and should be corrected.\nI'm not there right now, but just in case I forget to come back to this, I do recall that the router is accessed through an IP in the 192.168.x.x subnet. (I'm not sure if this precludes the situation you have described) (I wish GitHub notifications had a "mark unread" feature!)\nIt appears not to be. wifi ethernet:\nI think I must have misread something when making that test - the IP is inside of an address block reserved for private networks which means it could be routable depending on how your network is set up. 
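The distinction at issue here — RFC 1918 private ranges, which a site may legitimately route, versus the RFC 5737 documentation blocks, which are never routable — can be sketched by classifying the raw octets. This is an illustration done by hand rather than via any std helper:

```rust
use std::net::Ipv4Addr;

// RFC 1918 private ranges: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16.
// These MAY be routed inside a local network, so a test that assumes
// 10.255.255.1 is unreachable can fail on some setups.
fn is_rfc1918(ip: Ipv4Addr) -> bool {
    let [a, b, _, _] = ip.octets();
    a == 10 || (a == 172 && (16..=31).contains(&b)) || (a == 192 && b == 168)
}

// RFC 5737 documentation blocks (TEST-NET-1/2/3): reserved, never routable.
fn is_doc_block(ip: Ipv4Addr) -> bool {
    matches!(ip.octets(), [192, 0, 2, _] | [198, 51, 100, _] | [203, 0, 113, _])
}

fn main() {
    // The original test's address is merely private, hence sometimes routable.
    assert!(is_rfc1918(Ipv4Addr::new(10, 255, 255, 1)));
    assert!(!is_doc_block(Ipv4Addr::new(10, 255, 255, 1)));
    // The proposed replacement sits in TEST-NET-1.
    assert!(is_doc_block(Ipv4Addr::new(192, 0, 2, 1)));
}
```

This is why swapping the test's target into 192.0.2.0/24 removes the dependence on how a particular site routes its private address space.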
We could instead use an IP from one of the documentation blocks like 192.0.2.0.\nThe address 10.255.255.1 is routable on my vanilla Ubuntu 18.04 box: The address 192.0.2.0 is not routable: Therefore, this test always fails on this box. I will file a PR changing it to 192.0.2.0.\nThis is a dup of .", "positive_passages": [{"docid": "doc-en-rust-370b94bc5b44edae3b733a3c8ed8438123a981a213b848cbc52be4497314655a", "text": "} #[test] fn connect_timeout_unroutable() { // this IP is unroutable, so connections should always time out, // provided the network is reachable to begin with. let addr = \"10.255.255.1:80\".parse().unwrap(); let e = TcpStream::connect_timeout(&addr, Duration::from_millis(250)).unwrap_err(); assert!(e.kind() == io::ErrorKind::TimedOut || e.kind() == io::ErrorKind::Other, \"bad error: {} {:?}\", e, e.kind()); } #[test] fn connect_timeout_unbound() { // bind and drop a socket to track down a \"probably unassigned\" port let socket = TcpListener::bind(\"127.0.0.1:0\").unwrap();", "commid": "rust_pr_57584"}], "negative_passages": []} {"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. 
We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. 
That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-404d6e78cf72132ed2f1e1b6234d4326bef6c8d83632640ebcca042ecd619e22", "text": "touch \"$TOOLSTATE_FILE\" # Try to test all the tools and store the build/test success in the TOOLSTATE_FILE set +e python2.7 \"$X_PY\" test --no-fail-fast src/doc/book ", "commid": "rust_pr_50573"}], "negative_passages": []} {"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet whether either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. 
Right now producing a beta is pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-80adeffd8f7aea2e506dd9af7a37033dbede7940eb63883a53160c95e71230e4", "text": "cat \"$TOOLSTATE_FILE\" echo # This function checks that if a tool's submodule changed, the tool's state must improve verify_status() { echo \"Verifying status of $1...\" if echo \"$CHANGED_FILES\" | grep -q \"^M[[:blank:]]$2$\"; then", "commid": "rust_pr_50573"}], "negative_passages": []} {"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. 
Or is this about beta backports breaking tools?\nI don't think we've had a beta yet where either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have a grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is a pain as you have to resend it to bors multiple times after re-learning that these are blocking the release. We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases.
That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-dc22cdea4e93d83e8022dd65f88645e01893f33b2129d0de9e9bb06b3a6516ce", "text": "fi } # deduplicates the submodule check and the assertion that on beta some tools MUST be passing check_dispatch() { if [ \"$1\" = submodule_changed ]; then # ignore $2 (branch id) verify_status $3 $4 elif [ \"$2\" = beta ]; then echo \"Requiring test passing for $3...\" if grep -q '\"'\"$3\"'\":\"(test|build)-fail\"' \"$TOOLSTATE_FILE\"; then exit 4 fi fi } # list all tools here status_check() { check_dispatch $1 beta book src/doc/book check_dispatch $1 beta nomicon src/doc/nomicon check_dispatch $1 beta reference src/doc/reference check_dispatch $1 beta rust-by-example src/doc/rust-by-example check_dispatch $1 beta rls src/tool/rls check_dispatch $1 beta rustfmt src/tool/rustfmt # these tools are not required for beta to successfully branch check_dispatch $1 nightly clippy-driver src/tool/clippy check_dispatch $1 nightly miri src/tool/miri } # If this PR is intended to update one of these tools, do not let the build pass # when they do not test-pass. verify_status book src/doc/book verify_status nomicon src/doc/nomicon verify_status reference src/doc/reference verify_status rust-by-example src/doc/rust-by-example verify_status rls src/tool/rls verify_status rustfmt src/tool/rustfmt verify_status clippy-driver src/tool/clippy verify_status miri src/tool/miri status_check \"submodule_changed\" if [ \"$RUST_RELEASE_CHANNEL\" = nightly -a -n \"${TOOLSTATE_REPO_ACCESS_TOKEN+is_set}\" ]; then . \"$(dirname $0)/repo.sh\"", "commid": "rust_pr_50573"}], "negative_passages": []} {"query_id": "q-en-rust-d2884a1b6c1ce7c85420c3ad44910851fa8a8c5a763de172c053a6ca52fff6de", "query": "I think this is the fourth beta I'm having to re-figure-out how all this works and figure out how to ignore their failures on beta. 
can we just exclude these tools from toolstate until this is figured out?\nI'm slightly confused, why would these tools need to be disabled/their submodule update if they were building on master? Isn't the beta just the current master branch + some tweaks for --version output and no unstable features? The release in March actually prevented the merge of any PRs on master that would break a tool. So I thought the idea was to fix all tools in that week. Or is this about beta backports breaking tools?\nI don't think we've had a beta yet where either clippy or miri were passing tests and fails the build if any tool fails to build on beta. We also do not want to release either of these tools to stable/beta yet.\nneither of these tools has a step, so there's no danger in that (the last beta) didn't have clippy issues. Since we have a grace period of a week, I can easily get in a working clippy within that week (which then won't be broken until the beta). My issue is mainly that I don't really know when it begins ;) Maybe we can ping the tool authors once a day that their tool needs to be fixed for the upcoming beta?\nMy point with this issue is that we shouldn't automate ourselves into a hole where tools we do not want to ship block releases. That should not require authors to fix tools, we should simply ignore failures or have an easy way of disabling them. Right now producing a beta is a pain as you have to resend it to bors multiple times after re-learning that these are blocking the release.
We do not ship miri/clippy but a successful clippy compilation affects how rls is compiled, which should not be happening on beta/stable.\nThe code that influences rls is explicitly skipped on beta/stable, so the rls is completely free of clippy on beta/stable for miri I agree, but for clippy, aren't we trying to move to a model where we actually dist it on stable, so I'd call having it ready for stable some form of proof of concept for the distributing part.\nOk sure yes rls is protected but I feel like this is missing my point. Miri isn't ready. Clippy isn't ready. They're blocking beta releases. That shouldn't happen.", "positive_passages": [{"docid": "doc-en-rust-c07a8d7c3780da87d21f3d68ce8d02ab9576f8692fa07bdc2c2cf13cc2b9d862", "text": "exit 0 fi if grep -q fail \"$TOOLSTATE_FILE\"; then exit 4 fi # abort compilation if an important tool doesn't build # (this code is reachable if not on the nightly channel) status_check \"beta_required\" ", "commid": "rust_pr_50573"}], "negative_passages": []} {"query_id": "q-en-rust-17e28c23ef88ba0e1b08da6cf477a82e843c5ef036017727d25ade62c9390c06", "query": "When testing a local rebuild in Fedora -- using rustc 1.26.0 as stage0 to build rustc 1.26.0 again -- I ran into this ICE: The \"slice index starts at 1 but ends at 0\" comes from here: With GDB I found that is completely empty! Stepping through from the very beginning, does get the right argc and argv, calling appropriately. But the addresses of and are not where accesses later from , so that just sees and . It turns out that the bootstrap was causing my to load the freshly-built libraries, rather than its own in . The bootstrap wrapper does try to set a libdir at the front of that path, but it uses , here , so that doesn't help. This only hits local-rebuild, because normal prior-release stage0 builds have different library metadata.\nIs this fixed? 
cc\nSorry, yes, fixed by .", "positive_passages": [{"docid": "doc-en-rust-455d3c934cdcc095a8dae7b3162c33e8cb9a6a88b485d76e2c8943772edeb940", "text": "// FIXME: Temporary fix for https://github.com/rust-lang/cargo/issues/3005 // Force cargo to output binaries with disambiguating hashes in the name cargo.env(\"__CARGO_DEFAULT_LIB_METADATA\", &self.config.channel); let metadata = if compiler.stage == 0 { // Treat stage0 like special channel, whether it's a normal prior- // release rustc or a local rebuild with the same version, so we // never mix these libraries by accident. \"bootstrap\" } else { &self.config.channel }; cargo.env(\"__CARGO_DEFAULT_LIB_METADATA\", &metadata); let stage; if compiler.stage == 0 && self.local_rebuild {", "commid": "rust_pr_50789"}], "negative_passages": []} {"query_id": "q-en-rust-3ab25983fc6b01240f3f975c30df52d8e2b8b58602fa07a35a3c1d140ae8603d", "query": "For type names only, submodules can't apparently see non-pub types defined in the parent module. So, this code: ... generates this error:\nNow it at least tells you that is private: Though, it doesn't need to display the error twice and could do better than .\nlinking to for unified tracking of resolve\nClosing as a dupe of , this would be allowed with the rules in that bug.", "positive_passages": [{"docid": "doc-en-rust-2bef0edcdf7865127c0d209c39b604a30f1689341a38d38f4cbfc1ac522249a9", "text": " Subproject commit 8b7f7e667268921c278af94ae30a61e87a22b22b Subproject commit 329923edec41d0ddbea7f30ab12fca0436d459ae ", "commid": "rust_pr_69692"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. 
Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-5bddb5543d6f5ab8230be1cfadf1ad41d4d3bd205a6f7f8090a7f4a5ed8bc869", "text": " #![feature(const_raw_ptr_to_usize_cast)] fn main() { [(); &(static |x| {}) as *const _ as usize]; //~^ ERROR: closures cannot be static //~| ERROR: type annotations needed [(); &(static || {}) as *const _ as usize]; //~^ ERROR: closures cannot be static //~| ERROR: evaluation of constant value failed } ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-72554819f1e09f22a198a99bff5992474411083d7ca6fd279f1de1acee4146d5", "text": " error[E0697]: closures cannot be static --> $DIR/issue-52432.rs:4:12 | LL | [(); &(static |x| {}) as *const _ as usize]; | ^^^^^^^^^^ error[E0697]: closures cannot be static --> $DIR/issue-52432.rs:7:12 | LL | [(); &(static || {}) as *const _ as usize]; | ^^^^^^^^^ error[E0282]: type annotations needed --> $DIR/issue-52432.rs:4:20 | LL | [(); &(static |x| {}) as *const _ as usize]; | ^ consider giving this closure parameter a type error[E0080]: evaluation of constant value failed --> $DIR/issue-52432.rs:7:10 | LL | [(); &(static || {}) as *const _ as usize]; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
\"pointer-to-integer cast\" needs an rfc before being allowed inside constants error: aborting due to 4 previous errors Some errors have detailed explanations: E0080, E0282, E0697. For more information about an error, try `rustc --explain E0080`. ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-724d6dc7cfbe8265036e53ed5f6630adf31768d81ecf2bbd8f5c263f4af2e870", "text": " // check-pass #![allow(dead_code)] trait Structure: Sized where E: Encoding { type RefTarget: ?Sized; type FfiPtr; unsafe fn borrow_from_ffi_ptr<'a>(ptr: Self::FfiPtr) -> Option<&'a Self::RefTarget>; } enum Slice {} impl Structure for Slice where E: Encoding { type RefTarget = [E::Unit]; type FfiPtr = (*const E::FfiUnit, usize); unsafe fn borrow_from_ffi_ptr<'a>(_ptr: Self::FfiPtr) -> Option<&'a Self::RefTarget> { panic!() } } trait Encoding { type Unit: Unit; type FfiUnit; } trait Unit {} enum Utf16 {} impl Encoding for Utf16 { type Unit = Utf16Unit; type FfiUnit = u16; } struct Utf16Unit(pub u16); impl Unit for Utf16Unit {} type SUtf16Str = SeStr; struct SeStr where S: Structure, E: Encoding { _data: S::RefTarget, } impl SeStr where S: Structure, E: Encoding { pub unsafe fn from_ptr<'a>(_ptr: S::FfiPtr) -> Option<&'a Self> { panic!() } } fn main() { const TEXT_U16: &'static [u16] = &[]; let _ = unsafe { SUtf16Str::from_ptr((TEXT_U16.as_ptr(), TEXT_U16.len())).unwrap() }; } ", "commid": "rust_pr_66331"}], "negative_passages": []} 
{"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-7a2023dc841bbdb6763ce545eb991f16eb5ab4a88d0cbaa4960c58233974444a", "text": " // check-pass #![allow(dead_code)] trait Structure: Sized where E: Encoding { type RefTarget: ?Sized; type FfiPtr; unsafe fn borrow_from_ffi_ptr<'a>(ptr: Self::FfiPtr) -> Option<&'a Self::RefTarget>; } enum Slice {} impl Structure for Slice where E: Encoding { type RefTarget = [E::Unit]; type FfiPtr = (*const E::FfiUnit, usize); unsafe fn borrow_from_ffi_ptr<'a>(_ptr: Self::FfiPtr) -> Option<&'a Self::RefTarget> { panic!() } } trait Encoding { type Unit: Unit; type FfiUnit; } trait Unit {} enum Utf16 {} impl Encoding for Utf16 { type Unit = Utf16Unit; type FfiUnit = u16; } struct Utf16Unit(pub u16); impl Unit for Utf16Unit {} struct SUtf16Str { _data: >::RefTarget, } impl SUtf16Str { pub unsafe fn from_ptr<'a>(ptr: >::FfiPtr) -> Option<&'a Self> { std::mem::transmute::::Unit]>, _>( >::borrow_from_ffi_ptr(ptr)) } } fn main() { const TEXT_U16: &'static [u16] = &[]; let _ = unsafe { SUtf16Str::from_ptr((TEXT_U16.as_ptr(), TEXT_U16.len())).unwrap() }; } ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. 
Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-63cdceff39b03b3a624a307128146e4fc945f0bfda9e06d2581da336ccd434d2", "text": " #![feature(type_alias_impl_trait)] type Closure = impl FnOnce(); //~ ERROR: type mismatch resolving fn c() -> Closure { || -> Closure { || () } } fn main() {} ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-d03a2925b19751e5f2dabe6f0a8ffa721f0e21c364be453372e0a85437892d1e", "text": " error[E0271]: type mismatch resolving `<[closure@$DIR/issue-63279.rs:6:5: 6:28] as std::ops::FnOnce<()>>::Output == ()` --> $DIR/issue-63279.rs:3:1 | LL | type Closure = impl FnOnce(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected opaque type, found () | = note: expected type `Closure` found type `()` = note: the return type of a function must have a statically known size error: aborting due to previous error For more information about this error, try `rustc --explain E0271`. 
", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-d88079ddf8cdb4aa938bdd88198f72713c0215a618a851aa4e4ae5591d89e28a", "text": " #![feature(fn_traits, unboxed_closures)] fn test FnOnce<(&'x str,)>>(_: F) {} struct Compose(F,G); impl FnOnce<(T,)> for Compose where F: FnOnce<(T,)>, G: FnOnce<(F::Output,)> { type Output = G::Output; extern \"rust-call\" fn call_once(self, (x,): (T,)) -> G::Output { (self.1)((self.0)(x)) } } struct Str<'a>(&'a str); fn mk_str<'a>(s: &'a str) -> Str<'a> { Str(s) } fn main() { let _: for<'a> fn(&'a str) -> Str<'a> = mk_str; // expected concrete lifetime, found bound lifetime parameter 'a let _: for<'a> fn(&'a str) -> Str<'a> = Str; //~^ ERROR: mismatched types test(|_: &str| {}); test(mk_str); // expected concrete lifetime, found bound lifetime parameter 'x test(Str); //~ ERROR: type mismatch in function arguments test(Compose(|_: &str| {}, |_| {})); test(Compose(mk_str, |_| {})); // internal compiler error: cannot relate bound region: // ReLateBound(DebruijnIndex { depth: 2 }, // BrNamed(DefId { krate: 0, node: DefIndex(6) => test::'x }, 'x(65))) //<= ReSkolemized(0, // BrNamed(DefId { krate: 0, node: DefIndex(6) => test::'x }, 'x(65))) test(Compose(Str, |_| {})); } ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-d08a597a6f1442c613d68fc673ac93e272370bf91300ee1d13952bf52364366f", "query": "This issue might be related to , which has been fixed in 
playground link: Backtrace:\nremoving the argument inside the closure gives a different ICE. Interestingly stable refuses to compile this, but beta and nightly ICE's playground: Backtrace:\nUPDATE: now also ICE's in stable.\nNow no longer ICE's since 1.32.\nThe ICE doesn't appear on the latest nightly via the above playground link, marked as E-needstest", "positive_passages": [{"docid": "doc-en-rust-624ef5eb0ef9044e70456adbae6f8e2f92deb4ba30e110f9235aaf91650252a9", "text": " error[E0308]: mismatched types --> $DIR/issue-30904.rs:20:45 | LL | let _: for<'a> fn(&'a str) -> Str<'a> = Str; | ^^^ expected concrete lifetime, found bound lifetime parameter 'a | = note: expected type `for<'a> fn(&'a str) -> Str<'a>` found type `fn(&str) -> Str<'_> {Str::<'_>}` error[E0631]: type mismatch in function arguments --> $DIR/issue-30904.rs:26:10 | LL | fn test FnOnce<(&'x str,)>>(_: F) {} | ---- -------------------------- required by this bound in `test` ... LL | struct Str<'a>(&'a str); | ------------------------ found signature of `fn(&str) -> _` ... LL | test(Str); | ^^^ expected signature of `for<'x> fn(&'x str) -> _` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_66331"}], "negative_passages": []} {"query_id": "q-en-rust-0529ba70049fc30d5208577df3ff9b503565ae2aa8f440d3c9de391979f11415", "query": "I've some code reading a stream from stdin. Wanted to wrap it in a Cursor to have it keep track of the number of bytes read already instead of having to do it manually, but started getting odd compiler errors. Since works just fine with (and ) shouldn't it work just the same with ? () Errors:\nA Cursor is meant as a wrapper to Vec /// A `Cursor` wraps another type and provides it with a /// A `Cursor` wraps an in-memory buffer and provides it with a /// [`Seek`] implementation. 
/// /// `Cursor`s are typically used with in-memory buffers to allow them to /// implement [`Read`] and/or [`Write`], allowing these buffers to be used /// anywhere you might use a reader or writer that does actual I/O. /// `Cursor`s are used with in-memory buffers, anything implementing /// `AsRef<[u8]>`, to allow them to implement [`Read`] and/or [`Write`], /// allowing these buffers to be used anywhere you might use a reader or writer /// that does actual I/O. /// /// The standard library implements some I/O traits on various types which /// are commonly used as a buffer, like `Cursor<`[`Vec`]`>` and", "commid": "rust_pr_52548"}], "negative_passages": []} {"query_id": "q-en-rust-0529ba70049fc30d5208577df3ff9b503565ae2aa8f440d3c9de391979f11415", "query": "I've some code reading a stream from stdin. Wanted to wrap it in a Cursor to have it keep track of the number of bytes read already instead of having to do it manually, but started getting odd compiler errors. Since works just fine with (and ) shouldn't it work just the same with ? () Errors:\nA Cursor is meant as a wrapper to Vec Cursor { /// Creates a new cursor wrapping the provided underlying I/O object. /// Creates a new cursor wrapping the provided underlying in-memory buffer. /// /// Cursor initial position is `0` even if underlying object (e. /// g. `Vec`) is not empty. So writing to cursor starts with /// overwriting `Vec` content, not with appending to it. /// Cursor initial position is `0` even if underlying buffer (e.g. `Vec`) /// is not empty. So writing to cursor starts with overwriting `Vec` /// content, not with appending to it. /// /// # Examples ///", "commid": "rust_pr_52548"}], "negative_passages": []} {"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . 
- - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-387b544c914de95389b319be1aeeb1b4522cfe26858749aa86ec2fba8dfe66c6", "text": "err.span_label(arm_span, msg); } } hir::MatchSource::TryDesugar => { // Issue #51632 if let Ok(try_snippet) = self.tcx.sess.source_map().span_to_snippet(arm_span) { err.span_suggestion_with_applicability( arm_span, \"try wrapping with a success variant\", format!(\"Ok({})\", try_snippet), Applicability::MachineApplicable, ); } } hir::MatchSource::TryDesugar => {} _ => { let msg = \"match arm with an incompatible type\"; if self.tcx.sess.source_map().is_multiline(arm_span) {", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . 
- - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-1e4985b5942246975c7be74a00b242e8c3492d4b3d8bdb2fd41296d89a828bf4", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result { Ok(1) } fn forbidden_narratives() -> Result { Ok(missing_discourses()?) //~^ ERROR try expression alternatives have incompatible types //~| HELP try wrapping with a success variant } fn main() {} ", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. 
There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-1feaf58101a17b5147829d1ea1aad9e4f0c635e3a34ff7a32f221616708e1c27", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result {", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. 
We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. (I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-652e2841c72ee5c9c0a667ffd834e6c8873dbfffe4bb99ef5eda7faea21772c8", "text": "fn forbidden_narratives() -> Result { missing_discourses()? //~^ ERROR try expression alternatives have incompatible types //~| HELP try wrapping with a success variant } fn main() {}", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-e9dca273549beedd5b5489e201dc989d085cd5755cece8fde4d09f1712232875", "query": "Reproduced on versions: - Isolated example to reproduce: Actual error: Mentioned suggestion about \"wrap into \" is incorrect at all. There we should deref the value from the . - - Expected error: (something like this)\nHere's where the suggestion gets issued: The motivation was to solve (where a -expression appears in tail position, which seemed likely to be a common mistake), but it turns out that there are other situations where we run into a type mismatch between match arms of a desugared expression. We regret the error.\nHere's another case of this:\nI see what's happening, we're only checking wether the error came from a desugaring, disregarding the types involved. It should be possible to check that information as well, even if the suggestion is only provided when it's a where everything but the error type matches. 
(I think it's ok if the supplied error cannot be converted to the expected error, as this suggestion exposes newcomers to a non-obvious construct at least.)\nClippy to detect this that we might want to uplift ...\n(But for now I'm just going to remove the suggestion.)", "positive_passages": [{"docid": "doc-en-rust-d1bc2b172cb70190c65a14b5dfa6b929349378348910929ad014f2a551242d21", "text": "error[E0308]: try expression alternatives have incompatible types --> $DIR/issue-51632-try-desugar-incompatible-types.rs:20:5 --> $DIR/issue-51632-try-desugar-incompatible-types.rs:18:5 | LL | missing_discourses()? | ^^^^^^^^^^^^^^^^^^^^^ | | | expected enum `std::result::Result`, found isize | help: try wrapping with a success variant: `Ok(missing_discourses()?)` | ^^^^^^^^^^^^^^^^^^^^^ expected enum `std::result::Result`, found isize | = note: expected type `std::result::Result` found type `isize`", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-76ba411912a635c673132938a65ee3f117cbe7cabf618fd3502145f8bd1fbffa", "query": "What I actually wanted was . $DIR/binary-op-suggest-deref.rs:6:12 | LL | if i < 0 {} | ^ expected `&i64`, found integer | help: consider dereferencing the borrow | LL | if *i < 0 {} | + error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_117893"}], "negative_passages": []} {"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! 
I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-89114cf93667acef9c5533306d97b6025a5c721cced0206ef06002e1ec04c08f", "text": " Subproject commit 52a6a4d7087d14a35d44a11c39c77fa79d71378d Subproject commit d549d85b1735dc5066b2973f8549557a813bb9c8 ", "commid": "rust_pr_52983"}], "negative_passages": []} {"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-22a5e40badb1b6003ca1c24e2ec339ee53d50ed6d40326eb9c5cd6964b9ae7f3", "text": " Subproject commit 03684905101f0b7e49dfe530e54dc1aeac6ef0fb Subproject commit e19f07f5a6e5546ab4f6ea951e3c6b8627edeaa7 ", "commid": "rust_pr_52983"}], "negative_passages": []} {"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! 
I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-87641d3cfe5a484aa36a83a0fbf84b531b0529853af4fefdce2fe4982a28ca4b", "text": "# If this file is modified, then llvm will be (optionally) cleaned and then rebuilt. # The actual contents of this file do not matter, but to trigger a change on the # build bots then the contents should be changed so git updates the mtime. 2018-07-12 No newline at end of file 2018-08-02 ", "commid": "rust_pr_52983"}], "negative_passages": []} {"query_id": "q-en-rust-b134729004a99970bd0ebb29cdf9e30f7b25f01ba1acbda070f62cfbf3553ae9", "query": "LLVM 7.0 has been branched as of 2018-08-01, with the final tag probably a couple weeks away. The first release candidate will be tagged in a couple of days. is the llvm-dev announcement, and is a view of the commits on the branch. Once the final release has been tagged, Rust should be updated to LLVM 7.0.\nGreat! I've started our branches: I'm hoping that this'll be pretty smooth since we upgraded most of the way through the cycle! I'm testing locally now and will send a PR once green.", "positive_passages": [{"docid": "doc-en-rust-0d1fbd9cf0143e1ef1401934c2ffd99bc8f0fb146bc57a1d4c2495b66fb7f7b1", "text": " Subproject commit 8214ccf861d538671b0a1436dbf4538dc4a64d09 Subproject commit f76ea3ca16ed22dde8ef929db74a4b4df6f2f899 ", "commid": "rust_pr_52983"}], "negative_passages": []} {"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. 
Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) at /// Returns a pair of slices which contain the contents of the buffer not used by the VecDeque. #[inline] unsafe fn unused_as_mut_slices<'a>(&'a mut self) -> (&'a mut [T], &'a mut [T]) { let head = self.head; let tail = self.tail; let buf = self.buffer_as_mut_slice(); if head != tail { // In buf, head..tail contains the VecDeque and tail..head is unused. // So calling `ring_slices` with tail and head swapped returns unused slices. RingSlices::ring_slices(buf, tail, head) } else { // Swapping doesn't help when head == tail. let (before, after) = buf.split_at_mut(head); (after, before) } } /// Copies a potentially wrapping block of memory len long from src to dest. /// (abs(dst - src) + len) must be no larger than cap() (There must be at /// most one continuous overlapping region between src and dest).", "commid": "rust_pr_53571"}], "negative_passages": []} {"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . 
Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) at // Copies all values from `src_slice` to the start of `dst_slice`. unsafe fn copy_whole_slice(src_slice: &[T], dst_slice: &mut [T]) { let len = src_slice.len(); ptr::copy_nonoverlapping(src_slice.as_ptr(), dst_slice[..len].as_mut_ptr(), len); } let src_total = other.len(); // Guarantees there is space in `self` for `other`. self.reserve(src_total); self.head = { let original_head = self.head; // The goal is to copy all values from `other` into `self`. To avoid any // mismatch, all valid values in `other` are retrieved... let (src_high, src_low) = other.as_slices(); // and unoccupied parts of self are retrieved. let (dst_high, dst_low) = unsafe { self.unused_as_mut_slices() }; // Then all that is needed is to copy all values from // src (src_high and src_low) to dst (dst_high and dst_low). // // other [o o o . . . . . o o o o] // [5 6 7] [1 2 3 4] // src_low src_high // // self [. . . . . . o o o o . .] // [3 4 5 6 7 .] [1 2] // dst_low dst_high // // Values are not copied one by one but as slices in `copy_whole_slice`. // What slices are used depends on various properties of src and dst. // There are 6 cases in total: // 1. `src` is contiguous and fits in dst_high // 2. `src` is contiguous and does not fit in dst_high // 3. 
`src` is discontiguous and fits in dst_high // 4. `src` is discontiguous and does not fit in dst_high // + src_high is smaller than dst_high // 5. `src` is discontiguous and does not fit in dst_high // + dst_high is smaller than src_high // 6. `src` is discontiguous and does not fit in dst_high // + dst_high is the same size as src_high let src_contiguous = src_low.is_empty(); let dst_high_fits_src = dst_high.len() >= src_total; match (src_contiguous, dst_high_fits_src) { (true, true) => { // 1. // other [. . . o o o . . . . . .] // [] [1 1 1] // // self [. o o o o o . . . . . .] // [.] [1 1 1 . . .] unsafe { copy_whole_slice(src_high, dst_high); } original_head + src_total } (true, false) => { // 2. // other [. . . o o o o o . . . .] // [] [1 1 2 2 2] // // self [. . . . . . . o o o . .] // [2 2 2 . . . .] [1 1] let (src_1, src_2) = src_high.split_at(dst_high.len()); unsafe { copy_whole_slice(src_1, dst_high); copy_whole_slice(src_2, dst_low); } src_total - dst_high.len() } (false, true) => { // 3. // other [o o . . . . . . . o o o] // [2 2] [1 1 1] // // self [. o o . . . . . . . . .] // [.] [1 1 1 2 2 . . . .] let (dst_1, dst_2) = dst_high.split_at_mut(src_high.len()); unsafe { copy_whole_slice(src_high, dst_1); copy_whole_slice(src_low, dst_2); } original_head + src_total } (false, false) => { if src_high.len() < dst_high.len() { // 4. // other [o o o . . . . . . o o o] // [2 3 3] [1 1 1] // // self [. . . . . . o o . . . .] // [3 3 . . . .] [1 1 1 2] let (dst_1, dst_2) = dst_high.split_at_mut(src_high.len()); let (src_2, src_3) = src_low.split_at(dst_2.len()); unsafe { copy_whole_slice(src_high, dst_1); copy_whole_slice(src_2, dst_2); copy_whole_slice(src_3, dst_low); } src_3.len() } else if src_high.len() > dst_high.len() { // 5. // other [o o o . . . . . o o o o] // [3 3 3] [1 1 2 2] // // self [. . . . . . o o o o . .] // [2 2 3 3 3 .] 
[1 1] let (src_1, src_2) = src_high.split_at(dst_high.len()); let (dst_2, dst_3) = dst_low.split_at_mut(src_2.len()); unsafe { copy_whole_slice(src_1, dst_high); copy_whole_slice(src_2, dst_2); copy_whole_slice(src_low, dst_3); } dst_2.len() + src_low.len() } else { // 6. // other [o o . . . . . . . o o o] // [2 2] [1 1 1] // // self [. . . . . . . o o . . .] // [2 2 . . . . .] [1 1 1] unsafe { copy_whole_slice(src_high, dst_high); copy_whole_slice(src_low, dst_low); } src_low.len() } } } }; // Some values now exist in both `other` and `self` but are made inaccessible in `other`. other.tail = other.head; // naive impl self.extend(other.drain(..)); } /// Retains only the elements specified by the predicate.", "commid": "rust_pr_53571"}], "negative_passages": []} {"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) 
at fmt::Debug for Iter<'a, T> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let (front, back) = RingSlices::ring_slices(self.ring, self.head, self.tail); f.debug_tuple(\"Iter\") .field(&self.ring) .field(&self.tail) .field(&self.head) .finish() .field(&front) .field(&back) .finish() } }", "commid": "rust_pr_53571"}], "negative_passages": []} {"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) 
at fmt::Debug for IterMut<'a, T> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { let (front, back) = RingSlices::ring_slices(&*self.ring, self.head, self.tail); f.debug_tuple(\"IterMut\") .field(&self.ring) .field(&self.tail) .field(&self.head) .finish() .field(&front) .field(&back) .finish() } }", "commid": "rust_pr_53571"}], "negative_passages": []} {"query_id": "q-en-rust-a10ebf118771c08c5b9499c75b4940d17444dad36f993b7b4a374436e600f49f", "query": "a programming language that I am working on, and the VM is written in Rust. Up until Rust nightly 2018-08-17, everything works fine. Starting with the nightly from the 17th, I'm observing various crashes and different program behaviour. For example: On Windows it will either fail with a , or (more on this in a moment). On Linux . Note that the funny segfault output is because the command is started with Ruby, and Ruby installs its own segmentation fault handler. Locally it will usually fail with the same runtime error as observed in Windows above, but sometimes it will segfault. Sometimes it will panic because certain operations are performed using NULL pointers where this is not expected. The last nightly that did not suffer from these problems was Rust 2018-08-16. Stable Rust also works fine. When the segmentation faults happen, they are usually in different places. For example, for one segmentation fault the backtrace is as follows: 0x00007ffff7e12763 in intmalloc () from 0x00007ffff7e13ada in malloc () from 0x0000555555568e6b in alloc::alloc::alloc (layout=...) 
at #[test] fn issue_53529() { use boxed::Box; let mut dst = VecDeque::new(); dst.push_front(Box::new(1)); dst.push_front(Box::new(2)); assert_eq!(*dst.pop_back().unwrap(), 1); let mut src = VecDeque::new(); src.push_front(Box::new(2)); dst.append(&mut src); for a in dst { assert_eq!(*a, 2); } } }", "commid": "rust_pr_53571"}], "negative_passages": []} {"query_id": "q-en-rust-a0ee5342ca104e62eb6d098a22a435dd5d4e4c857f5864c4dc797a5f594b49df", "query": "here is the example: this example will compile well if you remove\nBacktrace: `\nHmm I reproduced same ICE with next snippet without :\nSmaller repro: Edit: AFAICT the problem is the existential assoc type is being made dependent on the type parameter of the trait method ('s in this repro, 's in the OP). So it would be a compile error if it weren't for the ICE.\ncompiles without ICE.\nsorry it needs to be feature or used 2018 edition. Here is complete one which fails to compile regardless feature:\nRight, 2015 doesn't hit the error until you actually try to use the type. Regardless, as I said in my previous comment, this code should be a compile error anyway regardless of edition, since it's trying to declare an associated type that depends on the type parameters of the trait method rather than the trait.\nhey have you had a chance to look into this yet? Do you think you will be looking at it in the near term, or should we unassign you?\nRemoved my assignment. Unfortunately I'm running behind on the not-time-critical issues I assigned to myself.\nDiscussed in T-compiler meeting. This is a blocker for the existential type feature. But there is no schedule yet set for stabilizing that feature. 
Thus, downgrading fixing this to P-medium.\nThis no longer produces an ICE on the latest nightly.\nYes, it was fixed by", "positive_passages": [{"docid": "doc-en-rust-0efe230a644798eef370dbc02275a6e99f7202a5be7534a598554d569974cb19", "text": " // ignore-tidy-linelength #![feature(type_alias_impl_trait)] use std::fmt::Debug; pub trait Foo { type Item: Debug; fn foo(_: T) -> Self::Item; } #[derive(Debug)] pub struct S(std::marker::PhantomData); pub struct S2; impl Foo for S2 { type Item = impl Debug; fn foo(_: T) -> Self::Item { //~^ Error type parameter `T` is part of concrete type but not used in parameter list for the `impl Trait` type alias S::(Default::default()) } } fn main() { S2::foo(123); } ", "commid": "rust_pr_63474"}], "negative_passages": []} {"query_id": "q-en-rust-a0ee5342ca104e62eb6d098a22a435dd5d4e4c857f5864c4dc797a5f594b49df", "query": "here is the example: this example will compile well if you remove\nBacktrace: `\nHmm I reproduced same ICE with next snippet without :\nSmaller repro: Edit: AFAICT the problem is the existential assoc type is being made dependent on the type parameter of the trait method ('s in this repro, 's in the OP). So it would be a compile error if it weren't for the ICE.\ncompiles without ICE.\nsorry it needs to be feature or used 2018 edition. Here is complete one which fails to compile regardless feature:\nRight, 2015 doesn't hit the error until you actually try to use the type. Regardless, as I said in my previous comment, this code should be a compile error anyway regardless of edition, since it's trying to declare an associated type that depends on the type parameters of the trait method rather than the trait.\nhey have you had a chance to look into this yet? Do you think you will be looking at it in the near term, or should we unassign you?\nRemoved my assignment. Unfortunately I'm running behind on the not-time-critical issues I assigned to myself.\nDiscussed in T-compiler meeting. 
This is a blocker for the existential type feature. But there is no schedule yet set for stabilizing that feature. Thus, downgrading fixing this to P-medium.\nThis no longer produces an ICE on the latest nightly.\nYes, it was fixed by", "positive_passages": [{"docid": "doc-en-rust-72ebb4c626c708755b313d8eb04f08eec7f7990a031147348d076ea0fec374c1", "text": " error: type parameter `T` is part of concrete type but not used in parameter list for the `impl Trait` type alias --> $DIR/issue-53598.rs:20:42 | LL | fn foo(_: T) -> Self::Item { | __________________________________________^ LL | | LL | | S::(Default::default()) LL | | } | |_____^ error: aborting due to previous error ", "commid": "rust_pr_63474"}], "negative_passages": []} {"query_id": "q-en-rust-a0ee5342ca104e62eb6d098a22a435dd5d4e4c857f5864c4dc797a5f594b49df", "query": "here is the example: this example will compile well if you remove\nBacktrace: `\nHmm I reproduced same ICE with next snippet without :\nSmaller repro: Edit: AFAICT the problem is the existential assoc type is being made dependent on the type parameter of the trait method ('s in this repro, 's in the OP). So it would be a compile error if it weren't for the ICE.\ncompiles without ICE.\nsorry it needs to be feature or used 2018 edition. Here is complete one which fails to compile regardless feature:\nRight, 2015 doesn't hit the error until you actually try to use the type. Regardless, as I said in my previous comment, this code should be a compile error anyway regardless of edition, since it's trying to declare an associated type that depends on the type parameters of the trait method rather than the trait.\nhey have you had a chance to look into this yet? Do you think you will be looking at it in the near term, or should we unassign you?\nRemoved my assignment. Unfortunately I'm running behind on the not-time-critical issues I assigned to myself.\nDiscussed in T-compiler meeting. This is a blocker for the existential type feature. 
But there is no schedule yet set for stabilizing that feature. Thus, downgrading fixing this to P-medium.\nThis no longer produces an ICE on the latest nightly.\nYes, it was fixed by", "positive_passages": [{"docid": "doc-en-rust-c2893578412453f47df0497bee972b249ab8d2d126a9d5a15679fe41d6b9c520", "text": " // ignore-tidy-linelength #![feature(arbitrary_self_types)] #![feature(type_alias_impl_trait)] use std::ops::Deref; trait Foo { type Bar: Foo; fn foo(self: impl Deref) -> Self::Bar; } impl Foo for C { type Bar = impl Foo; fn foo(self: impl Deref) -> Self::Bar { //~^ Error type parameter `impl Deref` is part of concrete type but not used in parameter list for the `impl Trait` type alias self } } fn main() {} ", "commid": "rust_pr_63474"}], "negative_passages": []} {"query_id": "q-en-rust-a0ee5342ca104e62eb6d098a22a435dd5d4e4c857f5864c4dc797a5f594b49df", "query": "here is the example: this example will compile well if you remove\nBacktrace: `\nHmm I reproduced same ICE with next snippet without :\nSmaller repro: Edit: AFAICT the problem is the existential assoc type is being made dependent on the type parameter of the trait method ('s in this repro, 's in the OP). So it would be a compile error if it weren't for the ICE.\ncompiles without ICE.\nsorry it needs to be feature or used 2018 edition. Here is complete one which fails to compile regardless feature:\nRight, 2015 doesn't hit the error until you actually try to use the type. Regardless, as I said in my previous comment, this code should be a compile error anyway regardless of edition, since it's trying to declare an associated type that depends on the type parameters of the trait method rather than the trait.\nhey have you had a chance to look into this yet? Do you think you will be looking at it in the near term, or should we unassign you?\nRemoved my assignment. Unfortunately I'm running behind on the not-time-critical issues I assigned to myself.\nDiscussed in T-compiler meeting. 
This is a blocker for the existential type feature. But there is no schedule yet set for stabilizing that feature. Thus, downgrading fixing this to P-medium.\nThis no longer produces an ICE on the latest nightly.\nYes, it was fixed by", "positive_passages": [{"docid": "doc-en-rust-2613bd79cb21c5ea0b34928b1727da3b11fa011a355ebc3f3783a15fa91f3f03", "text": " error: type parameter `impl Deref` is part of concrete type but not used in parameter list for the `impl Trait` type alias --> $DIR/issue-57700.rs:16:58 | LL | fn foo(self: impl Deref) -> Self::Bar { | __________________________________________________________^ LL | | LL | | self LL | | } | |_____^ error: aborting due to previous error ", "commid": "rust_pr_63474"}], "negative_passages": []} {"query_id": "q-en-rust-78e96853ebcaf733db84becb7bcdacbd2583af489093168310a894c62326410b", "query": "This takes a while to compile. I presume because needs to hit the loop checker in miri. Changing the foofoo&(loop{}, 1).1'staticlifetime. () Errors:\nThe following code times out with either borrow checker:\nThe reason that foo matters is because of\nI don't think this is an NLL issue in particular?\nBased on 's identification of a test that reproduces the issue on either borrow-checker, I am going to remove the A-NLL label.\nThat example is not problematic, because nothing is promoted at all. I now don't even think my example is problematic anymore, because it seems to be perfectly alright to promote anything if we diverge before we give the user access to the value. I guess this just needs a test now (just a ui test with a comment, not a run-pass test, as that would hang forever)\nCan I work on this? I'm a newbie to rust.\nSure! You can take the example from the issue message and create a new file in and add the comment at the top. 
Then, when running it should succeed to run the tests.", "positive_passages": [{"docid": "doc-en-rust-fc5d2bec7de6aab3f522e7b8ab504f6af043503ea41ddc8e08e87498b517003d", "text": " //compile-pass #![feature(nll)] fn main() { let _: &'static usize = &(loop {}, 1).1; } ", "commid": "rust_pr_58479"}], "negative_passages": []} {"query_id": "q-en-rust-70af7c23e68cd060dfb22014976009be7fb662d2212fd27550567d1b4223c16b", "query": "No matter what kind of const eval features we add, we should never allow this. Even if we managed to allow this, I think the current impl would ICE. Even if it does not ICE, it could never actually modify the memory. We should ensure that the following code always errors.\nHi , I would like to work on this if that's ok. I'm a new Rustacean and I'm learning rustc development and testing. It's time to get my feet wet\nGreat! I think the following steps should suffice, but don't hesitate to ask if anything is unclear or doesn't work as expected the test in a new file in a annotation to the line that should report an error (replace \"error message here\" with the relevant error message) 3 until everything is green your test and the generated file and open a PR to the PR message so when it gets merged this issue gets closed automatically\nHi , When I was running the test in step 3, or compiled the new test with nightly rustc (rust-lang/rust I got errors like this. error[E0015]: calls in statics are limited to constant functions, tuple structs and tuple variants --mod-static-with-const- | 24 | () = 5; | ^^^^^^^^^^^ error[E0658]: statements in statics are unstable (see issue ) --mod-static-with-const- | 24 | () = 5; | ^^^^^^^^^^^^^^^^ | = help: add #![feature(constlet)] to the crate attributes to enable error: aborting due to 2 previous errors I think E0015 is the expected error. But I'm not sure about E0658. It seems that the constlet feature is not stabilized yet. (But compiling doesn't return this error.) Could you please help with this problem? 
Thanks a lot.\nThese errors are fine. the idea of this issue is to just make sure the code always fails to compile. More errors are fine, too. the const let feature has a bug where even enabling it will still report it. that's not problematic for this issue. Can you add a comment to the test mentioning that this test should never ever succeed? If you want you can additionally try to minimize the test to something still emitting the feature gate error even though the feature gate is active (and maybe to something that should be perfectly fine at compile time). then open a new issue about that.\nThanks ! I will try to reproduce the unexpected error related to the feature gate. Meanwhile it seems that this task will be blocked by that issue since the ui test will fail if the error signature doesn't match with the expected one. Is that right?\nyou can just make them match ;) Add all the annotations you need to make the test pass. Whenever someone changes your test, because they fixed other bugs or implemented new features, they'll hopefully look at the comment about the fact that the test should never succeed to compile.", "positive_passages": [{"docid": "doc-en-rust-a15022094b672e1f66f79c04e7967d26c55045934ae92234f1c71cd58ffa3832", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // New test for #53818: modifying static memory at compile-time is not allowed. // The test should never succeed. 
#![feature(const_raw_ptr_deref)] #![feature(const_let)] use std::cell::UnsafeCell; struct Foo(UnsafeCell); unsafe impl Send for Foo {} unsafe impl Sync for Foo {} static FOO: Foo = Foo(UnsafeCell::new(42)); static BAR: () = unsafe { *FOO.0.get() = 5; //~^ ERROR calls in statics are limited to constant functions, tuple structs and tuple variants // This error is caused by a separate bug that the feature gate error is reported // even though the feature gate \"const_let\" is active. //~| statements in statics are unstable (see issue #48821) }; fn main() { println!(\"{}\", unsafe { *FOO.0.get() }); } ", "commid": "rust_pr_54147"}], "negative_passages": []} {"query_id": "q-en-rust-70af7c23e68cd060dfb22014976009be7fb662d2212fd27550567d1b4223c16b", "query": "No matter what kind of const eval features we add, we should never allow this. Even if we managed to allow this, I think the current impl would ICE. Even if it does not ICE, it could never actually modify the memory. We should ensure that the following code always errors.\nHi , I would like to work on this if that's ok. I'm a new Rustacean and I'm learning rustc development and testing. It's time to get my feet wet\nGreat! I think the following steps should suffice, but don't hesitate to ask if anything is unclear or doesn't work as expected the test in a new file in a annotation to the line that should report an error (replace \"error message here\" with the relevant error message) 3 until everything is green your test and the generated file and open a PR to the PR message so when it gets merged this issue gets closed automatically\nHi , When I was running the test in step 3, or compiled the new test with nightly rustc (rust-lang/rust I got errors like this. 
error[E0015]: calls in statics are limited to constant functions, tuple structs and tuple variants --mod-static-with-const- | 24 | () = 5; | ^^^^^^^^^^^ error[E0658]: statements in statics are unstable (see issue ) --mod-static-with-const- | 24 | () = 5; | ^^^^^^^^^^^^^^^^ | = help: add #![feature(constlet)] to the crate attributes to enable error: aborting due to 2 previous errors I think E0015 is the expected error. But I'm not sure about E0658. It seems that the constlet feature is not stabilized yet. (But compiling doesn't return this error.) Could you please help with this problem? Thanks a lot.\nThese errors are fine. the idea of this issue is to just make sure the code always fails to compile. More errors are fine, too. the const let feature has a bug where even enabling it will still report it. that's not problematic for this issue. Can you add a comment to the test mentioning that this test should never ever succeed? If you want you can additionally try to minimize the test to something still emitting the feature gate error even though the feature gate is active (and maybe to something that should be perfectly fine at compile time). then open a new issue about that.\nThanks ! I will try to reproduce the unexpected error related to the feature gate. Meanwhile it seems that this task will be blocked by that issue since the ui test will fail if the error signature doesn't match with the expected one. Is that right?\nyou can just make them match ;) Add all the annotations you need to make the test pass. 
Whenever someone changes your test, because they fixed other bugs or implemented new features, they'll hopefully look at the comment about the fact that the test should never succeed to compile.", "positive_passages": [{"docid": "doc-en-rust-0da99ce0d99e71614ad5a65ea0cfd75c45e36868244daaead5c5475ca42e1b5b", "text": " error[E0015]: calls in statics are limited to constant functions, tuple structs and tuple variants --> $DIR/mod-static-with-const-fn.rs:27:6 | LL | *FOO.0.get() = 5; | ^^^^^^^^^^^ error[E0658]: statements in statics are unstable (see issue #48821) --> $DIR/mod-static-with-const-fn.rs:27:5 | LL | *FOO.0.get() = 5; | ^^^^^^^^^^^^^^^^ | = help: add #![feature(const_let)] to the crate attributes to enable error: aborting due to 2 previous errors Some errors occurred: E0015, E0658. For more information about an error, try `rustc --explain E0015`. ", "commid": "rust_pr_54147"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-80e55fd209c223fb00aed2ff7ea3ec1d0a53c463c93b8182cb57d55f2841cf74", "text": "let mut cargo = Command::new(&self.initial_cargo); let out_dir = self.stage_out(compiler, mode); // command specific path, we call clear_if_dirty with this let mut my_out = match cmd { \"build\" => self.cargo_out(compiler, mode, target), // This is the intended out directory for crate documentation. \"doc\" | \"rustdoc\" => self.crate_doc_out(target), _ => self.stage_out(compiler, mode), }; // This is for the original compiler, but if we're forced to use stage 1, then // std/test/rustc stamps won't exist in stage 2, so we need to get those from stage 1, since // we copy the libs forward. 
let cmp = self.compiler_for(compiler.stage, compiler.host, target); let libstd_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libstd_stamp(self, cmp, target), _ => compile::libstd_stamp(self, cmp, target), }; let libtest_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libtest_stamp(self, cmp, target), _ => compile::libtest_stamp(self, cmp, target), }; let librustc_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::librustc_stamp(self, cmp, target), _ => compile::librustc_stamp(self, cmp, target), }; // Codegen backends are not yet tracked by -Zbinary-dep-depinfo, // so we need to explicitly clear out if they've been updated. for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if mode == Mode::Rustc || mode == Mode::ToolRustc || mode == Mode::Codegen { let my_out = match mode { // This is the intended out directory for compiler documentation. my_out = self.compiler_doc_out(target); } Mode::Rustc | Mode::ToolRustc | Mode::Codegen => self.compiler_doc_out(target), _ => self.crate_doc_out(target), }; let rustdoc = self.rustdoc(compiler); self.clear_if_dirty(&my_out, &rustdoc); } else if cmd != \"test\" { match mode { Mode::Std => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&my_out, &backend); } }, Mode::Test => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::Rustc => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::Codegen => { self.clear_if_dirty(&my_out, &librustc_stamp); }, Mode::ToolBootstrap => { }, Mode::ToolStd => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::ToolTest => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::ToolRustc => { self.clear_if_dirty(&my_out, &libstd_stamp); 
self.clear_if_dirty(&my_out, &libtest_stamp); self.clear_if_dirty(&my_out, &librustc_stamp); }, } } cargo", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-9d639d84c2f0fe53329d4b551bedabea3c30bcb4f1a8908092c0c607b18583a7", "text": "}, } // This tells Cargo (and in turn, rustc) to output more complete // dependency information. Most importantly for rustbuild, this // includes sysroot artifacts, like libstd, which means that we don't // need to track those in rustbuild (an error prone process!). This // feature is currently unstable as there may be some bugs and such, but // it represents a big improvement in rustbuild's reliability on // rebuilds, so we're using it here. // // For some additional context, see #63470 (the PR originally adding // this), as well as #63012 which is the tracking issue for this // feature on the rustc side. cargo.arg(\"-Zbinary-dep-depinfo\"); cargo.arg(\"-j\").arg(self.jobs().to_string()); // Remove make-related flags to ensure Cargo can correctly set things up cargo.env_remove(\"MAKEFLAGS\");", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. 
As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-f515d4963da911b191f7faae357370e1be6dc99e30fd6ccb2209db0947d434f6", "text": "let libdir = builder.sysroot_libdir(compiler, target); let hostdir = builder.sysroot_libdir(compiler, compiler.host); add_to_sysroot(&builder, &libdir, &hostdir, &rustdoc_stamp(builder, compiler, target)); builder.cargo(compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-432d9631a7ef91d129d2cfaff55f38dadb459167bf99bc7744fd5bc512bfc321", "text": "use std::process::{Command, Stdio, exit}; use std::str; use build_helper::{output, mtime, t, up_to_date}; use build_helper::{output, t, up_to_date}; use filetime::FileTime; use serde::Deserialize; use serde_json;", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-093ad82386bc6f2c9804225303004b7374d3bd3f54cc107e31b82a46c2e29d8e", "text": "// for reason why the sanitizers are not built in stage0. 
copy_apple_sanitizer_dylibs(builder, &builder.native_dir(target), \"osx\", &libdir); } builder.cargo(target_compiler, Mode::ToolStd, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-4f02897f85d8e9bfdaeae6766e030d8555e33e4613fcd3fb04fedf0d0990a802", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &libtest_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolTest, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-d2343b61ee6096e2e13dd91893e26549fb3a997a5898079b8bbabeb6dce96bb6", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &librustc_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-3570768bb001b4b48f7d19c70c45cba0e8f211904f7edb5d3119ff7e7fee7cfa", "query": "This also occurs with etc. Changing requires only the reverse dependencies of to be recompiled. is also recompiled. 
As you can see, this takes twice as much as compiling the reverse dependencies of alone, leading to compilation being 3x slower.", "positive_passages": [{"docid": "doc-en-rust-d645c75ba90a4081207db3103439b1bcb0c9033b5d6299a205f1ce0b23845741", "text": "deps.push((path_to_add.into(), false)); } // Now we want to update the contents of the stamp file, if necessary. First // we read off the previous contents along with its mtime. If our new // contents (the list of files to copy) is different or if any dep's mtime // is newer then we rewrite the stamp file. deps.sort(); let stamp_contents = fs::read(stamp); let stamp_mtime = mtime(&stamp); let mut new_contents = Vec::new(); let mut max = None; let mut max_path = None; for (dep, proc_macro) in deps.iter() { let mtime = mtime(dep); if Some(mtime) > max { max = Some(mtime); max_path = Some(dep.clone()); } new_contents.extend(if *proc_macro { b\"h\" } else { b\"t\" }); new_contents.extend(dep.to_str().unwrap().as_bytes()); new_contents.extend(b\"0\"); } let max = max.unwrap(); let max_path = max_path.unwrap(); let contents_equal = stamp_contents .map(|contents| contents == new_contents) .unwrap_or_default(); if contents_equal && max <= stamp_mtime { builder.verbose(&format!(\"not updating {:?}; contents equal and {:?} <= {:?}\", stamp, max, stamp_mtime)); return deps.into_iter().map(|(d, _)| d).collect() } if max > stamp_mtime { builder.verbose(&format!(\"updating {:?} as {:?} changed\", stamp, max_path)); } else { builder.verbose(&format!(\"updating {:?} as deps changed\", stamp)); } t!(fs::write(&stamp, &new_contents)); deps.into_iter().map(|(d, _)| d).collect() }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. 
We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-08b015ccef9e1d86e643ae2795da137a9a9befa893f54c699e36bc5bb405de7b", "text": "fn copy_apple_sanitizer_dylibs(builder: &Builder, native_dir: &Path, platform: &str, into: &Path) { for &sanitizer in &[\"asan\", \"tsan\"] { let filename = format!(\"libclang_rt.{}_{}_dynamic.dylib\", sanitizer, platform); let filename = format!(\"lib__rustc__clang_rt.{}_{}_dynamic.dylib\", sanitizer, platform); let mut src_path = native_dir.join(sanitizer); src_path.push(\"build\"); src_path.push(\"lib\");", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-fd60ee6a2d366165852597c5b576b1bf05706a5d2f6e9eace6b8d6e26d2bc370", "text": "pub out_dir: PathBuf, } impl NativeLibBoilerplate { /// On OSX we don't want to ship the exact filename that compiler-rt builds. /// This conflicts with the system and ours is likely a wildly different /// version, so they can't be substituted. /// /// As a result, we rename it here but we need to also use /// `install_name_tool` on OSX to rename the commands listed inside of it to /// ensure it's linked against correctly. 
pub fn fixup_sanitizer_lib_name(&self, sanitizer_name: &str) { if env::var(\"TARGET\").unwrap() != \"x86_64-apple-darwin\" { return } let dir = self.out_dir.join(\"build/lib/darwin\"); let name = format!(\"clang_rt.{}_osx_dynamic\", sanitizer_name); let src = dir.join(&format!(\"lib{}.dylib\", name)); let new_name = format!(\"lib__rustc__{}.dylib\", name); let dst = dir.join(&new_name); println!(\"{} => {}\", src.display(), dst.display()); fs::rename(&src, &dst).unwrap(); let status = Command::new(\"install_name_tool\") .arg(\"-id\") .arg(format!(\"@rpath/{}\", new_name)) .arg(&dst) .status() .expect(\"failed to execute `install_name_tool`\"); assert!(status.success()); } } impl Drop for NativeLibBoilerplate { fn drop(&mut self) { if !thread::panicking() {", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-fc2d1cc79a3e36439f96421a71dbb3eee1e368d3c04feedf30b8914249f22e80", "text": "pub fn sanitizer_lib_boilerplate(sanitizer_name: &str) -> Result<(NativeLibBoilerplate, String), ()> { let (link_name, search_path, dynamic) = match &*env::var(\"TARGET\").unwrap() { let (link_name, search_path, apple) = match &*env::var(\"TARGET\").unwrap() { \"x86_64-unknown-linux-gnu\" => ( format!(\"clang_rt.{}-x86_64\", sanitizer_name), \"build/lib/linux\",", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. 
We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-789f8f5a09448c1b0beef3310b69f75a5bdb297292e3300be506c2a6bea9a543", "text": "), _ => return Err(()), }; let to_link = if dynamic { format!(\"dylib={}\", link_name) let to_link = if apple { format!(\"dylib=__rustc__{}\", link_name) } else { format!(\"static={}\", link_name) };", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-e6320ba767747de6125367819649b8366dcef86ae9a1dca2fa225e3d7bb8861d", "text": ".out_dir(&native.out_dir) .build_target(&target) .build(); native.fixup_sanitizer_lib_name(\"asan\"); } println!(\"cargo:rerun-if-env-changed=LLVM_CONFIG\"); }", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-7f49b450e2a2223e97ed4cd19c7b077d69e37d7dae4f440356ca46bd12ff0c1e", "query": "Discovered in it turns out that our vendored copies of these libraires cause problems when they override the system versions by accident. 
We should rename our copies with a Rust-specific name to avoid this name clash.", "positive_passages": [{"docid": "doc-en-rust-3025cf2f433f59e58f399270a38a30acc5948098e7298f566809e886ae45ebae", "text": ".out_dir(&native.out_dir) .build_target(&target) .build(); native.fixup_sanitizer_lib_name(\"tsan\"); } println!(\"cargo:rerun-if-env-changed=LLVM_CONFIG\"); }", "commid": "rust_pr_54681"}], "negative_passages": []} {"query_id": "q-en-rust-5cc3c7a26b374dc4156df6841e15f47e5da48b4efbd566a708f74578160ee9c9", "query": "std::io::Seek I got confused by this line and over on so after a little discussion it seems like this might be better wording for those of us who are silly A seek beyond the end of a stream is allowed, but it is implementation-defined. The hyphen makes it clear \u201cimplementation defined\u201d is a known concept (and matches it\u2019s usage elsewhere) and the \u201cit is\u201d makes it a bit more clear that the full sentence is considered and intentional. While normally I find brevity improves clarity, in this case if you don\u2019t realize \u201cimplementation defined\u201d is a single concept, it sounds like a thought that trailed off after an edit\nThanks I know other projects I've worked with have Easy and Documentation as tags so I wasn't sure if you guys might do the same\nYep, they're just actual labels over on the right ;-) (I also tabbed away and forgot to put the easy label on immediately )\nDon't worry about that! It's very easy for developers to misjudge whether such descriptions will make sense to the target users. Pointing these things out is a great way to contribute to documentation.", "positive_passages": [{"docid": "doc-en-rust-e40906f5dccabed66d0773dbc8fa0338e6992883a4df3eee4a735fa15cc6b2cc", "text": "pub trait Seek { /// Seek to an offset, in bytes, in a stream. /// /// A seek beyond the end of a stream is allowed, but implementation /// defined. 
/// A seek beyond the end of a stream is allowed, but behavior is defined /// by the implementation. /// /// If the seek operation completed successfully, /// this method returns the new position from the start of the stream.", "commid": "rust_pr_54635"}], "negative_passages": []} {"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-55f50da60459e31616d2f7f05340470c69c41cbb12b9f7b3402f2754a3d84552", "text": "} impl Target { /// Given a function ABI, turn \"System\" into the correct ABI for this target. /// Given a function ABI, turn it into the correct ABI for this target. pub fn adjust_abi(&self, abi: Abi) -> Abi { match abi { Abi::System => {", "commid": "rust_pr_54576"}], "negative_passages": []} {"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-b222d6c566648b1e54c6e450a4f4afcb5bae2271461646efde5a61659936a369", "text": "Abi::C } }, // These ABI kinds are ignored on non-x86 Windows targets. 
// See https://docs.microsoft.com/en-us/cpp/cpp/argument-passing-and-naming-conventions // and the individual pages for __stdcall et al. Abi::Stdcall | Abi::Fastcall | Abi::Vectorcall | Abi::Thiscall => { if self.options.is_like_windows && self.arch != \"x86\" { Abi::C } else { abi } }, abi => abi } }", "commid": "rust_pr_54576"}], "negative_passages": []} {"query_id": "q-en-rust-80e8e973ae0b6e82394c8853b4f5876075770e920ec6dd8902244a2de8eea5d4", "query": "Compiling code like: results in: presumably because attempts to translate it into an x86-specific ABI, which the AArch64 backend rightly rejects. already ignores for non-x86 targets: I'm not completely sure what the right answer for Rust is. I think is the rough equivalent of the above clang bit and needs to take a few more cases into account for non-x86 Windows: does that sound right? If so, I'll code up a patch.\nSounds like a good place to me!", "positive_passages": [{"docid": "doc-en-rust-197959355bcf33c8b60ebe2f3099b1938a53d26e02d0c691012f7da90ed350a6", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-arm // ignore-aarch64 // Test that `extern \"stdcall\"` is properly translated. // only-x86 // compile-flags: -C no-prepopulate-passes", "commid": "rust_pr_54576"}], "negative_passages": []} {"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
    let features = attributes::llvm_target_features(sess).collect::>(); let mut features = llvm_util::handle_native_features(sess); features.extend(attributes::llvm_target_features(sess).map(|s| s.to_owned())); let mut singlethread = sess.target.singlethread; // On the wasm target once the `atomics` feature is enabled that means that", "commid": "rust_pr_80749"}], "negative_passages": []} {"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
    , ); pub fn LLVMGetHostCPUFeatures() -> *mut c_char; pub fn LLVMDisposeMessage(message: *mut c_char); // Stuff that's in llvm-wrapper/ because it's not upstream yet. /// Opens an object file.", "commid": "rust_pr_80749"}], "negative_passages": []} {"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
    use std::ffi::CString; use std::ffi::{CStr, CString}; use std::slice; use std::str;", "commid": "rust_pr_80749"}], "negative_passages": []} {"query_id": "q-en-rust-47547d64a0fcb212ce4bff5118eea027d90cf4f1bc69020078db3e4cb242cc3b", "query": "Because LLVM doesn't know what is, we currently , relying on the function. So if the current CPU's family is Haswell, we convert to . As noticed and , some Intel Pentiums belong to the Haswell microarch, but they lack AVX.
    pub fn handle_native_features(sess: &Session) -> Vec { match sess.opts.cg.target_cpu { Some(ref s) => { if s != \"native\" { return vec![]; } let features_string = unsafe { let ptr = llvm::LLVMGetHostCPUFeatures(); let features_string = if !ptr.is_null() { CStr::from_ptr(ptr) .to_str() .unwrap_or_else(|e| { bug!(\"LLVM returned a non-utf8 features string: {}\", e); }) .to_owned() } else { bug!(\"could not allocate host CPU features, LLVM returned a `null` string\"); }; llvm::LLVMDisposeMessage(ptr); features_string }; features_string.split(\",\").map(|s| s.to_owned()).collect() } None => vec![], } } pub fn tune_cpu(sess: &Session) -> Option<&str> { match sess.opts.debugging_opts.tune_cpu { Some(ref s) => Some(handle_native(&**s)),", "commid": "rust_pr_80749"}], "negative_passages": []} {"query_id": "q-en-rust-d16736a830399247ea4d437e19f9b3ecbcf87e76343d3c5ddcf152ee71be61e0", "query": "Rust version: How to reproduce: Error:\nMinimized reproduction:\nI'd like to work on this.", "positive_passages": [{"docid": "doc-en-rust-670edca9d3b7a26af25a3ef2d6fa5e345142ae0c33466d2bf02b42b0f2c7723d", "text": "ty::RegionKind::ReLateBound(_, _), ) => {} (ty::RegionKind::ReLateBound(_, _), _) => { (ty::RegionKind::ReLateBound(_, _), _) | (_, ty::RegionKind::ReVar(_)) => { // One of these is true: // The new predicate has a HRTB in a spot where the old // predicate does not (if they both had a HRTB, the previous // match arm would have executed). // match arm would have executed). A HRBT is a 'stricter' // bound than anything else, so we want to keep the newer // predicate (with the HRBT) in place of the old predicate. // // The means we want to remove the older predicate from // user_computed_preds, since having both it and the new // OR // // The old predicate has a region variable where the new // predicate has some other kind of region. 
An region // variable isn't something we can actually display to a user, // so we choose ther new predicate (which doesn't have a region // varaible). // // In both cases, we want to remove the old predicate, // from user_computed_preds, and replace it with the new // one. Having both the old and the new // predicate in a ParamEnv would confuse SelectionContext // // We're currently in the predicate passed to 'retain', // so we return 'false' to remove the old predicate from // user_computed_preds return false; } (_, ty::RegionKind::ReLateBound(_, _)) => { // This is the opposite situation as the previous arm - the // old predicate has a HRTB lifetime in a place where the // new predicate does not. We want to leave the old (_, ty::RegionKind::ReLateBound(_, _)) | (ty::RegionKind::ReVar(_), _) => { // This is the opposite situation as the previous arm. // One of these is true: // // The old predicate has a HRTB lifetime in a place where the // new predicate does not. // // OR // // The new predicate has a region variable where the old // predicate has some other type of region. // // We want to leave the old // predicate in user_computed_preds, and skip adding // new_pred to user_computed_params. should_add_new = false } }, _ => {} } }", "commid": "rust_pr_55453"}], "negative_passages": []} {"query_id": "q-en-rust-d16736a830399247ea4d437e19f9b3ecbcf87e76343d3c5ddcf152ee71be61e0", "query": "Rust version: How to reproduce: Error:\nMinimized reproduction:\nI'd like to work on this.", "positive_passages": [{"docid": "doc-en-rust-36f21e0600476ef1ca4bc71bcf10a12c311718524b32d7973b4c0e3ebc53690e", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
pub trait ScopeHandle<'scope> {} // @has issue_54705/struct.ScopeFutureContents.html // @has - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl<'scope, S> // Send for ScopeFutureContents<'scope, S> where S: Sync\" // // @has - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl<'scope, S> // Sync for ScopeFutureContents<'scope, S> where S: Sync\" pub struct ScopeFutureContents<'scope, S> where S: ScopeHandle<'scope>, { dummy: &'scope S, this: Box>, } struct ScopeFuture<'scope, S> where S: ScopeHandle<'scope>, { contents: ScopeFutureContents<'scope, S>, } unsafe impl<'scope, S> Send for ScopeFuture<'scope, S> where S: ScopeHandle<'scope>, {} unsafe impl<'scope, S> Sync for ScopeFuture<'scope, S> where S: ScopeHandle<'scope>, {} ", "commid": "rust_pr_55453"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. 
If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. 
would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-80e55fd209c223fb00aed2ff7ea3ec1d0a53c463c93b8182cb57d55f2841cf74", "text": "let mut cargo = Command::new(&self.initial_cargo); let out_dir = self.stage_out(compiler, mode); // command specific path, we call clear_if_dirty with this let mut my_out = match cmd { \"build\" => self.cargo_out(compiler, mode, target), // This is the intended out directory for crate documentation. \"doc\" | \"rustdoc\" => self.crate_doc_out(target), _ => self.stage_out(compiler, mode), }; // This is for the original compiler, but if we're forced to use stage 1, then // std/test/rustc stamps won't exist in stage 2, so we need to get those from stage 1, since // we copy the libs forward. let cmp = self.compiler_for(compiler.stage, compiler.host, target); let libstd_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libstd_stamp(self, cmp, target), _ => compile::libstd_stamp(self, cmp, target), }; let libtest_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libtest_stamp(self, cmp, target), _ => compile::libtest_stamp(self, cmp, target), }; let librustc_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::librustc_stamp(self, cmp, target), _ => compile::librustc_stamp(self, cmp, target), }; // Codegen backends are not yet tracked by -Zbinary-dep-depinfo, // so we need to explicitly clear out if they've been updated. for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if mode == Mode::Rustc || mode == Mode::ToolRustc || mode == Mode::Codegen { let my_out = match mode { // This is the intended out directory for compiler documentation. 
my_out = self.compiler_doc_out(target); } Mode::Rustc | Mode::ToolRustc | Mode::Codegen => self.compiler_doc_out(target), _ => self.crate_doc_out(target), }; let rustdoc = self.rustdoc(compiler); self.clear_if_dirty(&my_out, &rustdoc); } else if cmd != \"test\" { match mode { Mode::Std => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&my_out, &backend); } }, Mode::Test => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::Rustc => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::Codegen => { self.clear_if_dirty(&my_out, &librustc_stamp); }, Mode::ToolBootstrap => { }, Mode::ToolStd => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::ToolTest => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::ToolRustc => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); self.clear_if_dirty(&my_out, &librustc_stamp); }, } } cargo", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. 
Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. 
Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-9d639d84c2f0fe53329d4b551bedabea3c30bcb4f1a8908092c0c607b18583a7", "text": "}, } // This tells Cargo (and in turn, rustc) to output more complete // dependency information. Most importantly for rustbuild, this // includes sysroot artifacts, like libstd, which means that we don't // need to track those in rustbuild (an error prone process!). This // feature is currently unstable as there may be some bugs and such, but // it represents a big improvement in rustbuild's reliability on // rebuilds, so we're using it here. // // For some additional context, see #63470 (the PR originally adding // this), as well as #63012 which is the tracking issue for this // feature on the rustc side. cargo.arg(\"-Zbinary-dep-depinfo\"); cargo.arg(\"-j\").arg(self.jobs().to_string()); // Remove make-related flags to ensure Cargo can correctly set things up cargo.env_remove(\"MAKEFLAGS\");", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? 
Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. 
Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-f515d4963da911b191f7faae357370e1be6dc99e30fd6ccb2209db0947d434f6", "text": "let libdir = builder.sysroot_libdir(compiler, target); let hostdir = builder.sysroot_libdir(compiler, compiler.host); add_to_sysroot(&builder, &libdir, &hostdir, &rustdoc_stamp(builder, compiler, target)); builder.cargo(compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. 
If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. 
would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-432d9631a7ef91d129d2cfaff55f38dadb459167bf99bc7744fd5bc512bfc321", "text": "use std::process::{Command, Stdio, exit}; use std::str; use build_helper::{output, mtime, t, up_to_date}; use build_helper::{output, t, up_to_date}; use filetime::FileTime; use serde::Deserialize; use serde_json;", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. 
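The `-Zbinary-dep-depinfo` flag discussed above makes rustc list sysroot artifacts like libstd in the makefile-style `.d` dep-info output, so rustbuild no longer has to track them by hand. As a hedged illustration (my own sketch, not rustbuild's actual parser; real dep-info files also escape spaces in paths, which this skips), one line of such output can be split into a target and its dependencies like so:

```rust
/// Minimal sketch of parsing one makefile-style dep-info line, of the kind
/// rustc emits under `-Zbinary-dep-depinfo`. Returns the target and the
/// dependency paths, or `None` if the line has no `target: deps` shape.
fn parse_depinfo_line(line: &str) -> Option<(&str, Vec<&str>)> {
    let (target, deps) = line.split_once(':')?;
    Some((target.trim(), deps.split_whitespace().collect()))
}

fn main() {
    let line = "libfoo.rlib: src/lib.rs /sysroot/lib/libstd.rlib";
    let (target, deps) = parse_depinfo_line(line).unwrap();
    assert_eq!(target, "libfoo.rlib");
    assert_eq!(deps, ["src/lib.rs", "/sysroot/lib/libstd.rlib"]);
    println!("ok");
}
```

With sysroot deps appearing in these lines, a rebuild is triggered whenever libstd changes, which is exactly what the manual stamp tracking was approximating.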
Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-093ad82386bc6f2c9804225303004b7374d3bd3f54cc107e31b82a46c2e29d8e", "text": "// for reason why the sanitizers are not built in stage0. 
copy_apple_sanitizer_dylibs(builder, &builder.native_dir(target), \"osx\", &libdir); } builder.cargo(target_compiler, Mode::ToolStd, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. 
Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-4f02897f85d8e9bfdaeae6766e030d8555e33e4613fcd3fb04fedf0d0990a802", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &libtest_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolTest, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. 
cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. 
For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-d2343b61ee6096e2e13dd91893e26549fb3a997a5898079b8bbabeb6dce96bb6", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &librustc_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-bfa412db247ed5d4b7b0881a683a52b44b134b94e31c20369a7f564e4858e245", "query": "That is, appears to lose incremental artifacts. cc\nYes, I believe this is because rustc/cargo will otherwise error on \"found a new std\" or something like that? I don't quite see how incremental is different from the normal compilation here?\nincremental would still work even if dependencies changed. I think the fundamental problem here is that deletes everything, whereas (which, IIRC, does not remove incremental artifacts by default) would be appropriate.\nHm, I thought was equivalent to essentially -- could you clarify that? Would it be better to use cargo clean in rustbuild?\nRegardless of what does I don't think we should execute it here. Rustbuild is overly conservative in what it deletes, and it can get away probably with just deleting the folder instead of the entire folder. That would leave around the incremental cache. 
I believe, however, that is like\nMaybe the benchmarks I remember having for incremental had the incremental dir elsewhere than . I sure was confused by it, since I'd expect to remove all artifacts. If it does do that, then I agree should just remove a subset of the contents of target dirs.\nLooking at the code, these calls appear to be relevant: The problem is that the directory refers to is non-trivially structured. Instead of removing , we need to remove all the files (or at least the ones that are artifacts) and the directory in .\nI keep hitting this, specifically it keeps rebuilding and , every time I change rustc, which cause them to take much longer than any of the rustc crates, and this issue has become my main bottleneck (now that I'm building on a machine).\nNominating for discussion (specifically, allocating resources towards this) at the next relevant team meeting (tagged both and as I'm not sure which has \"jurisdiction\" here).\nIt'd be helpful to know how much rustbuild can \"trust\" the compiler here. Can we keep incremental artifact directories around from previous versions of the compiler?\nThe compiler can only work with incremental artifacts that it created itself. For any other version of the compiler it has to assume that it's incompatible. It usually just ignores such incompatible artifacts, but in the bootstrapping scenario, it might not always be smart enough to do that.\nI think the question here is will rustc's incremental artifacts correctly deal with crates changing (including adding and removing crates) in the sysroot. If so, we could keep the folder around. Cargo assumes the sysroot is unchanging, which is why we remove the directory in the first place.\nIncremental doesn't care about vs , so I'd expect it to be pretty resilient here.\ntriage: P-medium. (this inefficiency in incremental comp. 
would be nice to resolve, but is not P-high IMO.)", "positive_passages": [{"docid": "doc-en-rust-d645c75ba90a4081207db3103439b1bcb0c9033b5d6299a205f1ce0b23845741", "text": "deps.push((path_to_add.into(), false)); } // Now we want to update the contents of the stamp file, if necessary. First // we read off the previous contents along with its mtime. If our new // contents (the list of files to copy) is different or if any dep's mtime // is newer then we rewrite the stamp file. deps.sort(); let stamp_contents = fs::read(stamp); let stamp_mtime = mtime(&stamp); let mut new_contents = Vec::new(); let mut max = None; let mut max_path = None; for (dep, proc_macro) in deps.iter() { let mtime = mtime(dep); if Some(mtime) > max { max = Some(mtime); max_path = Some(dep.clone()); } new_contents.extend(if *proc_macro { b\"h\" } else { b\"t\" }); new_contents.extend(dep.to_str().unwrap().as_bytes()); new_contents.extend(b\"0\"); } let max = max.unwrap(); let max_path = max_path.unwrap(); let contents_equal = stamp_contents .map(|contents| contents == new_contents) .unwrap_or_default(); if contents_equal && max <= stamp_mtime { builder.verbose(&format!(\"not updating {:?}; contents equal and {:?} <= {:?}\", stamp, max, stamp_mtime)); return deps.into_iter().map(|(d, _)| d).collect() } if max > stamp_mtime { builder.verbose(&format!(\"updating {:?} as {:?} changed\", stamp, max_path)); } else { builder.verbose(&format!(\"updating {:?} as deps changed\", stamp)); } t!(fs::write(&stamp, &new_contents)); deps.into_iter().map(|(d, _)| d).collect() }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . 
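The stamp-file logic quoted above rewrites the stamp only when the recorded dependency list changed or some dependency is newer than the stamp. That decision can be factored out as a pure function; this is a sketch of the same rule (the function name and simplified byte-list format are my own, not rustbuild's):

```rust
use std::time::{Duration, SystemTime, UNIX_EPOCH};

/// Decide whether a stamp file must be rewritten: rewrite when the recorded
/// dependency list differs from the new one, or when the newest dependency
/// is more recent than the stamp itself.
fn stamp_needs_update(
    old_contents: Option<&[u8]>,
    new_contents: &[u8],
    newest_dep: SystemTime,
    stamp_mtime: SystemTime,
) -> bool {
    let contents_equal = old_contents.map_or(false, |c| c == new_contents);
    !contents_equal || newest_dep > stamp_mtime
}

fn main() {
    let t0 = UNIX_EPOCH + Duration::from_secs(100);
    let t1 = UNIX_EPOCH + Duration::from_secs(200);
    // Same contents, deps older than the stamp: fresh, no rewrite.
    assert!(!stamp_needs_update(Some(b"t/a.rlib\0"), b"t/a.rlib\0", t0, t1));
    // A dependency newer than the stamp: must rewrite.
    assert!(stamp_needs_update(Some(b"t/a.rlib\0"), b"t/a.rlib\0", t1, t0));
    // Dependency list changed: must rewrite even though mtimes look fresh.
    assert!(stamp_needs_update(Some(b"t/a.rlib\0"), b"t/b.rlib\0", t0, t1));
    println!("ok");
}
```

Keeping the decision pure makes it easy to see why an unreadable stamp (`None`) always forces a rewrite.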
With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-b779fa5dbd909bb88dbfffd6c9bc396b7a06bfb84a805ac3fb42744794df8b82", "text": " error[E0594]: cannot assign to immutable static item `S` --> $DIR/thread-local-mutation.rs:11:5 | LL | S = \"after\"; //~ ERROR cannot assign to immutable | ^^^^^^^^^^^ cannot assign error: aborting due to previous error For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_57107"}], "negative_passages": []} {"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-d7d3d35c5f8df570724aee609f46e2c1feebd6b2c3672a7002747a1c00594f25", "text": " // Regression test for #54901: immutable thread locals could be mutated. 
See: // https://github.com/rust-lang/rust/issues/29594#issuecomment-328177697 // https://github.com/rust-lang/rust/issues/54901 #![feature(thread_local)] #[thread_local] static S: &str = \"before\"; fn set_s() { S = \"after\"; //~ ERROR cannot assign to immutable } fn main() { println!(\"{}\", S); set_s(); println!(\"{}\", S); } ", "commid": "rust_pr_57107"}], "negative_passages": []} {"query_id": "q-en-rust-acf00203f583eb2fb99b4f1d4b8f45d078b66e984d8f1c104d0d0e25287d9bcf", "query": "On rustc 1.31.0-nightly ( 2018-10-07) and x8664-unknown-linux-gnu, we see the following surprising behavior: In release mode it seems the write has not happened in the place that it should. Mentioning threadlocal tracking issue: .\nAh, apparently this is missing a . With the behavior is correct. We need to make sure mutating a thread_local static that does not have does not compile.\ncc Did fix this by any chance?\nIt looks like this is fixed on nightly; I'm guessing is the reason. We probably should add some regression tests for the cases listed here.", "positive_passages": [{"docid": "doc-en-rust-d835b88c37b627e38319ac36c22855a98a05372accdefee9240062ccdc38f082", "text": " error[E0594]: cannot assign to immutable thread-local static item --> $DIR/thread-local-mutation.rs:11:5 | LL | S = \"after\"; //~ ERROR cannot assign to immutable | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_57107"}], "negative_passages": []} {"query_id": "q-en-rust-cc238fb102f31a667536e1fb25dfa64d12f7387a13f1a73115a9c198577ccf85", "query": "The following code gives \"warning: enum is never used: \": This regressed pretty recently. 
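The regression test above confirms that assigning to an immutable `#[thread_local]` static is rejected with E0594. For completeness, the sound way to get per-thread mutation on stable Rust goes through the `thread_local!` macro and interior mutability, where no direct assignment to the static ever occurs (this example is my own, not part of the PR's test suite):

```rust
use std::cell::Cell;

// Per-thread mutable state via interior mutability: `set` mutates through a
// `Cell`, so the "cannot assign to immutable static" check never applies.
thread_local! {
    static S: Cell<&'static str> = Cell::new("before");
}

fn main() {
    assert_eq!(S.with(|s| s.get()), "before");
    S.with(|s| s.set("after"));
    assert_eq!(S.with(|s| s.get()), "after");
    println!("ok");
}
```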
cc\n1.27.0 does not warn, 1.28.0 does.\ntriage: P-medium; This seems like a papercut that should be easy to work-around (at least if one isn't using ...).\nswitched to P-high after prompting from\nwhoops forgot to tag with T-compiler in time for the meeting...\nassigning to\nI'm grabbing this, I think I broke it and it should be easy to fix", "positive_passages": [{"docid": "doc-en-rust-7da4a6c535bd013a1a34ebc3b8d9832e6a78c15cfd4fbad08f82a7b07373344d", "text": "hir::ItemKind::Fn(..) | hir::ItemKind::Ty(..) | hir::ItemKind::Static(..) | hir::ItemKind::Existential(..) | hir::ItemKind::Const(..) => { intravisit::walk_item(self, &item); }", "commid": "rust_pr_56456"}], "negative_passages": []} {"query_id": "q-en-rust-cc238fb102f31a667536e1fb25dfa64d12f7387a13f1a73115a9c198577ccf85", "query": "The following code gives \"warning: enum is never used: \": This regressed pretty recently. cc\n1.27.0 does not warn, 1.28.0 does.
See for example See also and", "positive_passages": [{"docid": "doc-en-rust-e25b25a2a722a43cc5453c9f1640fd09c608779f50321307998d4c615fcd5fe7", "text": "fn build_external_function(cx: &DocContext, did: DefId) -> clean::Function { let sig = cx.tcx.fn_sig(did); let constness = if cx.tcx.is_const_fn(did) { let constness = if cx.tcx.is_min_const_fn(did) { hir::Constness::Const } else { hir::Constness::NotConst", "commid": "rust_pr_56845"}], "negative_passages": []} {"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-4ba56c5bdbb2816a2d4038fb40400b5aebd87b4a2fedfe506a560a74ebe9dd8e", "text": "(self.generics.clean(cx), (&self.decl, self.body).clean(cx)) }); let did = cx.tcx.hir().local_def_id(self.id); let constness = if cx.tcx.is_min_const_fn(did) { hir::Constness::Const } else { hir::Constness::NotConst }; Item { name: Some(self.name.clean(cx)), attrs: self.attrs.clean(cx),", "commid": "rust_pr_56845"}], "negative_passages": []} {"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-72dfd5ce98bda4b7042b0c07064728c6e95da46fd78500b70a3f2759ca75ee0e", "text": "visibility: self.vis.clean(cx), stability: self.stab.clean(cx), deprecation: self.depr.clean(cx), def_id: cx.tcx.hir().local_def_id(self.id), def_id: did, inner: FunctionItem(Function { decl, generics, header: self.header, header: hir::FnHeader { constness, ..self.header }, }), } }", "commid": "rust_pr_56845"}], "negative_passages": []} {"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. 
See for example See also and", "positive_passages": [{"docid": "doc-en-rust-0d5d2ab69064cdbd025c4c5d2270adaaa5ef7a590f858771f82c16b6062bd972", "text": "ty::TraitContainer(_) => self.defaultness.has_value() }; if provided { let constness = if cx.tcx.is_const_fn(self.def_id) { let constness = if cx.tcx.is_min_const_fn(self.def_id) { hir::Constness::Const } else { hir::Constness::NotConst", "commid": "rust_pr_56845"}], "negative_passages": []} {"query_id": "q-en-rust-7a9dd2c811046b2412a3941dae7ab759f6b8149aefadd51f8e28ba2c70b8b2ef", "query": "Regression from 1.26.0 to 1.27.0 and later. See for example See also and", "positive_passages": [{"docid": "doc-en-rust-7f7d2422d27aa4cba089905e4d08ec078cd41ec42eea84fc5f0b05ad650b9143", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![crate_name = \"foo\"] #![unstable(feature = \"humans\", reason = \"who ever let humans program computers, we're apparently really bad at it\", issue = \"0\")] #![feature(rustc_const_unstable, const_fn, foo, foo2)] #![feature(min_const_unsafe_fn)] #![feature(staged_api)] // @has 'foo/fn.foo.html' '//pre' 'pub unsafe fn foo() -> u32' #[stable(feature = \"rust1\", since = \"1.0.0\")] #[rustc_const_unstable(feature=\"foo\")] pub const unsafe fn foo() -> u32 { 42 } // @has 'foo/fn.foo2.html' '//pre' 'pub fn foo2() -> u32' #[unstable(feature = \"humans\", issue=\"0\")] pub const fn foo2() -> u32 { 42 } // @has 'foo/fn.bar2.html' '//pre' 'pub const fn bar2() -> u32' #[stable(feature = \"rust1\", since = \"1.0.0\")] pub const fn bar2() -> u32 { 42 } // @has 'foo/fn.foo2_gated.html' '//pre' 'pub unsafe fn foo2_gated() -> u32' #[unstable(feature = \"foo2\", issue=\"0\")] pub const unsafe fn foo2_gated() -> u32 { 42 } // @has 'foo/fn.bar2_gated.html' '//pre' 'pub const unsafe fn bar2_gated() -> u32' #[stable(feature = \"rust1\", since = \"1.0.0\")] pub const unsafe fn bar2_gated() -> u32 { 42 } // @has 'foo/fn.bar_not_gated.html' '//pre' 'pub unsafe fn bar_not_gated() -> u32' pub const unsafe fn bar_not_gated() -> u32 { 42 } ", "commid": "rust_pr_56845"}], "negative_passages": []} {"query_id": "q-en-rust-2217afa2e9e35bdcd853e297fcbcbc4fbd5ffb1834a44a72593a376d295913ea", "query": "Hi, I noticed a weird internal error using . The smallest example I could make to reproduce the bug is an basic library project () whose contains: , , and all work fine; and as mentioned in the code, the place where is declared seems to matter. Here is the output and trace of : [log was from an out of date nightly rust, see my comment below for the log for the latest nightly as of this edit] Have a nice day.\nMy bad, I appear to have accidentally used a version of rust that wasn't up to date. 
The bug still occurs in the latest nightly, here's the backtrace: Sorry again about that.\nIt looks like is only checking for explicit negative impls at the very beginning, and not throughout the process. I should have a fix for this later today.\nLooks like this error has been there the whole time that synthetic impls have been there - that sample fails all the way back to 1.26.0.", "positive_passages": [{"docid": "doc-en-rust-7072465d7d463c9fb446f655dc828770588add1aeb34cea5758f1cbeeee44458", "text": ") -> Option<(ty::ParamEnv<'c>, ty::ParamEnv<'c>)> { let tcx = infcx.tcx; let mut select = SelectionContext::new(&infcx); let mut select = SelectionContext::with_negative(&infcx, true); let mut already_visited = FxHashSet::default(); let mut predicates = VecDeque::new();", "commid": "rust_pr_55356"}], "negative_passages": []} {"query_id": "q-en-rust-2217afa2e9e35bdcd853e297fcbcbc4fbd5ffb1834a44a72593a376d295913ea", "query": "Hi, I noticed a weird internal error using . The smallest example I could make to reproduce the bug is an basic library project () whose contains: , , and all work fine; and as mentioned in the code, the place where is declared seems to matter. Here is the output and trace of : [log was from an out of date nightly rust, see my comment below for the log for the latest nightly as of this edit] Have a nice day.\nMy bad, I appear to have accidentally used a version of rust that wasn't up to date. The bug still occurs in the latest nightly, here's the backtrace: Sorry again about that.\nIt looks like is only checking for explicit negative impls at the very beginning, and not throughout the process. 
I should have a fix for this later today.\nLooks like this error has been there the whole time that synthetic impls have been there - that sample fails all the way back to 1.26.0.", "positive_passages": [{"docid": "doc-en-rust-ee5eebb676cdd9e34949a00c72b98755d71a3839f38e5210ce083388a1b88c36", "text": "match &result { &Ok(Some(ref vtable)) => { // If we see an explicit negative impl (e.g. 'impl !Send for MyStruct'), // we immediately bail out, since it's impossible for us to continue. match vtable { Vtable::VtableImpl(VtableImplData { impl_def_id, .. }) => { // Blame tidy for the weird bracket placement if infcx.tcx.impl_polarity(*impl_def_id) == hir::ImplPolarity::Negative { debug!(\"evaluate_nested_obligations: Found explicit negative impl {:?}, bailing out\", impl_def_id); return None; } }, _ => {} } let obligations = vtable.clone().nested_obligations().into_iter(); if !self.evaluate_nested_obligations(", "commid": "rust_pr_55356"}], "negative_passages": []} {"query_id": "q-en-rust-2217afa2e9e35bdcd853e297fcbcbc4fbd5ffb1834a44a72593a376d295913ea", "query": "Hi, I noticed a weird internal error using . The smallest example I could make to reproduce the bug is an basic library project () whose contains: , , and all work fine; and as mentioned in the code, the place where is declared seems to matter. Here is the output and trace of : [log was from an out of date nightly rust, see my comment below for the log for the latest nightly as of this edit] Have a nice day.\nMy bad, I appear to have accidentally used a version of rust that wasn't up to date. The bug still occurs in the latest nightly, here's the backtrace: Sorry again about that.\nIt looks like is only checking for explicit negative impls at the very beginning, and not throughout the process. 
I should have a fix for this later today.\nLooks like this error has been there the whole time that synthetic impls have been there - that sample fails all the way back to 1.26.0.", "positive_passages": [{"docid": "doc-en-rust-916a47e650f4e2ed15de7a42b4bbab9fa94ced01a1ad5a6ef42a3c8a2652e359", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(optin_builtin_traits)] // @has issue_55321/struct.A.html // @has - '//*[@id=\"implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl !Send for A\" // @has - '//*[@id=\"implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl !Sync for A\" pub struct A(); impl !Send for A {} impl !Sync for A {} // @has issue_55321/struct.B.html // @has - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl !Send for // B\" // @has - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl !Sync for // B\" pub struct B(A, Box); ", "commid": "rust_pr_55356"}], "negative_passages": []} {"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . 
Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-387b544c914de95389b319be1aeeb1b4522cfe26858749aa86ec2fba8dfe66c6", "text": "err.span_label(arm_span, msg); } } hir::MatchSource::TryDesugar => { // Issue #51632 if let Ok(try_snippet) = self.tcx.sess.source_map().span_to_snippet(arm_span) { err.span_suggestion_with_applicability( arm_span, \"try wrapping with a success variant\", format!(\"Ok({})\", try_snippet), Applicability::MachineApplicable, ); } } hir::MatchSource::TryDesugar => {} _ => { let msg = \"match arm with an incompatible type\"; if self.tcx.sess.source_map().is_multiline(arm_span) {", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-1e4985b5942246975c7be74a00b242e8c3492d4b3d8bdb2fd41296d89a828bf4", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result { Ok(1) } fn forbidden_narratives() -> Result { Ok(missing_discourses()?) //~^ ERROR try expression alternatives have incompatible types //~| HELP try wrapping with a success variant } fn main() {} ", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-1feaf58101a17b5147829d1ea1aad9e4f0c635e3a34ff7a32f221616708e1c27", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // run-rustfix #![allow(dead_code)] fn missing_discourses() -> Result {", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-652e2841c72ee5c9c0a667ffd834e6c8873dbfffe4bb99ef5eda7faea21772c8", "text": "fn forbidden_narratives() -> Result { missing_discourses()? 
//~^ ERROR try expression alternatives have incompatible types //~| HELP try wrapping with a success variant } fn main() {}", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-08806f10948897901b3add83c0e58fb568c1c7aca0be1b2d7335292f32e0a945", "query": "We're incorrectly suggesting wrapping with when it won't help: It should be suggesting using using instead.\nDuplicate of . Let me commit to submitting a fix for this on (or, less likely, before) Sunday the twenty-eighth (hopefully preserving the -wrapping suggestion in the cases where it is correct, but if we have to scrap the whole suggestion, we will; false-positives are pretty bad)", "positive_passages": [{"docid": "doc-en-rust-d1bc2b172cb70190c65a14b5dfa6b929349378348910929ad014f2a551242d21", "text": "error[E0308]: try expression alternatives have incompatible types --> $DIR/issue-51632-try-desugar-incompatible-types.rs:20:5 --> $DIR/issue-51632-try-desugar-incompatible-types.rs:18:5 | LL | missing_discourses()? | ^^^^^^^^^^^^^^^^^^^^^ | | | expected enum `std::result::Result`, found isize | help: try wrapping with a success variant: `Ok(missing_discourses()?)` | ^^^^^^^^^^^^^^^^^^^^^ expected enum `std::result::Result`, found isize | = note: expected type `std::result::Result` found type `isize`", "commid": "rust_pr_55423"}], "negative_passages": []} {"query_id": "q-en-rust-083112a339d828de6cb363d898877139d9854d657aa7455b3cb276508771ddbb", "query": "Hi, I'm not sure if this has been reported or is known at all. If it has feel free to close it out. This example: causes an ICE on rustc 1.31.0-nightly ( 2018-10-25) This is the error I'm getting, backtrace included:\nOkay, the good news here is that I only see this ICE with ; it does not arise with the NLL migration mode being deployed as part of the 2018 edition. See e.g. this .\nThe code in question is rejected by AST-borrowck. 
It looks like it is being rejected by NLL as well, and the problem is that we hit an error when trying to generated our diagnostic. Therefore, I'm going to tag this as NLL-sound (since this seems like a case where we want to reject the code in question) and NLL-diagnostics (since the ICE itself is stemming from diagnostics code). And I think I'm going to make it P-high, because 1. ICE's are super annoying and 2. I bet its not hard to fix. But I'm not going to tag it with the Release milestone, because its not something we need to worry about backporting to the beta channel (because it does not arise with NLL migration mode).\nactually I'll remove NLL-sound; this isn't a case where NLL is incorrectly accepting the code. Its just incorrectly behaving while generating the diagnostics.", "positive_passages": [{"docid": "doc-en-rust-db872e73ae74d7f8e503dcee328f19c008b8218ea81eb5509fb1f5ef7d80777b", "text": "let mir_node_id = tcx.hir.as_local_node_id(mir_def_id).expect(\"non-local mir\"); let (return_span, mir_description) = if let hir::ExprKind::Closure(_, _, _, span, gen_move) = tcx.hir.expect_expr(mir_node_id).node { ( tcx.sess.source_map().end_point(span), if gen_move.is_some() { \" of generator\" } else { \" of closure\" }, ) } else { // unreachable? (mir.span, \"\") }; let (return_span, mir_description) = match tcx.hir.get(mir_node_id) { hir::Node::Expr(hir::Expr { node: hir::ExprKind::Closure(_, _, _, span, gen_move), .. }) => ( tcx.sess.source_map().end_point(*span), if gen_move.is_some() { \" of generator\" } else { \" of closure\" }, ), hir::Node::ImplItem(hir::ImplItem { node: hir::ImplItemKind::Method(method_sig, _), .. 
}) => (method_sig.decl.output.span(), \"\"), _ => (mir.span, \"\"), }; Some(RegionName { // This counter value will already have been used, so this function will increment it", "commid": "rust_pr_55822"}], "negative_passages": []} {"query_id": "q-en-rust-083112a339d828de6cb363d898877139d9854d657aa7455b3cb276508771ddbb", "query": "Hi, I'm not sure if this has been reported or is known at all. If it has feel free to close it out. This example: causes an ICE on rustc 1.31.0-nightly ( 2018-10-25) This is the error I'm getting, backtrace included:\nOkay, the good news here is that I only see this ICE with ; it does not arise with the NLL migration mode being deployed as part of the 2018 edition. See e.g. this .\nThe code in question is rejected by AST-borrowck. It looks like it is being rejected by NLL as well, and the problem is that we hit an error when trying to generated our diagnostic. Therefore, I'm going to tag this as NLL-sound (since this seems like a case where we want to reject the code in question) and NLL-diagnostics (since the ICE itself is stemming from diagnostics code). And I think I'm going to make it P-high, because 1. ICE's are super annoying and 2. I bet its not hard to fix. But I'm not going to tag it with the Release milestone, because its not something we need to worry about backporting to the beta channel (because it does not arise with NLL migration mode).\nactually I'll remove NLL-sound; this isn't a case where NLL is incorrectly accepting the code. Its just incorrectly behaving while generating the diagnostics.", "positive_passages": [{"docid": "doc-en-rust-fe3571440a5d3467e937ea21452b4aba874cd1d3ec45b413ed432c6e11c7c49e", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. #![feature(nll)] struct Bar; struct Foo<'s> { bar: &'s mut Bar, } impl Foo<'_> { fn new(bar: &mut Bar) -> Self { Foo { bar } } } fn main() { } ", "commid": "rust_pr_55822"}], "negative_passages": []} {"query_id": "q-en-rust-083112a339d828de6cb363d898877139d9854d657aa7455b3cb276508771ddbb", "query": "Hi, I'm not sure if this has been reported or is known at all. If it has feel free to close it out. This example: causes an ICE on rustc 1.31.0-nightly ( 2018-10-25) This is the error I'm getting, backtrace included:\nOkay, the good news here is that I only see this ICE with ; it does not arise with the NLL migration mode being deployed as part of the 2018 edition. See e.g. this .\nThe code in question is rejected by AST-borrowck. It looks like it is being rejected by NLL as well, and the problem is that we hit an error when trying to generated our diagnostic. Therefore, I'm going to tag this as NLL-sound (since this seems like a case where we want to reject the code in question) and NLL-diagnostics (since the ICE itself is stemming from diagnostics code). And I think I'm going to make it P-high, because 1. ICE's are super annoying and 2. I bet its not hard to fix. But I'm not going to tag it with the Release milestone, because its not something we need to worry about backporting to the beta channel (because it does not arise with NLL migration mode).\nactually I'll remove NLL-sound; this isn't a case where NLL is incorrectly accepting the code. 
Its just incorrectly behaving while generating the diagnostics.", "positive_passages": [{"docid": "doc-en-rust-621e34e45cf1bf9a0cb7eb10d4d95180cb57df920f741c1c6fab6e732941f947", "text": " error: unsatisfied lifetime constraints --> $DIR/issue-55394.rs:21:9 | LL | fn new(bar: &mut Bar) -> Self { | - ---- return type is Foo<'2> | | | let's call the lifetime of this reference `'1` LL | Foo { bar } | ^^^^^^^^^^^ returning this value requires that `'1` must outlive `'2` error: aborting due to previous error ", "commid": "rust_pr_55822"}], "negative_passages": []} {"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-194aefa88179d5a108a580b4ef0cb22e11dde7d7b6c8a0bb8b9183e3d5201ce1", "text": "/// implicit closure bindings. 
It is needed when the closure is /// borrowing or mutating a mutable referent, e.g.: /// /// let x: &mut isize = ...; /// let y = || *x += 5; /// let x: &mut isize = ...; /// let y = || *x += 5; /// /// If we were to try to translate this closure into a more explicit /// form, we'd encounter an error with the code as written: /// /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &x }, fn_ptr); // Closure is pair of env and fn /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &x }, fn_ptr); // Closure is pair of env and fn /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// /// This is then illegal because you cannot mutate an `&mut` found /// in an aliasable location. To solve, you'd have to translate with /// an `&mut` borrow: /// /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &mut x }, fn_ptr); // changed from &x to &mut x /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// struct Env { x: & &mut isize } /// let x: &mut isize = ...; /// let y = (&mut Env { &mut x }, fn_ptr); // changed from &x to &mut x /// fn fn_ptr(env: &mut Env) { **env.x += 5; } /// /// Now the assignment to `**env.x` is legal, but creating a /// mutable pointer to `x` is not because `x` is not mutable. We", "commid": "rust_pr_55696"}], "negative_passages": []} {"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. 
I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-81521d73c0d13cbab2c75152ab433ecefaf2f2984938861b6119afbeaa1f4d1a", "text": " error[E0502]: cannot borrow `u.y` as immutable because it is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:25:13 | LL | let a = &mut u.x.0; | ---------- mutable borrow occurs here LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:29:13 --> $DIR/union-borrow-move-parent-sibling.rs:32:13 | LL | let a = u.x.0; | ----- value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u.y` as immutable because it is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:38:13 | LL | let a = &mut (u.x.0).0; | -------------- mutable borrow occurs here LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:41:13 --> $DIR/union-borrow-move-parent-sibling.rs:45:13 | LL | let a = (u.x.0).0; | --------- value moved here LL | let a = u.y; //~ ERROR use of 
moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u.x` as immutable because it is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:51:13 | LL | let a = &mut *u.y; | --------- mutable borrow occurs here LL | let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) | ^^^^ immutable borrow occurs here LL | use_borrow(a); | - mutable borrow later used here error[E0382]: use of moved value: `u` --> $DIR/union-borrow-move-parent-sibling.rs:53:13 --> $DIR/union-borrow-move-parent-sibling.rs:58:13 | LL | let a = *u.y; | ---- value moved here LL | let a = u.x; //~ ERROR use of moved value: `u.x` LL | let b = u.x; //~ ERROR use of moved value: `u.x` | ^^^ value used here after move | = note: move occurs because `u` has type `U`, which does not implement the `Copy` trait error: aborting due to 3 previous errors error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0382`. Some errors occurred: E0382, E0502. For more information about an error, try `rustc --explain E0382`. ", "commid": "rust_pr_55696"}], "negative_passages": []} {"query_id": "q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. 
However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-9ecc1462aa9224cc27877fbff45df6b6f55bd2741f2167e98b1506a03ea4f629", "text": "y: Box>, } fn use_borrow(_: &T) {} unsafe fn parent_sibling_borrow() { let mut u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = &mut u.x.0; let a = &u.y; //~ ERROR cannot borrow `u.y` let b = &u.y; //~ ERROR cannot borrow `u.y` use_borrow(a); } unsafe fn parent_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = u.x.0; let a = u.y; //~ ERROR use of moved value: `u.y` let b = u.y; //~ ERROR use of moved value: `u.y` } unsafe fn grandparent_sibling_borrow() { let mut u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = &mut (u.x.0).0; let a = &u.y; //~ ERROR cannot borrow `u.y` let b = &u.y; //~ ERROR cannot borrow `u.y` use_borrow(a); } unsafe fn grandparent_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = (u.x.0).0; let a = u.y; //~ ERROR use of moved value: `u.y` let b = u.y; //~ ERROR use of moved value: `u.y` } unsafe fn deref_sibling_borrow() { let mut u = U { y: Box::default() }; let a = &mut *u.y; let a = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) use_borrow(a); } unsafe fn deref_sibling_move() { let u = U { x: ((Vec::new(), Vec::new()), Vec::new()) }; let a = *u.y; let a = u.x; //~ ERROR use of moved value: `u.x` let b = u.x; //~ ERROR use of moved value: `u.x` }", "commid": "rust_pr_55696"}], "negative_passages": []} {"query_id": 
"q-en-rust-a6cf6888a20a53576f640282b3b54092b9083b5ffbc6a4995132c19686109e1b", "query": "review comments:\nI've looked into this some - NLL misses three errors, all of the form u.yu.x`. In each of the three cases, there is a mutable borrow of some field of the union and then a shared borrow of some other field immediately following. The issue seems to be that the mutable borrow is killed straight away as it isn't used later - therefore not causing a conflict with the shared borrow. I'm inclined to think that this is fine - and that NLL is correct in having less errors here. However, we might want to update the test to use the first mutable borrow for each case in order to make the error happen and demonstrate the diagnostic - if this is the case, I've that we can open a PR for that will update the test. cc\nI'm inclined to agree that this is an instance where the test was not robust and that we need to add uses of those first mutable borrows.\nI've submitted a PR () that makes the test more robust.", "positive_passages": [{"docid": "doc-en-rust-4f2bf96149b05081c215424a00bee0347c9a17c18221b7570f25412ac9712d0d", "text": "error[E0502]: cannot borrow `u.y` as immutable because `u.x.0` is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:23:14 --> $DIR/union-borrow-move-parent-sibling.rs:25:14 | LL | let a = &mut u.x.0; | ----- mutable borrow occurs here LL | let a = &u.y; //~ ERROR cannot borrow `u.y` LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^ immutable borrow occurs here LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.y` --> $DIR/union-borrow-move-parent-sibling.rs:29:9 --> $DIR/union-borrow-move-parent-sibling.rs:32:9 | LL | let a = u.x.0; | - value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^ value used here after move | = note: move occurs because `u.y` has type `[type error]`, which does not implement the `Copy` 
trait error[E0502]: cannot borrow `u.y` as immutable because `u.x.0.0` is also borrowed as mutable --> $DIR/union-borrow-move-parent-sibling.rs:35:14 --> $DIR/union-borrow-move-parent-sibling.rs:38:14 | LL | let a = &mut (u.x.0).0; | --------- mutable borrow occurs here LL | let a = &u.y; //~ ERROR cannot borrow `u.y` LL | let b = &u.y; //~ ERROR cannot borrow `u.y` | ^^^ immutable borrow occurs here LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.y` --> $DIR/union-borrow-move-parent-sibling.rs:41:9 --> $DIR/union-borrow-move-parent-sibling.rs:45:9 | LL | let a = (u.x.0).0; | - value moved here LL | let a = u.y; //~ ERROR use of moved value: `u.y` LL | let b = u.y; //~ ERROR use of moved value: `u.y` | ^ value used here after move | = note: move occurs because `u.y` has type `[type error]`, which does not implement the `Copy` trait error[E0502]: cannot borrow `u` (via `u.x`) as immutable because `u` is also borrowed as mutable (via `*u.y`) --> $DIR/union-borrow-move-parent-sibling.rs:47:14 --> $DIR/union-borrow-move-parent-sibling.rs:51:14 | LL | let a = &mut *u.y; | ---- mutable borrow occurs here (via `*u.y`) LL | let a = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) LL | let b = &u.x; //~ ERROR cannot borrow `u` (via `u.x`) | ^^^ immutable borrow occurs here (via `u.x`) LL | use_borrow(a); LL | } | - mutable borrow ends here error[E0382]: use of moved value: `u.x` --> $DIR/union-borrow-move-parent-sibling.rs:53:9 --> $DIR/union-borrow-move-parent-sibling.rs:58:9 | LL | let a = *u.y; | - value moved here LL | let a = u.x; //~ ERROR use of moved value: `u.x` LL | let b = u.x; //~ ERROR use of moved value: `u.x` | ^ value used here after move | = note: move occurs because `u.x` has type `[type error]`, which does not implement the `Copy` trait", "commid": "rust_pr_55696"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: 
Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying match (src, dependency_of) { (ExternCrateSource::Extern(def_id), LOCAL_CRATE) => { debug!(\"try_print_visible_def_path: def_id={:?}\", def_id); return Ok(( if !span.is_dummy() { self.print_def_path(def_id, &[])? } else { self.path_crate(cnum)? }, true, )); // NOTE(eddyb) the only reason `span` might be dummy, // that we're aware of, is that it's the `std`/`core` // `extern crate` injected by default. // FIXME(eddyb) find something better to key this on, // or avoid ending up with `ExternCrateSource::Extern`, // for the injected `std`/`core`. if span.is_dummy() { return Ok((self.path_crate(cnum)?, true)); } // Disable `try_print_trimmed_def_path` behavior within // the `print_def_path` call, to avoid infinite recursion // in cases where the `extern crate foo` has non-trivial // parents, e.g. it's nested in `impl foo::Trait for Bar` // (see also issues #55779 and #87932). 
self = with_no_visible_paths(|| self.print_def_path(def_id, &[]))?; return Ok((self, true)); } (ExternCrateSource::Path, LOCAL_CRATE) => { debug!(\"try_print_visible_def_path: def_id={:?}\", def_id); return Ok((self.path_crate(cnum)?, true)); } _ => {}", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying pub trait Trait { fn no_op(&self); } ", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. 
If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying pub trait Deserialize { fn deserialize(); } ", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. 
Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying // run-pass // edition:2018 // aux-crate:issue_55779_extern_trait=issue-55779-extern-trait.rs use issue_55779_extern_trait::Trait; struct Local; struct Helper; impl Trait for Local { fn no_op(&self) { // (Unused) extern crate declaration necessary to reproduce bug extern crate issue_55779_extern_trait; // This one works // impl Trait for Helper { fn no_op(&self) { } } // This one infinite-loops const _IMPL_SERIALIZE_FOR_HELPER: () = { // (extern crate can also appear here to reproduce bug, // as in originating example from serde) impl Trait for Helper { fn no_op(&self) { } } }; } } fn main() { } ", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. 
Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying // edition:2018 // aux-crate:issue_87932_a=issue-87932-a.rs pub struct A {} impl issue_87932_a::Deserialize for A { fn deserialize() { extern crate issue_87932_a as _a; } } fn main() { A::deserialize(); //~^ ERROR no function or associated item named `deserialize` found for struct `A` } ", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-b3fd400b775bbca5ae324dc8e92392ef8e17f812be5beb6d50135a530759903a", "query": "I have two crates: Notice that the implementation of contains an . Something is not happy about that. This currently affects Serde trait impls that use a private helper type with a Serde derive, which is a common pattern. The following script reproduces the issue as of rustc 1.31.0-beta.4 ( 2018-11-01) as well as rustc 1.32.0-nightly ( 2018-11-07).\nMentioning who seems to know about extern_prelude.\nThe issue is caused by entering infinite recursion when trying to print the impl path. If returns and finishes , then this issue should be fixed as well.\n(Also, the issue reproduces on 2015 edition as well.)\nI'm hitting this on stable 1.32.0 with a , and I confirmed that replacing it with a manual impl works. Is there any other workaround?\nJust ran into this on with the serde helper pattern.\nCopying error[E0599]: no function or associated item named `deserialize` found for struct `A` in the current scope --> $DIR/issue-87932.rs:13:8 | LL | pub struct A {} | ------------ function or associated item `deserialize` not found for this ... LL | A::deserialize(); | ^^^^^^^^^^^ function or associated item not found in `A` | = help: items from traits can only be used if the trait is in scope help: the following trait is implemented but not in scope; perhaps add a `use` for it: | LL | use ::deserialize::_a::Deserialize; | error: aborting due to previous error For more information about this error, try `rustc --explain E0599`. 
", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-9158cfe10b2a84bbab2853635ac0de32baa346a9cf2344cea26692e8146c93a2", "query": "I may have found a bug in the compiler. The entirety of the code to reproduce this bug is this: The program should build successfully, assuming the rustc type checker does not detect any errors at the level of the Rust type system. The rustc type checker seems happy with the program. However, presumably after the LLVM IR generation phase, and during some IR verification phase, I get this type error:\nReduced:\nI thought we already fixed that by adding casts. One possible fix that might be less harmful would be to give all types that are subtyping-equivalent the same LLVM type. They should be having the same layout anyway, and their fields should be subtyping-equivalent (because that's what subtyping-equivalence means). The other option would be to insert casts when subtyping is taking place.\nThis looks like a new instance of . cc\nwas just an instance of the general problem, we just started generating named types for trait objects then, which made the problem show up in more cases. I agree with first proposal, i.e. destroy the distinction ahead of time. We're only wasting time by inserting casts and LLVM wastes time ignoring those casts (insert rant about how counterproductive LLVM's types are).\nReduced to its essence:\nAn example that my previous approach will not make compile (but currently gives an ICE outputtypeparametermismatch):\nNah you can do it without an ICE, but you're still within deep ICE territory\nThis is making me suspect, that when we get higher-ranked type bounds, finding a \"canonical form for subtyping\" will be impossible (note: don't be so sure. You can't? (shouldn't?) have higher-ranged type bounds in trait objects, so life might be better). Might as well insert casts. 
Note that my old approach has problems with anonymization in invariant types:\nmarking this as p-medium as per the\nlabels +T-types This seems like a problem around higher ranked normalization etc, so we are tagging as T-types. We think it may have been fixed as a side-effect of some LLVM change or other (cc for the details).\nSeems to be fixed since 1.23.\nTriage: The issue hasn't been fixed actually, at least on CI (CI stderr from ):\nThat's interesting. It doesn't fail in a playground. Is this a debug assert now or something?\nThe build says so maybe this is fixed with nightly that uses llvm but llvm 13 still causes problems?\nLikely, we could use if so. Is there a CI builder that uses a newer LLVM on bors?\nYeah, this is fixed by LLVM using its opaque pointers, so that's why it's possibly broken on LLVM 13?\nRe-submitted a PR with :", "positive_passages": [{"docid": "doc-en-rust-0948e8e37f710f752cd3c4b82dd9ebe471a8c522fe901a78a2cbdbbf3226c939", "text": " // run-pass // ^-- The above is needed as this issue is related to LLVM/codegen. // min-llvm-version:15.0.0 // ^-- The above is needed as this issue is fixed by the opaque pointers. fn main() { type_error(|x| &x); } fn type_error( _selector: for<'a> fn(&'a Vec Fn(&'b u8)>>) -> &'a Vec>, ) { } ", "commid": "rust_pr_105785"}], "negative_passages": []} {"query_id": "q-en-rust-b4bbc8b9b0a4ab9c7b037d4832e791777029e3f4bb7af6669a9e9091f7aff00a", "query": "The following example ICEs yields (Same is true on stable.)\nThat's because the ObjectCandidate code is not normalizing the upcast trait refs.
Should we wait for Chalk for the fix?\nHitting this error, any updates?\nTriage: no longer ICEs.\nIt's still an ICE, note that we should build/run the snippet to check it ().\nTriage: It's no longer ICE with the latest nightly, I think it's fixed by .", "positive_passages": [{"docid": "doc-en-rust-b473a1afe3e65265270067683b1b0150271669aa2426a64446f223d447f2930a", "text": " // check-pass trait Mirror { type Other; } #[derive(Debug)] struct Even(usize); struct Odd; impl Mirror for Even { type Other = Odd; } impl Mirror for Odd { type Other = Even; } trait Dyn: AsRef<::Other> {} impl Dyn for Even {} impl AsRef for Even { fn as_ref(&self) -> &Even { self } } fn code(d: &dyn Dyn) -> &T::Other { d.as_ref() } fn main() { println!(\"{:?}\", code(&Even(22))); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-b4bbc8b9b0a4ab9c7b037d4832e791777029e3f4bb7af6669a9e9091f7aff00a", "query": "The following example ICEs yields (Same is true on stable.)\nThat's because the ObjectCandidate code is not normalizing the upcast trait refs. 
Should we wait for Chalk for the fix?\nHitting this error, any updates?\nTriage: no longer ICEs.\nIt's still an ICE, note that we should build/run the snippet to check it ().\nTriage: It's no longer ICE with the latest nightly, I think it's fixed by .", "positive_passages": [{"docid": "doc-en-rust-9a2d688c3fa639e2118f85646c917c89d772eb29bb0de948a4a0b899e20bd1dd", "text": " fn t7p(f: impl Fn(B) -> C, g: impl Fn(A) -> B) -> impl Fn(A) -> C { move |a: A| -> C { f(g(a)) } } fn t8n(f: impl Fn(A) -> B, g: impl Fn(A) -> C) -> impl Fn(A) -> (B, C) where A: Copy, { move |a: A| -> (B, C) { let b = a; let fa = f(a); let ga = g(b); (fa, ga) } } fn main() { let f = |(_, _)| {}; let g = |(a, _)| a; let t7 = |env| |a| |b| t7p(f, g)(((env, a), b)); let t8 = t8n(t7, t7p(f, g)); //~^ ERROR: expected a `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)> } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-b4bbc8b9b0a4ab9c7b037d4832e791777029e3f4bb7af6669a9e9091f7aff00a", "query": "The following example ICEs yields (Same is true on stable.)\nThat's because the ObjectCandidate code is not normalizing the upcast trait refs. Should we wait for Chalk for the fix?\nHitting this error, any updates?\nTriage: no longer ICEs.\nIt's still an ICE, note that we should build/run the snippet to check it ().\nTriage: It's no longer ICE with the latest nightly, I think it's fixed by .", "positive_passages": [{"docid": "doc-en-rust-8419251590cf9d5321270b80c25453db0c31bfa58fb418c234cddf2b1b6144ab", "text": " error[E0277]: expected a `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)>` --> $DIR/issue-59494.rs:21:22 | LL | fn t8n(f: impl Fn(A) -> B, g: impl Fn(A) -> C) -> impl Fn(A) -> (B, C) | ---------- required by this bound in `t8n` ... 
LL | let t8 = t8n(t7, t7p(f, g)); | ^^^^^^^^^ expected an `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)>` | = help: the trait `Fn<(_,)>` is not implemented for `impl Fn<(((_, _), _),)>` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-b4bbc8b9b0a4ab9c7b037d4832e791777029e3f4bb7af6669a9e9091f7aff00a", "query": "The following example ICEs yields (Same is true on stable.)\nThat's because the ObjectCandidate code is not normalizing the upcast trait refs. Should we wait for Chalk for the fix?\nHitting this error, any updates?\nTriage: no longer ICEs.\nIt's still an ICE, note that we should build/run the snippet to check it ().\nTriage: It's no longer ICE with the latest nightly, I think it's fixed by .", "positive_passages": [{"docid": "doc-en-rust-f9539003620decdd46e47f872503fcda0271d127664e87f7c24b6244bd178262", "text": " // check-pass pub trait Trait1 { type C; } struct T1; impl Trait1 for T1 { type C = usize; } pub trait Callback: FnMut(::C) {} impl::C)> Callback for F {} pub struct State { callback: Option>>, } impl State { fn new() -> Self { Self { callback: None } } fn test_cb(&mut self, d: ::C) { (self.callback.as_mut().unwrap())(d) } } fn main() { let mut s = State::::new(); s.test_cb(1); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-b4bbc8b9b0a4ab9c7b037d4832e791777029e3f4bb7af6669a9e9091f7aff00a", "query": "The following example ICEs yields (Same is true on stable.)\nThat's because the ObjectCandidate code is not normalizing the upcast trait refs. 
Should we wait for Chalk for the fix?\nHitting this error, any updates?\nTriage: no longer ICEs.\nIt's still an ICE, note that we should build/run the snippet to check it ().\nTriage: It's no longer ICE with the latest nightly, I think it's fixed by .", "positive_passages": [{"docid": "doc-en-rust-4cc83a86d95b76ac372d489ef3ff944323dfd3254f3b86d7070803d49c555801", "text": " // check-pass fn any() -> T { loop {} } trait Foo { type V; } trait Callback: Fn(&T, &T::V) {} impl Callback for F {} struct Bar { callback: Box>, } impl Bar { fn event(&self) { (self.callback)(any(), any()); } } struct A; struct B; impl Foo for A { type V = B; } fn main() { let foo = Bar:: { callback: Box::new(|_: &A, _: &B| ()) }; foo.event(); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-f0cf2cf6bd9337f63f5574e2be6a701ee174858f859f97a8aaffea8d69427701", "query": "reduced from num_cpus: built with yields cc `\nWhat\u2019s the target? Ah, never mind, I missed that its .\nMore minimal: Issue does not occur on stable (or 1.30.0 nightly), but occurs on rustc beta.\nMarking T-libs as that is likely to be an issue within implementation of , but I didn\u2019t investigate further.\nExcerpt: Clearly both of those adjacent i32 aren't going to be 8-byte aligned ^^ It seems that we're storing to an key with wrong alignment for some reason.\nDon't need to look particularly far, this code generates It looks like some code is assuming that both elements in a scalar pair have the same alignment as the whole pair.\nProbably the easiest way to get to the root cause is to bisect.\nThis code is very likely the culprit: It takes the 0 and 1 GEPs, but uses the same alignment, without offset adjustment.\nOuch, that's bad. 
To obtain the alignment for both fields, we need to do: Note that we can't use because the components do not match fields (the latter come from the user type definitions, whereas the former are extracted as an optimization).", "positive_passages": [{"docid": "doc-en-rust-1af1a9b02a3a8d7d6e2b22d5a649173118f43fddbe8406fa95d1a643a3c42041", "text": "bx.store_with_flags(val, dest.llval, dest.align, flags); } OperandValue::Pair(a, b) => { for (i, &x) in [a, b].iter().enumerate() { let llptr = bx.struct_gep(dest.llval, i as u64); let val = base::from_immediate(bx, x); bx.store_with_flags(val, llptr, dest.align, flags); } let (a_scalar, b_scalar) = match dest.layout.abi { layout::Abi::ScalarPair(ref a, ref b) => (a, b), _ => bug!(\"store_with_flags: invalid ScalarPair layout: {:#?}\", dest.layout) }; let b_offset = a_scalar.value.size(bx).align_to(b_scalar.value.align(bx).abi); let llptr = bx.struct_gep(dest.llval, 0); let val = base::from_immediate(bx, a); let align = dest.align; bx.store_with_flags(val, llptr, align, flags); let llptr = bx.struct_gep(dest.llval, 1); let val = base::from_immediate(bx, b); let align = dest.align.restrict_for_offset(b_offset); bx.store_with_flags(val, llptr, align, flags); } } }", "commid": "rust_pr_56300"}], "negative_passages": []} {"query_id": "q-en-rust-f0cf2cf6bd9337f63f5574e2be6a701ee174858f859f97a8aaffea8d69427701", "query": "reduced from num_cpus: built with yields cc `\nWhat\u2019s the target? 
Ah, never mind, I missed that its .\nMore minimal: Issue does not occur on stable (or 1.30.0 nightly), but occurs on rustc beta.\nMarking T-libs as that is likely to be an issue within implementation of , but I didn\u2019t investigate further.\nExcerpt: Clearly both of those adjacent i32 aren't going to be 8-byte aligned ^^ It seems that we're storing to an key with wrong alignment for some reason.\nDon't need to look particularly far, this code generates It looks like some code is assuming that both elements in a scalar pair have the same alignment as the whole pair.\nProbably the easiest way to get to the root cause is to bisect.\nThis code is very likely the culprit: It takes the 0 and 1 GEPs, but uses the same alignment, without offset adjustment.\nOuch, that's bad. To obtain the alignment for both fields, we need to do: Note that we can't use because the components do not match fields (the latter come from the user type definitions, whereas the former are extracted as an optimization).", "positive_passages": [{"docid": "doc-en-rust-5f762bff351e7ad311bf728ad3509e1d7390dcc11abe615241373979e59bfa04", "text": " // compile-flags: -C no-prepopulate-passes #![crate_type=\"rlib\"] #[allow(dead_code)] pub struct Foo { foo: u64, bar: T, } // The store writing to bar.1 should have alignment 4. Not checking // other stores here, as the alignment will be platform-dependent. // CHECK: store i32 [[TMP1:%.+]], i32* [[TMP2:%.+]], align 4 #[no_mangle] pub fn test(x: (i32, i32)) -> Foo<(i32, i32)> { Foo { foo: 0, bar: x } } ", "commid": "rust_pr_56300"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! 
It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" println!(\"cargo:rustc-link-lib=shell32\"); } else if target.contains(\"fuchsia\") { println!(\"cargo:rustc-link-lib=zircon\"); println!(\"cargo:rustc-link-lib=fdio\");", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). 
We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" use sys::windows::os::current_exe; use sys::c; use slice; use ops::Range; use ffi::OsString; use libc::{c_int, c_void}; use fmt; use vec; use core::iter; use slice; use path::PathBuf; pub unsafe fn init(_argc: isize, _argv: *const *const u8) { }", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). 
We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" Args { unsafe { let mut nArgs: c_int = 0; let lpCmdLine = c::GetCommandLineW(); let szArgList = c::CommandLineToArgvW(lpCmdLine, &mut nArgs); // szArcList can be NULL if CommandLinToArgvW failed, // but in that case nArgs is 0 so we won't actually // try to read a null pointer Args { cur: szArgList, range: 0..(nArgs as isize) } let lp_cmd_line = c::GetCommandLineW(); let parsed_args_list = parse_lp_cmd_line( lp_cmd_line as *const u16, || current_exe().map(PathBuf::into_os_string).unwrap_or_else(|_| OsString::new())); Args { parsed_args_list: parsed_args_list.into_iter() } } } /// Implements the Windows command-line argument parsing algorithm. /// /// Microsoft's documentation for the Windows CLI argument format can be found at /// . /// /// Windows includes a function to do this in shell32.dll, /// but linking with that DLL causes the process to be registered as a GUI application. /// GUI applications add a bunch of overhead, even if no windows are drawn. 
See /// . /// /// This function was tested for equivalence to the shell32.dll implementation in /// Windows 10 Pro v1803, using an exhaustive test suite available at /// or /// . unsafe fn parse_lp_cmd_line OsString>(lp_cmd_line: *const u16, exe_name: F) -> Vec { const BACKSLASH: u16 = '' as u16; const QUOTE: u16 = '\"' as u16; const TAB: u16 = 't' as u16; const SPACE: u16 = ' ' as u16; let mut ret_val = Vec::new(); if lp_cmd_line.is_null() || *lp_cmd_line == 0 { ret_val.push(exe_name()); return ret_val; } let mut cmd_line = { let mut end = 0; while *lp_cmd_line.offset(end) != 0 { end += 1; } slice::from_raw_parts(lp_cmd_line, end as usize) }; // The executable name at the beginning is special. cmd_line = match cmd_line[0] { // The executable name ends at the next quote mark, // no matter what. QUOTE => { let args = { let mut cut = cmd_line[1..].splitn(2, |&c| c == QUOTE); if let Some(exe) = cut.next() { ret_val.push(OsString::from_wide(exe)); } cut.next() }; if let Some(args) = args { args } else { return ret_val; } } // Implement quirk: when they say whitespace here, // they include the entire ASCII control plane: // \"However, if lpCmdLine starts with any amount of whitespace, CommandLineToArgvW // will consider the first argument to be an empty string. Excess whitespace at the // end of lpCmdLine is ignored.\" 0...SPACE => { ret_val.push(OsString::new()); &cmd_line[1..] }, // The executable name ends at the next whitespace, // no matter what. 
_ => { let args = { let mut cut = cmd_line.splitn(2, |&c| c > 0 && c <= SPACE); if let Some(exe) = cut.next() { ret_val.push(OsString::from_wide(exe)); } cut.next() }; if let Some(args) = args { args } else { return ret_val; } } }; let mut cur = Vec::new(); let mut in_quotes = false; let mut was_in_quotes = false; let mut backslash_count: usize = 0; for &c in cmd_line { match c { // backslash BACKSLASH => { backslash_count += 1; was_in_quotes = false; }, QUOTE if backslash_count % 2 == 0 => { cur.extend(iter::repeat(b'' as u16).take(backslash_count / 2)); backslash_count = 0; if was_in_quotes { cur.push('\"' as u16); was_in_quotes = false; } else { was_in_quotes = in_quotes; in_quotes = !in_quotes; } } QUOTE if backslash_count % 2 != 0 => { cur.extend(iter::repeat(b'' as u16).take(backslash_count / 2)); backslash_count = 0; was_in_quotes = false; cur.push(b'\"' as u16); } SPACE | TAB if !in_quotes => { cur.extend(iter::repeat(b'' as u16).take(backslash_count)); if !cur.is_empty() || was_in_quotes { ret_val.push(OsString::from_wide(&cur[..])); cur.truncate(0); } backslash_count = 0; was_in_quotes = false; } _ => { cur.extend(iter::repeat(b'' as u16).take(backslash_count)); backslash_count = 0; was_in_quotes = false; cur.push(c); } } } cur.extend(iter::repeat(b'' as u16).take(backslash_count)); // include empty quoted strings at the end of the arguments list if !cur.is_empty() || was_in_quotes || in_quotes { ret_val.push(OsString::from_wide(&cur[..])); } ret_val } pub struct Args { range: Range, cur: *mut *mut u16, parsed_args_list: vec::IntoIter, } pub struct ArgsInnerDebug<'a> {", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). 
Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" fmt::Debug for ArgsInnerDebug<'a> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { f.write_str(\"[\")?; let mut first = true; for i in self.args.range.clone() { if !first { f.write_str(\", \")?; } first = false; // Here we do allocation which could be avoided. fmt::Debug::fmt(&unsafe { os_string_from_ptr(*self.args.cur.offset(i)) }, f)?; } f.write_str(\"]\")?; Ok(()) self.args.parsed_args_list.as_slice().fmt(f) } }", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. 
For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" unsafe fn os_string_from_ptr(ptr: *mut u16) -> OsString { let mut len = 0; while *ptr.offset(len) != 0 { len += 1; } // Push it onto the list. 
let ptr = ptr as *const u16; let buf = slice::from_raw_parts(ptr, len as usize); OsStringExt::from_wide(buf) } impl Iterator for Args { type Item = OsString; fn next(&mut self) -> Option { self.range.next().map(|i| unsafe { os_string_from_ptr(*self.cur.offset(i)) } ) } fn size_hint(&self) -> (usize, Option) { self.range.size_hint() } fn next(&mut self) -> Option { self.parsed_args_list.next() } fn size_hint(&self) -> (usize, Option) { self.parsed_args_list.size_hint() } } impl DoubleEndedIterator for Args { fn next_back(&mut self) -> Option { self.range.next_back().map(|i| unsafe { os_string_from_ptr(*self.cur.offset(i)) } ) } fn next_back(&mut self) -> Option { self.parsed_args_list.next_back() } } impl ExactSizeIterator for Args { fn len(&self) -> usize { self.range.len() } fn len(&self) -> usize { self.parsed_args_list.len() } } impl Drop for Args { fn drop(&mut self) { // self.cur can be null if CommandLineToArgvW previously failed, // but LocalFree ignores NULL pointers unsafe { c::LocalFree(self.cur as *mut c_void); } #[cfg(test)] mod tests { use sys::windows::args::*; use ffi::OsString; fn chk(string: &str, parts: &[&str]) { let mut wide: Vec = OsString::from(string).encode_wide().collect(); wide.push(0); let parsed = unsafe { parse_lp_cmd_line(wide.as_ptr() as *const u16, || OsString::from(\"TEST.EXE\")) }; let expected: Vec = parts.iter().map(|k| OsString::from(k)).collect(); assert_eq!(parsed.as_slice(), expected.as_slice()); } #[test] fn empty() { chk(\"\", &[\"TEST.EXE\"]); chk(\"0\", &[\"TEST.EXE\"]); } #[test] fn single_words() { chk(\"EXE one_word\", &[\"EXE\", \"one_word\"]); chk(\"EXE a\", &[\"EXE\", \"a\"]); chk(\"EXE \ud83d\ude05\", &[\"EXE\", \"\ud83d\ude05\"]); chk(\"EXE \ud83d\ude05\ud83e\udd26\", &[\"EXE\", \"\ud83d\ude05\ud83e\udd26\"]); } #[test] fn official_examples() { chk(r#\"EXE \"abc\" d e\"#, &[\"EXE\", \"abc\", \"d\", \"e\"]); chk(r#\"EXE ab d\"e f\"g h\"#, &[\"EXE\", r#\"ab\"#, \"de fg\", \"h\"]); chk(r#\"EXE a\"b c d\"#, &[\"EXE\", 
r#\"a\"b\"#, \"c\", \"d\"]); chk(r#\"EXE a\"b c\" d e\"#, &[\"EXE\", r#\"ab c\"#, \"d\", \"e\"]); } #[test] fn whitespace_behavior() { chk(r#\" test\"#, &[\"\", \"test\"]); chk(r#\" test\"#, &[\"\", \"test\"]); chk(r#\" test test2\"#, &[\"\", \"test\", \"test2\"]); chk(r#\" test test2\"#, &[\"\", \"test\", \"test2\"]); chk(r#\"test test2 \"#, &[\"test\", \"test2\"]); chk(r#\"test test2 \"#, &[\"test\", \"test2\"]); chk(r#\"test \"#, &[\"test\"]); } #[test] fn genius_quotes() { chk(r#\"EXE \"\" \"\"\"#, &[\"EXE\", \"\", \"\"]); chk(r#\"EXE \"\" \"\"\"\"#, &[\"EXE\", \"\", \"\"\"]); chk( r#\"EXE \"this is \"\"\"all\"\"\" in the same argument\"\"#, &[\"EXE\", \"this is \"all\" in the same argument\"] ); chk(r#\"EXE \"a\"\"\"#, &[\"EXE\", \"a\"\"]); chk(r#\"EXE \"a\"\" a\"#, &[\"EXE\", \"a\"\", \"a\"]); // quotes cannot be escaped in command names chk(r#\"\"EXE\" check\"#, &[\"EXE\", \"check\"]); chk(r#\"\"EXE check\"\"#, &[\"EXE check\"]); chk(r#\"\"EXE \"\"\"for\"\"\" check\"#, &[\"EXE \", r#\"for\"\"#, \"check\"]); chk(r#\"\"EXE \"for\" check\"#, &[r#\"EXE \"#, r#\"for\"\"#, \"check\"]); } }", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. 
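The pure Rust replacement discussed in this thread amounts to re-implementing Windows command-line splitting (the `parse_lp_cmd_line` function exercised by the `chk` tests above). As a much-simplified sketch of the idea only — my own toy splitter, not the libstd implementation, and it deliberately omits the full 2n/2n+1 backslash rules and the special program-name handling that `CommandLineToArgvW` implements:

```rust
/// Toy command-line splitter: whitespace separates arguments, double
/// quotes group them, and a backslash escapes a following quote.
fn split_cmd_line(line: &str) -> Vec<String> {
    let mut args = Vec::new();
    let mut cur = String::new();
    let mut in_quotes = false;
    let mut started = false; // distinguishes `""` (empty argument) from no argument
    let mut chars = line.chars().peekable();
    while let Some(c) = chars.next() {
        match c {
            '"' => {
                in_quotes = !in_quotes;
                started = true;
            }
            '\\' if chars.peek() == Some(&'"') => {
                cur.push('"'); // escaped quote becomes a literal quote
                chars.next();
                started = true;
            }
            c if c.is_whitespace() && !in_quotes => {
                if started {
                    args.push(std::mem::take(&mut cur));
                    started = false;
                }
            }
            c => {
                cur.push(c);
                started = true;
            }
        }
    }
    if started {
        args.push(cur);
    }
    args
}
```

With this sketch, `split_cmd_line(r#"EXE "abc" d e"#)` yields `["EXE", "abc", "d", "e"]`, mirroring the first `official_examples` case in the tests above; the whitespace and "genius quotes" cases would need the real rules.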
So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf, an lolbench are both linux only. So nether help with this question. So don't waste your time like I did.\nHere are just number for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" *mut LPCWSTR; pub fn LocalFree(ptr: *mut c_void); pub fn CommandLineToArgvW(lpCmdLine: *mut LPCWSTR, pNumArgs: *mut c_int) -> *mut *mut u16; pub fn GetTempPathW(nBufferLength: DWORD, lpBuffer: LPCWSTR) -> DWORD; pub fn OpenProcessToken(ProcessHandle: HANDLE,", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-37291114d3cce11e9607eac4f6b604955f59c64820e17ca49f236731ebe8b5c4", "query": "An showed that removing the dependency from LLVM speed up its test suite by a huge amount. For us libstd doesn't depend on directly but it does depend on which depends on (according to the blog post). Specificaly it looks like , the same function as LLVM, is used by us! It'd be great to investigate removing the shell32 dependency from libstd (may be a few other functions here and there). We should run some tests on Windows to see if this improves rustc cycle times!\nBased on my limited investigation, it's likely that replacing with a pure Rust equivalent should be good enough to get rid of the gdi32 dependency in things that only use libstd. So really someone just needs to write that pure Rust implementation which should be easy to do.\nIt would be nice to get rid of this in the compiler too. 
We spawn a lot of compiler instances during bootstrapping.\nSo now that the change has been implemented, have we actually investigated how much of an impact the change made?\nPerf and lolbench are both Linux-only, so neither helps with this question. So don't waste your time like I did.\nHere are just numbers for on 8 cores / 16 threads on Windows 10 v1809: Without the dependency: 333.012 325.587 320.650 With : 310.852 319.165 323.645\nFor what it's worth, though there's no good way to differentiate PR-to-PR due to very high variance, it does look like there was a recent regression in PR build times: \"image\" EXTRACFLAGS := ws2_32.lib userenv.lib shell32.lib advapi32.lib EXTRACFLAGS := ws2_32.lib userenv.lib advapi32.lib else EXTRACFLAGS := -lws2_32 -luserenv endif", "commid": "rust_pr_56568"}], "negative_passages": []} {"query_id": "q-en-rust-96087c85f192a28b83957740fe94347e90bb403d948d5c4623c35dbb0ed6a79d", "query": "The following buggy code: produces the error: This is usefully documented in TRPL (ch08-02-strings), but that is not easily discoverable from the error.
Meanwhile, the error itself is a natural one that programmers from many languages (C, C++, etc) might run into. Speaking as someone who spent some time assuming that the issue was related to slices/references/usize (i.e. various new, Rust-specific concepts), not strings and UTF-8 encoding, I wonder whether a special case might be produced, replacing error E0277, when the types involved are specifically strings/strs and integers. :\nWe should add a pointing to in the following code of E0277 errors. The example has a similar case to this one, but the additional code should be something along the lines of This will cause an extra to be displayed for this case. Beyond changing the code, you will also need to add a new test case with the repro case above and (running and possibly updating the test comments for any remaining failing tests).\nI would like to work on this.\ndo not hesitate to reach out either here or on if you need help!\nPossible proposed output from :", "positive_passages": [{"docid": "doc-en-rust-7ef1abcab98ac26a1a9ec02cbb27952fa747c788e98262270ae28951a46f919e", "text": "/// ``` #[lang = \"index\"] #[rustc_on_unimplemented( on( _Self=\"&str\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), on( _Self=\"str\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), on( _Self=\"std::string::String\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), message=\"the type `{Self}` cannot be indexed by `{Idx}`\", label=\"`{Self}` cannot be indexed by `{Idx}`\", )]", "commid": "rust_pr_57350"}], "negative_passages": []} {"query_id": "q-en-rust-96087c85f192a28b83957740fe94347e90bb403d948d5c4623c35dbb0ed6a79d", "query": "The following buggy code: produces the error: This is usefully documented in TRPL (ch08-02-strings), but that is not easily discoverable from the error. 
Meanwhile, the error itself is a natural one that programmers from many languages (C, C++, etc) might run into. Speaking as someone who spent some time assuming that the issue was related to slices/references/usize (i.e. various new, Rust-specific concepts), not strings and UTF-8 encoding, I wonder whether a special case might be produced, replacing error E0277, when the types involved are specifically strings/strs and integers. :\nWe should add a pointing to in the following code of E0277 errors. The example has a similar case to this one, but the additional code should be something along the lines of This will cause an extra to be displayed for this case. Beyond changing the code, you will also need to add a new test case with the repro case above and (running and possibly updating the test comments for any remaining failing tests).\nI would like to work on this.\ndo not hesitate to reach out either here or on if you need help!\nPossible proposed output from :", "positive_passages": [{"docid": "doc-en-rust-5cc809510d9a2729d9e9b318d18316735576aeac758aee8484ba0b06cc0eb62d", "text": "/// ``` #[lang = \"index_mut\"] #[rustc_on_unimplemented( on( _Self=\"&str\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), on( _Self=\"str\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), on( _Self=\"std::string::String\", note=\"you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book \" ), message=\"the type `{Self}` cannot be mutably indexed by `{Idx}`\", label=\"`{Self}` cannot be mutably indexed by `{Idx}`\", )]", "commid": "rust_pr_57350"}], "negative_passages": []} {"query_id": "q-en-rust-96087c85f192a28b83957740fe94347e90bb403d948d5c4623c35dbb0ed6a79d", "query": "The following buggy code: produces the error: This is usefully documented in TRPL (ch08-02-strings), but that is not easily discoverable from the error. 
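For readers hitting this error, the workaround the proposed `note` points at looks like the following (a minimal sketch; `nth_char` and `nth_byte` are just illustrative helper names, not standard-library functions):

```rust
/// `s[i]` is rejected because `str` is UTF-8: an arbitrary byte offset may
/// fall inside a multi-byte character. Be explicit about which view you want.
fn nth_char(s: &str, i: usize) -> Option<char> {
    s.chars().nth(i) // O(n): walks the string to the i-th Unicode scalar value
}

fn nth_byte(s: &str, i: usize) -> Option<u8> {
    s.as_bytes().get(i).copied() // O(1): a raw byte, not necessarily a whole char
}
```

For example, `nth_char("héllo", 1)` is `Some('é')`, while `nth_byte("héllo", 1)` returns only the first byte (`0xC3`) of that character's two-byte UTF-8 encoding — which is exactly why integer indexing on `str` is not offered.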
Meanwhile, the error itself is a natural one that programmers from many languages (C, C++, etc) might run into. Speaking as someone who spent some time assuming that the issue was related to slices/references/usize (i.e. various new, Rust-specific concepts), not strings and UTF-8 encoding, I wonder whether a special case might be produced, replacing error E0277, when the types involved are specifically strings/strs and integers. :\nWe should add a pointing to in the following code of E0277 errors. The example has a similar case to this one, but the additional code should be something along the lines of This will cause an extra to be displayed for this case. Beyond changing the code, you will also need to add a new test case with the repro case above and (running and possibly updating the test comments for any remaining failing tests).\nI would like to work on this.\ndo not hesitate to reach out either here or on if you need help!\nPossible proposed output from :", "positive_passages": [{"docid": "doc-en-rust-5abce897af2a9c727ccef312407e76934785e4fd7ca8e186155424a20a85f6bf", "text": "| ^^^^ `str` cannot be indexed by `{integer}` | = help: the trait `std::ops::Index<{integer}>` is not implemented for `str` = note: you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book error: aborting due to previous error", "commid": "rust_pr_57350"}], "negative_passages": []} {"query_id": "q-en-rust-96087c85f192a28b83957740fe94347e90bb403d948d5c4623c35dbb0ed6a79d", "query": "The following buggy code: produces the error: This is usefully documented in TRPL (ch08-02-strings), but that is not easily discoverable from the error. Meanwhile, the error itself is a natural one that programmers from many languages (C, C++, etc) might run into. Speaking as someone who spent some time assuming that the issue was related to slices/references/usize (i.e. 
various new, Rust-specific concepts), not strings and UTF-8 encoding, I wonder whether a special case might be produced, replacing error E0277, when the types involved are specifically strings/strs and integers. :\nWe should add a pointing to in the following code of E0277 errors. The example has a similar case to this one, but the additional code should be something along the lines of This will cause an extra to be displayed for this case. Beyond changing the code, you will also need to add a new test case with the repro case above and (running and possibly updating the test comments for any remaining failing tests).\nI would like to work on this.\ndo not hesitate to reach out either here or on if you need help!\nPossible proposed output from :", "positive_passages": [{"docid": "doc-en-rust-dd0f7a8153ce94d0cc40875857a4574f533e338fc4f5a2f2de3a437d7e448a08", "text": "| ^^^^^^^^^ `str` cannot be mutably indexed by `usize` | = help: the trait `std::ops::IndexMut` is not implemented for `str` = note: you can use `.chars().nth()` or `.bytes().nth()` see chapter in The Book error: aborting due to 3 previous errors", "commid": "rust_pr_57350"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. 
Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-97f2cbf2ad1edd12f89a181133c454920619a378555a9c679cee499e4a80362c", "text": "/// assert_eq!(v.len(), 42); /// } /// ``` /// should be /// /// ```rust /// // should be /// fn foo(v: &[i32]) { /// assert_eq!(v.len(), 42); /// }", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. 
See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to change the requirements here much, and we have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-7d345cce6e2e516bbcda4d5727ccc0231b644c6f134c02122b861d92bab584ef", "text": "!preds.is_empty() && { let ty_empty_region = cx.tcx.mk_imm_ref(cx.tcx.lifetimes.re_root_empty, ty); preds.iter().all(|t| { let ty_params = &t.skip_binder().trait_ref.substs.iter().skip(1).collect::>(); let ty_params = &t .skip_binder() .trait_ref .substs .iter() .skip(1) .collect::>(); implements_trait(cx, ty_empty_region, t.def_id(), ty_params) }) },", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low-priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure.
See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-4bb04d4ad7c7b4b4d560fca3ae752b356a25582f3435649c361a216b8a23acba", "text": "/// /// **Example:** /// ```rust /// // Bad /// println!(\"\"); /// /// // Good /// println!(); /// ``` pub PRINTLN_EMPTY_STRING, style,", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-03691cbb739e01aaa529e796485fdab0a918922d3d0c34e95b4ae5bac7fde485", "text": "declare_clippy_lint! { /// **What it does:** This lint warns when you use `print!()` with a format /// string that ends in a newline. /// string that /// ends in a newline. /// /// **Why is this bad?** You should use `println!()` instead, which appends the /// newline.", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-a9acced614468e7a414888121ccd85e5be9deb168ee45bed4d9e8c264afe1269", "text": "/// ```rust /// # use std::fmt::Write; /// # let mut buf = String::new(); /// /// // Bad /// writeln!(buf, \"\"); /// /// // Good /// writeln!(buf); /// ``` pub WRITELN_EMPTY_STRING, style,", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-5fc47318dfa11fca9603aefa33d7627df8167aee335e1cc01fd856eed38b648c", "text": "/// # use std::fmt::Write; /// # let mut buf = String::new(); /// # let name = \"World\"; /// /// // Bad /// write!(buf, \"Hello {}!n\", name); /// /// // Good /// writeln!(buf, \"Hello {}!\", name); /// ``` pub WRITE_WITH_NEWLINE, style,", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-39a08f976d9cf2ea042abefeb2e52efb3a54cd4bb85f03f44da638c54f7cfc2b", "text": "/// ```rust /// # use std::fmt::Write; /// # let mut buf = String::new(); /// /// // Bad /// writeln!(buf, \"{}\", \"foo\"); /// /// // Good /// writeln!(buf, \"foo\"); /// ``` pub WRITE_LITERAL, style,", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-36e124e3f02bec6f5dc8d47d7666f21b90483c5d8afdba78c68a66cead465744", "text": "if let (Some(fmt_str), expr) = self.check_tts(cx, &mac.args.inner_tokens(), true) { if fmt_str.symbol == Symbol::intern(\"\") { let mut applicability = Applicability::MachineApplicable; let suggestion = if let Some(e) = expr { snippet_with_applicability(cx, e.span, \"v\", &mut applicability) } else { applicability = Applicability::HasPlaceholders; Cow::Borrowed(\"v\") let suggestion = match expr { Some(expr) => snippet_with_applicability(cx, expr.span, \"v\", &mut applicability), None => { applicability = Applicability::HasPlaceholders; Cow::Borrowed(\"v\") }, }; span_lint_and_sugg(", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. 
See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-19a25cb7546a38869ea883763103fd382bf503f8d4c9a86fe7097c9bd7dd7155", "text": "} fn run_ui_cargo(config: &mut compiletest::Config) { if cargo::is_rustc_test_suite() { return; } fn run_tests( config: &compiletest::Config, filter: &Option,", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-0286fbc1470d315343e826c245536ac7e84696408891dbe18ff9f46cd1875a69", "query": "When you download the 0.6 tarball and execute ./configure && make, a download occurs. and a wget info dump appears. It would be really awesome if this piece was integrated with the tarball. Not all environments are friendly to downloading arbitrary code off the net. E.g., I might be in a SCIF[1] and experimenting with Rust on a machine disconnected from the public net. Or I might be on a laptop working in a no-wifi area. :-/ (I know this is probably a real low priority issue, but it IS a (usually minor) hassle) [1]\nPlease note that you can avoid that if you already have a (compatible) locally installed rust compiler. Check the --local-rust and --local-rust-root parameters for ./configure. Also, I'm against bundling binaries in source tarballs. They are target-dependent and bundling all of them will make the released archive quite large. Downloading an external bootstrapper is a minor issue for downstream packagers too, though. See\nfar-future\nYeah, you can also just populate the directory in your builddir with an appropriate snapshot tarball. There's not much else we can do here. Putting binaries in a source tarball feels even worse, to me. 
Closing as I don't expect we're going to much change the requirements here and have several workable workarounds.", "positive_passages": [{"docid": "doc-en-rust-13ddc92944b64a8c0ff9340c269286f2b556e94680bc56e27e4abd4b75c04f85", "text": " #!/bin/sh CARGO_TARGET_DIR=$(pwd)/target/ export CARGO_TARGET_DIR echo 'Deprecated! `util/dev` usage is deprecated, please use `cargo dev` instead.' cd clippy_dev && cargo run -- \"$@\" ", "commid": "rust_pr_73660"}], "negative_passages": []} {"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-417c296d822bd2c6871de073c16e2a23bc9664f7535d7b87773239b3d574a3e8", "text": "); // See https://github.com/rust-lang/rust/issues/32354 if old_binding.is_import() || new_binding.is_import() { let binding = if new_binding.is_import() && !new_binding.span.is_dummy() { new_binding let directive = match (&new_binding.kind, &old_binding.kind) { (NameBindingKind::Import { directive, .. }, _) if !new_binding.span.is_dummy() => Some((directive, new_binding.span)), (_, NameBindingKind::Import { directive, .. 
}) if !old_binding.span.is_dummy() => Some((directive, old_binding.span)), _ => None, }; if let Some((directive, binding_span)) = directive { let suggested_name = if name.as_str().chars().next().unwrap().is_uppercase() { format!(\"Other{}\", name) } else { old_binding format!(\"other_{}\", name) }; let cm = self.session.source_map(); let rename_msg = \"you can use `as` to change the binding name of the import\"; if let ( Ok(snippet), NameBindingKind::Import { directive, ..}, _dummy @ false, ) = ( cm.span_to_snippet(binding.span), binding.kind.clone(), binding.span.is_dummy(), ) { let suggested_name = if name.as_str().chars().next().unwrap().is_uppercase() { format!(\"Other{}\", name) } else { format!(\"other_{}\", name) }; let mut suggestion = None; match directive.subclass { ImportDirectiveSubclass::SingleImport { type_ns_only: true, .. } => suggestion = Some(format!(\"self as {}\", suggested_name)), ImportDirectiveSubclass::SingleImport { source, .. } => { if let Some(pos) = source.span.hi().0.checked_sub(binding_span.lo().0) .map(|pos| pos as usize) { if let Ok(snippet) = self.session.source_map() .span_to_snippet(binding_span) { if pos <= snippet.len() { suggestion = Some(format!( \"{} as {}{}\", &snippet[..pos], suggested_name, if snippet.ends_with(\";\") { \";\" } else { \"\" } )) } } } } ImportDirectiveSubclass::ExternCrate { source, target, .. } => suggestion = Some(format!( \"extern crate {} as {};\", source.unwrap_or(target.name), suggested_name, )), _ => unreachable!(), } let rename_msg = \"you can use `as` to change the binding name of the import\"; if let Some(suggestion) = suggestion { err.span_suggestion_with_applicability( binding.span, &rename_msg, match directive.subclass { ImportDirectiveSubclass::SingleImport { type_ns_only: true, .. } => format!(\"self as {}\", suggested_name), ImportDirectiveSubclass::SingleImport { source, .. 
} => format!( \"{} as {}{}\", &snippet[..((source.span.hi().0 - binding.span.lo().0) as usize)], suggested_name, if snippet.ends_with(\";\") { \";\" } else { \"\" } ), ImportDirectiveSubclass::ExternCrate { source, target, .. } => format!( \"extern crate {} as {};\", source.unwrap_or(target.name), suggested_name, ), _ => unreachable!(), }, binding_span, rename_msg, suggestion, Applicability::MaybeIncorrect, ); } else { err.span_label(binding.span, rename_msg); err.span_label(binding_span, rename_msg); } }", "commid": "rust_pr_57908"}], "negative_passages": []} {"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-45ac3499486ef2d242869fb5e78948853d44070866d33c3865bdeb88356d4433", "text": " macro_rules! import { ( $($name:ident),* ) => { $( mod $name; pub use self::$name; //~^ ERROR the name `issue_56411_aux` is defined multiple times //~| ERROR `issue_56411_aux` is private, and cannot be re-exported )* } } import!(issue_56411_aux); fn main() { println!(\"Hello, world!\"); } ", "commid": "rust_pr_57908"}], "negative_passages": []} {"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . 
Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-a300f928e46834fcc9121fd73d702311f56ca366382d8b716208a1689f9d8da9", "text": " error[E0255]: the name `issue_56411_aux` is defined multiple times --> $DIR/issue-56411.rs:5:21 | LL | mod $name; | ---------- previous definition of the module `issue_56411_aux` here LL | pub use self::$name; | ^^^^^^^^^^^ | | | `issue_56411_aux` reimported here | you can use `as` to change the binding name of the import ... LL | import!(issue_56411_aux); | ------------------------- in this macro invocation | = note: `issue_56411_aux` must be defined only once in the type namespace of this module error[E0365]: `issue_56411_aux` is private, and cannot be re-exported --> $DIR/issue-56411.rs:5:21 | LL | pub use self::$name; | ^^^^^^^^^^^ re-export of private `issue_56411_aux` ... LL | import!(issue_56411_aux); | ------------------------- in this macro invocation | = note: consider declaring type or module `issue_56411_aux` with `pub` error: aborting due to 2 previous errors Some errors occurred: E0255, E0365. For more information about an error, try `rustc --explain E0255`. ", "commid": "rust_pr_57908"}], "negative_passages": []} {"query_id": "q-en-rust-05181987c636b464ffc865e40d019431e5c4904fa34808be89afbb794f54dffa", "query": "As of rustc 1.33.0-nightly ( 2018-12-22) the following macro invocation fails with an ICE. Mentioning and because this may have to do with slicing introduced in . Same repro in script form:\nis working on a solution in", "positive_passages": [{"docid": "doc-en-rust-8402525181334ba8a9b4780c456a829706ad66ec1bf7966f3e9203a2875fd156", "text": " // compile-pass struct T {} fn main() {} ", "commid": "rust_pr_57908"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. 
But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-dfa27842d41aaa2c25c402191fc727e6eb3a4f21856d3b65b622677e1871ca70", "text": "/// /// The following return false: /// /// - private address (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) /// - the loopback address (127.0.0.0/8) /// - the link-local address (169.254.0.0/16) /// - the broadcast address (255.255.255.255/32) /// - test addresses used for documentation (192.0.2.0/24, 198.51.100.0/24 and 203.0.113.0/24) /// - the unspecified address (0.0.0.0) /// - private addresses (see [`is_private()`](#method.is_private)) /// - the loopback address (see [`is_loopback()`](#method.is_loopback)) /// - the link-local address (see [`is_link_local()`](#method.is_link_local)) /// - the broadcast address (see [`is_broadcast()`](#method.is_broadcast)) /// - addresses used for documentation (see [`is_documentation()`](#method.is_documentation)) /// - the unspecified address (see [`is_unspecified()`](#method.is_unspecified)), and the whole /// 0.0.0.0/8 block /// - addresses reserved for future protocols (see /// [`is_ietf_protocol_assignment()`](#method.is_ietf_protocol_assignment)), except /// `192.0.0.9/32` and `192.0.0.10/32` which are globally routable /// - addresses reserved for future use (see [`is_reserved()`](#method.is_reserved)) /// - addresses reserved for networking devices benchmarking (see /// [`is_benchmarking`](#method.is_benchmarking)) /// /// [ipv4-sr]: https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml /// [`true`]: ../../std/primitive.bool.html", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable.
But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-2bea958c2e629217af66b006a4bbc3ad041102431890e8b8abd33362125c803d", "text": "/// use std::net::Ipv4Addr; /// /// fn main() { /// // private addresses are not global /// assert_eq!(Ipv4Addr::new(10, 254, 0, 0).is_global(), false); /// assert_eq!(Ipv4Addr::new(192, 168, 10, 65).is_global(), false); /// assert_eq!(Ipv4Addr::new(172, 16, 10, 65).is_global(), false); /// /// // the 0.0.0.0/8 block is not global /// assert_eq!(Ipv4Addr::new(0, 1, 2, 3).is_global(), false); /// // in particular, the unspecified address is not global /// assert_eq!(Ipv4Addr::new(0, 0, 0, 0).is_global(), false); /// /// // the loopback address is not global /// assert_eq!(Ipv4Addr::new(127, 0, 0, 1).is_global(), false); /// /// // link local addresses are not global /// assert_eq!(Ipv4Addr::new(169, 254, 45, 1).is_global(), false); /// /// // the broadcast address is not global /// assert_eq!(Ipv4Addr::new(255, 255, 255, 255).is_global(), false); /// /// // the broadcast address is not global /// assert_eq!(Ipv4Addr::new(192, 0, 2, 255).is_global(), false); /// assert_eq!(Ipv4Addr::new(198, 51, 100, 65).is_global(), false); /// assert_eq!(Ipv4Addr::new(203, 0, 113, 6).is_global(), false); /// /// // shared addresses are not global /// assert_eq!(Ipv4Addr::new(100, 100, 0, 0).is_global(), false); /// /// // addresses reserved for protocol assignment are not global /// assert_eq!(Ipv4Addr::new(192, 0, 0, 0).is_global(), false); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 255).is_global(), false); /// /// // addresses reserved for future use are not global /// assert_eq!(Ipv4Addr::new(250, 10, 20, 30).is_global(), false); /// /// // addresses reserved for network devices benchmarking are not global /// assert_eq!(Ipv4Addr::new(198, 18, 0, 0).is_global(), false); /// /// // All the other addresses are global /// assert_eq!(Ipv4Addr::new(1, 1, 1, 
1).is_global(), true); /// assert_eq!(Ipv4Addr::new(80, 9, 12, 3).is_global(), true); /// } /// ``` pub fn is_global(&self) -> bool { !self.is_private() && !self.is_loopback() && !self.is_link_local() && !self.is_broadcast() && !self.is_documentation() && !self.is_unspecified() // check if this address is 192.0.0.9 or 192.0.0.10. These addresses are the only two // globally routable addresses in the 192.0.0.0/24 range. if u32::from(*self) == 0xc0000009 || u32::from(*self) == 0xc000000a { return true; } !self.is_private() && !self.is_loopback() && !self.is_link_local() && !self.is_broadcast() && !self.is_documentation() && !self.is_shared() && !self.is_ietf_protocol_assignment() && !self.is_reserved() && !self.is_benchmarking() // Make sure the address is not in 0.0.0.0/8 && self.octets()[0] != 0 } /// Returns [`true`] if this address is part of the Shared Address Space defined in /// [IETF RFC 6598] (`100.64.0.0/10`). /// /// [IETF RFC 6598]: https://tools.ietf.org/html/rfc6598 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(100, 64, 0, 0).is_shared(), true); /// assert_eq!(Ipv4Addr::new(100, 127, 255, 255).is_shared(), true); /// assert_eq!(Ipv4Addr::new(100, 128, 0, 0).is_shared(), false); /// } /// ``` pub fn is_shared(&self) -> bool { self.octets()[0] == 100 && (self.octets()[1] & 0b1100_0000 == 0b0100_0000) } /// Returns [`true`] if this address is part of `192.0.0.0/24`, which is reserved to /// IANA for IETF protocol assignments, as documented in [IETF RFC 6890]. 
/// /// Note that parts of this block are in use: /// /// - `192.0.0.8/32` is the \"IPv4 dummy address\" (see [IETF RFC 7600]) /// - `192.0.0.9/32` is the \"Port Control Protocol Anycast\" (see [IETF RFC 7723]) /// - `192.0.0.10/32` is used for NAT traversal (see [IETF RFC 8155]) /// /// [IETF RFC 6890]: https://tools.ietf.org/html/rfc6890 /// [IETF RFC 7600]: https://tools.ietf.org/html/rfc7600 /// [IETF RFC 7723]: https://tools.ietf.org/html/rfc7723 /// [IETF RFC 8155]: https://tools.ietf.org/html/rfc8155 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(192, 0, 0, 0).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 8).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 9).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 0, 255).is_ietf_protocol_assignment(), true); /// assert_eq!(Ipv4Addr::new(192, 0, 1, 0).is_ietf_protocol_assignment(), false); /// assert_eq!(Ipv4Addr::new(191, 255, 255, 255).is_ietf_protocol_assignment(), false); /// } /// ``` pub fn is_ietf_protocol_assignment(&self) -> bool { self.octets()[0] == 192 && self.octets()[1] == 0 && self.octets()[2] == 0 } /// Returns [`true`] if this address is part of the `198.18.0.0/15` range, which is reserved for /// network devices benchmarking. This range is defined in [IETF RFC 2544] as `192.18.0.0` /// through `198.19.255.255` but [errata 423] corrects it to `198.18.0.0/15`.
/// /// [IETF RFC 2544]: https://tools.ietf.org/html/rfc2544 /// [errata 423]: https://www.rfc-editor.org/errata/eid423 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(198, 17, 255, 255).is_benchmarking(), false); /// assert_eq!(Ipv4Addr::new(198, 18, 0, 0).is_benchmarking(), true); /// assert_eq!(Ipv4Addr::new(198, 19, 255, 255).is_benchmarking(), true); /// assert_eq!(Ipv4Addr::new(198, 20, 0, 0).is_benchmarking(), false); /// } /// ``` pub fn is_benchmarking(&self) -> bool { self.octets()[0] == 198 && (self.octets()[1] & 0xfe) == 18 } /// Returns [`true`] if this address is reserved by IANA for future use. [IETF RFC 1112] /// defines the block of reserved addresses as `240.0.0.0/4`. This range normally includes the /// broadcast address `255.255.255.255`, but this implementation explicitly excludes it, since /// it is obviously not reserved for future use. /// /// [IETF RFC 1112]: https://tools.ietf.org/html/rfc1112 /// [`true`]: ../../std/primitive.bool.html /// /// # Examples /// /// ``` /// #![feature(ip)] /// use std::net::Ipv4Addr; /// /// fn main() { /// assert_eq!(Ipv4Addr::new(240, 0, 0, 0).is_reserved(), true); /// assert_eq!(Ipv4Addr::new(255, 255, 255, 254).is_reserved(), true); /// /// assert_eq!(Ipv4Addr::new(239, 255, 255, 255).is_reserved(), false); /// // The broadcast address is not considered as reserved for future use by this /// // implementation /// assert_eq!(Ipv4Addr::new(255, 255, 255, 255).is_reserved(), false); /// } /// ``` pub fn is_reserved(&self) -> bool { self.octets()[0] & 240 == 240 && !self.is_broadcast() } /// Returns [`true`] if this is a multicast address (224.0.0.0/4).", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are
not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-68a7c17fc244b0b822c1d45fa64dc3aed9b8c312972850a66d4d090f124edfea", "text": "} } /// Returns [`true`] if this is a unique local address (fc00::/7). /// Returns [`true`] if this is a unique local address (`fc00::/7`). /// /// This property is defined in [IETF RFC 4193]. ///", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-c5f9c1bd09b19150b7898577dc686eff15cf9c05e90c69f6e09bab837a6dcafd", "text": "(self.segments()[0] & 0xfe00) == 0xfc00 } /// Returns [`true`] if the address is unicast and link-local (fe80::/10). /// Returns [`true`] if the address is a unicast link-local address (`fe80::/64`). /// /// This property is defined in [IETF RFC 4291]. /// A common mis-conception is to think that \"unicast link-local addresses start with /// `fe80::`\", but the [IETF RFC 4291] actually defines a stricter format for these addresses: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111010| 0 | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// This method validates the format defined in the RFC and won't recognize the following /// addresses such as `fe80:0:0:1::` or `fe81::` as unicast link-local addresses for example. /// If you need a less strict validation use [`is_unicast_link_local()`] instead. 
/// /// # Examples /// /// ``` /// #![feature(ip)] /// /// use std::net::Ipv6Addr; /// /// fn main() { /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0xffff, 0xffff, 0xffff, 0xffff); /// assert!(ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 1, 0, 0, 0, 0); /// assert!(!ip.is_unicast_link_local_strict()); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe81, 0, 0, 0, 0, 0, 0, 0); /// assert!(!ip.is_unicast_link_local_strict()); /// assert!(ip.is_unicast_link_local()); /// } /// ``` /// /// # See also /// /// - [IETF RFC 4291 section 2.5.6] /// - [RFC 4291 errata 4406] /// - [`is_unicast_link_local()`] /// /// [IETF RFC 4291]: https://tools.ietf.org/html/rfc4291 /// [IETF RFC 4291 section 2.5.6]: https://tools.ietf.org/html/rfc4291#section-2.5.6 /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 errata 4406]: https://www.rfc-editor.org/errata/eid4406 /// [`is_unicast_link_local()`]: ../../std/net/struct.Ipv6Addr.html#method.is_unicast_link_local /// pub fn is_unicast_link_local_strict(&self) -> bool { (self.segments()[0] & 0xffff) == 0xfe80 && (self.segments()[1] & 0xffff) == 0 && (self.segments()[2] & 0xffff) == 0 && (self.segments()[3] & 0xffff) == 0 } /// Returns [`true`] if the address is a unicast link-local address (`fe80::/10`). /// /// This method returns [`true`] for addresses in the range reserved by [RFC 4291 section 2.4], /// i.e. 
addresses with the following format: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111010| arbitratry value | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// As a result, this method consider addresses such as `fe80:0:0:1::` or `fe81::` to be /// unicast link-local addresses, whereas [`is_unicast_link_local_strict()`] does not. If you /// need a strict validation fully compliant with the RFC, use /// [`is_unicast_link_local_strict()`]. /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-0e327b342bad07743c6df591e11031f3c3e17a64cb4a2633a26b702b53d83b64", "text": "/// use std::net::Ipv6Addr; /// /// fn main() { /// assert_eq!(Ipv6Addr::new(0, 0, 0, 0, 0, 0xffff, 0xc00a, 0x2ff).is_unicast_link_local(), /// false); /// assert_eq!(Ipv6Addr::new(0xfe8a, 0, 0, 0, 0, 0, 0, 0).is_unicast_link_local(), true); /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 0, 0xffff, 0xffff, 0xffff, 0xffff); /// assert!(ip.is_unicast_link_local()); /// /// let ip = Ipv6Addr::new(0xfe80, 0, 0, 1, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// assert!(!ip.is_unicast_link_local_strict()); /// /// let ip = Ipv6Addr::new(0xfe81, 0, 0, 0, 0, 0, 0, 0); /// assert!(ip.is_unicast_link_local()); /// assert!(!ip.is_unicast_link_local_strict()); /// } /// ``` /// /// # See also /// /// - [IETF RFC 4291 section 2.4] /// - [RFC 4291 errata 4406] /// /// [IETF RFC 4291 section 
2.4]: https://tools.ietf.org/html/rfc4291#section-2.4 /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 errata 4406]: https://www.rfc-editor.org/errata/eid4406 /// [`is_unicast_link_local_strict()`]: ../../std/net/struct.Ipv6Addr.html#method.is_unicast_link_local_strict /// pub fn is_unicast_link_local(&self) -> bool { (self.segments()[0] & 0xffc0) == 0xfe80 } /// Returns [`true`] if this is a deprecated unicast site-local address /// (fec0::/10). /// Returns [`true`] if this is a deprecated unicast site-local address (fec0::/10). The /// unicast site-local address format is defined in [RFC 4291 section 2.5.7] as: /// /// ```no_rust /// | 10 | /// | bits | 54 bits | 64 bits | /// +----------+-------------------------+----------------------------+ /// |1111111011| subnet ID | interface ID | /// +----------+-------------------------+----------------------------+ /// ``` /// /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 section 2.5.7]: https://tools.ietf.org/html/rfc4291#section-2.5.7 /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. 
But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-45b45d485bd88c1cfebb3dd425198a98de35ee847cdec01d0f1f8ed9e40bc913", "text": "/// /// - the loopback address /// - the link-local addresses /// - the (deprecated) site-local addresses /// - unique local addresses /// - the unspecified address /// - the address range reserved for documentation /// /// This method returns [`true`] for site-local addresses as per [RFC 4291 section 2.5.7] /// /// ```no_rust /// The special behavior of [the site-local unicast] prefix defined in [RFC3513] must no longer /// be supported in new implementations (i.e., new implementations must treat this prefix as /// Global Unicast). /// ``` /// /// [`true`]: ../../std/primitive.bool.html /// [RFC 4291 section 2.5.7]: https://tools.ietf.org/html/rfc4291#section-2.5.7 /// /// # Examples ///", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-b1a925fd45697a656dc4325a4588180891695688f60bfac4eaa470a152fa105e", "query": "This code should be correct, because according to document, all address in are not global reachable. 
But it panic on assertion failed in latest nightly (2019-01-11) (see )\nRelevant thread:\nSorry, fixed", "positive_passages": [{"docid": "doc-en-rust-80849e3257b31a1c46b0dd3c65319462b769c21112f05064daf23f95c5002106", "text": "/// ``` pub fn is_unicast_global(&self) -> bool { !self.is_multicast() && !self.is_loopback() && !self.is_unicast_link_local() && !self.is_unicast_site_local() && !self.is_unique_local() && !self.is_unspecified() && !self.is_documentation() && !self.is_loopback() && !self.is_unicast_link_local() && !self.is_unique_local() && !self.is_unspecified() && !self.is_documentation() } /// Returns the address's multicast scope if the address is multicast.", "commid": "rust_pr_60145"}], "negative_passages": []} {"query_id": "q-en-rust-a7e11b03b0f0bcc4af442d9d27aaaf0e45a436a547a9b34e6eabb32a8e91593c", "query": "Minimised, reproducible example: See Backtrace\nHello I'm getting the same ICE, but without any unstable features. I don't have a minimized example, though, but it may be another data point when searching for the problem. It's this commit: The commit happens with The code is not correct, it wouldn't compile even if the compiler didn't panic. 
It doesn't happen on stable, but I guess it's because it bails out sooner on closures taking references:\nThis still produces an error on the latest nightly (not sure if it's supposed to compile or not), but no longer ICEs.", "positive_passages": [{"docid": "doc-en-rust-01550d3a62c1f8dbf3a85e641a0aa43e97558c507a9e31fbb9a9fb04365f8a13", "text": " // Regression test for issue #57611 // Ensures that we don't ICE // FIXME: This should compile, but it currently doesn't #![feature(trait_alias)] #![feature(type_alias_impl_trait)] trait Foo { type Bar: Baz; fn bar(&self) -> Self::Bar; } struct X; impl Foo for X { type Bar = impl Baz; //~ ERROR type mismatch in closure arguments //~^ ERROR type mismatch resolving fn bar(&self) -> Self::Bar { |x| x } } trait Baz = Fn(&A) -> &B; fn main() {} ", "commid": "rust_pr_68498"}], "negative_passages": []} {"query_id": "q-en-rust-a7e11b03b0f0bcc4af442d9d27aaaf0e45a436a547a9b34e6eabb32a8e91593c", "query": "Minimised, reproducible example: See Backtrace\nHello I'm getting the same ICE, but without any unstable features. I don't have a minimized example, though, but it may be another data point when searching for the problem. It's this commit: The commit happens with The code is not correct, it wouldn't compile even if the compiler didn't panic. It doesn't happen on stable, but I guess it's because it bails out sooner on closures taking references:\nThis still produces an error on the latest nightly (not sure if it's supposed to compile or not), but no longer ICEs.", "positive_passages": [{"docid": "doc-en-rust-e5a703655c6ac5a25a62963adb6bbee3581e8db3dd6a9ab56522ca0d9fdef38c", "text": " error[E0631]: type mismatch in closure arguments --> $DIR/issue-57611-trait-alias.rs:17:5 | LL | type Bar = impl Baz; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected signature of `for<'r> fn(&'r X) -> _` ... 
LL | |x| x | ----- found signature of `fn(_) -> _` | = note: the return type of a function must have a statically known size error[E0271]: type mismatch resolving `for<'r> <[closure@$DIR/issue-57611-trait-alias.rs:21:9: 21:14] as std::ops::FnOnce<(&'r X,)>>::Output == &'r X` --> $DIR/issue-57611-trait-alias.rs:17:5 | LL | type Bar = impl Baz; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected bound lifetime parameter, found concrete lifetime | = note: the return type of a function must have a statically known size error: aborting due to 2 previous errors Some errors have detailed explanations: E0271, E0631. For more information about an error, try `rustc --explain E0271`. ", "commid": "rust_pr_68498"}], "negative_passages": []} {"query_id": "q-en-rust-a7e11b03b0f0bcc4af442d9d27aaaf0e45a436a547a9b34e6eabb32a8e91593c", "query": "Minimised, reproducible example: See Backtrace\nHello I'm getting the same ICE, but without any unstable features. I don't have a minimized example, though, but it may be another data point when searching for the problem. It's this commit: The commit happens with The code is not correct, it wouldn't compile even if the compiler didn't panic. 
It doesn't happen on stable, but I guess it's because it bails out sooner on closures taking references:\nThis still produces an error on the latest nightly (not sure if it's supposed to compile or not), but no longer ICEs.", "positive_passages": [{"docid": "doc-en-rust-6acc0b5d516671e91842c496f58bf2ab85341d8f1582912eadb21740cf7ac7b4", "text": " // Regression test for issue #57807 - ensure // that we properly unify associated types within // a type alias impl trait // check-pass #![feature(type_alias_impl_trait)] trait Bar { type A; } impl Bar for () { type A = (); } trait Foo { type A; type B: Bar; fn foo() -> Self::B; } impl Foo for () { type A = (); type B = impl Bar; fn foo() -> Self::B { () } } fn main() {} ", "commid": "rust_pr_68498"}], "negative_passages": []} {"query_id": "q-en-rust-109bb0755956bb23810de87c91ae495632a4dd5f006f25cb017913d5b4abf014", "query": "When building a statically-linked Rust binary crate for the target, linking a crate fails with the following error: Expected result: the crate successfully compiles. Actual result: the crate fails to compile, with an error during the linking stage. Platform: Ubuntu 18.04, x8664. Output of : MUSL standard C library version: 1.1.20. C compiler: MIPS GCC version 7.3, provided by Ubuntu 18.04 (). I tried searching the open and closed issues for this project, and couldn't find any others exhibiting this same error. Forgive me if this is the wrong place to file this bug, but I'm not entirely sure how to debug this problem, so I'm starting here with Rust. The file in makes a call to on big-endian hosts (). This function gets translated into the symbol for the MIPS target, and should be present in the compiler's built-in library, but it appears it's absent here. Now, I can see that uses GCC 5.3 from the OpenWRT toolchain. 
I'm not entirely sure why this should give any different results from using the GCC 7.3 package provided by Ubuntu 18.04 (, which provides GCC 5.5, and the results are the same (fails to link due to undefined reference to ). Here is the full error message from trying to build the crate: An easy way to reproduce this is to create a with the following contents, and then build it with :\nThe necessary functions should be to the .\nThis does look like an interesting idea, but when trying to compile with that crate, I get lots of errors of this sort: You should be able to recreate this by adding the following line to the I mentioned in the original issue, just before the line at the end: I'm currently working on a project that I'd like to keep on the stable channel. Will this crate not work at all on the stable channel?\nmodify labels: +O-musl\npossibly related to\nAny updates with this issue? Have the same problem\nAdding \"-C\", \"link-args=-lgcc\" to re-add libgcc removed by -nostdlib to rustargs fixes the problem, at least for the simple hello world. Or you could implement __bswapsi2 yourself. I'm guessing that openwrt toolchain targets mips32 as opposed of mips32r2, so it requires \"libcall\" for bswap. 
Hope it helps.\nIt works, thanks!", "positive_passages": [{"docid": "doc-en-rust-ab32fb4dfea5590ce7fea159e1b31bf080a2944a0afde64408446b0d5dbd8de1", "text": "[[package]] name = \"compiler_builtins\" version = \"0.1.32\" version = \"0.1.35\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"7bc4ac2c824d2bfc612cba57708198547e9a26943af0632aff033e0693074d5c\" checksum = \"e3fcd8aba10d17504c87ef12d4f62ef404c6a4703d16682a9eb5543e6cf24455\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_75877"}], "negative_passages": []} {"query_id": "q-en-rust-109bb0755956bb23810de87c91ae495632a4dd5f006f25cb017913d5b4abf014", "query": "When building a statically-linked Rust binary crate for the target, linking a crate fails with the following error: Expected result: the crate successfully compiles. Actual result: the crate fails to compile, with an error during the linking stage. Platform: Ubuntu 18.04, x8664. Output of : MUSL standard C library version: 1.1.20. C compiler: MIPS GCC version 7.3, provided by Ubuntu 18.04 (). I tried searching the open and closed issues for this project, and couldn't find any others exhibiting this same error. Forgive me if this is the wrong place to file this bug, but I'm not entirely sure how to debug this problem, so I'm starting here with Rust. The file in makes a call to on big-endian hosts (). This function gets translated into the symbol for the MIPS target, and should be present in the compiler's built-in library, but it appears it's absent here. Now, I can see that uses GCC 5.3 from the OpenWRT toolchain. I'm not entirely sure why this should give any different results from using the GCC 7.3 package provided by Ubuntu 18.04 (, which provides GCC 5.5, and the results are the same (fails to link due to undefined reference to ). 
Here is the full error message from trying to build the crate: An easy way to reproduce this is to create a with the following contents, and then build it with :\nThe necessary functions should be to the .\nThis does look like an interesting idea, but when trying to compile with that crate, I get lots of errors of this sort: You should be able to recreate this by adding the following line to the I mentioned in the original issue, just before the line at the end: I'm currently working on a project that I'd like to keep on the stable channel. Will this crate not work at all on the stable channel?\nmodify labels: +O-musl\npossibly related to\nAny updates with this issue? Have the same problem\nAdding \"-C\", \"link-args=-lgcc\" to re-add libgcc removed by -nostdlib to rustargs fixes the problem, at least for the simple hello world. Or you could implement __bswapsi2 yourself. I'm guessing that openwrt toolchain targets mips32 as opposed of mips32r2, so it requires \"libcall\" for bswap. Hope it helps.\nIt works, thanks!", "positive_passages": [{"docid": "doc-en-rust-e72f6299730b09657cdb8486e8ea3a46f30c134469ba3a9cc4e92e9c15e09d56", "text": "panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\" } libc = { version = \"0.2.74\", default-features = false, features = ['rustc-dep-of-std'] } compiler_builtins = { version = \"0.1.32\" } compiler_builtins = { version = \"0.1.35\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" } hashbrown = { version = \"0.8.1\", default-features = false, features = ['rustc-dep-of-std'] }", "commid": "rust_pr_75877"}], "negative_passages": []} {"query_id": "q-en-rust-62b082ff3d00c0d5541a844fa9bf9d06cfd45dbe30ecf935020db0e6c99d1321", "query": "Encountered this when working on test cases for an RFC: cc This seems related to other issues.\nI'm confused. 
Either this was fixed since yesterday's nightly (it still ICEs on nightly but works locally on master), or something really odd is going on\nIs that the upstream master? Otherwise maybe someone merged a PR or something..\nThis still ICEs on nightly.\nThis now compiles successfully on the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-ae0f1dd513fac11ddd5092a628d19cb1656c4089f6a66d30c55e936aae563cc7", "text": " // check-pass #![feature(existential_type)] existential type A: Iterator; fn def_a() -> A { 0..1 } pub fn use_a() { def_a().map(|x| x); } fn main() {} ", "commid": "rust_pr_63158"}], "negative_passages": []} {"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. 
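The `existential_type` regression test above returns an opaque iterator and then calls an adapter on it. As a cross-check that the pattern itself is sound, here is a stable-Rust analogue using `impl Trait` — illustrative only, not part of the PR's test suite:

```rust
// Stable-Rust analogue (illustrative assumption, not the PR's actual test)
// of the existential-type test: return an opaque iterator, then adapt it.
fn def_a() -> impl Iterator<Item = i32> {
    0..1
}

pub fn use_a() -> Vec<i32> {
    def_a().map(|x| x + 1).collect()
}

fn main() {
    assert_eq!(use_a(), vec![1]);
}
```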
[Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! 
Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-e7bbf5f78ef1850d8eb1d35e35864a2fe23e4d2854334bf96f228b0a71142e27", "text": "} // walk type and init value self.visit_ty(typ); if let Some(expr) = expr { self.visit_expr(expr); } self.nest_tables(id, |v| { v.visit_ty(typ); if let Some(expr) = expr { v.visit_expr(expr); } }); } // FIXME tuple structs should generate tuple-specific data.", "commid": "rust_pr_60649"}], "negative_passages": []} {"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. 
[Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-7c37d39505667b3457e3c30615165c38181eeda791f510297d6779a66af6da3a", "text": " // compile-flags: -Zsave-analysis // Check that this doesn't ICE when processing associated const (field expr). 
pub fn f() { trait Trait {} impl Trait { const FLAG: u32 = bogus.field; //~ ERROR cannot find value `bogus` } } fn main() {} ", "commid": "rust_pr_60649"}], "negative_passages": []} {"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. 
Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-b523cfd67ece5f2e6a8d67c450dd03609bee563d0fa77dd5e82575f00eb6a222", "text": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-0.rs:8:27 | LL | const FLAG: u32 = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_60649"}], "negative_passages": []} {"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). 
See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. [Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. 
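The "one typeck table per body" rule stated above is the crux of the eventual fix: save-analysis has to install the table belonging to the associated const's body before walking its type and initializer, mirroring the `nest_tables` call in the PR's diff. A simplified model of that save/restore discipline, with stand-in types rather than rustc's real API:

```rust
use std::collections::HashMap;

// Simplified model (stand-in types, not rustc's actual API) of the
// "nest tables per body" discipline: each body owns a typeck table, and a
// visitor must swap the right table in before descending into that body.
struct Visitor {
    tables: HashMap<u32, &'static str>, // body id -> that body's table
    current: Option<&'static str>,      // table in scope while visiting
}

impl Visitor {
    // Analogue of save-analysis's `nest_tables`: install the table owned
    // by `body_id`, run the nested visit, then restore the previous table.
    fn nest_tables(&mut self, body_id: u32, f: impl FnOnce(&mut Self)) {
        let prev = self.current;
        self.current = self.tables.get(&body_id).copied();
        f(self);
        self.current = prev;
    }
}

fn main() {
    let mut v = Visitor {
        tables: HashMap::from([(1, "assoc-const body table")]),
        current: None,
    };
    // Visiting the associated const's body sees its own table...
    v.nest_tables(1, |v| assert_eq!(v.current, Some("assoc-const body table")));
    // ...and the outer scope is restored afterwards.
    assert_eq!(v.current, None);
}
```

Looking up a field's def in the wrong (outer) table is exactly the missing-segment panic the thread describes.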
This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-43dfcf358157d3aea9b597b0480d9d21111bcb7def5cc0a258cd40113fec2089", "text": " // compile-flags: -Zsave-analysis // Check that this doesn't ICE when processing associated const (type). fn func() { trait Trait { type MyType; const CONST: Self::MyType = bogus.field; //~ ERROR cannot find value `bogus` } } fn main() {} ", "commid": "rust_pr_60649"}], "negative_passages": []} {"query_id": "q-en-rust-040ce4823fe42b127f254e4eb557b71e51fbaf32af61e9709865b31985195f41", "query": "When using this branch of bitflags: through a patch in a local directory rls crashes with the following errors:\ni had enabled dev overrides for dependencies in using: removing that out the output from rls is now the normal debug flags but still the same errors:\nThis is now causing widespread ICEs in the RLS. Nominating to get this fixed (maybe has an idea?). See for a simpler reproducer.\nI encountered this issue trying out which causes rustc to crash on when invoked by RLS. I can confirm it's the exact same backtrace as\nI ran into this trying out on Windows 10. RLS inside vscode ends up panicking with this backtrace: (note for repro: I had changed the default feature in tui-rs to be \"crossterm\" because the default feature \"termion\" doesn't compile on windows - but after the termion compile errors, rustc was giving me a similar panic anyway)\nJFYI I could work around this problem by adding to my\nI tried that with the tui-rs (it had ) but same panic. 
[Edit] you're right, I had missed the inside the version.\nThanks !\nRunning into the same issue when trying to build Clap\nSame issue in vscode on linuxmint and rust 1.34.1 when trying to build glib (dependency of gtk).\nThere is no need to list every crate which depends on , it only clutters the thread and makes less visible.\nHere's a small repro, extracted from what is doing: I'm struggling to get a repro without the compile error, though.\nThank you for such a small repro! I\u2019ll take a closer look tomorrow and see what may be causing that. On Tue, 7 May 2019 at 22:32, Sean Gillespie <:\nLooks like isn't written back for a resolution error? That still repros without , right?\nCorrect, it does. Further minimized repro:\nInteresting. I managed to reduce it to: with: Any other combination is okay, it has to be under under and it has to be combined with a field access (bare unknown identifier doesn't ICE). I'm only guessing but it seems that it tries to emplace def 'path' under a which seems to skip a def path segment?\nI think this is confirmed by A, what I believe is, more accurate guess is that we somehow don't correctly nest appropriate typeck tables when visiting .\nOh, you need to have one table per body: look in HIR for fields and map all of those cases back to the AST. This makes a lot more sense now: so is not wrong, but is looking in the wrong place.\nWhat about this: Seems like we need to before walking the expression of the associated const?\nHm, probably! I imagined the problem is we don't nest it before visiting trait items (hence missing segment) but you may be right! 
Let's see :sweat_smile:\nWith the fix applied this still ICEs on (this time for type) which means we should nest tables for as well", "positive_passages": [{"docid": "doc-en-rust-2f5bc0c189246f20c47a42d98bf8c96ad23df9f1512c62e19e48a684ee31e93c", "text": " error[E0425]: cannot find value `bogus` in this scope --> $DIR/issue-59134-1.rs:8:37 | LL | const CONST: Self::MyType = bogus.field; | ^^^^^ not found in this scope error: aborting due to previous error For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_60649"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-a0f593b02f0805e6349ad0127582660c3c150be14b11b7324dd781c9a0c345df", "text": " // check-pass #![feature(impl_trait_in_bindings)] #![allow(incomplete_features)] struct A<'a>(&'a ()); trait Trait {} impl Trait for () {} pub fn foo<'a>() { let _x: impl Trait> = (); } fn main() {} ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. 
Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-c72a1e364bda38a98ebb595ca93588ad79f841bcdb844f214995cda1f100b8bb", "text": " #![feature(never_type, specialization)] #![allow(incomplete_features)] use std::iter::{self, Empty}; trait Trait { type Out: Iterator; fn f(&self) -> Option; } impl Trait for T { default type Out = !; //~ ERROR: `!` is not an iterator default fn f(&self) -> Option { None } } struct X; impl Trait for X { type Out = Empty; fn f(&self) -> Option { Some(iter::empty()) } } fn f(a: T) { if let Some(iter) = a.f() { println!(\"Some\"); for x in iter { println!(\"x = {}\", x); } } } pub fn main() { f(10); } ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-ec5d8ba7a76f71b639d2dee2588485967ed2b662d7264e413f7bc6ebd438b149", "text": " error[E0277]: `!` is not an iterator --> $DIR/issue-51506.rs:13:5 | LL | type Out: Iterator; | ------------------------------- required by `Trait::Out` ... LL | default type Out = !; | ^^^^^^^^^^^^^^^^^^^^^ `!` is not an iterator | = help: the trait `std::iter::Iterator` is not implemented for `!` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. 
", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-a8f22be93271ffce94f0e0a4cea97cc19a6c822b88549b9e21545c4e23d39a82", "text": " #![crate_type = \"lib\"] #![feature(specialization)] #![feature(unsize, coerce_unsized)] #![allow(incomplete_features)] use std::ops::CoerceUnsized; pub struct SmartassPtr(A::Data); pub trait Smartass { type Data; type Data2: CoerceUnsized<*const [u8]>; } pub trait MaybeObjectSafe {} impl MaybeObjectSafe for () {} impl Smartass for T { type Data = ::Data2; default type Data2 = (); //~^ ERROR: the trait bound `(): std::ops::CoerceUnsized<*const [u8]>` is not satisfied } impl Smartass for () { type Data2 = *const [u8; 1]; } impl Smartass for dyn MaybeObjectSafe { type Data = *const [u8]; type Data2 = *const [u8; 0]; } impl CoerceUnsized> for SmartassPtr where ::Data: std::ops::CoerceUnsized<::Data> {} pub fn conv(s: SmartassPtr<()>) -> SmartassPtr { s } ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. 
Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-f24c5a9e6f4926b036a7af271b3b8393eb78686dcd8f9b5505fadb8d104741a5", "text": " error[E0277]: the trait bound `(): std::ops::CoerceUnsized<*const [u8]>` is not satisfied --> $DIR/issue-44861.rs:21:5 | LL | type Data2: CoerceUnsized<*const [u8]>; | --------------------------------------- required by `Smartass::Data2` ... LL | default type Data2 = (); | ^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::ops::CoerceUnsized<*const [u8]>` is not implemented for `()` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-fcf95ebdfc238936020242ab77fb836da17c02a592e428eb984a94dc6c5513fd", "text": " #![feature(specialization)] #![allow(incomplete_features)] struct MyStruct {} trait MyTrait { type MyType: Default; } impl MyTrait for i32 { default type MyType = MyStruct; //~^ ERROR: the trait bound `MyStruct: std::default::Default` is not satisfied } fn main() { let _x: ::MyType = ::MyType::default(); } ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0d0ad8eb07090379b80371ef9d83f447930eec7f8c315b24df253746743b42af", "query": "With specialization, it is possible to define a default associated type that does not fulfill its trait bounds. 
Here is a minimal example, that surely should not compile, but does with rustc 1.35.0-nightly: Unsurprisingly, adding the main function causes an ICE. Error message and backtrace (compiled at the playground):\nTriage: the latest nightly rejects the code, marking as E-needs-test.", "positive_passages": [{"docid": "doc-en-rust-2ba07713179c4f3114e52d8de411389cc8b934fd43d0875c47ff2ae312c009ac", "text": " error[E0277]: the trait bound `MyStruct: std::default::Default` is not satisfied --> $DIR/issue-59435.rs:11:5 | LL | type MyType: Default; | --------------------- required by `MyTrait::MyType` ... LL | default type MyType = MyStruct; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `std::default::Default` is not implemented for `MyStruct` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_73646"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. 
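The regression tests in this group all exercise the same invariant: a `default` associated type must still satisfy the bounds declared on that associated type in the trait. The invariant is easiest to see on stable Rust, where (without specialization) it has always been enforced — a simplified illustration, not one of the PR's tests:

```rust
// Stable-Rust illustration (simplified, no specialization) of the invariant
// the fix restores: the chosen associated type must satisfy the bound the
// trait declares on it, here `MyType: Default`.
trait MyTrait {
    type MyType: Default;
}

impl MyTrait for i32 {
    // Accepted because `u32: Default`; substituting a type without a
    // `Default` impl is rejected with E0277 — the same error code the new
    // .stderr files expect for the `default type` items above.
    type MyType = u32;
}

fn main() {
    assert_eq!(<i32 as MyTrait>::MyType::default(), 0);
}
```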
cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-84194bd5d7aca18915ff8adbb1bcdff7dca705f8fb2462a4c89f39b297656a83", "text": "per_local.insert(local); } } cx.per_local[IsNotPromotable].insert(local); cx.per_local[IsNotConst].insert(local); } LocalKind::Var if mode == Mode::Fn => {", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). 
I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-8c91de4f1e86ec79aeba509276b91de2b6d816f5adfa1a771623819ff8a32e56", "text": "} LocalKind::Temp if !temps[local].is_promotable() => { cx.per_local[IsNotPromotable].insert(local); cx.per_local[IsNotConst].insert(local); } _ => {}", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. 
should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-648c57c8c2c990ceb5e2bb743732d366e3f16d2b0f67ed6f6121a77903069326", "text": "} } // Ensure the `IsNotPromotable` qualification is preserved. // Ensure the `IsNotConst` qualification is preserved. // NOTE(eddyb) this is actually unnecessary right now, as // we never replace the local's qualif, but we might in // the future, and so it serves to catch changes that unset", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). 
But interestingly enough, stable (as jonas pointed out) produces an error saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operation yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. For edition 2018: stable: expected error; beta: unexpected compile-pass; nightly-2019-04-05: ICE. I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-4e5f3f52ce4342419bc2f46b69b500f3073fbf6c8d63caf22405a65fe71b3d5f", "text": "// be replaced with calling `insert` to re-set the bit).
if kind == LocalKind::Temp { if !self.temp_promotion_state[index].is_promotable() { assert!(self.cx.per_local[IsNotPromotable].contains(index)); assert!(self.cx.per_local[IsNotConst].contains(index)); } } }", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. 
edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-957045bed7caa6954e23010d28385b4534243204450b66195112a6c31071f59d", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i) -> __m128i { let imm8 = 3; _mm_clmulepi64_si128(a, b, imm8) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. 
The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-fa8cbcbd13cb60581fb25ed391a8c697225afec8a0f9f862117068de62e697c9", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_local.rs:10:5 | LL | _mm_clmulepi64_si128(a, b, imm8) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. 
cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-1173f3983829635337afc8e30bcbdc1a6aecd3a41ef79ff8838a5dd48b50153a", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i) -> __m128i { _mm_clmulepi64_si128(a, b, *&mut 42) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. 
should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-f912e43b3bde797d4f3eec09ec9048dc9cd001b3aa02e2e775c664d60c94a619", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_promotable.rs:9:5 | LL | _mm_clmulepi64_si128(a, b, *&mut 42) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interesting enough, stable (as jonas pointed out) procuces an error, saying, that the third argument must be a constant. Since nightly-2019-02-16 (...) 
this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operations yet. The ICE is a regression over the error message that was being produced before. We should definitely add an ui test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel. edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-408de29b4563462d366c394ff5c2cd7d18e98909957a2a1865c91a1ab194f440", "text": " // only-x86_64 #[cfg(target_arch = \"x86\")] use std::arch::x86::*; #[cfg(target_arch = \"x86_64\")] use std::arch::x86_64::*; unsafe fn pclmul(a: __m128i, b: __m128i, imm8: i32) -> __m128i { _mm_clmulepi64_si128(a, b, imm8) //~ ERROR argument 3 is required to be a constant } fn main() {} ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-0304faa8eaf49575dd4f6163f9e393715a9dacea3c627bf50710ce6fb9e9ddc4", "query": "Expected the following code to compile But getting Repro:\nAlso happens on beta, not on stable\nStable says:\nFor reference, the function signature as defined is exactly 
the same.\nThe actual ICE occurs first in nightly-2019-03-03 (...). But interestingly enough, stable (as jonas pointed out) produces an error saying that the third argument must be a constant. Since nightly-2019-02-16 (...) this does not emit an error anymore, but also does not ICE.\nI guess for the \"does not emit an error anymore\" can say something about it. should be responsible for it.\ntriage: P-high\nNote that because of the on the definition, you can't wrap it (without matching on and having 256 separate calls). I don't think this is intentional, but I'm not sure why it would happen, maybe arguments are treated incorrectly in the qualifier. cc\nSo I agree that this code can't compile, that's the expected behavior; if you want to wrap it, you have to use a macro - Rust functions aren't able to abstract over this type of hardware operation yet. The ICE is a regression over the error message that was being produced before. We should definitely add a UI test for that.\ntriage, assigning to self to fix the stable-to-beta regression.\nThe behavior here varies with edition as well as channel.
edition 2018 ------------------ ------------------------ stable expected error beta unexpected compile-pass nightly-2019-04-05 ICE I think the most immediate thing to address is the unexpected compile-pass that has leaked into beta.\nThe easiest thing might be to revert\nI have an actual fix in (also easier to backport than a revert)\nOkay then I'm reassigning to", "positive_passages": [{"docid": "doc-en-rust-93605bb5b0a72aefda7c5c2fd7c6b71e8fd3240564e7b28c4d8aa2c97f555041", "text": " error: argument 3 is required to be a constant --> $DIR/const_arg_wrapper.rs:9:5 | LL | _mm_clmulepi64_si128(a, b, imm8) | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_59724"}], "negative_passages": []} {"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. 
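The const_arg_* tests above exercise the rule that the intrinsic's third argument must be a compile-time constant, which is also why the discussion notes it cannot be wrapped in an ordinary function whose parameter is a runtime value. A minimal sketch of keeping that constraint through a wrapper using const generics (a later language mechanism than the one under discussion; `clmul_like` and `wrapped` are hypothetical stand-ins, not the real intrinsic):

```rust
// Sketch: a "must be a constant" argument expressed as a const generic.
// Because IMM8 is part of the type, a wrapper must itself take a const
// parameter and forward it -- a runtime i32 can no longer be passed in.
fn clmul_like<const IMM8: i32>(a: u64, b: u64) -> u64 {
    // Stand-in body for illustration; the real intrinsic selects
    // operand halves based on the bits of IMM8.
    a.wrapping_mul(b) ^ (IMM8 as u64)
}

// The wrapper keeps the constant in the type, so the constraint survives.
fn wrapped<const IMM8: i32>(a: u64, b: u64) -> u64 {
    clmul_like::<IMM8>(a, b)
}

fn main() {
    // The constant is supplied at the call site, never as a runtime value.
    println!("{}", wrapped::<3>(2, 5));
}
```

This is one way the "can't wrap it without 256 separate calls" limitation was later addressed at the language level; at the time of the issue, a macro was the only workaround.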
At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-b473a1afe3e65265270067683b1b0150271669aa2426a64446f223d447f2930a", "text": " // check-pass trait Mirror { type Other; } #[derive(Debug)] struct Even(usize); struct Odd; impl Mirror for Even { type Other = Odd; } impl Mirror for Odd { type Other = Even; } trait Dyn: AsRef<::Other> {} impl Dyn for Even {} impl AsRef for Even { fn as_ref(&self) -> &Even { self } } fn code(d: &dyn Dyn) -> &T::Other { d.as_ref() } fn main() { println!(\"{:?}\", code(&Even(22))); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. 
At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-9a2d688c3fa639e2118f85646c917c89d772eb29bb0de948a4a0b899e20bd1dd", "text": " fn t7p(f: impl Fn(B) -> C, g: impl Fn(A) -> B) -> impl Fn(A) -> C { move |a: A| -> C { f(g(a)) } } fn t8n(f: impl Fn(A) -> B, g: impl Fn(A) -> C) -> impl Fn(A) -> (B, C) where A: Copy, { move |a: A| -> (B, C) { let b = a; let fa = f(a); let ga = g(b); (fa, ga) } } fn main() { let f = |(_, _)| {}; let g = |(a, _)| a; let t7 = |env| |a| |b| t7p(f, g)(((env, a), b)); let t8 = t8n(t7, t7p(f, g)); //~^ ERROR: expected a `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)> } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. 
At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-8419251590cf9d5321270b80c25453db0c31bfa58fb418c234cddf2b1b6144ab", "text": " error[E0277]: expected a `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)>` --> $DIR/issue-59494.rs:21:22 | LL | fn t8n(f: impl Fn(A) -> B, g: impl Fn(A) -> C) -> impl Fn(A) -> (B, C) | ---------- required by this bound in `t8n` ... LL | let t8 = t8n(t7, t7p(f, g)); | ^^^^^^^^^ expected an `Fn<(_,)>` closure, found `impl Fn<(((_, _), _),)>` | = help: the trait `Fn<(_,)>` is not implemented for `impl Fn<(((_, _), _),)>` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. 
At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-f9539003620decdd46e47f872503fcda0271d127664e87f7c24b6244bd178262", "text": " // check-pass pub trait Trait1 { type C; } struct T1; impl Trait1 for T1 { type C = usize; } pub trait Callback: FnMut(::C) {} impl::C)> Callback for F {} pub struct State { callback: Option>>, } impl State { fn new() -> Self { Self { callback: None } } fn test_cb(&mut self, d: ::C) { (self.callback.as_mut().unwrap())(d) } } fn main() { let mut s = State::::new(); s.test_cb(1); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-cf7666fa3b3e5b6f70463b09e7f7979217170ef918eec8e9f6a707ff2a596bec", "query": "Discovered a compiler panic, thought it might relate to , but was told to post a new issue. In minimizing, it seems the generic types are required to trigger the panic, I was unable to reproduce when inlining t7p or t8n. This code panics on 1.33 - 1.35, but does not panic on 1.32 (): On 1.32: On 1.35 nightly: --Bryan\ntriage: P-high. Removing nominated tag since there's little to discuss beyond its P-highness. (The fact its unassigned will be discussed regardless of nomination state.)\nBisection points to i.e. as the culprit. The backtrace in that commit looks very similar to the one reported here, and the parent commit () gives the same compiler error.\nI spent some time looking at this a few weeks ago but did not manage to find anything conclusive. 
At this point I'm going to be taking a break for a few months, so I am unassigning myself from this issue.\ntriage: assigning self.\ntriage: downgrading to P-medium; this does not warrant revisiting every week.\nI was able to minify it a bit further:\nTriage: It's no longer ICE with the latest nightly.", "positive_passages": [{"docid": "doc-en-rust-4cc83a86d95b76ac372d489ef3ff944323dfd3254f3b86d7070803d49c555801", "text": " // check-pass fn any() -> T { loop {} } trait Foo { type V; } trait Callback: Fn(&T, &T::V) {} impl Callback for F {} struct Bar { callback: Box>, } impl Bar { fn event(&self) { (self.callback)(any(), any()); } } struct A; struct B; impl Foo for A { type V = B; } fn main() { let foo = Bar:: { callback: Box::new(|_: &A, _: &B| ()) }; foo.event(); } ", "commid": "rust_pr_78295"}], "negative_passages": []} {"query_id": "q-en-rust-4f0ae367abf9aab15a115ea33a3c3dcad49cf0f695cb1218708a7abb3b08a279", "query": "() Output: Version: 1.34.0 1.35.0 2019-04-15 nightly () Expected: Original report:\n26.0 is the first version that accepts this. 1.25.0 rejects it with:\n(not really a regression since more code now compiles)\nAdding T-compiler since is a built-in.\nOpened \u2014it should fix the issue.\ntriage: P-medium. 
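The regression test above builds `t7p` (function composition) and `t8n` (fan-out) and hits E0277 because the closure argument shapes do not line up. A correctly-typed sketch of the same two combinators (hypothetical names `compose` and `fanout`) that does compile:

```rust
// Composition: (f . g)(a) = f(g(a))
fn compose<A, B, C>(f: impl Fn(B) -> C, g: impl Fn(A) -> B) -> impl Fn(A) -> C {
    move |a| f(g(a))
}

// Fan-out: apply two functions to the same input; A: Copy because
// the argument is used twice.
fn fanout<A: Copy, B, C>(f: impl Fn(A) -> B, g: impl Fn(A) -> C) -> impl Fn(A) -> (B, C) {
    move |a| (f(a), g(a))
}

fn main() {
    let inc = |x: i32| x + 1;
    let dbl = |x: i32| x * 2;
    let h = compose(dbl, inc);      // (x + 1) * 2
    let p = fanout(h, |x: i32| -x); // ((x + 1) * 2, -x)
    println!("{:?}", p(3));
}
```

The minimized ICE case differs only in that its closures take tuple arguments of mismatched shape, which is what the later error message (rather than a panic) points out.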
I'm going to assume we don't need any future-compat machinery for a bug of this nature, unless the crater run on PR indicates that there are occurrences of this mis-use of in the wild.", "positive_passages": [{"docid": "doc-en-rust-920f23f558e96aca18948711ba00636edb55221b52cff92786272bb75c35b871", "text": " use errors::DiagnosticBuilder; use errors::{Applicability, DiagnosticBuilder}; use syntax::ast::{self, *}; use syntax::source_map::Spanned; use syntax::ext::base::*; use syntax::ext::build::AstBuilder; use syntax::parse::token; use syntax::parse::parser::Parser; use syntax::print::pprust; use syntax::ptr::P; use syntax::symbol::Symbol;", "commid": "rust_pr_60039"}], "negative_passages": []} {"query_id": "q-en-rust-4f0ae367abf9aab15a115ea33a3c3dcad49cf0f695cb1218708a7abb3b08a279", "query": "() Output: Version: 1.34.0 1.35.0 2019-04-15 nightly () Expected: Original report:\n26.0 is the first version that accepts this. 1.25.0 rejects it with:\n(not really a regression since more code now compiles)\nAdding T-compiler since is a built-in.\nOpened \u2014it should fix the issue.\ntriage: P-medium. I'm going to assume we don't need any future-compat machinery for a bug of this nature, unless the crater run on PR indicates that there are occurrences of this mis-use of in the wild.", "positive_passages": [{"docid": "doc-en-rust-98be43ee7c4ace576dcd3d529967d0dab7e316963b38529808ae4367fc3b28c5", "text": "return Err(err); } Ok(Assert { cond_expr: parser.parse_expr()?, custom_message: if parser.eat(&token::Comma) { let ts = parser.parse_tokens(); if !ts.is_empty() { Some(ts) } else { None } } else { None }, }) let cond_expr = parser.parse_expr()?; // Some crates use the `assert!` macro in the following form (note extra semicolon): // // assert!( // my_function(); // ); // // Warn about semicolon and suggest removing it. Eventually, this should be turned into an // error. 
if parser.token == token::Semi { let mut err = cx.struct_span_warn(sp, \"macro requires an expression as an argument\"); err.span_suggestion( parser.span, \"try removing semicolon\", String::new(), Applicability::MaybeIncorrect ); err.note(\"this is going to be an error in the future\"); err.emit(); parser.bump(); } // Some crates use the `assert!` macro in the following form (note missing comma before // message): // // assert!(true \"error message\"); // // Parse this as an actual message, and suggest inserting a comma. Eventually, this should be // turned into an error. let custom_message = if let token::Literal(token::Lit::Str_(_), _) = parser.token { let mut err = cx.struct_span_warn(parser.span, \"unexpected string literal\"); let comma_span = cx.source_map().next_point(parser.prev_span); err.span_suggestion_short( comma_span, \"try adding a comma\", \", \".to_string(), Applicability::MaybeIncorrect ); err.note(\"this is going to be an error in the future\"); err.emit(); parse_custom_message(&mut parser) } else if parser.eat(&token::Comma) { parse_custom_message(&mut parser) } else { None }; if parser.token != token::Eof { parser.expect_one_of(&[], &[])?; unreachable!(); } Ok(Assert { cond_expr, custom_message }) } fn parse_custom_message<'a>(parser: &mut Parser<'a>) -> Option { let ts = parser.parse_tokens(); if !ts.is_empty() { Some(ts) } else { None } }", "commid": "rust_pr_60039"}], "negative_passages": []} {"query_id": "q-en-rust-4f0ae367abf9aab15a115ea33a3c3dcad49cf0f695cb1218708a7abb3b08a279", "query": "() Output: Version: 1.34.0 1.35.0 2019-04-15 nightly () Expected: Original report:\n26.0 is the first version that accepts this. 1.25.0 rejects it with:\n(not really a regression since more code now compiles)\nAdding T-compiler since is a built-in.\nOpened \u2014it should fix the issue.\ntriage: P-medium. 
I'm going to assume we don't need any future-compat machinery for a bug of this nature, unless the crater run on PR indicates that there are occurrences of this mis-use of in the wild.", "positive_passages": [{"docid": "doc-en-rust-b6d293b70e1d94f35c7b07db72827ee7fecb9b99131d8ee8ead7baaf358af431", "text": " // Ensure assert macro does not ignore trailing garbage. // // See https://github.com/rust-lang/rust/issues/60024 for details. fn main() { assert!(true some extra junk, \"whatever\"); //~^ ERROR expected one of assert!(true some extra junk); //~^ ERROR expected one of assert!(true, \"whatever\" blah); //~^ ERROR no rules expected assert!(true \"whatever\" blah); //~^ WARN unexpected string literal //~^^ ERROR no rules expected assert!(true;); //~^ WARN macro requires an expression assert!(false || true \"error message\"); //~^ WARN unexpected string literal } ", "commid": "rust_pr_60039"}], "negative_passages": []} {"query_id": "q-en-rust-4f0ae367abf9aab15a115ea33a3c3dcad49cf0f695cb1218708a7abb3b08a279", "query": "() Output: Version: 1.34.0 1.35.0 2019-04-15 nightly () Expected: Original report:\n26.0 is the first version that accepts this. 1.25.0 rejects it with:\n(not really a regression since more code now compiles)\nAdding T-compiler since is a built-in.\nOpened \u2014it should fix the issue.\ntriage: P-medium. 
I'm going to assume we don't need any future-compat machinery for a bug of this nature, unless the crater run on PR indicates that there are occurrences of this mis-use of in the wild.", "positive_passages": [{"docid": "doc-en-rust-d2c2d123ba3cacb41d1561389fdeebdd60a273c877783979b1fd114353af6d29", "text": " error: expected one of `,`, `.`, `?`, or an operator, found `some` --> $DIR/assert-trailing-junk.rs:6:18 | LL | assert!(true some extra junk, \"whatever\"); | ^^^^ expected one of `,`, `.`, `?`, or an operator here error: expected one of `,`, `.`, `?`, or an operator, found `some` --> $DIR/assert-trailing-junk.rs:9:18 | LL | assert!(true some extra junk); | ^^^^ expected one of `,`, `.`, `?`, or an operator here error: no rules expected the token `blah` --> $DIR/assert-trailing-junk.rs:12:30 | LL | assert!(true, \"whatever\" blah); | -^^^^ no rules expected this token in macro call | | | help: missing comma here warning: unexpected string literal --> $DIR/assert-trailing-junk.rs:15:18 | LL | assert!(true \"whatever\" blah); | -^^^^^^^^^^ | | | help: try adding a comma | = note: this is going to be an error in the future error: no rules expected the token `blah` --> $DIR/assert-trailing-junk.rs:15:29 | LL | assert!(true \"whatever\" blah); | -^^^^ no rules expected this token in macro call | | | help: missing comma here warning: macro requires an expression as an argument --> $DIR/assert-trailing-junk.rs:19:5 | LL | assert!(true;); | ^^^^^^^^^^^^-^^ | | | help: try removing semicolon | = note: this is going to be an error in the future warning: unexpected string literal --> $DIR/assert-trailing-junk.rs:22:27 | LL | assert!(false || true \"error message\"); | -^^^^^^^^^^^^^^^ | | | help: try adding a comma | = note: this is going to be an error in the future error: aborting due to 4 previous errors ", "commid": "rust_pr_60039"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the 
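The patch above teaches `assert!` to reject trailing junk and to warn (pending a future error) on a stray semicolon or a missing comma before the message. A sketch of the accepted invocation grammar (`checked_div` is an illustrative helper, not from the PR):

```rust
// Accepted forms: assert!(expr) or assert!(expr, format-args...).
// Previously-silent forms like `assert!(cond "message")` (missing comma)
// or `assert!(cond;)` (stray semicolon) now draw a warning.
fn checked_div(a: i32, b: i32) -> i32 {
    // Condition, then a comma, then an optional format message.
    assert!(b != 0, "division by zero: {} / {}", a, b);
    a / b
}

fn main() {
    assert!(checked_div(10, 2) == 5);
    println!("ok");
}
```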
following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). 
Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-f8365cbc449518be5c8403dc1a55d408d694025607d731363cced19be4e2ecec", "text": "let predicates_of = tcx.predicates_of(def_id); debug!(\"instantiate_opaque_types: predicates={:#?}\", predicates_of,); let bounds = predicates_of.instantiate(tcx, substs); let param_env = tcx.param_env(def_id); let InferOk { value: bounds, obligations } = infcx.partially_normalize_associated_types_in(span, self.body_id, param_env, &bounds); self.obligations.extend(obligations); debug!(\"instantiate_opaque_types: bounds={:?}\", bounds); let required_region_bounds = tcx.required_region_bounds(ty, bounds.predicates.clone());", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. 
EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. 
I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-14366c5b28d93e94f1962931cbaf72dfa60a2100817687a8d465a466fe139c5c", "text": "&declared_ret_ty, decl.output.span(), ); debug!(\"check_fn: declared_ret_ty: {}, revealed_ret_ty: {}\", declared_ret_ty, revealed_ret_ty); fcx.ret_coercion = Some(RefCell::new(CoerceMany::new(revealed_ret_ty))); fn_sig = fcx.tcx.mk_fn_sig( fn_sig.inputs().iter().cloned(),", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . 
I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-3afaaac7632489f05781ff9ab3653b790e10ff78b3a811e4dca6086624be72d7", "text": " // check-pass // edition:2018 #![feature(async_await)] // See issue 60414 trait Trait { type Assoc; } async fn foo>() -> T::Assoc { () } fn main() {} ", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. 
Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. 
I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-67762d1cc9e0663f06f2ab3f603435cfbd80fbce8b3b1deb1667255d4924c360", "text": " // compile-fail // edition:2018 #![feature(async_await)] #![feature(existential_type)] #![feature(impl_trait_in_bindings)] //~^ WARNING the feature `impl_trait_in_bindings` is incomplete // See issue 60414 ///////////////////////////////////////////// // Reduction to `impl Trait` struct Foo(T); trait FooLike { type Output; } impl FooLike for Foo { type Output = T; } mod impl_trait { use super::*; trait Trait { type Assoc; } /// `T::Assoc` can't be normalized any further here. fn foo_fail() -> impl FooLike { //~^ ERROR: type mismatch Foo(()) } } ///////////////////////////////////////////// // Same with lifetimes in the trait mod lifetimes { use super::*; trait Trait<'a> { type Assoc; } /// Missing bound constraining `Assoc`, `T::Assoc` can't be normalized further. fn foo2_fail<'a, T: Trait<'a>>() -> impl FooLike { //~^ ERROR: type mismatch Foo(()) } } fn main() {} ", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. 
Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. 
I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-e1fd038e5f5ab5eab2b98dbf518104bdcbba1dcd6d6fc336f3717af46c941884", "text": " warning: the feature `impl_trait_in_bindings` is incomplete and may cause the compiler to crash --> $DIR/bound-normalization-fail.rs:6:12 | LL | #![feature(impl_trait_in_bindings)] | ^^^^^^^^^^^^^^^^^^^^^^ error[E0271]: type mismatch resolving ` as FooLike>::Output == ::Assoc` --> $DIR/bound-normalization-fail.rs:30:32 | LL | fn foo_fail() -> impl FooLike { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected (), found associated type | = note: expected type `()` found type `::Assoc` = note: the return type of a function must have a statically known size error[E0271]: type mismatch resolving ` as FooLike>::Output == >::Assoc` --> $DIR/bound-normalization-fail.rs:47:41 | LL | fn foo2_fail<'a, T: Trait<'a>>() -> impl FooLike { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected (), found associated type | = note: expected type `()` found type `>::Assoc` = note: the return type of a function must have a statically known size error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0271`. ", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. 
Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. 
I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-39b904afdd378ff36d7427ab0d6f6072e8d14cfe564ee6f854d110927bfaf4c3", "text": " // check-pass // edition:2018 #![feature(async_await)] #![feature(existential_type)] #![feature(impl_trait_in_bindings)] //~^ WARNING the feature `impl_trait_in_bindings` is incomplete // See issue 60414 ///////////////////////////////////////////// // Reduction to `impl Trait` struct Foo(T); trait FooLike { type Output; } impl FooLike for Foo { type Output = T; } mod impl_trait { use super::*; trait Trait { type Assoc; } /// `T::Assoc` should be normalized to `()` here. fn foo_pass>() -> impl FooLike { Foo(()) } } ///////////////////////////////////////////// // Same with lifetimes in the trait mod lifetimes { use super::*; trait Trait<'a> { type Assoc; } /// Like above. /// /// FIXME(#51525) -- the shorter notation `T::Assoc` winds up referencing `'static` here fn foo2_pass<'a, T: Trait<'a, Assoc=()> + 'a>( ) -> impl FooLike>::Assoc> + 'a { Foo(()) } /// Normalization to type containing bound region. 
/// /// FIXME(#51525) -- the shorter notation `T::Assoc` winds up referencing `'static` here fn foo2_pass2<'a, T: Trait<'a, Assoc=&'a ()> + 'a>( ) -> impl FooLike>::Assoc> + 'a { Foo(&()) } } ///////////////////////////////////////////// // Reduction using `impl Trait` in bindings mod impl_trait_in_bindings { struct Foo; trait FooLike { type Output; } impl FooLike for Foo { type Output = u32; } trait Trait { type Assoc; } fn foo>() { let _: impl FooLike = Foo; } } ///////////////////////////////////////////// // The same applied to `existential type`s mod existential_types { trait Implemented { type Assoc; } impl Implemented for T { type Assoc = u8; } trait Trait { type Out; } impl Trait for () { type Out = u8; } existential type Ex: Trait::Assoc>; fn define() -> Ex { () } } fn main() {} ", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-6a1c2eb00a3d43b46a981382040350f17e077bc813aaaee288eea0cd6b71f8fb", "query": "With the following code: comes the following error: However, this function compiles correctly without the keyword.\ncc\nYou can trigger this without async/await using : cc\nSome more reduction in genericity and clearing up a few things: The problem is that is not know in impl trait items. It's also not related to the return position, but refers to any impl trait in the function. Outside of impl trait can be resolved without a problem, so it must be something related to the way impl trait reuses the generics of its parent.\nIt seems to be a missing normalization, most likely.\nI have a local fix for reduction above, but is making this rather difficult as it makes output not work properly.\nThis also happens for definitions: My fix only seems to turn this into a cycle error for some reason. EDIT: Nevermind I just broke existential types in general lol\nMarking as blocking, although I think there's minimal future compat risk. If push came to shove I personally would be ok with removing this from the blocking list. 
But then it looks like already fixed it =)\nI think the problem is on this line of code: in particular, we \"instantiate\" the predicates -- meaning, substitute in the values for their type parameters -- but we never normalize them. We should be able to invoke the methods. Note that in this code we do have an available, as well, in . I think we should be able to invoke: this returns an value, which contains obligations that must be pushed into .\nHmm, that code is only invoked if is true, which I think wouldn't be the case here since there are no region bounds at all (right?). I also noticed that does not exist in this context ( is an ), and there's no nearby either. However, another place where is called without normalizing the result is here, further down in the same file: there does exist (as does ). Quick testing shows that normalizing there fixes this issue as well, and as a bonus also fixes the issue with without ICEs!\ngood catch -- actually I think the lines you found are the ones I meant to direct you to in the first place. I see you have some other problems on the PR, looking now.", "positive_passages": [{"docid": "doc-en-rust-1530bcc5a2c04dba94dd3842917b9fdf47317666cf73815aa9f4cc31844ea6c1", "text": " warning: the feature `impl_trait_in_bindings` is incomplete and may cause the compiler to crash --> $DIR/bound-normalization-pass.rs:6:12 | LL | #![feature(impl_trait_in_bindings)] | ^^^^^^^^^^^^^^^^^^^^^^ ", "commid": "rust_pr_62221"}], "negative_passages": []} {"query_id": "q-en-rust-2bd015bcbdef3c2939019a5e62cfd0e368e124a5fabbc2ef06ed3321bd6074d2", "query": "I was using nightly rust extensively for its async/await and specialization feature. After a recent update (nightly-2019-05-02), a new ICE introduced with following logs: After a quick search it seems like is the only in direct cause (stack frame 10). ~I cannot share the code or produce a minimal case because the code base is rather big.~\nI've the minimal case for reproducing. 
Hope that would help!\nThanks! Looks like even this line is enough to trigger the ICE:\n: rustc complains you should have variable mut although it is already mut.\nThat's and should be fixed in the next nightly\ncc it looks like the desugaring may have broken in argument-position?\nOops, sorry. I'll take a look.\nDenominating since already has a fix\nThanks guys! This is a super fast fix!", "positive_passages": [{"docid": "doc-en-rust-311977dc66962e3e6a6e263fe17eae752ed651680cef89cd486b0c343d98446d", "text": "visit::walk_generics(this, generics); // Walk the generated arguments for the `async fn`. for a in arguments { for (i, a) in arguments.iter().enumerate() { use visit::Visitor; if let Some(arg) = &a.arg { this.visit_ty(&arg.ty); } else { this.visit_ty(&decl.inputs[i].ty); } }", "commid": "rust_pr_60527"}], "negative_passages": []} {"query_id": "q-en-rust-2bd015bcbdef3c2939019a5e62cfd0e368e124a5fabbc2ef06ed3321bd6074d2", "query": "I was using nightly rust extensively for its async/await and specialization feature. After a recent update (nightly-2019-05-02), a new ICE introduced with following logs: After a quick search it seems like is the only in direct cause (stack frame 10). ~I cannot share the code or produce a minimal case because the code base is rather big.~\nI've the minimal case for reproducing. Hope that would help!\nThanks! Looks like even this line is enough to trigger the ICE:\n: rustc complains you should have variable mut although it is already mut.\nThat's and should be fixed in the next nightly\ncc it looks like the desugaring may have broken in argument-position?\nOops, sorry. I'll take a look.\nDenominating since already has a fix\nThanks guys! 
This is a super fast fix!", "positive_passages": [{"docid": "doc-en-rust-2c7c2d2d47b4659ef0926f3032e01b9b4b9625448aec7f00cd5533878fe76407", "text": " // compile-pass // edition:2018 #![feature(async_await)] // This is a regression test to ensure that simple bindings (where replacement arguments aren't // created during async fn lowering) that have their DefId used during HIR lowering (such as impl // trait) are visited during def collection and thus have a DefId. async fn foo(ws: impl Iterator) {} fn main() {} ", "commid": "rust_pr_60527"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-c87e56b9ec5932fc5ed1ab6882ae354a1ea1ed29f99a26a336848953eab95f26", "text": "if !fail { return None; } bug!(\"unexpected const parent path def {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent path def {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-e3aca3629cb79476ab44ed412b8d6e3e0358ade864bfbcc9e4ae073a809604f6", "text": "if !fail { return None; } bug!(\"unexpected const parent path {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent path {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. 
should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-db81a1c26ac25aac70c3ce011770327849ed73264a44b2dd7ae807be7239af72", "text": "if !fail { return None; } bug!(\"unexpected const parent in type_of_def_id(): {:?}\", x); tcx.sess.delay_span_bug( DUMMY_SP, &format!( \"unexpected const parent in type_of_def_id(): {:?}\", x ), ); tcx.types.err } } }", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-626559eb02689c187aba1f0140e4bfcf5a4b68a2bf73ec33a51c148f6d4a9afa", "text": " #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash // We should probably be able to infer the types here. However, this test is checking that we don't // get an ICE in this case. It may be modified later to not be an error. struct Foo(pub [u8; NUM_BYTES]); fn main() { let _ = Foo::<3>([1, 2, 3]); //~ ERROR type annotations needed } ", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. 
Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-4e0475c61ae6004c2ed012dbf18f3e83b516568b67f33a77afc0e54ea4274e20", "text": " warning: the feature `const_generics` is incomplete and may cause the compiler to crash --> $DIR/cannot-infer-type-for-const-param.rs:1:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ error[E0282]: type annotations needed --> $DIR/cannot-infer-type-for-const-param.rs:10:19 | LL | let _ = Foo::<3>([1, 2, 3]); | ^ cannot infer type for `{integer}` error: aborting due to previous error For more information about this error, try `rustc --explain E0282`. ", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-1e1dfbad1ab9acf8abb0446035681f29f8b6d050640f182b063c4836daa91a62", "text": " use std::convert::TryInto; struct S; fn main() { let _: u32 = 5i32.try_into::<32>().unwrap(); //~ ERROR wrong number of const arguments S.f::<0>(); //~ ERROR no method named `f` S::<0>; //~ ERROR wrong number of const arguments } ", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-0430da44e5f0c0cd5ec3e6ab63520772d61c140d8254453f233ecda5e0243e82", "query": "Following code sample causes a compiler panic. should be .\nAlso happens on nightly. 
Likely due to const generics changes (cc\nMinimal reproduction:\nAnother case:", "positive_passages": [{"docid": "doc-en-rust-b8ecddd76d1eec430f123f0f1f5941a8ea8f4b8b63cd710b11c2b83d27c11f3c", "text": " error[E0107]: wrong number of const arguments: expected 0, found 1 --> $DIR/invalid-const-arg-for-type-param.rs:6:34 | LL | let _: u32 = 5i32.try_into::<32>().unwrap(); | ^^ unexpected const argument error[E0599]: no method named `f` found for type `S` in the current scope --> $DIR/invalid-const-arg-for-type-param.rs:7:7 | LL | struct S; | --------- method `f` not found for this ... LL | S.f::<0>(); | ^ error[E0107]: wrong number of const arguments: expected 0, found 1 --> $DIR/invalid-const-arg-for-type-param.rs:8:9 | LL | S::<0>; | ^ unexpected const argument error: aborting due to 3 previous errors Some errors have detailed explanations: E0107, E0599. For more information about an error, try `rustc --explain E0107`. ", "commid": "rust_pr_60710"}], "negative_passages": []} {"query_id": "q-en-rust-433c857e9cbd27af819de118347a1b44e783c0da5549df74ff38b79cab88a35e", "query": "shows that the impl of is not idempotent. That is, if fails, the is left in an \"un-droppable\" state, and trying to re-drop the vector invokes undefined behavior - that is, the vector must be leaked. It might be possible to make it idempotent without adding significant overhead [0], but I don't know whether we should do this. I think we should be clearer about whether the impl of a type is idempotent or not, since making the wrong assumption can lead to UB, so I believe we should document this somewhere. We could document this for the type, but maybe this can also be something that can be documented at the top of the // crates for all types (e.g. impls of standard types are not idempotent). 
[0] Maybe setting the vector field to zero before dropping the slice elements and having the impl of set the capacity to zero before exiting is enough to avoid trying to re-drop the elements or trying to deallocate the memory (which is always freed on the first drop attempt).\ncc\n\"It's not idempotent\" should be the safe default assumption, and I know of no case so far where we've documented that a drop glue is idempotent. We might want to mention the general principle in suitable places (e.g., nomicon, UCG) but I don't think we should repeat \"as usual, double-dropping this type causes UB\" again on all specific types. We should only document the exceptions, if any. (I am also quite unsure whether going through the effort of guaranteeing that certain types' drop glue is idempotent is a good use of time, as it seems extremely niche and I don't know any use cases for that. But that just means I personally won't invest time in that discussion.)\nI think this is not an issue with . Rather, any unsafe code needs to be careful about panic safety, especially if it is also generic. FWIW I've always assumed that double drop could be as much a memory safety issue as use-after-free. (\"One and a half\" drop even more so.) I don't think idempotency should be expected for any type, but that sounds like a decision for more than\nFully agreed. The best you can hope for, IMO, is that a type will make an effort to drop as much as possible even when there's a panic during dropping. actually does that by using the slice drop glue, which, if dropping one element panics, will keep dropping the other elements. This seems more useful than allowing idempotent dropping (it minimizes leakage even without any being involved), and it makes idempotent dropping unnecessary (if you catch a panic, the involved types already did everything they can to drop as much as possible, so there's no point in calling drop again).
I think it makes more sense to improve panic-resilience of our drop impls than to make them idempotent. For example, AFAIK drops two 's; if dropping the first panics, the second one will be leaked. This could be improved. One interesting question I see here is the interaction with . Does deallocate the backing store if dropping one of the elements panicked? If yes, is that a violation of the drop guarantee (assuming we extended with pinning projections -- )?\nToday double-drops are not a form of undefined behavior. They could, however, lead to undefined behavior depending on the types involved and how is implemented for those types. As part of the UCGs one could try to make double-drops UB per se. That would mean that unsafe code needs to make sure that double-drops don't happen, period. That might be a breaking change. If we don't do that, then we have some types for which double-drops are ok, and some types for which they are not, and the only way to tell is either by reading the documentation, or inspecting the type's source code. I'd rather not recommend people to rely on what the source code does. It suffices that one crate starts relying on some impl being \"accidentally idempotent\" in libstd today, for us to not be able to evolve that impl in the future without breaking code. A \"Unless stated otherwise, impls of standard library types are not guaranteed to be idempotent, and if they are, we reserve the right to change that without maintaining backwards compatibility\" note somewhere might be enough to prevent that. I expect that to happen in an idempotent impl for as well. Right now, the issue is that all elements that can be dropped are dropped, the backing allocation of the Vec is deallocated, but the len of the Vec is not set to zero, so a double drop will try to double drop the elements again (and then the drop impl of RawVec will be invoked, whose capacity has not been set to zero, which will try to double-free memory). Yes.
When drop panics, the destructors of the fields are invoked. For Vec this means that the contained RawVec is dropped, which deallocates the backing storage of the Vec without setting the capacity of the Vec to zero. AFAICT, no. If you have a the only thing that will be deallocated without running destructors when that is dropped is the storage of the . Since the destructor of the does not run, the destructor of the does not run, and the backing allocation of the gets leaked, which is fine (some other thread could still write to it).\nhttps://rust- (both the proposal that is now implemented and the description of the status quo before that) is relevant to the efforts that the language makes to not let double- happen in safe Rust. This guarantee has existed for years, since before 1.0, so writers of unsafe code rely on it. Although calling twice is not inherently UB in the Rust language, many unsafe libraries are written with the assumption that it never happens. Therefore, writers of generic unsafe code should assume that double drop of a value of a type parameter can cause UB. Calling twice with the Unix system allocator causes a double , which is UB. But is only an example.\nDrop is not the only operation that has such issues; pretty much any function you call can be UB or not under certain circumstances depending on implementation details. One example is e.g. relying on not to reallocate -- if we did a before, is that guaranteed? What if we did and then ? And so on. If we really consider such details to be stable just because they are observable by \"this program was not UB but now it is\", we'd have to bake an abstract model of all of these data structures into our operational semantics, just to make some more code UB. I think that is a bad approach. UB is for enabling compiler optimizations. It is not for catching clients that exploit unstable details of the current implementation of some library.
Conflating these two problems makes the definition of UB much, much more complicated, I think that's a mistake. Just because some interaction with is not UB currently, doesn't mean we are not allowed to ever change it to be UB. There is no implicit stabilization of the full set of UB-free interactions. I agree catching such \"overfit\" clients is an important problem, but let's keep that library-level discussion separate from the language-level discussion of what is and is not UB. I don't understand why you'd even want to call again, if we already agree that the first should do the maximal amount of dropping that it can. You are basically saying \"make 's drop idempotent by making the second a NOP\". I don't see the point. Seems like you want to write generic code that drops stuff again if the first drop panicked. But what would be a situation where that is ever a good idea? From all I can see, the only place where this can help is if the second drops stuff that the first \"missed\" because of the panic. I am saying, if that is the problem, then fix to not \"miss\" stuff. This misses the point, because . The interesting case is a . Then we could pin the , and from our get a , and insert that into the list. Now we are in a situation where if the 's backing store gets deallocated without dropping the , we have a safety violation. But it seems to me if can guarantee that it itself does not panic, then we are okay: if an earlier element in the list panics while dropping (maybe it's a heterogeneous list through an or trait objects), we know we still get dropped properly.\nHow can you drop the while s into the vector are still live ? The documentation of guarantees these details, e.g., see , so AFAICT Rust unsafe code can rely on these details, and breaking that code would be an API breaking change.\nI was talking about the implicit drop that happens when the vector goes out of scope. I am aware. 
When libraries decide to document such details, clients may of course rely on them. has no such section even though many similar questions apply; thus, clients may not rely on the same properties for even if they happen to be true currently. This matches the situation for idempotent drop: clients may rely on this if and only if the library decides to document this as a guarantee. I don't think we should change our language spec to detect clients relying on implementation details, and similarly, I don't think we should change our language spec to detect clients that rely on some being idempotent even though that is not documented (e.g., double-drop of an empty that had called on it).\nThe problem is that safe Rust can be called from Rust, and it isn't clear to me from that RFC whether writers of code can rely on other code not performing a double-drop. It isn't clear either whether code can assume that double-dropping something is ok.\nAre you saying that this is analogous to whether users should be able to rely on double-drops invoking / not invoking UB? These data-structure properties feel quite obvious to me, but I have no idea how users can today learn that, at least when using the libstd types, they should always assume that is not idempotent unless a type guarantees otherwise. AFAICT they can just try it for some type, and if it works, deduce that it is ok. Then they publish their crate, and some time later we break their code. Saying that \"we did not guarantee that it worked anywhere\" does not change the fact that the code now is broken, and we are not warning them about this either. So even if we might be right in that technically we are allowed to break that code, it might turn out that in practice, now we cannot, and that user has somehow managed to, by accident, specify that double-drops for some type must be ok. 
We'd have a stronger case if we explicitly call this out in the docs, warn when users do this (or panic or similar), etc.\nBoth seem equally obvious to me. We also don't document that you cannot to a vector after dropping it. leaves the data structure in a \"deinitialized\" state in which no further operation can be performed on it. I don't understand why you or anyone else might think that dropping would be special in the sense of being the only operation that you may call on such a \"deinitialized\" data structure. I mean, nobody thinks that you can an already free'd piece of the heap either, right? Surely we don't have to exhaustively list everything we do not guarantee. That could get exhausting indeed... I'm all for being more explicit, and in that sense I am okay with noting this explicitly somewhere. But in no way does that mean that other undocumented assumptions can randomly be conjured. (This is how the discussion at hand feels to me: coming up with a random assumption and then arguing that just because nothing says that you can not assume this, hence you can. I understand you do not consider it random, but I see nothing distinguishing this from any other not-documented-not-to-hold assumption that people might come up with.)\nfurther operation can be performed on it. AFAICT a second attempt to drop a value only makes sense if the first one failed (which is what the issue cited in the top comment does). You seem to be arguing that a drop that fails (panics) actually deinitializes the value and therefore trying to re-drop isn't a logical operation to perform. If this is the case, to me this implies that a drop that fails actually succeeds, such that drops are infallible, which makes me wonder why we allow drop to panic at all?\nI mostly agree with especially wrt. turning \"unstated non-guarantees\" through random assumptions into guarantees.
However, I think if there are reasonable assumptions that could be made that we don't want to guarantee, it's a practical idea to disabuse users of those assumptions.\nNote however that themselves demonstrated that currently double-drops (of Vecs) don't work -- elements get double-dropped on the second drop. That means if a hypothetical user actually tries to determine this behavior experimentally, they'd quickly find double frees and similar issues if they are at all diligent.\nthat holds as long as users try it with types for which this is the case. Maybe you can explain to me how this assumption feels random to you (and maybe the others). Rust allows fallible drop, that is, drop can fail. I don't understand how from this it follows that, because drop failed, one cannot attempt to drop the value again. The only idea I have about how this could be the case is if drop is infallible, that is, even if drop fails, the value is dropped, but then I don't understand why we allow drops to fail in the first place. AFAICT, drop failed, therefore the value was not dropped, therefore I can try to drop it again, and that is not a double-drop because the value wasn't dropped in the first place. It is just a second attempt to drop a value, and this issue is about whether attempting to drop should be idempotent until drop succeeds (once the value is dropped, you no longer own it, so attempting to drop further is a use-after-move kind of UB AFAICT).\nMy impression is that it's been considered bad form in general to panic in a implementation, if only because it causes an abort if the drop occurred during unwinding. It's allowed just because the alternative (always aborting when a impl panics) is worse.\n...however, this could certainly be documented better. Currently the has an oddly worded note: I think it's trying to explain that double panic = abort, but as worded it doesn't really make sense.
It should probably explain that more clearly, and also explicitly state that panicking in is discouraged.\nIt should also cover the case in which the double panic does not happen, and suggest what users should assume if the details of a particular drop implementation are not guaranteed. For example, that a dropped value for which drop fails must, in general, be leaked, because it is, in general, left in an unspecified partially-dropped state which is neither \"initialized\" nor \"deinitialized\" and which does not, in general, support an attempt to re-drop the value or any other operation. How bad would it be to make drop impls ?\nWell, at least I am arguing that that might be the case. You have no way to tell. Would you expect it to be legal to to a vector after panicked? I guess not. Same for calling again. \"panic\" doesn't mean \"I failed in a clean way and nothing happened\", it means \"something went wrong half-way through this operation and any part of it might or might not have happened\".\nWhy not? If the value was not dropped, then I'd expect that to work just like pushing into an empty vector would.\nFor , on , the memory is deallocated, but the length and capacity of the vector are not set to . Are you saying that it is ok to leave safe abstractions over unsafe code in such an invalid state when a happens?\n\"panic during drop\" is very different from \"not dropped\". This is true for any operation. I don't understand why you would assume that an operation that panicked had no effect and thus can be considered as having not happened. You keep just changing my words \"drop panicked\" into \"not dropped\". Please don't. :) I think it is okay to leave safe abstractions in an invalid state when completes (whether that is return or unwind), because we can rely on this being the last operation. This is actually an advantage of Rust over C++, we don't need run-time tracking for this as we have proper static move tracking.
would be the only operation where the \"return\" and \"unwind\" ways of completing would have widely different guarantees. I see no grounds for assuming it is special.\nWhere are you getting the impression that I assume that? How could you make have no effect? AFAICT you would need to revert the of the already dropped elements of the vector, which is not possible. How can we rely on this? The example I show using corrupts memory at run-time, and is neither rejected statically, nor at run-time (e.g. via a proper run-time error).\nYou keep saying \"not dropped\" for \"drop panicked\", making it sound as if a drop-that-panicked was the same thing as if drop had never been called. I am sorry if I misunderstood you here, but I also don't know any other way to interpret your words. The Rust compiler ensures that is called exactly once. It inserts boolean variables to track this at run-time if necessary. Calling manually from safe code is impossible (even though its signature looks like it should be possible). Sure, unsafe code can circumvent this. Needless to say, unsafe code can circumvent any compiler-enforced restriction. But \"drop is called exactly once\" has been deeply entrenched in the guarantees and considerations around dropping in Rust as far back as I can remember. There is plenty of unsafe code in libstd that relies on this. This is as close to a guarantee as we get without formal proofs. The docs for say \"using the pointed-to value after calling drop_in_place can cause undefined behavior\". It doesn't have an exception for when this operation panicked.\nI am repeating myself here, but just to be clear: I am all for improving the docs. But I find it implausible to assume that doing double-drops could ever be legal (in generic code). There is plenty of evidence for code relying on no double drops happening, there is plenty of mechanism to make sure this doesn't happen in safe code, says so in the docs.
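The compiler-inserted drop flags mentioned above can be observed indirectly: a value that is conditionally moved out is still dropped exactly once on every path, never twice. A small sketch with illustrative names:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Tracked;
impl Drop for Tracked {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn maybe_move(cond: bool) {
    let t = Tracked;
    if cond {
        drop(t); // moved out on this path; a run-time drop flag
                 // records that `t` must not be dropped again below
    }
    // when cond is false, `t` is dropped here at scope end instead
}

fn main() {
    let before = DROPS.load(Ordering::SeqCst);
    maybe_move(true);
    maybe_move(false);
    // Exactly one destructor call per value, on either path.
    assert_eq!(DROPS.load(Ordering::SeqCst) - before, 2);
}
```

If drop were idempotent, this tracking (and the old memory-overwriting scheme the comment recalls) would indeed be pointless.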
I don't even know of a in libstd that would be safe to call twice (probably there are some). As a random example, is also not idempotent and might release a lock not even held by the current thread if it gets called a second time. The run-time tracking of the \"move\" state that the compiler does would literally be useless if drop was idempotent! Heck, before we had proper move tracking we even had a compiler-implemented mechanism to make drop idempotent (overwriting memory with some bit pattern, I forgot the details) and we got rid of it because of its unnecessary run-time cost. It feels to me like you are altering the deal here, and the deal has been set and clear for many years. We should discuss how to improve the docs because clearly it is still possible to assume that drop might be idempotent , but there is no point discussing the deal itself. Unfortunately, I do not understand at all where you got the idea from that drop might be idempotent, so I don't know where the docs could be improved to fix this. This issue has nothing to do with , it is more general. Where would you expect such things to be said? In ? We basically already say that there but maybe the case where drop panics could be called out more explicitly. Or maybe the Nomicon should have a chapter on dropping?\nBy \"not dropped\" I precisely mean \"a value was not dropped\". A drop-that-panics is a drop that fails, which is not the same as if drop was never called (how could it panic if it was never called?). I assumed a couple of things. First, that a drop that fails does not successfully drop a value, and since the value has not been dropped, code is allowed to try to drop it again. Again, this does not mean that I thought that a failed call to would have no effect here. It can have an effect, but that effect is not \"the value is dropped\" if the call fails - the effect is just \"trying to drop the value failed\", which is different from \"drop was never called\". 
Nothing anywhere taught me otherwise. For me, it just feels obvious that a value that has not been dropped (which is what AFAICT happens if calling for it fails), can be dropped just like any other value (unless these values are special somehow). Second, I assumed that correct safe abstractions over unsafe code made sure this works. My own abstractions do this, e.g., is left in a valid but unspecified state if fails (len = capacity = 0, ptr = null, just like ). This means that, after for fails, you can call or the value without invoking undefined behavior. Last weekend, I one of the test to last week, and this test ran slower. The reason was that does not put itself into a valid state after fails, and this results in UB in the example shown, while was putting itself into a valid state in a sub-optimal way (enough to be noticeable, and easy to fix). I then asked on Discord why that was, and whether the Drop impl for Vec has a bug, or whether Drop impls never have to account for that, and the discussion was that for is not idempotent, but whether this is by design or a bug was not clear, so I filed this bug.\nIt seems, in other words, that you assumed the drop glue is exception safe in a fairly strong way (even when it fails, it upholds the safety invariants necessary for dropping). Of course, as you might guess from my phrasing, that seems like a pretty big and unwarranted assumption to me. We don't generally expect any Rust code to be very exception safe (that's why we have marker traits like ). Some basic exception safety is required from unsafe code to ensure that safe code mustn't be able to cause UB just by catching a panic, but no more. Yet to run the drop glue again after the drop glue fails, you have to dip into unsafe code, so that principle doesn't apply.\nThe whole safe API is exception safe in such a strong way (e.g. ).
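The "valid but unspecified state" pattern the commenter describes (resetting fields before the fallible part of the destructor runs) can be sketched as follows. `Bucket` is an illustrative type, not the abstraction from the issue:

```rust
use std::mem;

// A container whose Drop puts `self` into a valid (empty) state
// *before* running the fallible element drops, so that even a panic
// mid-drop leaves the struct's fields in a consistent state.
struct Bucket {
    items: Vec<String>,
}

impl Bucket {
    fn len(&self) -> usize {
        self.items.len()
    }
}

impl Drop for Bucket {
    fn drop(&mut self) {
        // Move the contents out first; `self.items` is now a fresh
        // empty Vec, analogous to the len = capacity = 0, ptr = null
        // reset described above.
        let items = mem::take(&mut self.items);
        debug_assert!(self.items.is_empty());
        drop(items); // if this panicked, `self` would still be valid
    }
}

fn main() {
    let b = Bucket { items: vec!["a".to_string(), "b".to_string()] };
    assert_eq!(b.len(), 2);
    drop(b);
}
```

Whether such strong exception safety in drop glue should be expected is exactly what the rest of the thread disputes.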
AFAICT, the only part of the API for which this does not hold is the of for , so I suppose the assumption might have come from there.\nThe entire safe API, yes, by necessity (and only as exception safe as required to maintain memory safety -- e.g., it will happily leak memory). Though there don't seem to be any unsafe APIs (or methods at least) that have to worry about panics, so I guess it's true that distinction might not be apparent just from looking at Vec.\nFrom the point-of-view of safe code, it does not matter whether is exception safe, because it can only be called once. From the POV of unsafe code, I knew that double-drops cannot happen, but what I did not know is that what also cannot happen is getting called twice. I thought, of course that can happen, if code calls , and that panics, and then is called again, that will call twice, but is not a double-drop, because the first time the value failed to drop. When mentions that the issue is not double-drops, but calling more than once, that's news to me.\nI would like to close this issue. We have open to document the exact guarantees around drop and panicking, and it seems like that would be the place for this; I agree with others on this thread that I don't really see how one could assume that initiating drop twice in any way (whether it be due to unwinding or otherwise) is safe. Specifically, any method which conceptually has a signature of or so, if it panics, shouldn't be called again assuming that you still have a T; that seems like it should be fairly easy to understand, unless I'm missing something, as the T is almost guaranteed to have dropped already (so it would be odd to assume that you can drop it again, as that calls something on that T -- you've lost ownership already). This sentence is confusing to me -- double-drops cannot happen sounds equivalent to Drop::drop getting called twice; Drop::drop is the drop operation? 
Specifically, I think the confusion lies here: \" I thought, of course that can happen, if unsafe code calls drop_in_place, and that panics, and then Drop::drop is called again, that will call Drop::drop twice, but is not a double-drop, because the first time the value failed to drop\" -- this is a double drop. When drop_in_place panics, it cannot be assumed to have done nothing, so calling drop again means that you're violating the safety contract of drop_in_place. This is why or equivalents are commonly needed when calling drop_in_place, as you need to be sure that the value you're dropping in place is not dropped again (during unwinding or otherwise), otherwise your unsafe code is incorrect. Presuming there's no further discussion here I will aim to close this issue in a week or so.\nIn the context of this discussion, the question that needs to be properly answered somewhere in the reference is: If a call to fails, was the value dropped? If the answer is \"no\", then calling again isn't a \"double-drop\" because the value has not been dropped. Looking at the comments here there is a certain amount of consensus that a call that fails \"succeeds\" in dropping the value, which is why even if fails, calling it again on a value drops the value twice and that's a double drop, which is UB. That's what should be documented somewhere in the reference. Which part of the reference issue summarizes / supersedes this discussion? If there isn't any, could you post a summary of this discussion there before closing this, so that documenting this still gets tracked somewhere?", "positive_passages": [{"docid": "doc-en-rust-c5beaf5f28f5c6401015158a6a491b8e351d6e2cfb8af85e7b7b54a408d95577", "text": "/// Given that a [`panic!`] will call `drop` as it unwinds, any [`panic!`] /// in a `drop` implementation will likely abort. /// /// Note that even if this panics, the value is considered to be dropped; /// you must not cause `drop` to be called again.
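The guard against a second drop that the comment alludes to is typically `std::mem::ManuallyDrop`: even if `drop_in_place` unwound, the compiler would not run the destructor again when the wrapper goes out of scope. A sketch with illustrative names (`consume`, `Counted`):

```rust
use std::mem::ManuallyDrop;
use std::ptr;
use std::sync::atomic::{AtomicUsize, Ordering};

static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Counted;
impl Drop for Counted {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

// Drop a value in place exactly once. ManuallyDrop suppresses the
// automatic destructor call for `slot`, so no path -- normal return
// or unwinding out of drop_in_place -- drops the value a second time.
fn consume<T>(value: T) {
    let mut slot = ManuallyDrop::new(value);
    unsafe { ptr::drop_in_place(&mut *slot) };
}

fn main() {
    let before = DROPS.load(Ordering::SeqCst);
    consume(Counted);
    assert_eq!(DROPS.load(Ordering::SeqCst) - before, 1); // exactly once
}
```

Without the `ManuallyDrop` wrapper, `slot` would be dropped again at scope end, which is precisely the double drop whose safety contract the comment describes.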
This is normally automatically /// handled by the compiler, but when using unsafe code, can sometimes occur /// unintentionally, particularly when using [`std::ptr::drop_in_place`]. /// /// [E0040]: ../../error-index.html#E0040 /// [`panic!`]: ../macro.panic.html /// [`std::mem::drop`]: ../../std/mem/fn.drop.html /// [`std::ptr::drop_in_place`]: ../../std/ptr/fn.drop_in_place.html #[stable(feature = \"rust1\", since = \"1.0.0\")] fn drop(&mut self); }", "commid": "rust_pr_67559"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. 
I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work-around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-5372cd56f544c742d10517ca9261e76618481bfa4f3aeec1e118f1d9241cd6ca", "text": "/// extract function takes three arguments: a string slice containing the source, an index in /// the slice for the beginning of the span and an index in the slice for the end of the span. fn span_to_source(&self, sp: Span, extract_source: F) -> Result where F: Fn(&str, usize, usize) -> String where F: Fn(&str, usize, usize) -> Result { if sp.lo() > sp.hi() { return Err(SpanSnippetError::IllFormedSpan(sp));", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? 
+1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work-around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. 
I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-6f937f2bff4e4aeb8f6177e2274b3d851ad0a1e622d0db203eb4d2b93077083d", "text": "} if let Some(ref src) = local_begin.sf.src { return Ok(extract_source(src, start_index, end_index)); return extract_source(src, start_index, end_index); } else if let Some(src) = local_begin.sf.external_src.borrow().get_source() { return Ok(extract_source(src, start_index, end_index)); return extract_source(src, start_index, end_index); } else { return Err(SpanSnippetError::SourceNotAvailable { filename: local_begin.sf.name.clone()", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. 
I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work-around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-47826f27899f6e4cca1449ecea3aef4b40c2eeee2d6086c7a142629db4af66ad", "text": "/// Returns the source snippet as `String` corresponding to the given `Span` pub fn span_to_snippet(&self, sp: Span) -> Result { self.span_to_source(sp, |src, start_index, end_index| src[start_index..end_index] .to_string()) self.span_to_source(sp, |src, start_index, end_index| src.get(start_index..end_index) .map(|s| s.to_string()) .ok_or_else(|| SpanSnippetError::IllFormedSpan(sp))) } pub fn span_to_margin(&self, sp: Span) -> Option {", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the 
compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work-around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. 
I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-d3e63a90aa0ae85c34303be2c2373c0fde07fe06ceb7a543ba73c54323816973", "text": "/// Returns the source snippet as `String` before the given `Span` pub fn span_to_prev_source(&self, sp: Span) -> Result { self.span_to_source(sp, |src, start_index, _| src[..start_index].to_string()) self.span_to_source(sp, |src, start_index, _| src.get(..start_index) .map(|s| s.to_string()) .ok_or_else(|| SpanSnippetError::IllFormedSpan(sp))) } /// Extend the given `Span` to just after the previous occurrence of `c`. Return the same span", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. 
I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work-around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments. I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-59a357a72eccc3ef5006be3aed277088178cbcdb60900c8fde680428d7b54904", "text": " struct X {} fn main() { vec![X]; //\u2026 //~^ ERROR expected value, found struct `X` } ", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-dd9d01e655fbfaa6c4210cf26f6453018b0127bf7f305f523aec0c305ef038ea", "query": "When I used the character in comment, combined with some (invalid) code preceding it, the compiler panicked. I tried this code: I expected to see this happen: rustc compiles this code. Instead, this happened: rustc panicked with the following output : Backtrace:\nRegressed in 1.34.0. Used to be:\ncc\nMiminized: Probably some span handcrafting?\nProbably some span handcrafting? +1\nI was idly curious, so I took a brief look at this. 
The diagnostic that triggers this was introduced in , but I don't think it is responsible for the error (it's just stepping through characters looking for a closing brace). There is some strange math in (introduced in ). It oscillates between zero-length and 1+ length spans as it is stepping over characters. When it hits a multi-byte character, it returns an invalid span. Example: Starting with a span of (0,1), using would result in (1,1), then (1,2), then (2,2), then (2,4). That last one slices into the middle of the multi-byte character (because of , where width is 3), causing a panic. I'm not familiar enough with this stuff to say how it should behave. It seems like is being used for two different purposes (getting the next zero-length span, and getting the \"next\" character), and it's not clear to me what the behavior should be since it is used in a variety of places. I don't think it can be changed to always return zero-length spans, and it can't be changed to always return non-zero length spans. Perhaps there should be two different functions?\ntriage: P-medium, removing nomination; the ICE to me appears to include enough information for a user to identify the offending character and work around the problem (namely by getting rid of the character). I do agree with that there should probably be two distinct functions corresponding to the two distinct purposes that they identified for the current callers of .\nMore test cases have been brought up in\nI guess this is the same bug, but the examples I've seen here concern non-ASCII characters in comments.
I'm getting a similar rustc panic from an erroneous code without comments: Compilation attempt:\nPlease file a new issue and reference this one.\nI think it's fixed, try .\nconfirmed fixed by .", "positive_passages": [{"docid": "doc-en-rust-5302f342feb7fdda5a5536b1bf1f8ac201a6d12ce39f56555eee72599b28e57a", "text": " error[E0423]: expected value, found struct `X` --> $DIR/issue-61226.rs:3:10 | LL | vec![X]; //\u2026 | ^ did you mean `X { /* fields */ }`? error: aborting due to previous error For more information about this error, try `rustc --explain E0423`. ", "commid": "rust_pr_63508"}], "negative_passages": []} {"query_id": "q-en-rust-5361409b1b92c074b238452d4faeabc0e6fcf44d53a3c9da0ee6c3bb9e9587d2", "query": "Creating a recursive type with infinite size by removing a leads to an internal compiler error. Might be the same issue as . To reproduce this bug, run on this code: Replace by and run again: I expected to see this happen: Instead, this happened: :\ntriage: P-high. Removing nomination.\nassigning to with expectation that they will delegate.\nnominating to try to find someone to investigate this.\nwell spotted ;) Because this issue has the tag and the other doesn't I will post the incremental test here:\ntriage: Reassigning to self and in hopes that one of us will find time to investigate, since it seems to be an issue with the dep-graph/incr-comp infrastructure.\ntriage: Downgrading to P-medium. I still want to resolve this, but it simply does not warrant revisiting every week during T-compiler triage.\nI encountered the same bug. I starting to create a new issue, but it's exactly the same than this one. In case it's needed, I uploaded the code on on the branch (there is two commits, the first is fine, the second generate the ICE) in case you want to validate your patch when you will eventually have time to fix it.\nI am running into this issue as well. 
Since the corresponding pr has been merged 24 days ago and the latest version was tagged 14 days ago, I am not sure, if this error should have been fixed with . (I did not find the commit on the master but I am also unsure if it was squashed with other commits.) Therefore, I thought I would at least post my error as well, just in case this is supposed to work. In my case I have an enum which holds a struct which in turn references that enum. I know that this can't work and I know how the correct way of handling this scenario is, I just wanted to let you know that this crashes the compiler. If you need a more detailed code sample, let me know and I will try to reproduce the error outside of my project.", "positive_passages": [{"docid": "doc-en-rust-ecafaef8f17adf0ba19d28c2f2701b1396e3f1aeb46e4dd789f25255097d68ac", "text": "return None } None => { if !tcx.sess.has_errors() { if !tcx.sess.has_errors_or_delayed_span_bugs() { bug!(\"try_mark_previous_green() - Forcing the DepNode should have set its color\") } else { // If the query we just forced has resulted // in some kind of compilation error, we // don't expect that the corresponding // dep-node color has been updated. // If the query we just forced has resulted in // some kind of compilation error, we cannot rely on // the dep-node color having been properly updated. // This means that the query system has reached an // invalid state. We let the compiler continue (by // returning `None`) so it can emit error messages // and wind down, but rely on the fact that this // invalid state will not be persisted to the // incremental compilation cache because of // compilation errors being present. 
debug!(\"try_mark_previous_green({:?}) - END - dependency {:?} resulted in compilation error\", dep_node, dep_dep_node); return None } } }", "commid": "rust_pr_66846"}], "negative_passages": []} {"query_id": "q-en-rust-5361409b1b92c074b238452d4faeabc0e6fcf44d53a3c9da0ee6c3bb9e9587d2", "query": "Creating a recursive type with infinite size by removing a leads to an internal compiler error. Might be the same issue as . To reproduce this bug, run on this code: Replace by and run again: I expected to see this happen: Instead, this happened: :\ntriage: P-high. Removing nomination.\nassigning to with expectation that they will delegate.\nnominating to try to find someone to investigate this.\nwell spotted ;) Because this issue has the tag and the other doesn't I will post the incremental test here:\ntriage: Reassigning to self and in hopes that one of us will find time to investigate, since it seems to be an issue with the dep-graph/incr-comp infrastructure.\ntriage: Downgrading to P-medium. I still want to resolve this, but it simply does not warrant revisiting every week during T-compiler triage.\nI encountered the same bug. I starting to create a new issue, but it's exactly the same than this one. In case it's needed, I uploaded the code on on the branch (there is two commits, the first is fine, the second generate the ICE) in case you want to validate your patch when you will eventually have time to fix it.\nI am running into this issue as well. Since the corresponding pr has been merged 24 days ago and the latest version was tagged 14 days ago, I am not sure, if this error should have been fixed with . (I did not find the commit on the master but I am also unsure if it was squashed with other commits.) Therefore, I thought I would at least post my error as well, just in case this is supposed to work. In my case I have an enum which holds a struct which in turn references that enum. 
I know that this can't work and I know how the correct way of handling this scenario is, I just wanted to let you know that this crashes the compiler. If you need a more detailed code sample, let me know and I will try to reproduce the error outside of my project.", "positive_passages": [{"docid": "doc-en-rust-c030566be5a89720ea8f11f8e8a076420a81baf46227e2719d2ed716b0c55121", "text": " // revisions: rpass cfail enum A { //[cfail]~^ ERROR 3:1: 3:7: recursive type `A` has infinite size [E0072] B(C), } #[cfg(rpass)] struct C(Box); #[cfg(cfail)] struct C(A); //[cfail]~^ ERROR 12:1: 12:13: recursive type `C` has infinite size [E0072] fn main() {} ", "commid": "rust_pr_66846"}], "negative_passages": []} {"query_id": "q-en-rust-2ac231aeb87028992127a7885dac127252c08f321f332692d893409e3ebbbca5", "query": "The code: It produced error: Although RFC not yet implemented, this code must work already. Or I'm wrong and this syntax does not supported by const-generics?\nNot sure where you see the ICE, removing that.\nThis is glitch after the previous 2 issues.. It is not ICE.\nTo Minimal code: produce error: The variant also give this error.\nNow it is ICE:\nIt's back to the original error.\nNow it produce this errors:\nThe behaviour is now as expected. We just need a test now.\nConst Generic RFC leads the code as possible: Is there another issue that is tracking it?\nYes, there are some subtle issues about making sure that constants in types are well-formed. 
The tracking issue for this is", "positive_passages": [{"docid": "doc-en-rust-e28e502e3060866e8a02973b53dc0aee3eb5b9ff468bac7de250f82ef75d410a", "text": " #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash pub struct MyArray([u8; COUNT + 1]); //~^ ERROR constant expression depends on a generic parameter impl MyArray { fn inner(&self) -> &[u8; COUNT + 1] { //~^ ERROR constant expression depends on a generic parameter &self.0 } } fn main() {} ", "commid": "rust_pr_70939"}], "negative_passages": []} {"query_id": "q-en-rust-2ac231aeb87028992127a7885dac127252c08f321f332692d893409e3ebbbca5", "query": "The code: It produced error: Although RFC not yet implemented, this code must work already. Or I'm wrong and this syntax does not supported by const-generics?\nNot sure where you see the ICE, removing that.\nThis is glitch after the previous 2 issues.. It is not ICE.\nTo Minimal code: produce error: The variant also give this error.\nNow it is ICE:\nIt's back to the original error.\nNow it produce this errors:\nThe behaviour is now as expected. We just need a test now.\nConst Generic RFC leads the code as possible: Is there another issue that is tracking it?\nYes, there are some subtle issues about making sure that constants in types are well-formed. 
The tracking issue for this is", "positive_passages": [{"docid": "doc-en-rust-2a6e46a8e2b77eb3e5351d2934ae7df109b0b2fe15df99be2f240e66a9c89b87", "text": " warning: the feature `const_generics` is incomplete and may cause the compiler to crash --> $DIR/issue-61522-array-len-succ.rs:1:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default error: constant expression depends on a generic parameter --> $DIR/issue-61522-array-len-succ.rs:4:40 | LL | pub struct MyArray([u8; COUNT + 1]); | ^^^^^^^^^^^^^^^ | = note: this may fail depending on what value the parameter takes error: constant expression depends on a generic parameter --> $DIR/issue-61522-array-len-succ.rs:8:24 | LL | fn inner(&self) -> &[u8; COUNT + 1] { | ^^^^^^^^^^^^^^^^ | = note: this may fail depending on what value the parameter takes error: aborting due to 2 previous errors ", "commid": "rust_pr_70939"}], "negative_passages": []} {"query_id": "q-en-rust-2ac231aeb87028992127a7885dac127252c08f321f332692d893409e3ebbbca5", "query": "The code: It produced error: Although RFC not yet implemented, this code must work already. Or I'm wrong and this syntax does not supported by const-generics?\nNot sure where you see the ICE, removing that.\nThis is glitch after the previous 2 issues.. It is not ICE.\nTo Minimal code: produce error: The variant also give this error.\nNow it is ICE:\nIt's back to the original error.\nNow it produce this errors:\nThe behaviour is now as expected. We just need a test now.\nConst Generic RFC leads the code as possible: Is there another issue that is tracking it?\nYes, there are some subtle issues about making sure that constants in types are well-formed. 
The tracking issue for this is", "positive_passages": [{"docid": "doc-en-rust-a0ab9c47fbdc1908e4dae5dc9832a16123a45027d23a35b6ce10fa670665ea37", "text": " // check-pass #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash trait Trait { type Assoc; } impl Trait<\"0\"> for () { type Assoc = (); } fn main() { let _: <() as Trait<\"0\">>::Assoc = (); } ", "commid": "rust_pr_70939"}], "negative_passages": []} {"query_id": "q-en-rust-2ac231aeb87028992127a7885dac127252c08f321f332692d893409e3ebbbca5", "query": "The code: It produced error: Although RFC not yet implemented, this code must work already. Or I'm wrong and this syntax does not supported by const-generics?\nNot sure where you see the ICE, removing that.\nThis is glitch after the previous 2 issues.. It is not ICE.\nTo Minimal code: produce error: The variant also give this error.\nNow it is ICE:\nIt's back to the original error.\nNow it produce this errors:\nThe behaviour is now as expected. We just need a test now.\nConst Generic RFC leads the code as possible: Is there another issue that is tracking it?\nYes, there are some subtle issues about making sure that constants in types are well-formed. The tracking issue for this is", "positive_passages": [{"docid": "doc-en-rust-a4fbcfca97d42145b8924ca710ee5542b5acd08a9643210949eb1a70dacd9206", "text": " warning: the feature `const_generics` is incomplete and may cause the compiler to crash --> $DIR/issue-66596-impl-trait-for-str-const-arg.rs:3:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default ", "commid": "rust_pr_70939"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. 
Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-a57874f691d537721726e265e86b6786670faa624f6f732c3f916a01d9542cd1", "text": "And if someone takes issue with something you said or did, resist the urge to be defensive. Just stop doing what it was they complained about and apologize. Even if you feel you were misinterpreted or unfairly accused, chances are good there was something you could've communicated better \u2014 remember that it's your responsibility to make your fellow Rustaceans comfortable. Everyone wants to get along and we are all here first and foremost because we want to talk about cool technology. You will find that people will be eager to assume good intent and forgive as long as you earn their trust. The enforcement policies listed above apply to all official Rust venues; including official IRC channels (#rust, #rust-internals, #rust-tools, #rust-libs, #rustc, #rust-beginners, #rust-docs, #rust-community, #rust-lang, and #cargo); GitHub repositories under rust-lang, rust-lang-nursery, and rust-lang-deprecated; and all forums under rust-lang.org (users.rust-lang.org, internals.rust-lang.org). For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. 
If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion. The enforcement policies listed above apply to all official Rust venues; including all communication channels (Rust Discord server, Rust Zulip server); GitHub repositories under rust-lang, rust-lang-nursery, and rust-lang-deprecated; and all forums under rust-lang.org (users.rust-lang.org, internals.rust-lang.org). For other projects adopting the Rust Code of Conduct, please contact the maintainers of those projects for enforcement. If you wish to use this code of conduct for your own project, consider explicitly mentioning your moderation policy or making a copy with your own moderation policy so as to avoid confusion. *Adapted from the [Node.js Policy on Trolling](https://blog.izs.me/2012/08/policy-on-trolling) as well as the [Contributor Covenant v1.3.0](https://www.contributor-covenant.org/version/1/3/0/).*", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. 
Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-5099a583a358753ed5a465142995fb817b0d632efc334262abd0701583aac178", "text": "* [Helpful Links and Information](#helpful-links-and-information) If you have questions, please make a post on [internals.rust-lang.org][internals] or hop on the [Rust Discord server][rust-discord], [Rust Zulip server][rust-zulip] or [#rust-internals][pound-rust-internals]. hop on the [Rust Discord server][rust-discord] or [Rust Zulip server][rust-zulip]. As a reminder, all contributors are expected to follow our [Code of Conduct][coc].", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-1804033c9d6636cf88be62663cde10cd093390163995fc52c50e43ca393c91b4", "text": "If this is your first time contributing, the [walkthrough] chapter of the guide can give you a good example of how a typical contribution would go. 
[pound-rust-internals]: https://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust-internals [internals]: https://internals.rust-lang.org [rust-discord]: http://discord.gg/rust-lang [rust-zulip]: https://rust-lang.zulipchat.com", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-10b5ac686e8f802a26a38a02193928e840163b24298885787cb30d759f10ce73", "text": "There are a number of other ways to contribute to Rust that don't deal with this repository. Answer questions in [#rust][pound-rust], or on [users.rust-lang.org][users], Answer questions in the _Get Help!_ channels from the [Rust Discord server][rust-discord], on [users.rust-lang.org][users], or on [StackOverflow][so]. Participate in the [RFC process](https://github.com/rust-lang/rfcs).", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. 
Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-c41f56b6ad926145a2d84de5fb8f667bc818a3e5d8c8d229e2b9a018972a01f9", "text": "it to [Crates.io](http://crates.io). Easier said than done, but very, very valuable! [pound-rust]: http://chat.mibbit.com/?server=irc.mozilla.org&channel=%23rust [rust-discord]: https://discord.gg/rust-lang [users]: https://users.rust-lang.org/ [so]: http://stackoverflow.com/questions/tagged/rust [community-library]: https://github.com/rust-lang/rfcs/labels/A-community-library", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. 
Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-d0f0e6798d568287c567e0aaa98d279bbcd98e275bfa44fc1aada47bdd87488d", "text": "To contribute to Rust, please see [CONTRIBUTING](CONTRIBUTING.md). Rust has an [IRC] culture and most real-time collaboration happens in a variety of channels on Mozilla's IRC network, irc.mozilla.org. The most popular channel is [#rust], a venue for general discussion about Rust. And a good place to ask for help would be [#rust-beginners]. Most real-time collaboration happens in a variety of channels on the [Rust Discord server][rust-discord], with channels dedicated for getting help, community, documentation, and all major contribution areas in the Rust ecosystem. A good place to ask for help would be the #help channel. The [rustc guide] might be a good place to start if you want to find out how various parts of the compiler work. Also, you may find the [rustdocs for the compiler itself][rustdocs] useful. [IRC]: https://en.wikipedia.org/wiki/Internet_Relay_Chat [#rust]: irc://irc.mozilla.org/rust [#rust-beginners]: irc://irc.mozilla.org/rust-beginners [rust-discord]: https://discord.gg/rust-lang [rustc guide]: https://rust-lang.github.io/rustc-guide/about-this-guide.html [rustdocs]: https://doc.rust-lang.org/nightly/nightly-rustc/rustc/", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-b32c83ae9fbb19e1be040ea73663ff2d9451fbae31760395a82a970c5b42b633", "query": "There are still of IRC in the repo. Removing them is not a priority but it will avoid confusion for newcomers.\nShould these places be replaced with references to the rust discord server, or just be removed outright?\nProbably replaced. Gonna update title.\nOpened to address this.\nThis is horrible. Replacing IRC with Discord.. :/\nIf you haven't read about it yet, is the post from April.\nYeah I read already. 
I'd remove the mention of any particular discussion channel, instead of putting Discord in. I can't wait for Matrix to become more mature so the Rust project can use it. Falling back to Discord is quite bad for a FOSS project.\nIMO we should just mirror subreddit solution and point to Discord.", "positive_passages": [{"docid": "doc-en-rust-2a95e00d5737ea884f708f2a3bead5b091e6b9ee745c50506b31c2ac100ea214", "text": "`Config` struct. * Adding a sanity check? Take a look at `bootstrap/sanity.rs`. If you have any questions feel free to reach out on `#rust-infra` on IRC or ask on internals.rust-lang.org. When you encounter bugs, please file issues on the rust-lang/rust issue tracker. If you have any questions feel free to reach out on `#infra` channel in the [Rust Discord server][rust-discord] or ask on internals.rust-lang.org. When you encounter bugs, please file issues on the rust-lang/rust issue tracker. [rust-discord]: https://discord.gg/rust-lang ", "commid": "rust_pr_65004"}], "negative_passages": []} {"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-96d1470c33090bd5bd857da6e31c33c2893b69f474ee44be571201eb02d450b9", "text": "[[package]] name = \"directories\" version = \"1.0.2\" version = \"2.0.1\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"libc 0.2.54 (registry+https://github.com/rust-lang/crates.io-index)\", \"winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)\", \"cfg-if 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)\", \"dirs-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]]", "commid": "rust_pr_61743"}], "negative_passages": []} {"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-161a5b1df5f7dce910b5ad118d395316e0a3eed083e180cafc907bef8678f33b", "text": "] [[package]] name = \"dirs-sys\" version = \"0.3.3\" source = \"registry+https://github.com/rust-lang/crates.io-index\" dependencies = [ \"cfg-if 0.1.8 (registry+https://github.com/rust-lang/crates.io-index)\", \"libc 0.2.54 (registry+https://github.com/rust-lang/crates.io-index)\", \"redox_users 0.3.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"winapi 0.3.6 (registry+https://github.com/rust-lang/crates.io-index)\", ] [[package]] name = \"dlmalloc\" version = \"0.1.3\" source = \"registry+https://github.com/rust-lang/crates.io-index\"", "commid": "rust_pr_61743"}], "negative_passages": []} {"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-d9be40c5089d302fa0ac29a9afa976c3fba508518a68af810af36aa389b2b547", "text": "version = \"0.1.0\" dependencies = [ \"byteorder 1.2.7 (registry+https://github.com/rust-lang/crates.io-index)\", \"cargo_metadata 0.7.1 (registry+https://github.com/rust-lang/crates.io-index)\", \"cargo_metadata 0.8.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"colored 1.6.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"compiletest_rs 0.3.22 (registry+https://github.com/rust-lang/crates.io-index)\", \"directories 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)\", \"directories 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)\", \"env_logger 0.6.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"hex 0.3.2 (registry+https://github.com/rust-lang/crates.io-index)\", \"log 0.4.6 (registry+https://github.com/rust-lang/crates.io-index)\",", "commid": "rust_pr_61743"}], "negative_passages": []} {"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-aa553445cd4cc5b270fa029388525394c3e2ec78f39e2a72a2fae1a3fcbd518c", "text": "\"checksum diff 0.1.11 (registry+https://github.com/rust-lang/crates.io-index)\" = \"3c2b69f912779fbb121ceb775d74d51e915af17aaebc38d28a592843a2dd0a3a\" \"checksum difference 2.0.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"524cbf6897b527295dff137cec09ecf3a05f4fddffd7dfcd1585403449e74198\" \"checksum digest 0.7.6 (registry+https://github.com/rust-lang/crates.io-index)\" = \"03b072242a8cbaf9c145665af9d250c59af3b958f83ed6824e13533cf76d5b90\" \"checksum directories 1.0.2 (registry+https://github.com/rust-lang/crates.io-index)\" = \"72d337a64190607d4fcca2cb78982c5dd57f4916e19696b48a575fa746b6cb0f\" \"checksum directories 2.0.1 (registry+https://github.com/rust-lang/crates.io-index)\" = \"2ccc83e029c3cebb4c8155c644d34e3a070ccdb4ff90d369c74cd73f7cb3c984\" \"checksum dirs 1.0.5 (registry+https://github.com/rust-lang/crates.io-index)\" = \"3fd78930633bd1c6e35c4b42b1df7b0cbc6bc191146e512bb3bedf243fcc3901\" \"checksum dirs-sys 0.3.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"937756392ec77d1f2dd9dc3ac9d69867d109a2121479d72c364e42f4cab21e2d\" \"checksum dlmalloc 0.1.3 (registry+https://github.com/rust-lang/crates.io-index)\" = \"f283302e035e61c23f2b86b3093e8c6273a4c3125742d6087e96ade001ca5e63\" \"checksum either 1.5.0 (registry+https://github.com/rust-lang/crates.io-index)\" = \"3be565ca5c557d7f59e7cfcf1844f9e3033650c929c6566f511e8005f205c1d0\" \"checksum elasticlunr-rs 2.3.4 (registry+https://github.com/rust-lang/crates.io-index)\" = \"a99a310cd1f9770e7bf8e48810c7bcbb0e078c8fb23a8c7bcf0da4c2bf61a455\"", "commid": "rust_pr_61743"}], "negative_passages": []} {"query_id": "q-en-rust-38b1f00b97bd3f57f9cd795be236adb36d4d343650179826dae3e31fb8c7d204", "query": "Hello, this 
is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and -- nominating for prioritization.\nMiri is fixed upstream, but will be broken again by so I'd like to land that before updating.", "positive_passages": [{"docid": "doc-en-rust-75ab21664fa191fa046ef74eb1f6134bf51bfa942f0a64756efd0f3375344bd0", "text": " Subproject commit e1a0f66373a1a185334a6e3be24e94161e3b4a43 Subproject commit 965160d4d7976ddead182b4a65b73f59818537de ", "commid": "rust_pr_61743"}], "negative_passages": []} {"query_id": "q-en-rust-2f77a0644870908637b8614d82f677d41914073f0e8ef0fa37df3c0ba8775cda", "query": "Probably one should never write this, and I wonder if it will be disallowed in the future. Happens on stable, beta and nightly, in all const-forms existing (includes const generics) pub fn is_ty_uninhabited_from_all_modules(self, ty: Ty<'tcx>) -> bool { pub fn is_ty_uninhabited_from_any_module(self, ty: Ty<'tcx>) -> bool { !self.ty_inhabitedness_forest(ty).is_empty() }", "commid": "rust_pr_61814"}], "negative_passages": []} {"query_id": "q-en-rust-2f77a0644870908637b8614d82f677d41914073f0e8ef0fa37df3c0ba8775cda", "query": "Probably one should never write this, and I wonder if it will be disallowed in the future. 
Happens on stable, beta and nightly, in all const-forms existing (includes const generics) let callee_layout = self.layout_of_local(self.frame(), mir::RETURN_PLACE, None)?; if !callee_layout.abi.is_uninhabited() { return err!(FunctionRetMismatch( self.tcx.types.never, callee_layout.ty )); let local = mir::RETURN_PLACE; let ty = self.frame().body.local_decls[local].ty; if !self.tcx.is_ty_uninhabited_from_any_module(ty) { return err!(FunctionRetMismatch(self.tcx.types.never, ty)); } } Ok(())", "commid": "rust_pr_61814"}], "negative_passages": []} {"query_id": "q-en-rust-2f77a0644870908637b8614d82f677d41914073f0e8ef0fa37df3c0ba8775cda", "query": "Probably one should never write this, and I wonder if it will be disallowed in the future. Happens on stable, beta and nightly, in all const-forms existing (includes const generics) // compile-fail pub const unsafe fn fake_type() -> T { hint_unreachable() } pub const unsafe fn hint_unreachable() -> ! { fake_type() //~ ERROR any use of this value will cause an error } trait Const { const CONSTANT: i32 = unsafe { fake_type() }; } impl Const for T {} pub fn main() -> () { dbg!(i32::CONSTANT); //~ ERROR erroneous constant used } ", "commid": "rust_pr_61814"}], "negative_passages": []} {"query_id": "q-en-rust-2f77a0644870908637b8614d82f677d41914073f0e8ef0fa37df3c0ba8775cda", "query": "Probably one should never write this, and I wonder if it will be disallowed in the future. Happens on stable, beta and nightly, in all const-forms existing (includes const generics) error: any use of this value will cause an error --> $DIR/uninhabited-const-issue-61744.rs:8:5 | LL | fake_type() | ^^^^^^^^^^^ | | | tried to call a function with return type T passing return place of type ! | inside call to `hint_unreachable` at $DIR/uninhabited-const-issue-61744.rs:4:5 | inside call to `fake_type::` at $DIR/uninhabited-const-issue-61744.rs:12:36 ... 
LL | const CONSTANT: i32 = unsafe { fake_type() }; | --------------------------------------------- | = note: #[deny(const_err)] on by default error[E0080]: erroneous constant used --> $DIR/uninhabited-const-issue-61744.rs:18:10 | LL | dbg!(i32::CONSTANT); | ^^^^^^^^^^^^^ referenced constant has errors error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_61814"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : if let None = self.tcx.hir().as_local_hir_id(generator_did) { if self.tcx.hir().as_local_hir_id(generator_did).is_none() { return false; }", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : // check-pass #![feature(const_generics)] //~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash struct Const; impl Const<{C}> { fn successor() -> Const<{C + 1}> { Const } } fn main() { let _x: Const::<2> = Const::<1>::successor(); } ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . 
With : warning: the feature `const_generics` is incomplete and may cause the compiler to crash --> $DIR/issue-61747.rs:3:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : // check-pass #![allow(incomplete_features, dead_code, unconditional_recursion)] #![feature(const_generics)] fn fact() { fact::<{ N - 1 }>(); } fn main() {} ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : // Fixed by #67160 trait Trait1 { type A; } trait Trait2 { type Type1: Trait1; //~^ ERROR: generic associated types are unstable //~| ERROR: type-generic associated types are not yet implemented } fn main() {} ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . 
With : error[E0658]: generic associated types are unstable --> $DIR/issue-67424.rs:8:5 | LL | type Type1: Trait1; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: for more information, see https://github.com/rust-lang/rust/issues/44265 = help: add `#![feature(generic_associated_types)]` to the crate attributes to enable error: type-generic associated types are not yet implemented --> $DIR/issue-67424.rs:8:5 | LL | type Type1: Trait1; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: for more information, see https://github.com/rust-lang/rust/issues/44265 error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : // Regression test for #66270, fixed by #66246 struct Bug { incorrect_field: 0, //~^ ERROR expected type } struct Empty {} fn main() { let Bug { any_field: Empty {}, } = Bug {}; } ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-10b231f4947d77a8f8c997022f5cad5f063dde3f608382fb0eac348e56334249", "query": "This code (): results in an ICE:\nIsn't that a close cousin of\nNot really, this is an associated function rather than an inner function.\nI'm investigating this. Still repros on . With : error: expected type, found `0` --> $DIR/issue-66270-pat-struct-parser-recovery.rs:4:22 | LL | incorrect_field: 0, | ^ expected type error: aborting due to previous error ", "commid": "rust_pr_67543"}], "negative_passages": []} {"query_id": "q-en-rust-2be1db2a16f8c8d6e65b135ce765e939b9c196e72231f3ea9e936fadfe499359", "query": "With we are now suggesting people to use when they have a raw pointer and need a reference. With gets some expanded documentation on UB avoidance. 
It might be a good idea to mention that also in the actual suggestion, so people don't have to open the docs to be aware of this. Cc\nCan I take this on, since no one seems to have done it yet?\nSure, it's yours!", "positive_passages": [{"docid": "doc-en-rust-fb18e966c81a82b65c8911b19144e4414e3c22666f0e6e6fc5ebf504af45fade", "text": "err.note(\"try using `<*const T>::as_ref()` to get a reference to the type behind the pointer: https://doc.rust-lang.org/std/ primitive.pointer.html#method.as_ref\"); err.note(\"using `<*const T>::as_ref()` on a pointer which is unaligned or points to invalid or uninitialized memory is undefined behavior\"); } err }", "commid": "rust_pr_62685"}], "negative_passages": []} {"query_id": "q-en-rust-2be1db2a16f8c8d6e65b135ce765e939b9c196e72231f3ea9e936fadfe499359", "query": "With we are now suggesting people to use when they have a raw pointer and need a reference. With gets some expanded documentation on UB avoidance. It might be a good idea to mention that also in the actual suggestion, so people don't have to open the docs to be aware of this. Cc\nCan I take this on, since no one seems to have done it yet?\nSure, it's yours!", "positive_passages": [{"docid": "doc-en-rust-aa005eae0c876a47c46e3670071e47c7e5b4429ae9961bbb7a0291eb797a6f5f", "text": "| ^^^^^^^^^ | = note: try using `<*const T>::as_ref()` to get a reference to the type behind the pointer: https://doc.rust-lang.org/std/primitive.pointer.html#method.as_ref = note: using `<*const T>::as_ref()` on a pointer which is unaligned or points to invalid or uninitialized memory is undefined behavior = note: the method `to_string` exists but the following trait bounds were not satisfied: `*const u8 : std::string::ToString`", "commid": "rust_pr_62685"}], "negative_passages": []} {"query_id": "q-en-rust-2179ca3c521bd7ab9ffc57d9e8e8418f9ade43db5b06d66a6b5e5a7b2daf2d40", "query": "I compiled the following code expecting some sort of error message. Instead, the compiler panicked unexpectedly. 
() errors: errors: :\nTriage: Likely dupe of cc Preliminarily assigning P-high before compiler team meeting.", "positive_passages": [{"docid": "doc-en-rust-c75fde32cd4e6d26ae04f2f9294a1b2e49957d483f147f26f6f2002dcd7b2d18", "text": "(Reservation(WriteKind::MutableBorrow(bk)), BorrowKind::Shallow) | (Reservation(WriteKind::MutableBorrow(bk)), BorrowKind::Shared) if { tcx.migrate_borrowck() tcx.migrate_borrowck() && this.borrow_set.location_map.contains_key(&location) } => { let bi = this.borrow_set.location_map[&location]; debug!(", "commid": "rust_pr_61947"}], "negative_passages": []} {"query_id": "q-en-rust-2179ca3c521bd7ab9ffc57d9e8e8418f9ade43db5b06d66a6b5e5a7b2daf2d40", "query": "I compiled the following code expecting some sort of error message. Instead, the compiler panicked unexpectedly. () errors: errors: :\nTriage: Likely dupe of cc Preliminarily assigning P-high before compiler team meeting.", "positive_passages": [{"docid": "doc-en-rust-84b7391ba619e3c7cfca1391d25670ab023d85112f2af3a359944738a27be20f", "text": " fn f1<'a>(_: &'a mut ()) {} fn f2

    (_: P, _: ()) {} fn f3<'a>(x: &'a ((), &'a mut ())) { f2(|| x.0, f1(x.1)) //~^ ERROR cannot borrow `*x.1` as mutable, as it is behind a `&` reference //~| ERROR cannot borrow `*x.1` as mutable because it is also borrowed as immutable } fn main() {} ", "commid": "rust_pr_61947"}], "negative_passages": []} {"query_id": "q-en-rust-2179ca3c521bd7ab9ffc57d9e8e8418f9ade43db5b06d66a6b5e5a7b2daf2d40", "query": "I compiled the following code expecting some sort of error message. Instead, the compiler panicked unexpectedly. () errors: errors: :\nTriage: Likely dupe of cc Preliminarily assigning P-high before compiler team meeting.", "positive_passages": [{"docid": "doc-en-rust-fe7992e3e223308d52a9545fce55d624eb40ac93b52f18b1b19690a1ef4ed655", "text": " error[E0596]: cannot borrow `*x.1` as mutable, as it is behind a `&` reference --> $DIR/issue-61623.rs:6:19 | LL | fn f3<'a>(x: &'a ((), &'a mut ())) { | -------------------- help: consider changing this to be a mutable reference: `&'a mut ((), &'a mut ())` LL | f2(|| x.0, f1(x.1)) | ^^^ `x` is a `&` reference, so the data it refers to cannot be borrowed as mutable error[E0502]: cannot borrow `*x.1` as mutable because it is also borrowed as immutable --> $DIR/issue-61623.rs:6:19 | LL | f2(|| x.0, f1(x.1)) | -- -- - ^^^ mutable borrow occurs here | | | | | | | first borrow occurs due to use of `x` in closure | | immutable borrow occurs here | immutable borrow later used by call error: aborting due to 2 previous errors Some errors have detailed explanations: E0502, E0596. For more information about an error, try `rustc --explain E0502`. 
", "commid": "rust_pr_61947"}], "negative_passages": []} {"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? 
If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-546c5fb5eb1a48fba3f54e051cde4415cad0450dce16c6153cab30f0e29e5a85", "text": "} fn maybe_lint_bare_trait(&self, span: Span, id: NodeId, is_global: bool) { self.sess.buffer_lint_with_diagnostic( builtin::BARE_TRAIT_OBJECTS, id, span, \"trait objects without an explicit `dyn` are deprecated\", builtin::BuiltinLintDiagnostics::BareTraitObject(span, is_global), ) // FIXME(davidtwco): This is a hack to detect macros which produce spans of the // call site which do not have a macro backtrace. See #61963. let is_macro_callsite = self.sess.source_map() .span_to_snippet(span) .map(|snippet| snippet.starts_with(\"#[\")) .unwrap_or(true); if !is_macro_callsite { self.sess.buffer_lint_with_diagnostic( builtin::BARE_TRAIT_OBJECTS, id, span, \"trait objects without an explicit `dyn` are deprecated\", builtin::BuiltinLintDiagnostics::BareTraitObject(span, is_global), ) } } fn wrap_in_try_constructor(", "commid": "rust_pr_63014"}], "negative_passages": []} {"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t 
emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. 
Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-ccacac1d35e5af7ba5f1796a2c1d0e16c166b5a03afbe8582919e39a089ffc49", "text": " // force-host // no-prefer-dynamic #![crate_type = \"proc-macro\"] extern crate proc_macro; use proc_macro::{Group, TokenStream, TokenTree}; // This macro exists as part of a reproduction of #61963 but without using quote/syn/proc_macro2. #[proc_macro_derive(DomObject)] pub fn expand_token_stream(input: TokenStream) -> TokenStream { // Construct a dummy span - `#0 bytes(0..0)` - which is present in the input because // of the specially crafted generated tokens in the `attribute-crate` proc-macro. let dummy_span = input.clone().into_iter().nth(0).unwrap().span(); // Define what the macro would output if constructed properly from the source using syn/quote. let output: TokenStream = \"impl Bar for ((), Qux >) { } impl Bar for ((), Box) { }\".parse().unwrap(); let mut tokens: Vec<_> = output.into_iter().collect(); // Adjust token spans to match the original crate (which would use `quote`). Some of the // generated tokens point to the dummy span. 
for token in tokens.iter_mut() { if let TokenTree::Group(group) = token { let mut tokens: Vec<_> = group.stream().into_iter().collect(); for token in tokens.iter_mut().skip(2) { token.set_span(dummy_span); } let mut stream = TokenStream::new(); stream.extend(tokens); *group = Group::new(group.delimiter(), stream); } } let mut output = TokenStream::new(); output.extend(tokens); output } ", "commid": "rust_pr_63014"}], "negative_passages": []} {"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). 
When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-e75013aedfb72fa8294f0a245641754215c14356e3eca42f3786d726e179334c", "text": " // force-host // no-prefer-dynamic #![crate_type = \"proc-macro\"] extern crate proc_macro; use proc_macro::{Group, Spacing, Punct, TokenTree, TokenStream}; // This macro exists as part of a reproduction of #61963 but without using quote/syn/proc_macro2. #[proc_macro_attribute] pub fn dom_struct(_: TokenStream, input: TokenStream) -> TokenStream { // Construct the expected output tokens - the input but with a `#[derive(DomObject)]` applied. let attributes: TokenStream = \"#[derive(DomObject)]\".to_string().parse().unwrap(); let output: TokenStream = attributes.into_iter() .chain(input.into_iter()).collect(); let mut tokens: Vec<_> = output.into_iter().collect(); // Adjust the spacing of `>` tokens to match what `quote` would produce. 
for token in tokens.iter_mut() { if let TokenTree::Group(group) = token { let mut tokens: Vec<_> = group.stream().into_iter().collect(); for token in tokens.iter_mut() { if let TokenTree::Punct(p) = token { if p.as_char() == '>' { *p = Punct::new('>', Spacing::Alone); } } } let mut stream = TokenStream::new(); stream.extend(tokens); *group = Group::new(group.delimiter(), stream); } } let mut output = TokenStream::new(); output.extend(tokens); output } ", "commid": "rust_pr_63014"}], "negative_passages": []} {"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). 
When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-0440909e1e72289fc26744bc626f8d668347b04c26a55fdc64b796bff058369c", "text": " // aux-build:issue-61963.rs // aux-build:issue-61963-1.rs #![deny(bare_trait_objects)] #[macro_use] extern crate issue_61963; #[macro_use] extern crate issue_61963_1; // This test checks that the bare trait object lint does not trigger on macro attributes that // generate code which would trigger the lint. 
pub struct Baz; pub trait Bar { } pub struct Qux(T); #[dom_struct] pub struct Foo { qux: Qux>, bar: Box, //~^ ERROR trait objects without an explicit `dyn` are deprecated [bare_trait_objects] } fn main() {} ", "commid": "rust_pr_63014"}], "negative_passages": []} {"query_id": "q-en-rust-20f696d3f868e72175817ce3cf0d73a0e5839e022980e8eebb0240818ac31c42", "query": "One of the many new (or newly enabled by default) warnings in Servo in today\u2019s Nightly: The suggestion is wrong, which means also fails and therefore does not apply fixes for (the many) other warnings where the suggestion is correct.\nTriage: Preliminarily assigning P-high before Felix reviews this.\nAlso cc since iirc you did some macro related work here.\nPossible solution: when a warning\u2019s span is exactly a macro invocation, don\u2019t emit a \u201csuggested fix\u201d that is likely wrong. (But still emit a warning.)\nWe have that check for most suggestions, but it might be missing for . Was this in latest nightly? I know a couple of these have been fixed in the past few weeks.\nThis is in rustc 1.37.0-nightly ( 2019-06-18). works around this and allows \u2019ing the rest of the warnings (namely old syntax for inclusive ranges) or warnings in other crates.\nOh. I just noticed the suggestion. I\u2019d missed it in that blob of text.\nApologies for the time this has taken - I've struggled to find time to dig in to this. So far, I've managed to make a smaller example (it still depends on , and ) that reproduces the issue, you can find it in (run in the directory).\nThanks to some help from I've now got a minimal test case w/out any dependencies that outputs the same tokens that the reduced example from servo did (you can see it in ). When inspecting the span that the lint is emitted for, it isn't marked as coming from a macro expansion, and it appears that it was created that way during the macro expansion. 
I'd appreciate any opinions on what the correct fix is here - is it a bug in / that they output what appears to be incorrect spans? Should rustc be able to handle this case? If so, where should I be looking to mark the span or what check should I be performing to identify it as a result of macro expansion?\ncc\nI believe this is a compiler bug, not something that needs a fix in syn or quote. is a call_site span. It is the span of tokens in macro output that are not attributable to any particular input token. Usually that will be most of the tokens in any macro output; diagnostics need to take this into account.", "positive_passages": [{"docid": "doc-en-rust-339bfca84b1d788d05b5318bd9b4c5dab834c666401e5c1a48e8be85347bfb7c", "text": " error: trait objects without an explicit `dyn` are deprecated --> $DIR/issue-61963.rs:20:14 | LL | bar: Box, | ^^^ help: use `dyn`: `dyn Bar` | note: lint level defined here --> $DIR/issue-61963.rs:3:9 | LL | #![deny(bare_trait_objects)] | ^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_63014"}], "negative_passages": []} {"query_id": "q-en-rust-3cdad1f570797de323d8525a97742031c05e32d5f630caebc78751aa1c1c0ca8", "query": "Consider the following three examples: A and C are accepted and B has a compilation error. However, the message reported in B looks more like a lint than a compilation error. A consequence of this error is that adding a non-public function to a module (e.g. the in B) may break code that imports from that module. This causes surprises when refactoring. Shouldn't \"A non-empty glob must import something with the glob's visibility\" be a lint? The original discussion is here:\nThe error was introduced in as a part of import modularization, and discussed in one of the related issues, but I can't find where exactly. 
The error was introduced by analogy with errors for single imports (and just to be conservative): The error is not technically necessary, and we should be able to report it as a lint for glob imports while keeping it an error for single imports.\nWould this be a \"good first issue\" or is it complex to fix?\nNo, not complex. Find where \"non-empty glob must import something\" is reported. Replace with .\nI want to do this issue but the problem I have is with a test (ui/imports/reexports) that checks for this error, I don't know the compiler structure so I don't know when compiling this file the lint will not be reported because of the other errors or not. Else I changed the to", "positive_passages": [{"docid": "doc-en-rust-2c5d7cf0546089045d1f053c120d5089c9e334f66911e50a09c0a0d1db3998c6", "text": "if !is_prelude && max_vis.get() != ty::Visibility::Invisible && // Allow empty globs. !max_vis.get().is_at_least(directive.vis.get(), &*self) { let msg = \"A non-empty glob must import something with the glob's visibility\"; self.r.session.span_err(directive.span, msg); let msg = \"glob import doesn't reexport anything because no candidate is public enough\"; self.r.session.buffer_lint(UNUSED_IMPORTS, directive.id, directive.span, msg); } return None; }", "commid": "rust_pr_65539"}], "negative_passages": []} {"query_id": "q-en-rust-3cdad1f570797de323d8525a97742031c05e32d5f630caebc78751aa1c1c0ca8", "query": "Consider the following three examples: A and C are accepted and B has a compilation error. However, the message reported in B looks more like a lint than a compilation error. A consequence of this error is that adding a non-public function to a module (e.g. the in B) may break code that imports from that module. This causes surprises when refactoring. Shouldn't \"A non-empty glob must import something with the glob's visibility\" be a lint? 
The original discussion is here:\nThe error was introduced in as a part of import modularization, and discussed in one of the related issues, but I can't find where exactly. The error was introduced by analogy with errors for single imports (and just to be conservative): The error is not technically necessary, and we should be able to report it as a lint for glob imports while keeping it an error for single imports.\nWould this be a \"good first issue\" or is it complex to fix?\nNo, not complex. Find where \"non-empty glob must import something\" is reported. Replace with .\nI want to do this issue but the problem I have is with a test (ui/imports/reexports) that checks for this error, I don't know the compiler structure so I don't know when compiling this file the lint will not be reported because of the other errors or not. Else I changed the to", "positive_passages": [{"docid": "doc-en-rust-e100dd78abc083f99e66cd3ac0c9591f2b0b96176704402301cfab74c05b5dd1", "text": " #![warn(unused_imports)] mod a { fn foo() {} mod foo {} mod a { pub use super::foo; //~ ERROR cannot be re-exported pub use super::*; //~ ERROR must import something with the glob's visibility pub use super::*; //~^ WARNING glob import doesn't reexport anything because no candidate is public enough } } mod b { pub fn foo() {} mod foo { pub struct S; } mod foo { pub struct S; } pub mod a { pub use super::foo; // This is OK since the value `foo` is visible enough.", "commid": "rust_pr_65539"}], "negative_passages": []} {"query_id": "q-en-rust-3cdad1f570797de323d8525a97742031c05e32d5f630caebc78751aa1c1c0ca8", "query": "Consider the following three examples: A and C are accepted and B has a compilation error. However, the message reported in B looks more like a lint than a compilation error. A consequence of this error is that adding a non-public function to a module (e.g. the in B) may break code that imports from that module. This causes surprises when refactoring. 
Shouldn't \"A non-empty glob must import something with the glob's visibility\" be a lint? The original discussion is here:\nThe error was introduced in as a part of import modularization, and discussed in one of the related issues, but I can't find where exactly. The error was introduced by analogy with errors for single imports (and just to be conservative): The error is not technically necessary, and we should be able to report it as a lint for glob imports while keeping it an error for single imports.\nWould this be a \"good first issue\" or is it complex to fix?\nNo, not complex. Find where \"non-empty glob must import something\" is reported. Replace with .\nI want to do this issue but the problem I have is with a test (ui/imports/reexports) that checks for this error, I don't know the compiler structure so I don't know when compiling this file the lint will not be reported because of the other errors or not. Else I changed the to", "positive_passages": [{"docid": "doc-en-rust-e81bc549430bcc4da8e18c8aa6bb033c4c640f47e1e25f50f8a0eb25392cc441", "text": "error[E0364]: `foo` is private, and cannot be re-exported --> $DIR/reexports.rs:6:17 --> $DIR/reexports.rs:8:17 | LL | pub use super::foo; | ^^^^^^^^^^ | note: consider marking `foo` as `pub` in the imported module --> $DIR/reexports.rs:6:17 --> $DIR/reexports.rs:8:17 | LL | pub use super::foo; | ^^^^^^^^^^ error: A non-empty glob must import something with the glob's visibility --> $DIR/reexports.rs:7:17 | LL | pub use super::*; | ^^^^^^^^ error[E0603]: module `foo` is private --> $DIR/reexports.rs:28:15 --> $DIR/reexports.rs:33:15 | LL | use b::a::foo::S; | ^^^ error[E0603]: module `foo` is private --> $DIR/reexports.rs:29:15 --> $DIR/reexports.rs:34:15 | LL | use b::b::foo::S as T; | ^^^ error: aborting due to 4 previous errors warning: glob import doesn't reexport anything because no candidate is public enough --> $DIR/reexports.rs:9:17 | LL | pub use super::*; | ^^^^^^^^ | note: lint level defined here --> 
$DIR/reexports.rs:1:9 | LL | #![warn(unused_imports)] | ^^^^^^^^^^^^^^ error: aborting due to 3 previous errors Some errors have detailed explanations: E0364, E0603. For more information about an error, try `rustc --explain E0364`.", "commid": "rust_pr_65539"}], "negative_passages": []} {"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). 
It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. 
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. 
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? 
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. 
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we won't need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support.
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-86704df67fbee7a75598b4ec01e5e5e9edae516161ee71f94318a14fe2ec230c", "text": "COPY host-x86_64/dist-x86_64-linux/build-binutils.sh /tmp/ RUN ./build-binutils.sh # libssh2 (a dependency of Cargo) requires cmake 2.8.11 or higher but CentOS # only has 2.6.4, so build our own COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Need a newer version of gcc than centos has to compile LLVM nowadays # Need at least GCC 5.1 to compile LLVM nowadays COPY host-x86_64/dist-x86_64-linux/build-gcc.sh /tmp/ RUN ./build-gcc.sh RUN ./build-gcc.sh && apt-get remove -y gcc g++ # CentOS 5.5 has Python 2.4 by default, but LLVM needs 2.7+ # Debian 6 has Python 2.6 by default, but LLVM needs 2.7+ COPY host-x86_64/dist-x86_64-linux/build-python.sh /tmp/ RUN ./build-python.sh # Now build LLVM+Clang 7, afterwards configuring further compilations to use the # LLVM needs cmake 3.4.3 or higher, and is planning to raise to 3.13.4. COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/ RUN ./build-cmake.sh # Now build LLVM+Clang, afterwards configuring further compilations to use the # clang/clang++ compilers. 
COPY host-x86_64/dist-x86_64-linux/build-clang.sh host-x86_64/dist-x86_64-linux/llvm-project-centos.patch /tmp/ COPY host-x86_64/dist-x86_64-linux/build-clang.sh /tmp/ RUN ./build-clang.sh ENV CC=clang CXX=clang++ # Apparently CentOS 5.5 desn't have `git` in yum, but we're gonna need it for # cloning, so download and build it here. COPY host-x86_64/dist-x86_64-linux/build-git.sh /tmp/ RUN ./build-git.sh # for sanitizers, we need kernel headers files newer than the ones CentOS ships # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-headers.sh /tmp/ RUN ./build-headers.sh # OpenSSL requires a more recent version of perl # with so we install newer ones here COPY host-x86_64/dist-x86_64-linux/build-perl.sh /tmp/ RUN ./build-perl.sh COPY scripts/sccache.sh /scripts/ RUN sh /scripts/sccache.sh", "commid": "rust_pr_74163"}], "negative_passages": []} {"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). 
Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. 
If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. 
Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). 
Should we continue to support RHEL 5 for , or should we increase the minimum Linux kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? Even bumping the minimum version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .

Given that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4: Rust never supported RHEL 4, back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks).

Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). It has 5 major phases:

- Full Support: normal support of the OS
- Maintenance Support 1: bug/security fixes, limited hardware refresh
- Maintenance Support 2: bug/security fixes; the end of this phase is considered "Product retirement"
- Extended Life Cycle Support (ELS): an additional paid product from Red Hat; gives updates for a longer period of time; no additional releases/images
- Extended Life Phase (ELP): no more updates, limited technical support; end date not given by Red Hat

Current status of RHEL versions:

- RHEL 4: not supported by Rust; currently in ELP (no end date specified); ELS ended March 31, 2017
- RHEL 5: supported by Rust; currently in ELS; Maintenance Support ended March 31, 2017; ELS ends November 30, 2020
- RHEL 6: supported by Rust; currently in Maintenance Support 2; Maintenance Support ends November 30, 2020

cc , which I think could be related.

It may also be worth dropping support for Windows XP and Vista (especially considering that panics have been broken on XP since June 2016, see: ). Previously it was discussed . cc

Besides , which other things would we be able to clean up?

Good question!
There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka the 3rd kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline:

- (mentioned above, in 2.6.23)
- ( in 2.6.24; there is also the mention of a bug occurring on "some linux kernel at some point")
- to atomically set the flag on the pipe fds ( in 2.6.27)
- ( in 2.6.27)
- to permit use of ( in 2.6.28)
- ( in 4.5, not fixed by this proposal)

As you can see, the workarounds fixed by this proposal all have a similar flavor.

I am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise?
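Most of these workarounds share one pattern: try the syscall variant that takes the flag atomically (added in the kernel versions listed above), and fall back to the racy two-step on `ENOSYS`. The sketch below illustrates that pattern for the pipe-fd case; it is not the standard library's actual code, and the constants are the common x86-64 Linux values, declared by hand here instead of pulled from a bindings crate.

```rust
use std::io;

// Common x86-64 Linux values; a real implementation would take these
// from libc bindings rather than hard-coding them.
const O_CLOEXEC: i32 = 0o2000000; // pipe2() accepts this from kernel 2.6.27
const F_GETFD: i32 = 1;
const F_SETFD: i32 = 2;
const FD_CLOEXEC: i32 = 1;
const ENOSYS: i32 = 38;

extern "C" {
    fn pipe2(fds: *mut i32, flags: i32) -> i32;
    fn pipe(fds: *mut i32) -> i32;
    fn fcntl(fd: i32, cmd: i32, ...) -> i32;
}

/// Create a pipe with both ends close-on-exec, atomically where possible.
fn cloexec_pipe() -> io::Result<[i32; 2]> {
    let mut fds = [0i32; 2];
    // Fast path: atomic on 2.6.27+ kernels.
    if unsafe { pipe2(fds.as_mut_ptr(), O_CLOEXEC) } == 0 {
        return Ok(fds);
    }
    let err = io::Error::last_os_error();
    if err.raw_os_error() != Some(ENOSYS) {
        return Err(err);
    }
    // Slow path for older kernels: a window exists between pipe() and
    // fcntl() in which another thread could fork/exec and leak the fds.
    if unsafe { pipe(fds.as_mut_ptr()) } != 0 {
        return Err(io::Error::last_os_error());
    }
    for &fd in &fds {
        if unsafe { fcntl(fd, F_SETFD, FD_CLOEXEC) } != 0 {
            return Err(io::Error::last_os_error());
        }
    }
    Ok(fds)
}

fn main() -> io::Result<()> {
    let fds = cloexec_pipe()?;
    for &fd in &fds {
        // Verify the flag actually stuck on both ends.
        assert_eq!(unsafe { fcntl(fd, F_GETFD) } & FD_CLOEXEC, FD_CLOEXEC);
    }
    println!("pipe fds {:?} are close-on-exec", fds);
    Ok(())
}
```

The slow path's fork/exec race window is exactly why the atomic variants were added to the kernel in the first place, and why dropping the fallback simplifies the code.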
If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.

(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software, so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of the timeline on which Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (I already have no idea how to test on really old kernels besides building them myself). So if there were an estimate like "we'd like to be able to support this until Q2 2020", even if it's not set in stone, I think that would be very helpful!

The other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we wouldn't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.

As soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat: glibc support for a symbol can still return , .)

I noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates.
Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad; it's mainly just the sum of a bunch of tiny issues.

I'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to a newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.

That makes perfect sense to me, thanks for clarifying. So the proposed policy would be: support RHEL N until RHEL N+1 is retired (i.e. ends normal support). This would mean:

- Supporting RHEL 5 (v2.6.18) until November 30, 2020
- Supporting RHEL 6 (v2.6.32) until June 30, 2024
- Supporting RHEL 7 (v3.10) until May 2029

This is a much longer support tail than any other Linux distro (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people?

EDIT: The alternative to this would be: support RHEL N until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.

Sure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.

Red Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels it wants Rust to support.
I don't know any good reason why they don't do so, other than that it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.

Regarding "The alternative to this would be: support RHEL N until it is retired": I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14-year-old kernels seems a bit too much to me.

Regarding "each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel": is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?

I think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is "Red Hat should be responsible for the work in Rust to maintain the support they want" -- here I am, ready and willing. I definitely can't change those kernels.
I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. 
Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-b335b72fe572704d5ea3ce746b5234372c25c1a7d77b184ce379f85067b347ee", "text": " FROM centos:5 # We use Debian 6 (glibc 2.11, kernel 2.6.32) as a common base for other # distros that still need Rust support: RHEL 6 (glibc 2.12, kernel 2.6.32) and # SLES 11 SP4 (glibc 2.11, kernel 3.0). FROM debian:6 WORKDIR /build # Centos 5 is EOL and is no longer available from the usual mirrors, so switch # to http://vault.centos.org/ RUN sed -i 's/enabled=1/enabled=0/' /etc/yum/pluginconf.d/fastestmirror.conf RUN sed -i 's/mirrorlist/#mirrorlist/' /etc/yum.repos.d/*.repo RUN sed -i 's|#(baseurl.*)mirror.centos.org/centos/$releasever|1vault.centos.org/5.11|' /etc/yum.repos.d/*.repo # Debian 6 is EOL and no longer available from the usual mirrors, # so we'll need to switch to http://archive.debian.org/ RUN sed -i '/updates/d' /etc/apt/sources.list && sed -i 's/httpredir/archive/' /etc/apt/sources.list RUN yum upgrade -y && yum install -y curl RUN apt-get update && apt-get install --allow-unauthenticated -y --no-install-recommends automake bzip2 ca-certificates curl file g++ g++-multilib gcc gcc-c++ gcc-multilib git lib32z1-dev libedit-dev libncurses-dev make glibc-devel patch perl zlib-devel file xz which pkgconfig pkg-config unzip wget autoconf gettext xz-utils zlib1g-dev ENV PATH=/rustroot/bin:$PATH ENV LD_LIBRARY_PATH=/rustroot/lib64:/rustroot/lib ENV LD_LIBRARY_PATH=/rustroot/lib64:/rustroot/lib32:/rustroot/lib ENV PKG_CONFIG_PATH=/rustroot/lib/pkgconfig WORKDIR /tmp RUN mkdir /home/user COPY host-x86_64/dist-x86_64-linux/shared.sh /tmp/ # We need a build of openssl which supports SNI to download artifacts from", "commid": "rust_pr_74163"}], "negative_passages": []} {"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). 
This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . 
cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? 
If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.\nAs soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)\nI noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. 
Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.\nI'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.\nthat makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support). This would mean: Supporting RHEL 5 (v2.6.18) until November 30, 2020 Supporting RHEL 6 (v2.6.32) until June 30, 2024 Supporting RHEL 7 (v3.10) until May, 2029 This is a much longer support tail than any other Linux distros (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people? EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.\nSure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.\nRed Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels that it wants Rust to support. 
I don't know any good reason why they don't do so, other than it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.\nThe alternative to this would be Support RHEL N until it is retired I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14 year old kernels seems a bit too much to me. each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?\nI think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is \"Red Hat should be responsible for the work in Rust to maintain the support they want\" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. 
I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. 
Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-656373ef21fd754284e0e19da4ed3494db327b81ae20b0ffbb7be8f712d84cac", "text": "COPY host-x86_64/dist-x86_64-linux/build-openssl.sh /tmp/ RUN ./build-openssl.sh # The `curl` binary on CentOS doesn't support SNI which is needed for fetching # The `curl` binary on Debian 6 doesn't support SNI which is needed for fetching # some https urls we have, so install a new version of libcurl + curl which is # using the openssl we just built previously. # # Note that we also disable a bunch of optional features of curl that we don't # really need. COPY host-x86_64/dist-x86_64-linux/build-curl.sh /tmp/ RUN ./build-curl.sh RUN ./build-curl.sh && apt-get remove -y curl # binutils < 2.22 has a bug where the 32-bit executables it generates # immediately segfault in Rust, so we need to install our own binutils.", "commid": "rust_pr_74163"}], "negative_passages": []} {"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux Kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .\nGiven that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. 
This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ). It has 5 major steps: Full Support Normal support of the OS Maintenance Support 1 Bug/Security fixes Limited hardware refresh Maintenance Support 2 Bug/Security fixes The end of this phase is considered \"Product retirement\" Extended Life Cycle Support (ELS) Additional paid product from Red Hat Gives updates for a longer period of time No additional releases/images Extended Life Phase (ELP) No more updates Limited Technical Support End date not given by Red Hat Current status of RHEL versions: RHEL 4 Not supported by Rust Currently in ELP (no end date specified) ELS ended March 31, 2017 RHEL 5 Supported by Rust Currently in ELS Maintenance Support ended March 31, 2017 ELS ends November 30, 2020 RHEL 6 Supported by Rust Currently in Maintenance Support 2 Maintenance Support ends November 30, 2020\ncc which I think they could be related.\nIt also may be worth to drop support of Windows XP and Vista (especially considering that panics are broken on XP since June 2016, see: ). Previously it was discussed . cc\nBesides , which other things would we be able to clean up?\nGood question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka 3rd Kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline: (mentioned above, in 2.6.23) ( in 2.6.24, there is also the mention of a bug occuring on \"some linux kernel at some point\") to atomically set the flag on the pipe fds ( in 2.6.27) ( in 2.6.27) to permit use of ( in 2.6.28) ( in 4.5, not fixed by this proposal) As you can see, the workarounds fixed by this proposal all have a similar flavor.\nI am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. 
Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software. Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.\n(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of on what timeline Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them for forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate \"we'd like to be able to support this until Q2 2020\", even if it's not set in stone, I think that would be very helpful!\nThe other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. 
Of course that's also fairly limited in scope.

As soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020. Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat: glibc support for a symbol can still return , .)

I noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad, it's mainly just the sum of a bunch of tiny issues.

I'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.

That makes perfect sense to me, thanks for clarifying. So the proposed policy would be: Support RHEL until RHEL is retired (i.e. ends normal support).
This would mean:

- Supporting RHEL 5 (v2.6.18) until November 30, 2020
- Supporting RHEL 6 (v2.6.32) until June 30, 2024
- Supporting RHEL 7 (v3.10) until May 2029

This is a much longer support tail than any other Linux distro (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people?

EDIT: The alternative to this would be: support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.

Sure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.

Red Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels it wants Rust to support. I don't know any good reason why they don't do so, other than that it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.

> The alternative to this would be Support RHEL N until it is retired

I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14-year-old kernels seems a bit too much to me.

> each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel

Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?

I think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is "Red Hat should be responsible for the work in Rust to maintain the support they want" -- here I am, ready and willing. I definitely can't change those kernels.
The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds. This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.

I think this sounds reasonable. We aren't going to be able to update these kernels; it's just a question of when we would drop support for these OSes. Especially given that we won't need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things:

1) That we weren't intending on supporting RHEL 5 "forever", and had a clear date when to drop support.
2) That someone from Red Hat actively cared about supporting these older kernels.

It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then.

EDIT: For our use case in , we were able to add compatibility for RHEL 5 .

See for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.

Yes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support.
The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL 6 builders, but maybe I should! Catching this early is better for everyone involved...

That seems reasonable. How quick is the Rust-related bootstrapping? Could it be run automatically (or more frequently)?

This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5-specific concern; I just don't know how Rust tests these types of resource-leak issues in general.

About 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !

A CI change that later landed (rust_pr_74163) moved the dist builder's Dockerfile off the CentOS 5 toolchain; the relevant hunk:

```diff
 COPY host-x86_64/dist-x86_64-linux/build-binutils.sh /tmp/
 RUN ./build-binutils.sh

-# libssh2 (a dependency of Cargo) requires cmake 2.8.11 or higher but CentOS
-# only has 2.6.4, so build our own
-COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/
-RUN ./build-cmake.sh

-# Build a version of gcc capable of building LLVM 6
+# Need at least GCC 5.1 to compile LLVM nowadays
 COPY host-x86_64/dist-x86_64-linux/build-gcc.sh /tmp/
-RUN ./build-gcc.sh
+RUN ./build-gcc.sh && apt-get remove -y gcc g++

-# CentOS 5.5 has Python 2.4 by default, but LLVM needs 2.7+
+# Debian 6 has Python 2.6 by default, but LLVM needs 2.7+
 COPY host-x86_64/dist-x86_64-linux/build-python.sh /tmp/
 RUN ./build-python.sh

-# Now build LLVM+Clang 7, afterwards configuring further compilations to use the
+# LLVM needs cmake 3.4.3 or higher, and is planning to raise to 3.13.4.
```
From rust_pr_74163's Dockerfile changes, the LLVM/Clang section also drops the CentOS-specific patch file:

```diff
 COPY host-x86_64/dist-x86_64-linux/build-cmake.sh /tmp/
 RUN ./build-cmake.sh

 # Now build LLVM+Clang, afterwards configuring further compilations to use the
 # clang/clang++ compilers.
-COPY host-x86_64/dist-x86_64-linux/build-clang.sh host-x86_64/dist-x86_64-linux/llvm-project-centos.patch /tmp/
+COPY host-x86_64/dist-x86_64-linux/build-clang.sh /tmp/
 RUN ./build-clang.sh
 ENV CC=clang CXX=clang++

 # Apparently CentOS 5.5 doesn't have `git` in yum, but we're gonna need it for
 # cloning, so download and build it here.
 COPY host-x86_64/dist-x86_64-linux/build-git.sh /tmp/
 RUN ./build-git.sh

 # for sanitizers, we need kernel headers files newer than the ones CentOS ships
 # with so we install newer ones here
 COPY host-x86_64/dist-x86_64-linux/build-headers.sh /tmp/
 RUN ./build-headers.sh

 # OpenSSL requires a more recent version of perl
 COPY host-x86_64/dist-x86_64-linux/build-perl.sh /tmp/
 RUN ./build-perl.sh

 COPY scripts/sccache.sh /scripts/
 RUN sh /scripts/sccache.sh
```
Correspondingly, rust_pr_74163's build-clang.sh no longer applies llvm-project-centos.patch:

```diff
 curl -L https://github.com/llvm/llvm-project/archive/$LLVM.tar.gz | tar xzf -

-yum install -y patch
-patch -Np1 < ../llvm-project-centos.patch

 mkdir clang-build
 cd clang-build
```
Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-147951eaa9bd7cfba78e9f8c8d45cd93951bc1476e28afc0795f5c7e403b33d4", "text": "
set -ex
source shared.sh
curl https://cmake.org/files/v3.6/cmake-3.6.3.tar.gz | tar xzf -
CMAKE=3.13.4
curl -L https://github.com/Kitware/CMake/releases/download/v$CMAKE/cmake-$CMAKE.tar.gz | tar xzf -
mkdir cmake-build
cd cmake-build
hide_output ../cmake-3.6.3/configure --prefix=/rustroot
hide_output ../cmake-$CMAKE/configure --prefix=/rustroot
hide_output make -j10
hide_output make install
cd ..
rm -rf cmake-build
rm -rf cmake-3.6.3
rm -rf cmake-$CMAKE
", "commid": "rust_pr_74163"}], "negative_passages": []}

{"query_id": "q-en-rust-bf57e7b18ed8b9ab5e27682bd209117a0b094e52120110102ca62034de171272", "query": "The docs currently claim that the minimum Linux kernel version for is 2.6.18 (released September 20th, 2006). This is because RHEL 5 used that kernel version. However, RHEL 5 entered ELS on March 31, 2017. Should we continue to support RHEL 5 for , or should we increase the minimum Linux kernel version to 2.6.27 (2nd LTS) or 2.6.32 (RHEL 6, 3rd LTS)? . Even bumping the min-version to 2.6.27 would allow us to remove most of the Linux-specific hacks in . Example: .

Given that RHEL is the only reason we keep comically old kernel versions around, I would propose that Rust only support RHEL until Maintenance Support ends. This is (essentially) what we already did for RHEL 4. Rust never supported RHEL 4, and back when RHEL 5 still had maintenance support. It would be nice to get someone from Red Hat or a RHEL customer to comment. This policy would allow us to increment the minimum kernel from 2.6.18 to 2.6.32 (and remove a lot of Linux-specific hacks). Note that RHEL has a weird (and very long) support system for RHEL 4, 5, and 6 (sources: and ).
It has 5 major steps:
- Full Support: normal support of the OS
- Maintenance Support 1: bug/security fixes; limited hardware refresh
- Maintenance Support 2: bug/security fixes; the end of this phase is considered "product retirement"
- Extended Life Cycle Support (ELS): an additional paid product from Red Hat; gives updates for a longer period of time; no additional releases/images
- Extended Life Phase (ELP): no more updates; limited technical support; end date not given by Red Hat

Current status of RHEL versions:
- RHEL 4: not supported by Rust; currently in ELP (no end date specified); ELS ended March 31, 2017
- RHEL 5: supported by Rust; currently in ELS; Maintenance Support ended March 31, 2017; ELS ends November 30, 2020
- RHEL 6: supported by Rust; currently in Maintenance Support 2; Maintenance Support ends November 30, 2020

cc which I think they could be related.

It may also be worth dropping support for Windows XP and Vista (especially considering that panics have been broken on XP since June 2016, see: ). Previously it was discussed . cc

Besides , which other things would we be able to clean up?

Good question! There are 6 workarounds in for older Linux versions (that I could find). Increasing the minimum version to 2.6.32 (aka the 3rd kernel LTS, aka RHEL 6) would fix 5 of them. Code links are inline:
- (mentioned above, in 2.6.23)
- ( in 2.6.24; there is also the mention of a bug occurring on "some linux kernel at some point")
- to atomically set the flag on the pipe fds ( in 2.6.27)
- ( in 2.6.27)
- to permit use of ( in 2.6.28)
- ( in 4.5, not fixed by this proposal)

As you can see, the workarounds fixed by this proposal all have a similar flavor.

I am Red Hat's maintainer for Rust on RHEL -- thanks for the cc. I try to keep an eye on new issues, but this one slipped past me. Red Hat only ships the Rust toolchain to customers for RHEL 7 and RHEL 8. If our customers would like to use Rust on older RHEL, they can do so via , and we'll support them in the same way we would for any other third-party software.
Internally we do also build and use on RHEL 6, mostly because it's needed to ship Firefox updates. This is where it gets a little hairy, because each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel. I would have to apply local workarounds if upstream Rust loses the ability to run on 2.6.18 kernels. We prefer to keep fixes upstream as much as possible, both for open sharing and to avoid bitrot. Is there much of a benefit to removing the current workarounds? I agree all of that code is annoying and cumbersome, but it's already written, and as far as I know it hasn't been a maintenance burden. Do you see otherwise? If there are any known issues with such Linux compatibility code, I am definitely willing to take assignment for fixing them.

(Not a Rust maintainer, but I'm a Rust user and maintain a lot of other OSS software, so I have well-developed feelings around supporting Very Old Software :-)) Do you have any sense of the timeline on which Rust would be able to drop support for 2.6.18 kernels without causing you pain? In general I don't think people mind supporting configurations that have users and are painful to work around, but needing to support them forever is a bitter pill to swallow! Particularly as they get harder and harder to test over time (already I have no idea how to test on really old kernels besides building it myself). So if there was an estimate "we'd like to be able to support this until Q2 2020", even if it's not set in stone, I think that would be very helpful!

The other benefit would be that Rust wouldn't have to use CentOS 5 for CI, which means we don't have to patch LLVM (and Emscripten LLVM) to compile on those systems. Of course that's also fairly limited in scope.

As soon as we stop shipping Firefox and Thunderbird updates on RHEL 6, I won't need Rust there anymore. AFAIK this does correspond to the end of Maintenance Support, November 30, 2020.
Then I'll be on to RHEL 7 builders as my minimum, probably still with some RHEL 6 2.6.32 kernels involved. It should be fine for me if we update CI to CentOS 6. This is mostly concerned with how the release binaries are linked to GLIBC symbol versions, which is all in userspace. It's a broader community question whether any other Rust users still care about running on RHEL or CentOS 5. (Small caveat - glibc support for a symbol can still return , .)

I noticed that Red Hat also has Extended Life Cycle Support (ELS) for RHEL 6 until June 30, 2024. Will you need RHEL 5 to work during this time? I don't know how ELS works with Firefox updates. Also, is there any reason RHEL 4 support wasn't an issue prior to March 31, 2017 (while RHEL 5 was still normally supported)? This issue came up for me when dealing with opening files in , see for more info. No single RHEL 5 issue is that bad; it's mainly just the sum of a bunch of tiny issues.

I'm not on that team, but AFAIK we don't ship Firefox updates for ELS. The last build I see for RHEL 5 was . Maybe there could be an exception for a severe security issue, but I really doubt we'd rebase to newer Firefox for that, which means Rust requirements won't change. New Firefox ESR versions do require a newer Rust toolchain too, which is generally why we have to keep up. Otherwise we could just freeze some older compat rustc while upstream moves on. Rust wasn't required until Firefox 52.

That makes perfect sense to me, thanks for clarifying. So the proposed policy would be: support RHEL until RHEL is retired (i.e. ends normal support). This would mean:
- supporting RHEL 5 (v2.6.18) until November 30, 2020
- supporting RHEL 6 (v2.6.32) until June 30, 2024
- supporting RHEL 7 (v3.10) until May 2029

This is a much longer support tail than any other Linux distro (that I know of), so it would also be the effective minimum kernel version of , , and their dependencies. How does that sound to people?
EDIT: The alternative to this would be Support RHEL until it is retired, taking the table above and incrementing the RHEL versions by 1. It would also mean being able to drop RHEL 5 support now.

Sure, that's ideal for me. :smile: As mentioned, is just needed for kernel support, as far as I'm concerned. Userspace concerns like GLIBC symbol versions can stick to currently supported for my needs.

Red Hat should backport the features we need (CLOEXEC and getrandom, in particular) to whatever kernels it wants Rust to support. I don't know any good reason why they don't do so, other than that it's cheaper to convince the whole world to keep supporting older kernel versions than it is to do the backports. We should change that dynamic.

The alternative to this would be Support RHEL N until it is retired -- I think we should not officially support retired OS versions (well, maybe with some grace period), so I prefer this option. Supporting 14-year-old kernels seems a bit too much to me.

each RHEL N is bootstrapped on RHEL N-1 and often kept that way -- meaning a lot of our RHEL 6 builders are still running on a RHEL 5 kernel

Is it possible to apply kernel patches to those builders to add support for functionality like CLOEXEC? Or build Firefox separately on RHEL 6?

I think it won't be fruitful for us to debate the merits of stable enterprise kernels, but no, this is not a possibility. An alternative take is "Red Hat should be responsible for the work in Rust to maintain the support they want" -- here I am, ready and willing. I definitely can't change those kernels. The ones that are stuck on N-1 are precisely to avoid rocking the boat, and backporting features is a big change. Some of our arches do eventually update the builders to their matching N kernel, but some don't, and I don't know all the reasons. If the workarounds are removed, then I will have to reapply them in our own builds.
This assumes I keep using only our own binaries -- if I ever have to re-bootstrap from upstream binaries for stage0, I would also have to use some interception (like ) to hack in the workarounds. This is all possible, but much worse than the status quo of maintaining support here.\nI think this sounds reasonable. We aren't going to be able to update these kernels, it's just a question of when we would drop support for these OSes. Especially given that we wont need RHEL 5 CI support, I think that leaving things as they are is fine. I opened this issue to verify two things: 1) That we weren't intending on supporting RHEL 5 \"forever\", and had a clear date when to drop support. 2) That someone from Red Hat actively cared about supporting these older kernels. It looks like both these things are true. We should remove RHEL 5 workarounds after November 30, 2020. This issue can be postponed until then. EDIT: For our use case in , we were able to add compatibility for RHEL 5 .\nSee for a recent case where supporting such ancient kernels required extra manual work ( to the 2 lines mentioned by above). I am not sure how often that comes up, but probably the hardest part is to even notice that it happens -- and even then those codepaths will likely linger untested.\nYes, but this will still be true in general. If the CI kernel is newer than whatever kernel baseline we choose, it will be possible for newer syscalls to pass undetected. I don't know if we have any control over that -- can we choose a particular VM image? So that leaves us with a stated policy of support. The more that developers are aware of such issues, the better, but I'm the one most likely to notice when my build actually fails. If I come back with a fix, I at least need the project to be receptive. I haven't been in the habit of building nightly on our RHEL6 builders, but maybe I should! Catching this early is better for everyone involved...\nThat seems reasonable. 
How quick is the Rust-related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5-specific concern; I just don't know how Rust tests these types of resource leak issues in general.

About 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here.

Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-7f2daa7f707e5e1ba91f4cad7dffef675dc62e149b88dab1724e7df6a36bb7b3", "text": "
cd ..
rm -rf curl-build
rm -rf curl-$VERSION
yum erase -y curl
", "commid": "rust_pr_74163"}], "negative_passages": []}
"positive_passages": [{"docid": "doc-en-rust-803836717353ca6b73f111855c5accd5b4b473766988ba4367ddf95d70ed2ec7", "text": "
cd ..
rm -rf gcc-build
rm -rf gcc-$GCC
yum erase -y gcc gcc-c++ binutils
", "commid": "rust_pr_74163"}], "negative_passages": []}
How quick is the Rust related bootstrapping? Could it be run automatically (or more frequently)? This is an interesting point. If I hadn't made that change, would have kept working on RHEL 5, it just would have leaked an FD. I'm not sure how we would even test for this sort of thing. This isn't a RHEL 5 specific concern, I just don't know how Rust tests these types of resource leak issues in general.\nAbout 2 hours. Automation is harder, since I believe our build system needs my credentials, but I'll look into what is possible here. Yeah, that's a lot more subtle than an !", "positive_passages": [{"docid": "doc-en-rust-a2758b9b1e22620c729fe6690090f83f342f595604fa09e3d46d6773f68ed07e", "text": "ulimit -c unlimited fi # There was a bad interaction between \"old\" 32-bit binaries on current 64-bit # kernels with selinux enabled, where ASLR mmap would sometimes choose a low # address and then block it for being below `vm.mmap_min_addr` -> `EACCES`. # This is probably a kernel bug, but setting `ulimit -Hs` works around it. # See also `dist-i686-linux` where this setting is enabled. if [ \"$SET_HARD_RLIMIT_STACK\" = \"1\" ]; then rlimit_stack=$(ulimit -Ss) if [ \"$rlimit_stack\" != \"\" ]; then ulimit -Hs \"$rlimit_stack\" fi fi ci_dir=`cd $(dirname $0) && pwd` source \"$ci_dir/shared.sh\"", "commid": "rust_pr_74163"}], "negative_passages": []} {"query_id": "q-en-rust-057bf056c4b8388e5fa16727b81b3be344503fb40beb800f6b139d8eab71d2af", "query": "Found with the help of .\nThere's two slicing operations in . I'm trying to figure out how a non-boundary index could be appearing in there... maybe is calling with a span that begins at index 10? (where does this come from? ?) Building a debug compiler to check those log statements... Edit: Building the debug compiler was a bust. Embarassingly, things seem to have changed and I don't know how to get those debug macros to fire in the current compiler.\ndo you mean you were using and not seeing debug output? 
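The CLOEXEC concern above (a feature the old kernels lack, worked around at the cost of a possible fd leak) is usually handled with a detect-once-and-cache pattern. The sketch below is hypothetical and not the actual std implementation: the syscall is simulated by a plain function, and `open_cloexec`, `open_compat`, and the fd value `3` are all stand-ins for illustration.

```rust
use std::sync::atomic::{AtomicU8, Ordering};

const UNKNOWN: u8 = 0;
const SUPPORTED: u8 = 1;
const UNSUPPORTED: u8 = 2;

// Cached answer to "does this kernel support O_CLOEXEC?", probed once.
static HAS_CLOEXEC: AtomicU8 = AtomicU8::new(UNKNOWN);

// Stand-in for the raw syscall: kernels before 2.6.23 reject the flag,
// modelled here as EINVAL (22).
fn open_cloexec(kernel_has_it: bool) -> Result<u32, i32> {
    if kernel_has_it { Ok(3) } else { Err(22) }
}

// Try the modern path until it is known to fail; afterwards take the
// racy open-then-fcntl fallback, which is where the fd-leak window
// discussed in the thread comes from.
fn open_compat(kernel_has_it: bool) -> u32 {
    if HAS_CLOEXEC.load(Ordering::Relaxed) != UNSUPPORTED {
        match open_cloexec(kernel_has_it) {
            Ok(fd) => {
                HAS_CLOEXEC.store(SUPPORTED, Ordering::Relaxed);
                return fd;
            }
            Err(_) => HAS_CLOEXEC.store(UNSUPPORTED, Ordering::Relaxed),
        }
    }
    // plain open(2) followed by fcntl(FD_CLOEXEC) would go here
    3
}

fn main() {
    assert_eq!(open_compat(false), 3);
    assert_eq!(HAS_CLOEXEC.load(Ordering::Relaxed), UNSUPPORTED);
    // subsequent calls skip the failed probe and go straight to the fallback
    assert_eq!(open_compat(false), 3);
}
```

The cache is exactly why such codepaths linger untested: on a CI kernel the probe succeeds once and the fallback branch never runs again.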
If so, that is because the environment variable name under was changed to . (I make this mistake pretty much every day due to muscle memory.) As an example, here is the tail of my log output:\ntriage: P-medium. Removing nomination.\nThis was broken between 1.15 and 1.16, but those are so old and the nature of the ICE itself has changed from 1.28 to 1.29, so I do not think bisection would be worthwhile.\n // ignore-tidy-trailing-newlines // error-pattern: aborting due to 3 previous errors y![ \u03e4, No newline at end of file", "commid": "rust_pr_66429"}], "negative_passages": []} {"query_id": "q-en-rust-057bf056c4b8388e5fa16727b81b3be344503fb40beb800f6b139d8eab71d2af", "query": "Found with the help of .\nThere's two slicing operations in . I'm trying to figure out how a non-boundary index could be appearing in there... maybe is calling with a span that begins at index 10? (where does this come from? ?) Building a debug compiler to check those log statements... Edit: Building the debug compiler was a bust. Embarassingly, things seem to have changed and I don't know how to get those debug macros to fire in the current compiler.\ndo you mean you were using and not seeing debug output? If so, that is because the environment variable name under was changed to . (I make this mistake pretty much every day due to muscle memory.) As an example, here is the tail of my log output:\ntriage: P-medium. 
Removing nomination.\nThis was broken between 1.15 and 1.16, but those are so old and the nature of the ICE itself has changed from 1.28 to 1.29, so I do not think bisection would be worthwhile.\n error: this file contains an un-closed delimiter --> $DIR/issue-62524.rs:4:3 | LL | y![ | - un-closed delimiter LL | \u03e4, | ^ error: macros that expand to items must be delimited with braces or followed by a semicolon --> $DIR/issue-62524.rs:3:3 | LL | y![ | ___^ LL | | \u03e4, | |__^ | help: change the delimiters to curly braces | LL | y!{ LL | \u03e4} | help: add a semicolon | LL | \u03e4,; | ^ error: cannot find macro `y` in this scope --> $DIR/issue-62524.rs:3:1 | LL | y![ | ^ error: aborting due to 3 previous errors ", "commid": "rust_pr_66429"}], "negative_passages": []} {"query_id": "q-en-rust-a25e4cd50ff77c3a965f8e56ea92fe717b1615e3675ac5574f9bd753fc09bda1", "query": "The godbolt link: I think this snippet should just return : The generated asm with :\nI think there are 2 issues: is not const fn is not const fn See this modified example:\nI may misunderstand but in the C version, does clang require similar conditions to optimize code? modify labels: A-codegen A-const-eval\nThose are preprocessor directives and are handled at compile time so their Rust equivalent would be:\nstates that macro:\nGood point, those two are equal: I don't know enough about const and I could be totally wrong but I think in the first case expands to something like: It cannot make constant because is not yet complete.\nInteresting, If I look at MIR output, I think rustc already have enough information: // Ensure the asm for array comparisons is properly optimized. 
//@ compile-flags: -C opt-level=2 #![crate_type = \"lib\"] // CHECK-LABEL: @compare // CHECK: start: // CHECK-NEXT: ret i1 true #[no_mangle] pub fn compare() -> bool { let bytes = 12.5f32.to_ne_bytes(); bytes == if cfg!(target_endian = \"big\") { [0x41, 0x48, 0x00, 0x00] } else { [0x00, 0x00, 0x48, 0x41] } } ", "commid": "rust_pr_125298"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: ty: const_var.ty, ty: self.fold_ty(const_var.ty), } ) }", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: // revisions:rpass1 #![feature(const_generics)] struct Struct(T); impl Struct<[T; N]> { fn f() {} fn g() { Self::f(); } } fn main() { Struct::<[u32; 3]>::g(); } ", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: // revisions:rpass1 #![feature(const_generics)] struct FakeArray(T); impl FakeArray { fn len(&self) -> usize { N } } fn main() { let fa = FakeArray::(1); assert_eq!(fa.len(), 32); } ", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: // revisions:cfail1 #![feature(const_generics)] //[cfail1]~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash struct S([T; N]); fn f(x: T) -> S { panic!() 
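The equivalence argued in the thread, that the `cfg!`-based `if` and a preprocessor-style conditional are the same because `cfg!` expands to a `bool` literal at compile time, can be checked at runtime as well; this is a small sketch of the comparison from the codegen test above, runnable on stable:

```rust
// `cfg!` is folded to `true`/`false` during macro expansion, so this
// `if` is a branch on a constant that the optimizer removes entirely.
fn expected_ne_bytes() -> [u8; 4] {
    if cfg!(target_endian = "big") {
        [0x41, 0x48, 0x00, 0x00]
    } else {
        [0x00, 0x00, 0x48, 0x41]
    }
}

fn main() {
    // 12.5f32 is 0x41480000 in IEEE 754, so the native-endian byte
    // order is the only thing the branch has to account for.
    let bytes = 12.5f32.to_ne_bytes();
    assert_eq!(bytes, expected_ne_bytes());
}
```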
} fn main() { f(0u8); //[cfail1]~^ ERROR type annotations needed } ", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: // revisions:cfail1 #![feature(const_generics)] //[cfail1]~^ WARN the feature `const_generics` is incomplete and may cause the compiler to crash fn combinator() -> [T; S] {} //[cfail1]~^ ERROR mismatched types fn main() { combinator().into_iter(); //[cfail1]~^ ERROR type annotations needed } ", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b9f51e18f76b96128e0c74dfc1823cbda2a80a5358e277b4496f3fc0b9d49ccf", "query": "A clean rust project on the latest nightly rustc with just this : produces the following error on cargo check/build: // revisions:rpass1 #![feature(const_generics)] pub struct Foo([T; 0]); impl Foo { pub fn new() -> Self { Foo([]) } } fn main() { let _: Foo = Foo::new(); } ", "commid": "rust_pr_65652"}], "negative_passages": []} {"query_id": "q-en-rust-b398223f8c559d92553d524d0ab191ab805fe73852ce030490fdbd945bea1363", "query": "The following code () leads to an ICE: Broken since 1.32.0 up to the current nightly, earlier versions are not affected. bool { let attrs = self.parse_arg_attributes()?; if let Ok(Some(mut arg)) = self.parse_self_arg() { if let Some(mut arg) = self.parse_self_arg()? { arg.attrs = attrs.into(); return self.recover_bad_self_arg(arg, is_trait_item); }", "commid": "rust_pr_62668"}], "negative_passages": []} {"query_id": "q-en-rust-b398223f8c559d92553d524d0ab191ab805fe73852ce030490fdbd945bea1363", "query": "The following code () leads to an ICE: Broken since 1.32.0 up to the current nightly, earlier versions are not affected. // Regression test for issue #62660: if a receiver's type does not // successfully parse, emit the correct error instead of ICE-ing the compiler. 
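The `FakeArray` reproduction from the incremental-compilation tests above was written against the then-incomplete `const_generics` feature gate; the same shape compiles on stable today (the minimal subset of const generics was stabilized later), so no gate is needed:

```rust
// A type whose "length" lives entirely in a const generic parameter.
struct FakeArray<T, const N: usize>(T);

impl<T, const N: usize> FakeArray<T, N> {
    fn len(&self) -> usize {
        N
    }
}

fn main() {
    // N is inferred from the turbofish, not from the stored value.
    let fa = FakeArray::<u32, 32>(1);
    assert_eq!(fa.len(), 32);
}
```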
struct Foo; impl Foo { pub fn foo(_: i32, self: Box`, found `)` } fn main() {} ", "commid": "rust_pr_62668"}], "negative_passages": []} {"query_id": "q-en-rust-b398223f8c559d92553d524d0ab191ab805fe73852ce030490fdbd945bea1363", "query": "The following code () leads to an ICE: Broken since 1.32.0 up to the current nightly, earlier versions are not affected. error: expected one of `!`, `(`, `+`, `,`, `::`, `<`, or `>`, found `)` --> $DIR/issue-62660.rs:7:38 | LL | pub fn foo(_: i32, self: Box", "commid": "rust_pr_62668"}], "negative_passages": []} {"query_id": "q-en-rust-054c43f6d38c883076148f11bd7817760468641ac14a793dcadc78d44826b3eb", "query": "Reproduction code: If I re-export in the module, it works as expected, having enum in the type-namespace and variant in the value-namespace ().\nCompiler team check-in: I'm marking this as P-medium for now as it is not classified as a regression, although it seems like a fairly major problem. appears to be on it, though.\nFeel free to add the I-nominated label if you think it should be P-high\nlate last year. could you take a look at this to see if it is valid for the invariant being checked not to hold in this case? If I comment out the the output is reasonable:\nI'll see what happens (that's why I self-assigned). The assert is there to catch the cases that resolve logic assumes to be impossible. It may work ok if the assert is removed, but it may also mean some deeper issues, this needs a more detailed look.\nThe above error message without the looks a lot like and .\nbut it may also mean some deeper issues was exactly the case here! Thank you for reporting. Fixed in", "positive_passages": [{"docid": "doc-en-rust-ceb3ca0b21d1a816d65a598d7ca66a36ee278a92e10dac5d69caaea93d25cf80", "text": "/// consolidate multiple unresolved import errors into a single diagnostic. 
fn finalize_import(&mut self, import: &'b Import<'b>) -> Option { let orig_vis = import.vis.replace(ty::Visibility::Invisible); let orig_blacklisted_binding = match &import.kind { ImportKind::Single { target_bindings, .. } => { Some(mem::replace(&mut self.r.blacklisted_binding, target_bindings[TypeNS].get())) } _ => None, }; let prev_ambiguity_errors_len = self.r.ambiguity_errors.len(); let path_res = self.r.resolve_path( &import.module_path,", "commid": "rust_pr_70236"}], "negative_passages": []} {"query_id": "q-en-rust-054c43f6d38c883076148f11bd7817760468641ac14a793dcadc78d44826b3eb", "query": "Reproduction code: If I re-export in the module, it works as expected, having enum in the type-namespace and variant in the value-namespace ().\nCompiler team check-in: I'm marking this as P-medium for now as it is not classified as a regression, although it seems like a fairly major problem. appears to be on it, though.\nFeel free to add the I-nominated label if you think it should be P-high\nlate last year. could you take a look at this to see if it is valid for the invariant being checked not to hold in this case? If I comment out the the output is reasonable:\nI'll see what happens (that's why I self-assigned). The assert is there to catch the cases that resolve logic assumes to be impossible. It may work ok if the assert is removed, but it may also mean some deeper issues, this needs a more detailed look.\nThe above error message without the looks a lot like and .\nbut it may also mean some deeper issues was exactly the case here! Thank you for reporting. Fixed in", "positive_passages": [{"docid": "doc-en-rust-da38693ec3259ab1f2d5981e0519995c8ef20dc7ced19186b79d2c0b64786c1f", "text": "import.crate_lint(), ); let no_ambiguity = self.r.ambiguity_errors.len() == prev_ambiguity_errors_len; if let Some(orig_blacklisted_binding) = orig_blacklisted_binding { self.r.blacklisted_binding = orig_blacklisted_binding; } import.vis.set(orig_vis); if let PathResult::Failed { .. 
} | PathResult::NonModule(..) = path_res { // Consider erroneous imports used to avoid duplicate diagnostics.", "commid": "rust_pr_70236"}], "negative_passages": []} {"query_id": "q-en-rust-054c43f6d38c883076148f11bd7817760468641ac14a793dcadc78d44826b3eb", "query": "Reproduction code: If I re-export in the module, it works as expected, having enum in the type-namespace and variant in the value-namespace ().\nCompiler team check-in: I'm marking this as P-medium for now as it is not classified as a regression, although it seems like a fairly major problem. appears to be on it, though.\nFeel free to add the I-nominated label if you think it should be P-high\nlate last year. could you take a look at this to see if it is valid for the invariant being checked not to hold in this case? If I comment out the the output is reasonable:\nI'll see what happens (that's why I self-assigned). The assert is there to catch the cases that resolve logic assumes to be impossible. It may work ok if the assert is removed, but it may also mean some deeper issues, this needs a more detailed look.\nThe above error message without the looks a lot like and .\nbut it may also mean some deeper issues was exactly the case here! Thank you for reporting. Fixed in", "positive_passages": [{"docid": "doc-en-rust-280c5619fee8f08964e3c6c8e5481291b8dd73b73e03fff1933366810bb0c918", "text": " // check-pass mod m { pub enum Same { Same, } } use m::*; // The variant `Same` introduced by this import is not considered when resolving the prefix // `Same::` during import validation (issue #62767). use Same::Same; fn main() {} ", "commid": "rust_pr_70236"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. 
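The namespace split the report relies on, an enum in the type namespace and its variant in the value namespace, can be shown with explicit imports; this is a small sketch in the spirit of the check-pass test above (the `E`/`V` names and `make` helper are illustrative only):

```rust
mod m {
    #[derive(Debug, PartialEq)]
    pub enum E {
        V(u8),
    }
}

// Two separate imports: the enum lands in the type namespace,
// the variant in the value namespace.
use m::E;
use m::E::V;

fn make() -> E {
    V(7) // the variant is usable bare thanks to the value-namespace import
}

fn main() {
    assert_eq!(make(), E::V(7));
}
```

The bug was that during import validation the variant introduced by a glob was not considered when resolving the `Same::` prefix; with explicit paths as above the resolution is unambiguous either way.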
cc", "positive_passages": [{"docid": "doc-en-rust-addaaca558d869a75bf57596fb5c9e8af643bc0be551fe0d0a2af1863b07706a", "text": "let ret = f(self); let last_token = if self.token_cursor.stack.len() == prev { &mut self.token_cursor.frame.last_token } else if self.token_cursor.stack.get(prev).is_none() { // This can happen due to a bad interaction of two unrelated recovery mechanisms with // mismatched delimiters *and* recovery lookahead on the likely typo `pub ident(` // (#62881). return Ok((ret?, TokenStream::new(vec![]))); } else { &mut self.token_cursor.stack[prev].last_token };", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-9d5261a652d7c4ddc4433b155a078ff3ac9dce9bbd634a37c17f2f84d4eb24a8", "text": "// Pull out the tokens that we've collected from the call to `f` above. let mut collected_tokens = match *last_token { LastToken::Collecting(ref mut v) => mem::take(v), LastToken::Was(_) => panic!(\"our vector went away?\"), LastToken::Was(ref was) => { let msg = format!(\"our vector went away? - found Was({:?})\", was); debug!(\"collect_tokens: {}\", msg); self.sess.span_diagnostic.delay_span_bug(self.token.span, &msg); // This can happen due to a bad interaction of two unrelated recovery mechanisms // with mismatched delimiters *and* recovery lookahead on the likely typo // `pub ident(` (#62895, different but similar to the case above). 
return Ok((ret?, TokenStream::new(vec![]))); } }; // If we're not at EOF our current token wasn't actually consumed by", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-b18e884983d8c3d2388e4ff8230565fa2a682e9af0dfe93e21c5257476e05c5a", "text": " fn main() {} fn f() -> isize { fn f() -> isize {} pub f< //~^ ERROR missing `fn` or `struct` for function or struct definition //~| ERROR mismatched types //~ ERROR this file contains an un-closed delimiter ", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-297af6a496f40ac0241b367d2bc3d410286b5cf64a0f1d31aaa46109912b0edd", "text": " error: this file contains an un-closed delimiter --> $DIR/issue-62881.rs:6:53 | LL | fn f() -> isize { fn f() -> isize {} pub f< | - un-closed delimiter ... 
LL | | ^ error: missing `fn` or `struct` for function or struct definition --> $DIR/issue-62881.rs:3:41 | LL | fn f() -> isize { fn f() -> isize {} pub f< | ^ error[E0308]: mismatched types --> $DIR/issue-62881.rs:3:29 | LL | fn f() -> isize { fn f() -> isize {} pub f< | - ^^^^^ expected isize, found () | | | this function's body doesn't return | = note: expected type `isize` found type `()` error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. cc", "positive_passages": [{"docid": "doc-en-rust-f50bfb56418ef1acf9cd4336579b5e5f20bee9e9cb9c48d6aca7c40391f2c5fd", "text": " fn main() {} fn v() -> isize { //~ ERROR mismatched types mod _ { //~ ERROR expected identifier pub fn g() -> isizee { //~ ERROR cannot find type `isizee` in this scope mod _ { //~ ERROR expected identifier pub g() -> is //~ ERROR missing `fn` for function definition (), w20); } (), w20); //~ ERROR expected item, found `;` } ", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-29abb5ef31b1fcd0d5114ac873e6d18fa269ef369679885f469477bc9aacbc42", "query": "I'm getting an internal compiler error on the following program (found by ): I'm seeing the error on stable, beta, and nightly. As with , the error occurs on () but not on (). , the proposed fix for , does not fix this error. 
cc", "positive_passages": [{"docid": "doc-en-rust-5bb18e57c706e54c0ae914e460537ac34bafcebfca3d53348e24e9ce6ae9e7a7", "text": " error: expected identifier, found reserved identifier `_` --> $DIR/issue-62895.rs:4:5 | LL | mod _ { | ^ expected identifier, found reserved identifier error: expected identifier, found reserved identifier `_` --> $DIR/issue-62895.rs:6:5 | LL | mod _ { | ^ expected identifier, found reserved identifier error: missing `fn` for function definition --> $DIR/issue-62895.rs:7:4 | LL | pub g() -> is | ^^^^ help: add `fn` here to parse `g` as a public function | LL | pub fn g() -> is | ^^ error: expected item, found `;` --> $DIR/issue-62895.rs:10:9 | LL | (), w20); | ^ help: remove this semicolon error[E0412]: cannot find type `isizee` in this scope --> $DIR/issue-62895.rs:5:15 | LL | pub fn g() -> isizee { | ^^^^^^ help: a builtin type with a similar name exists: `isize` error[E0308]: mismatched types --> $DIR/issue-62895.rs:3:11 | LL | fn v() -> isize { | - ^^^^^ expected isize, found () | | | this function's body doesn't return | = note: expected type `isize` found type `()` error: aborting due to 6 previous errors Some errors have detailed explanations: E0308, E0412. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_62887.0"}], "negative_passages": []} {"query_id": "q-en-rust-722e1db4196169a0b89fb76519c81af5d11cff2d3152604fc46782a78ffa4cf2", "query": "This fails with: I believe this should compile because: non-async function with the same signature does compile; insignificant tweaks, like adding a wrapper struct, make it compile. Mentioning who fixed a similar error message in . Mentioning who touched this error message in and . rustc 1.38.0-nightly ( 2019-07-26) $DIR/min-choice-reject-ambiguous.rs:17:5 | LL | type_test::<'_, T>() // This should pass if we pick 'b. | ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... 
| LL | T: 'b + 'a, | ++++ error[E0309]: the parameter type `T` may not live long enough --> $DIR/min-choice-reject-ambiguous.rs:28:5 | LL | type_test::<'_, T>() // This should pass if we pick 'c. | ^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds | help: consider adding an explicit lifetime bound... | LL | T: 'c + 'a, | ++++ error[E0700]: hidden type for `impl Cap<'b> + Cap<'c>` captures lifetime that does not appear in bounds --> $DIR/min-choice-reject-ambiguous.rs:39:5 | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> | -- hidden type `&'a u8` captures the lifetime `'a` as defined here ... LL | s | ^ | help: to declare that `impl Cap<'b> + Cap<'c>` captures `'a`, you can add an explicit `'a` lifetime bound | LL | fn test_ambiguous<'a, 'b, 'c>(s: &'a u8) -> impl Cap<'b> + Cap<'c> + 'a | ++++ error: aborting due to 3 previous errors Some errors have detailed explanations: E0309, E0700. For more information about an error, try `rustc --explain E0309`. ", "commid": "rust_pr_105300.0"}], "negative_passages": []} {"query_id": "q-en-rust-722e1db4196169a0b89fb76519c81af5d11cff2d3152604fc46782a78ffa4cf2", "query": "This fails with: I believe this should compile because: non-async function with the same signature does compile; insignificant tweaks, like adding a wrapper struct, make it compile. Mentioning who fixed a similar error message in . Mentioning who touched this error message in and . rustc 1.38.0-nightly ( 2019-07-26) $DIR/nested-impl-trait-fail.rs:17:5 | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> | -- hidden type `[&'s u8; 1]` captures the lifetime `'s` as defined here ... 
LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> + 's | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b> + 's> | ++++ error[E0700]: hidden type for `impl Cap<'a> + Cap<'b>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:17:5 | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> | -- hidden type `&'s u8` captures the lifetime `'s` as defined here ... LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b>> + 's | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | fn fail_early_bound<'s, 'a, 'b>(a: &'s u8) -> impl IntoIterator + Cap<'b> + 's> | ++++ error[E0700]: hidden type for `impl IntoIterator + Cap<'b>>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:28:5 | LL | fn fail_late_bound<'s, 'a, 'b>( | -- hidden type `[&'s u8; 1]` captures the lifetime `'s` as defined here ... 
LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b>> + 's { | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b> + 's> { | ++++ error[E0700]: hidden type for `impl Cap<'a> + Cap<'b>` captures lifetime that does not appear in bounds --> $DIR/nested-impl-trait-fail.rs:28:5 | LL | fn fail_late_bound<'s, 'a, 'b>( | -- hidden type `&'s u8` captures the lifetime `'s` as defined here ... LL | [a] | ^^^ | help: to declare that `impl IntoIterator + Cap<'b>>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b>> + 's { | ++++ help: to declare that `impl Cap<'a> + Cap<'b>` captures `'s`, you can add an explicit `'s` lifetime bound | LL | ) -> impl IntoIterator + Cap<'b> + 's> { | ++++ error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0700`. ", "commid": "rust_pr_105300.0"}], "negative_passages": []} {"query_id": "q-en-rust-722e1db4196169a0b89fb76519c81af5d11cff2d3152604fc46782a78ffa4cf2", "query": "This fails with: I believe this should compile because: non-async function with the same signature does compile; insignificant tweaks, like adding a wrapper struct, make it compile. Mentioning who fixed a similar error message in . Mentioning who touched this error message in and . 
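The E0700 help text above suggests adding an explicit `+ 's` bound so the opaque type is allowed to capture the argument's lifetime; a minimal sketch of the accepted form (with `Cap` given a blanket impl purely for illustration):

```rust
trait Cap<'a> {}
impl<'x, T> Cap<'x> for T {}

// With `+ 's` in the bounds, the hidden type `&'s u8` may capture `'s`,
// which is exactly the fix the diagnostic proposes.
fn with_bound<'s, 'a, 'b>(a: &'s u8) -> impl Cap<'a> + Cap<'b> + 's {
    a
}

fn main() {
    let x = 5u8;
    let _captured = with_bound(&x);
    assert_eq!(x, 5);
}
```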
rustc 1.38.0-nightly ( 2019-07-26) $DIR/never-assign-dead-code.rs:10:5 | LL | drop(x); | ^^^^^^^ | ^^^^ warning: unused variable: `x` --> $DIR/never-assign-dead-code.rs:9:9", "commid": "rust_pr_64229.0"}], "negative_passages": []} {"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! 
As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-e29d1e113ab3d9a8f550edde2789ae7f7c5c77b872b47c1f65d5d4019ce6b57a", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression error: unreachable call --> $DIR/expr_call.rs:18:5 | LL | bar(return); | ^^^^^^^^^^^ | ^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []} {"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! 
As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-933b9225efc0e63a787f41ecd163e29614f96103b01e94685a82c6b61b1d260f", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression --> $DIR/expr_method.rs:21:5 error: unreachable call --> $DIR/expr_method.rs:21:9 | LL | Foo.bar(return); | ^^^^^^^^^^^^^^^ | ^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []} {"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression however the code is clearly executed as can be see If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! 
As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-94f1bf0d0604a90d685d242863a9e8fb530539a5cabfe110ada13306ebe2c8ee", "text": "get_u8()); //~ ERROR unreachable expression } fn diverge_second() { call( //~ ERROR unreachable expression call( //~ ERROR unreachable call get_u8(), diverge()); }", "commid": "rust_pr_64229.0"}], "negative_passages": []} {"query_id": "q-en-rust-2fb9fa6469da854d8149cb563726f531120bd823a98098e8e8653f0da72a17ee", "query": "rustc warns that the entire line is an unreachable expression; however, the code is clearly executed, as can be seen. If the line was truly unreachable, there would be no panic... This bug appears on stable, beta and nightly\nThe is the unreachable part, so perhaps the error should be rephrased and pointed at that function call only\nHmm, so the span is wrong?\nthe order of execution is panic! and then everything else, so in some sense everything but the panic is unreachable. I'm not sure what a good span for that would be.\nThe method receiver is evaluated first, so the and calls still happen. The span can point just at the .\nAh, hm, I didn't expect that to be the case, but it makes sense. Then this seems relatively straightforward as to what the expected output is at least.\nI feel like this should also add a note that the panic is called before everything else. It could also suggest but that seems more like a clippy lint.\nSimilar case: Maybe we can simply add a note that won't be called because evaluation of its arguments leaves the current code path. I agree that suggestion is indeed too specific and may be better suited to clippy. If this is acceptable, I would like to pick it up! 
As long as it's fine for me to progress more slowly than normal\nGitHub wasn't clever enough to read that the PR said \"might close\" instead of \"close.\" The PR seems to imply more work can be done on this.", "positive_passages": [{"docid": "doc-en-rust-2a93dd5fd8cacc966c9cee4f3dde5081d83faf170c362c5d5604ed7e0a43e4bf", "text": "LL | #![deny(unreachable_code)] | ^^^^^^^^^^^^^^^^ error: unreachable expression error: unreachable call --> $DIR/unreachable-in-call.rs:17:5 | LL | / call( LL | | get_u8(), LL | | diverge()); | |__________________^ LL | call( | ^^^^ error: aborting due to 2 previous errors", "commid": "rust_pr_64229.0"}], "negative_passages": []} {"query_id": "q-en-rust-39d5c4547e08ffd8f1589ccb012434bfb30da4436d533b9615a437af1b51d94e", "query": "Because is the 's default for application-level logging and should not affect the output of the tools.\neh, it's only recently we've shifted away from using it for tools -- but seems like the trend is definitely to do so. I've filed", "positive_passages": [{"docid": "doc-en-rust-ddcbce222ab4254d3ba333480e78d37ae91ebe4f4f557ffbbbe5e44a92a5f1fe", "text": "32_000_000 // 32MB on other platforms }; rustc_driver::set_sigpipe_handler(); env_logger::init(); env_logger::init_from_env(\"RUSTDOC_LOG\"); let res = std::thread::Builder::new().stack_size(thread_stack_size).spawn(move || { get_args().map(|args| main_args(&args)).unwrap_or(1) }).unwrap().join().unwrap_or(rustc_driver::EXIT_FAILURE);", "commid": "rust_pr_64329.0"}], "negative_passages": []} {"query_id": "q-en-rust-49aa8ef8a4c00751dcefcd82fd43b142aa53459c3860af9e25ad919ff50ec42c", "query": "I encountered this as well. What project are you trying to build? My theory is that it tries to run something like (in ?) and then fails to parse the output which is a mix of and ...\nWay earlier, it runs it right at the start of a compilation to get the rustc version and enabled . 
Probably\nCould someone use to narrow this down to a particular commit?\nIt looks to be caused by (I would assume it is in that rollup). It is now printing the time-passes information even for .\nwould you be up for a fix here?\nI'll take a look on Monday. In the meantime, here's my best guess: I added an extra entry to the output that prints the total time for the compilation. That extra line may not be indented at all, and it's possible this means it isn't ignored the same way as indented lines.\nMy theory about indentation was wrong. It was indeed a bad interaction between and , as others suggested, because they both print to . has a fix.", "positive_passages": [{"docid": "doc-en-rust-45d0fbb24e7bfa008c301977c095de37ff013766467f0d2eb1d892c150ae234d", "text": "impl Callbacks for TimePassesCallbacks { fn config(&mut self, config: &mut interface::Config) { // If a --prints=... option has been given, we don't print the \"total\" // time because it will mess up the --prints output. See #64339. self.time_passes = config.opts.debugging_opts.time_passes || config.opts.debugging_opts.time; config.opts.prints.is_empty() && (config.opts.debugging_opts.time_passes || config.opts.debugging_opts.time); } }", "commid": "rust_pr_64497.0"}], "negative_passages": []} {"query_id": "q-en-rust-d72d9e96e93ba2b534c46a6283c2b104602764cc53d6d1b34be98bc927d23f45", "query": "Hello, I ran into a problem using lifetimes with async struct methods; here is the example: struct Foo<'a> { swag: &'a i32 } impl Foo<'_> { async fn bar(&self) -> i32 { 1337 } } The error is: '_ Everything is fine if I use explicit lifetime like . Version: 1.39 Nightly (2019-09-19 on playground)\nThis is happening because the desugaring of uses the lifetime. This is sort of unfortunate, but seems backwards compatible to fix, so I don't think we should block on this.\nWe decided to focus on this one in the WG meeting -- you still interested in working on it?\nYes, I'm still interested in working on this. 
I don't think that this should be too hard to fix though if someone else wants it.", "positive_passages": [{"docid": "doc-en-rust-b9e460e2f7cb2e779a1464bed27ff203be96a2bfbea5188fc46e0a5eecd8aba1", "text": "/// header, we convert it to an in-band lifetime. fn collect_fresh_in_band_lifetime(&mut self, span: Span) -> ParamName { assert!(self.is_collecting_in_band_lifetimes); let index = self.lifetimes_to_define.len(); let index = self.lifetimes_to_define.len() + self.in_scope_lifetimes.len(); let hir_name = ParamName::Fresh(index); self.lifetimes_to_define.push((span, hir_name)); hir_name", "commid": "rust_pr_65142.0"}], "negative_passages": []} {"query_id": "q-en-rust-d72d9e96e93ba2b534c46a6283c2b104602764cc53d6d1b34be98bc927d23f45", "query": "Hello, I ran into a problem using lifetimes with async struct methods; here is the example: struct Foo<'a> { swag: &'a i32 } impl Foo<'_> { async fn bar(&self) -> i32 { 1337 } } The error is: '_ Everything is fine if I use explicit lifetime like . Version: 1.39 Nightly (2019-09-19 on playground)\nThis is happening because the desugaring of uses the lifetime. This is sort of unfortunate, but seems backwards compatible to fix, so I don't think we should block on this.\nWe decided to focus on this one in the WG meeting -- you still interested in working on it?\nYes, I'm still interested in working on this. I don't think that this should be too hard to fix though if someone else wants it.", "positive_passages": [{"docid": "doc-en-rust-c1f87b4a6af9fb431b397ed5d7e997a8f025880a4cbba5b68dbf1384d52b60c9", "text": " // check-pass // Check that the anonymous lifetimes used here aren't considered to shadow one // another. Note that `async fn` is different to `fn` here because the lifetimes // are numbered by HIR lowering, rather than lifetime resolution. 
// edition:2018 struct A<'a, 'b>(&'a &'b i32); struct B<'a>(&'a i32); impl A<'_, '_> { async fn assoc(x: &u32, y: B<'_>) { async fn nested(x: &u32, y: A<'_, '_>) {} } async fn assoc2(x: &u32, y: A<'_, '_>) { impl A<'_, '_> { async fn nested_assoc(x: &u32, y: B<'_>) {} } } } fn main() {} ", "commid": "rust_pr_65142.0"}], "negative_passages": []} {"query_id": "q-en-rust-bef42648396272ed79e1a35ec64e2f10da9948c07ca00717f877741142cc6db4", "query": "If you have a of utf8 data there's three ways to get a : There doesn't appear to be any way to hand over the existing and have any necessary character changes into be done in place_. The type doesn't, for example, have a way to just do a lossy in-place conversion.\n- I would like to implement this (I have no contributions yet in the project). May i please claim the request?\nI think that normally the Libs team is given a chance to comment on the issue (it's only been 3 hours), but the worst that can happen is that they reject your PR. All I can say for sure is that I will not be implementing this particular patch myself.\nThanks for letting me know, I will wait until someone from the lib team will respond.\nSeems like the right API here is actually , since I believe there is never a need to allocate more bytes. Unfortunately I guess we'd need two APIs for this to be perfect, one on String as well, since there's no safe way to get from running this function to a of the bytes, right? The only safe way would be to do which is guaranteed to succeed but is costly...\n1) I don't want slice to slice. I definitely want owned to owned. I'm not sure why you'd think that slice to slice is somehow more correct. My goal is that whenever possible no additional allocations will immediately happen during the conversion (obviously this isn't always possible). 
2) You definitely can have byte sequences where the lossy conversion into utf8 makes the sequence longer and thus could hit the capacity of the allocation and thus could trigger a reallocation. Trivial example: becomes , which is longer.\nHm, I had thought that the lossy conversion always replaced a single byte with a single byte, if that's not the case then the slice case is indeed not all that helpful probably. If this was true then the slice case would be good to have as a more general form, though.\nSo we want this signature: , right? That seems like it can be via a PR relatively easily.\nThat is one option. Another would be to add a method to so that you can consume that error into the lossy String. The value here is not having to start over on the utf8 parsing.\nThere was a post with possible designs on internals:\nI hope gets soon :)\nI prefer this approach for a couple of reasons. It signals whether or not the conversion was lossless. Whereas if we went with the signature , some pairs of inputs collide, like the pair and , or the pair and . Also, it will deter people from checking for the presence of to tell if something went wrong. This is a footgun I've seen in the wild. And even for users who \"just want a string\" and don't care about the error, you can still reasonably get that in single readable line:", "positive_passages": [{"docid": "doc-en-rust-74588767088f8b655322e6c35c64a42398e500c459a6917f6783e3cb3785e54a", "text": "Cow::Owned(res) } /// Converts a [`Vec`] to a `String`, substituting invalid UTF-8 /// sequences with replacement characters. /// /// See [`from_utf8_lossy`] for more details. /// /// [`from_utf8_lossy`]: String::from_utf8_lossy /// /// Note that this function does not guarantee reuse of the original `Vec` /// allocation. 
/// /// # Examples /// /// Basic usage: /// /// ``` /// #![feature(string_from_utf8_lossy_owned)] /// // some bytes, in a vector /// let sparkle_heart = vec![240, 159, 146, 150]; /// /// let sparkle_heart = String::from_utf8_lossy_owned(sparkle_heart); /// /// assert_eq!(String::from(\"\ud83d\udc96\"), sparkle_heart); /// ``` /// /// Incorrect bytes: /// /// ``` /// #![feature(string_from_utf8_lossy_owned)] /// // some invalid bytes /// let input: Vec = b\"Hello xF0x90x80World\".into(); /// let output = String::from_utf8_lossy_owned(input); /// /// assert_eq!(String::from(\"Hello \ufffdWorld\"), output); /// ``` #[must_use] #[cfg(not(no_global_oom_handling))] #[unstable(feature = \"string_from_utf8_lossy_owned\", issue = \"129436\")] pub fn from_utf8_lossy_owned(v: Vec) -> String { if let Cow::Owned(string) = String::from_utf8_lossy(&v) { string } else { // SAFETY: `String::from_utf8_lossy`'s contract ensures that if // it returns a `Cow::Borrowed`, it is a valid UTF-8 string. // Otherwise, it returns a new allocation of an owned `String`, with // replacement characters for invalid sequences, which is returned // above. unsafe { String::from_utf8_unchecked(v) } } } /// Decode a UTF-16\u2013encoded vector `v` into a `String`, returning [`Err`] /// if `v` contains any invalid data. ///", "commid": "rust_pr_129439.0"}], "negative_passages": []} {"query_id": "q-en-rust-bef42648396272ed79e1a35ec64e2f10da9948c07ca00717f877741142cc6db4", "query": "If you have a of utf8 data there's three ways to get a : There doesn't appear to be any way to hand over the existing and have any necessary character changes into be done in place_. The type doesn't, for example, have a way to just do a lossy in-place conversion.\n- I would like to implement this (I have no contributions yet in the project). 
May I please claim the request?\nI think that normally the Libs team is given a chance to comment on the issue (it's only been 3 hours), but the worst that can happen is that they reject your PR. All I can say for sure is that I will not be implementing this particular patch myself.\nThanks for letting me know, I will wait until someone from the lib team responds.\nSeems like the right API here is actually , since I believe there is never a need to allocate more bytes. Unfortunately I guess we'd need two APIs for this to be perfect, one on String as well, since there's no safe way to get from running this function to a of the bytes, right? The only safe way would be to do which is guaranteed to succeed but is costly...\n1) I don't want slice to slice. I definitely want owned to owned. I'm not sure why you'd think that slice to slice is somehow more correct. My goal is that whenever possible no additional allocations will immediately happen during the conversion (obviously this isn't always possible). 2) You definitely can have byte sequences where the lossy conversion into utf8 makes the sequence longer and thus could hit the capacity of the allocation and thus could trigger a reallocation. Trivial example: becomes , which is longer.\nHm, I had thought that the lossy conversion always replaced a single byte with a single byte; if that's not the case then the slice case is indeed not all that helpful probably. If this was true then the slice case would be good to have as a more general form, though.\nSo we want this signature: , right? That seems like it can be done via a PR relatively easily.\nThat is one option. Another would be to add a method to so that you can consume that error into the lossy String. The value here is not having to start over on the utf8 parsing.\nThere was a post with possible designs on internals:\nI hope gets soon :)\nI prefer this approach for a couple of reasons. It signals whether or not the conversion was lossless. 
Whereas if we went with the signature , some pairs of inputs collide, like the pair and , or the pair and . Also, it will deter people from checking for the presence of to tell if something went wrong. This is a footgun I've seen in the wild. And even for users who \"just want a string\" and don't care about the error, you can still reasonably get that in single readable line:", "positive_passages": [{"docid": "doc-en-rust-07e0dc6d7eb12a3f1bdc6c78e714b141488395ba01f120491887c4dfca5b3860", "text": "&self.bytes[..] } /// Converts the bytes into a `String` lossily, substituting invalid UTF-8 /// sequences with replacement characters. /// /// See [`String::from_utf8_lossy`] for more details on replacement of /// invalid sequences, and [`String::from_utf8_lossy_owned`] for the /// `String` function which corresponds to this function. /// /// # Examples /// /// ``` /// #![feature(string_from_utf8_lossy_owned)] /// // some invalid bytes /// let input: Vec = b\"Hello xF0x90x80World\".into(); /// let output = String::from_utf8(input).unwrap_or_else(|e| e.into_utf8_lossy()); /// /// assert_eq!(String::from(\"Hello \ufffdWorld\"), output); /// ``` #[must_use] #[cfg(not(no_global_oom_handling))] #[unstable(feature = \"string_from_utf8_lossy_owned\", issue = \"129436\")] pub fn into_utf8_lossy(self) -> String { String::from_utf8_lossy_owned(self.bytes) } /// Returns the bytes that were attempted to convert to a `String`. /// /// This method is carefully constructed to avoid allocation. It will", "commid": "rust_pr_129439.0"}], "negative_passages": []} {"query_id": "q-en-rust-fcbe98ea5f88f4e1815b1c9de18ab365c7de07a1ba01ba69212c0b6dc6ec3ad4", "query": "Godbolt link: So I expect these two functions has the same assembly: But the result is not:\nmodify labels: -C-bug -I-slow T-compiler\nHey I tried solving this but wasn't really sure how to. I believe the implementation to the LLVM generation is right here. Hope someone can solve this!\nTurn out I was wrong! 
The is working correctly (i.e. those two functions are not equivalent): I would add some doctests to show that and close this issue.", "positive_passages": [{"docid": "doc-en-rust-94c19b4cdf05af99a8f051d5845b70905f203aef446bf9fdbe364daedfe69d16", "text": "``` \", $Feature, \"assert_eq!(100\", stringify!($SelfT), \".saturating_add(1), 101); assert_eq!(\", stringify!($SelfT), \"::max_value().saturating_add(100), \", stringify!($SelfT), \"::max_value());\", \"::max_value()); assert_eq!(\", stringify!($SelfT), \"::min_value().saturating_add(-1), \", stringify!($SelfT), \"::min_value());\", $EndFeature, \" ```\"),", "commid": "rust_pr_64943.0"}], "negative_passages": []} {"query_id": "q-en-rust-fcbe98ea5f88f4e1815b1c9de18ab365c7de07a1ba01ba69212c0b6dc6ec3ad4", "query": "Godbolt link: So I expect these two functions has the same assembly: But the result is not:\nmodify labels: -C-bug -I-slow T-compiler\nHey I tried solving this but wasn't really sure how to. I believe the implementation to the LLVM generation is right here. Hope someone can solve this!\nTurn out I was wrong! The is working correctly (i.e. those two functions are not equivalent): I would add some doctests to show that and close this issue.", "positive_passages": [{"docid": "doc-en-rust-d540cafe3d99a996bc156d5b345801078c0b04360daf7a763e5bbc06d9a69225", "text": "} } doc_comment! { concat!(\"Saturating integer subtraction. Computes `self - rhs`, saturating at the numeric bounds instead of overflowing.", "commid": "rust_pr_64943.0"}], "negative_passages": []} {"query_id": "q-en-rust-fcbe98ea5f88f4e1815b1c9de18ab365c7de07a1ba01ba69212c0b6dc6ec3ad4", "query": "Godbolt link: So I expect these two functions has the same assembly: But the result is not:\nmodify labels: -C-bug -I-slow T-compiler\nHey I tried solving this but wasn't really sure how to. I believe the implementation to the LLVM generation is right here. Hope someone can solve this!\nTurn out I was wrong! The is working correctly (i.e. 
those two functions are not equivalent): I would add some doctests to show that and close this issue.", "positive_passages": [{"docid": "doc-en-rust-08c47cd6bce9687f2c8613ac40d3db3de3b82fad0a610d51bc2189aaa1c0373e", "text": "``` \", $Feature, \"assert_eq!(100\", stringify!($SelfT), \".saturating_sub(127), -27); assert_eq!(\", stringify!($SelfT), \"::min_value().saturating_sub(100), \", stringify!($SelfT), \"::min_value());\", \"::min_value()); assert_eq!(\", stringify!($SelfT), \"::max_value().saturating_sub(-1), \", stringify!($SelfT), \"::max_value());\", $EndFeature, \" ```\"), #[stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_64943.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-d327a68663d4bffa5b0661d90859b574aa36645fc1d080166a0c1262120befd0", "text": "} else { \"items from traits can only be used if the trait is implemented and in scope\" }); let mut msg = format!( let message = |action| format!( \"the following {traits_define} an item `{name}`, perhaps you need to {action} {one_of_them}:\", traits_define = if candidates.len() == 1 {", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. 
It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-ba833a948cf08c1014b53d2988926f7e3123fbcad7da37bfa8eed7cedc447342", "text": "} else { \"traits define\" }, action = if let Some(param) = param_type { format!(\"restrict type parameter `{}` with\", param) } else { \"implement\".to_string() }, action = action, one_of_them = if candidates.len() == 1 { \"it\" } else {", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-15277cc96da04b9002d17734570ab6610639f7b0e928fe7eda3bd3add7d5ba31", "text": "// Get the `hir::Param` to verify whether it already has any bounds. // We do this to avoid suggesting code that ends up as `T: FooBar`, // instead we suggest `T: Foo + Bar` in that case. let mut has_bounds = None; let mut impl_trait = false; if let Node::GenericParam(ref param) = hir.get(id) { let kind = ¶m.kind; if let hir::GenericParamKind::Type { synthetic: Some(_), .. } = kind { // We've found `fn foo(x: impl Trait)` instead of // `fn foo(x: T)`. We want to suggest the correct // `fn foo(x: impl Trait + TraitBound)` instead of // `fn foo(x: T)`. (See #63706.) 
impl_trait = true; has_bounds = param.bounds.get(1); } else { has_bounds = param.bounds.get(0); match hir.get(id) { Node::GenericParam(ref param) => { let mut impl_trait = false; let has_bounds = if let hir::GenericParamKind::Type { synthetic: Some(_), .. } = ¶m.kind { // We've found `fn foo(x: impl Trait)` instead of // `fn foo(x: T)`. We want to suggest the correct // `fn foo(x: impl Trait + TraitBound)` instead of // `fn foo(x: T)`. (#63706) impl_trait = true; param.bounds.get(1) } else { param.bounds.get(0) }; let sp = hir.span(id); let sp = if let Some(first_bound) = has_bounds { // `sp` only covers `T`, change it so that it covers // `T:` when appropriate sp.until(first_bound.span()) } else { sp }; // FIXME: contrast `t.def_id` against `param.bounds` to not suggest // traits already there. That can happen when the cause is that // we're in a const scope or associated function used as a method. err.span_suggestions( sp, &message(format!( \"restrict type parameter `{}` with\", param.name.ident().as_str(), )), candidates.iter().map(|t| format!( \"{}{} {}{}\", param.name.ident().as_str(), if impl_trait { \" +\" } else { \":\" }, self.tcx.def_path_str(t.def_id), if has_bounds.is_some() { \" + \"} else { \"\" }, )), Applicability::MaybeIncorrect, ); suggested = true; } Node::Item(hir::Item { kind: hir::ItemKind::Trait(.., bounds, _), ident, .. }) => { let (sp, sep, article) = if bounds.is_empty() { (ident.span.shrink_to_hi(), \":\", \"a\") } else { (bounds.last().unwrap().span().shrink_to_hi(), \" +\", \"another\") }; err.span_suggestions( sp, &message(format!(\"add {} supertrait for\", article)), candidates.iter().map(|t| format!( \"{} {}\", sep, self.tcx.def_path_str(t.def_id), )), Applicability::MaybeIncorrect, ); suggested = true; } _ => {} } let sp = hir.span(id); // `sp` only covers `T`, change it so that it covers `T:` when appropriate. 
let sp = if let Some(first_bound) = has_bounds { sp.until(first_bound.span()) } else { sp }; // FIXME: contrast `t.def_id` against `param.bounds` to not suggest traits // already there. That can happen when the cause is that we're in a const // scope or associated function used as a method. err.span_suggestions( sp, &msg[..], candidates.iter().map(|t| format!( \"{}{} {}{}\", param, if impl_trait { \" +\" } else { \":\" }, self.tcx.def_path_str(t.def_id), if has_bounds.is_some() { \" + \" } else { \"\" }, )), Applicability::MaybeIncorrect, ); suggested = true; } }; } if !suggested { let mut msg = message(if let Some(param) = param_type { format!(\"restrict type parameter `{}` with\", param) } else { \"implement\".to_string() }); for (i, trait_info) in candidates.iter().enumerate() { msg.push_str(&format!( \"ncandidate #{}: `{}`\",", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . 
To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-5d8dc1e940ab0b118710e52c0419c2d37d382cf6c9ed5ecfc4d941c6b5fd48c0", "text": " // run-rustfix // check-only #[derive(Debug)] struct Demo { a: String } trait GetString { fn get_a(&self) -> &String; } trait UseString: std::fmt::Debug + GetString { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } trait UseString2: GetString { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } impl GetString for Demo { fn get_a(&self) -> &String { &self.a } } impl UseString for Demo {} impl UseString2 for Demo {} #[cfg(test)] mod tests { use crate::{Demo, UseString}; #[test] fn it_works() { let d = Demo { a: \"test\".to_string() }; d.use_string(); } } fn main() {} ", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . 
To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-fc8d26f6ecda2e1ef1d0d07c9cb081fb850db21b39efc8ed840014f802d9ce5a", "text": " // run-rustfix // check-only #[derive(Debug)] struct Demo { a: String } trait GetString { fn get_a(&self) -> &String; } trait UseString: std::fmt::Debug { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } trait UseString2 { fn use_string(&self) { println!(\"{:?}\", self.get_a()); //~ ERROR no method named `get_a` found for type `&Self` } } impl GetString for Demo { fn get_a(&self) -> &String { &self.a } } impl UseString for Demo {} impl UseString2 for Demo {} #[cfg(test)] mod tests { use crate::{Demo, UseString}; #[test] fn it_works() { let d = Demo { a: \"test\".to_string() }; d.use_string(); } } fn main() {} ", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-535353bdfb3d3e5ccefea87e0d54068dc3ecfb7c686a60b3a012fad9cbb5afa0", "query": "The following example code yields an error: It's unclear from this error message how to proceed - I'm not sure of the syntax, location, or where I should be putting the \"GetString\" trait bound. It should be shown with a better example or link when this error is presented, as the current message is not sufficient for a resolution to be arrived at.\ncc\nFixing in . 
To fix your code write:\nThanks!", "positive_passages": [{"docid": "doc-en-rust-b4f8e9b750d3cd8e7363971026bcea66587a5271e4f60b8e1c8081e3fbff6e02", "text": " error[E0599]: no method named `get_a` found for type `&Self` in the current scope --> $DIR/constrain-trait.rs:15:31 | LL | println!(\"{:?}\", self.get_a()); | ^^^^^ method not found in `&Self` | = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `get_a`, perhaps you need to add another supertrait for it: | LL | trait UseString: std::fmt::Debug + GetString { | ^^^^^^^^^^^ error[E0599]: no method named `get_a` found for type `&Self` in the current scope --> $DIR/constrain-trait.rs:21:31 | LL | println!(\"{:?}\", self.get_a()); | ^^^^^ method not found in `&Self` | = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `get_a`, perhaps you need to add a supertrait for it: | LL | trait UseString2: GetString { | ^^^^^^^^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0599`. ", "commid": "rust_pr_65242.0"}], "negative_passages": []} {"query_id": "q-en-rust-0626f870abdc7b02cb3164359835e6879e411877a00fb45670709faceda10775", "query": "cc Code first. This code: , gives this set of error messages: The part that I'm most concerned about is: Before eRFC \"if- and while-let-chains, take 2\" (rust-lang/rfcs, , ), this code USED to result in this error message: The reason I know this is because , in the section where we're trying to explain the difference between statements and expressions. The error message I'm seeing now is muddying the waters by saying \" expressions\". Based on my reading of the eRFC, it's only supposed to change , but as kind of a side effect is now sort-of an expression? The don't clear it up for me, as they only describe , not plain s. 
What I expected is that even though the eRFC has been accepted and implemented, plain would still be considered a statement and the error message wouldn't talk about \" expressions\". If my expectation is valid, then the compiler error message is a bug. If my expectation is invalid, please let me know so that I can work on updating the book. Thanks! $DIR/break-outside-loop.rs:30:13 | LL | || { | -- enclosing closure LL | break 'lab; | ^^^^^^^^^^ cannot `break` inside of a closure error: aborting due to 6 previous errors Some errors have detailed explanations: E0267, E0268. For more information about an error, try `rustc --explain E0267`.", "commid": "rust_pr_65518.0"}], "negative_passages": []} {"query_id": "q-en-rust-017e8c583a49d3178e5e0e66e0628a3f29d6120f4d2fd75bea1da2fe14a34301", "query": "The following code was working in nightly-2019-09-05, but no longer works in nightly-2019-10-15. The workaround is not quite intuitive. () Errors: $DIR/issue-64130-4-async-move.rs:19:15 | LL | match client.status() {", "commid": "rust_pr_68269.0"}], "negative_passages": []} {"query_id": "q-en-rust-017e8c583a49d3178e5e0e66e0628a3f29d6120f4d2fd75bea1da2fe14a34301", "query": "The following code was working in nightly-2019-09-05, but no longer works in nightly-2019-10-15. The workaround is not quite intuitive. () Errors: $DIR/issue-65436-raw-ptr-not-send.rs:12:5 | LL | fn assert_send(_: T) {} | ----------- ---- required by this bound in `assert_send` ... 
LL | assert_send(async { | ^^^^^^^^^^^ future returned by `main` is not `Send` | = help: within `impl std::future::Future`, the trait `std::marker::Send` is not implemented for `*const u8` note: future is not `Send` as this value is used across an await --> $DIR/issue-65436-raw-ptr-not-send.rs:14:9 | LL | bar(Foo(std::ptr::null())).await; | ^^^^^^^^----------------^^^^^^^^- `std::ptr::null()` is later dropped here | | | | | has type `*const u8` | await occurs here, with `std::ptr::null()` maybe used later help: consider moving this into a `let` binding to create a shorter lived borrow --> $DIR/issue-65436-raw-ptr-not-send.rs:14:13 | LL | bar(Foo(std::ptr::null())).await; | ^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_68269.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. 
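The `assert_send` failure quoted in the record above can be illustrated with a minimal sketch. This is hypothetical code written for this note (not the reporter's program): it shows the compiler's suggested workaround of binding the raw pointer with `let` and finishing with it before the `.await`, so the pointer is not captured across the suspension point and the future stays `Send`.

```rust
// Compile-time check: this only type-checks if T: Send.
fn assert_send<T: Send>(t: T) -> T {
    t
}

async fn bar(x: u8) -> u8 {
    x
}

// Builds an async block whose raw pointer is dead before the await point,
// so the generated future does not capture it and remains Send.
fn future_is_send() -> bool {
    let fut = async {
        let p: *const u8 = std::ptr::null();
        // Last use of `p` happens here, *before* the .await below.
        let v: u8 = if p.is_null() { 0 } else { unsafe { *p } };
        bar(v).await
    };
    let _fut = assert_send(fut); // fails to compile if the future is not Send
    true
}

fn main() {
    assert!(future_is_send());
    println!("future is Send");
}
```

Holding the pointer (or a temporary containing it) live across the `.await` is exactly what made the original future non-`Send`.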
Another thing is that glibc doesn't really support static linking: it works well for small programs but bites you when you need more of its features.\n(Obviously not a lawyer but) I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso, as long as the program it is linked against is licensed under a GPL-compatible license, it is allowed. The MIT license is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nBecause of we need special linking arguments because of the symbols. See also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works; this will require some building of libstd with a cargo-overridden libc dependency.\nI'd be happy to submit the necessary changes if you can outline what needs doing. 
() See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-f49e04ae2f9c5904778cfc36a00e60a120fa247d33ab37c694267d9ab2f2071b", "text": "[[package]] name = \"libc\" version = \"0.2.77\" version = \"0.2.79\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"f2f96b10ec2560088a8e76961b00d47107b3a625fecb76dedb29ee7ccbf98235\" checksum = \"2448f6066e80e3bfc792e9c98bf705b4b0fc6e8ef5b43e5889aff0eaa9c58743\" dependencies = [ \"rustc-std-workspace-core\", ]", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. 
Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-681de54f276df9f91faf671de9c8bbb5ad7a402249d5ab587a3135e440088aa2", "text": "base.position_independent_executables = true; base.has_elf_tls = false; base.requires_uwtable = true; base.crt_static_respected = false; base }", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. 
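On the user-facing side, once `crt_static_respected` is turned on for a target, the choice surfaces through the real `crt-static` target feature (e.g. `RUSTFLAGS="-C target-feature=+crt-static" cargo build`). A minimal sketch of observing that choice at compile time; the helper function below is made up for illustration:

```rust
// Reports whether this crate was compiled with a statically linked C
// runtime. cfg! is evaluated at compile time and reflects the
// -C target-feature setting passed to rustc.
fn crt_static_enabled() -> bool {
    cfg!(target_feature = "crt-static")
}

fn main() {
    println!("crt-static: {}", crt_static_enabled());
}
```

Before the change discussed here, `+crt-static` on `*-gnu` targets was silently ignored; afterwards it actually links glibc statically.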
Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user-facing side, this would be enabled using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, so maybe it would make sense to link a custom libm too?\nThe first blocker is the license. Glibc is licensed under LGPL; it has a dynamic linking exception, so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is that glibc doesn't really support static linking: it works well for small programs but bites you when you need more of its features.\n(Obviously not a lawyer but) I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso, as long as the program it is linked against is licensed under a GPL-compatible license, it is allowed. The MIT license is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nBecause of we need special linking arguments because of the symbols. See also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. 
Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-94e5488e445f9c91451318c93075d0e90a12a757c156c26a7cac5f0c7417e7c4", "text": "position_independent_executables: true, relro_level: RelroLevel::Full, has_elf_tls: true, crt_static_respected: true, ..Default::default() } }", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. 
Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-72de40c2a86908bfdce1ce6b5b612a28c054189651e9447772688fa8a197698a", "text": "// These targets statically link libc by default base.crt_static_default = true; // These targets allow the user to choose between static and dynamic linking. base.crt_static_respected = true; base }", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . 
According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. 
Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-865cdda57dd6430fdc01fdf2792ef74703cc29b3550cbf045170bd83d44f6916", "text": "panic_unwind = { path = \"../panic_unwind\", optional = true } panic_abort = { path = \"../panic_abort\" } core = { path = \"../core\" } libc = { version = \"0.2.77\", default-features = false, features = ['rustc-dep-of-std'] } libc = { version = \"0.2.79\", default-features = false, features = ['rustc-dep-of-std'] } compiler_builtins = { version = \"0.1.35\" } profiler_builtins = { path = \"../profiler_builtins\", optional = true } unwind = { path = \"../unwind\" }", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. 
I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-8bd01924e82c23e2343d595f06da461bed8c7f7dee3c489fc45de630d95a230e", "text": "} fn exited(&self) -> bool { // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. 
#[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] unsafe { libc::WIFEXITED(self.0) } libc::WIFEXITED(self.0) } pub fn success(&self) -> bool {", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. 
The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-a82e2ce3b21cbb12b5f52623839105347ca09140a4859f96a3ab38210040fb82", "text": "} pub fn code(&self) -> Option { // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. #[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] if self.exited() { Some(unsafe { libc::WEXITSTATUS(self.0) }) } else { None } if self.exited() { Some(libc::WEXITSTATUS(self.0)) } else { None } } pub fn signal(&self) -> Option { // On Linux-like OSes this function is safe, on others it is not. See // libc issue: https://github.com/rust-lang/libc/issues/1888. #[cfg_attr( any(target_os = \"linux\", target_os = \"android\", target_os = \"emscripten\"), allow(unused_unsafe) )] if !self.exited() { Some(unsafe { libc::WTERMSIG(self.0) }) } else { None } if !self.exited() { Some(libc::WTERMSIG(self.0)) } else { None } } }", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. 
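The `WIFEXITED`/`WEXITSTATUS`/`WTERMSIG` calls in the `ExitStatus` diff above are the POSIX wait-status macros. As an illustration only (not the patch itself), here is how glibc encodes them on Linux, re-derived in plain Rust:

```rust
// Linux wait-status encoding: the low 7 bits hold the terminating signal
// (0 if the process exited normally); bits 8..16 hold the exit code.
fn wifexited(status: i32) -> bool {
    status & 0x7f == 0
}

fn wexitstatus(status: i32) -> i32 {
    (status >> 8) & 0xff
}

fn wtermsig(status: i32) -> i32 {
    status & 0x7f
}

fn main() {
    // A process that called exit(1) yields raw status 0x0100.
    assert!(wifexited(0x0100));
    assert_eq!(wexitstatus(0x0100), 1);

    // A process killed by SIGKILL (signal 9) yields raw status 0x0009.
    assert!(!wifexited(9));
    assert_eq!(wtermsig(9), 9);
    println!("ok");
}
```

This is why the diff could drop the `unsafe` blocks once libc marked the macros safe: they are pure bit manipulation on the raw status word.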
Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. 
Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-46aadcd4808a6fdd896b5959b1b561b82d66d05ded39425b2bfc7cd25218af8e", "text": "[dependencies] core = { path = \"../core\" } libc = { version = \"0.2.51\", features = ['rustc-dep-of-std'], default-features = false } libc = { version = \"0.2.79\", features = ['rustc-dep-of-std'], default-features = false } compiler_builtins = \"0.1.0\" cfg-if = \"0.1.8\"", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. 
Another thing is that glibc doesn't really support static linking: it works well for small programs but bites you when you need more of its features.\n(Obviously not a lawyer but) I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso, as long as the program it is linked against is licensed under a GPL-compatible license, it is allowed. The MIT license is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nBecause of we need special linking arguments because of the symbols. See also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. 
() See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-c05491ea5c7d1287ef2d9c3e7a85b0597b39e3edc35caa93ab98fd201409f7d8", "text": "} else if target.contains(\"x86_64-fortanix-unknown-sgx\") { llvm_libunwind::compile(); } else if target.contains(\"linux\") { // linking for Linux is handled in lib.rs if target.contains(\"musl\") { // linking for musl is handled in lib.rs llvm_libunwind::compile(); } else if !target.contains(\"android\") { println!(\"cargo:rustc-link-lib=gcc_s\"); } } else if target.contains(\"freebsd\") { println!(\"cargo:rustc-link-lib=gcc_s\");", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-2ab3cfaceab02dfac85b57b18ecfa4819fd9cdcc936be6a1a67a2bf8c3d94d82", "query": "Reasons for: . According to , musl caused a program to run 9x slower. Musl is also known to be slower in certain configurations in C. This varies a lot with the use case and some might even see a performance increase with musl. docker containers using . It is possible to link glibc statically as demonstrated . From what I can tell, on the user facing side, this would be enable using as described in .\nThe reddit thread mentions libm. And another source of speedups might be the malloc implementation (glibc 2.26 introduced thread-local caches). One can already use custom global allocators, maybe it would make sense to link a custom libm too?\nFirst blocker is the license. Glibc is licensed under LGPL, it has dynamic linking exception so one can freely link dynamically to it. Static linking doesn't fall under this exception. I'm not a lawyer but I think Rust would have to be licensed as (L)GPL to fit it. 
Another thing is glibc doesn't really support static linking, sure it works well for small software but bites you when you need more of it's features.\n(Obviously not a lawyer but), I'm pretty sure that as long as the code it is linked against is (L)GPLed, it should work.\nAlso as long as the program it is linked against is licensed under a GPL compatible license, it is allowed. The MIT license, is compatible. All this information can be confirmed by looking at the above link. You can even link proprietary code against LGPL if you provide the object files.\nbecause of we need special linking arguments because of the symbols. see also for and glibc uses for the nss modules.\nThe former should be doable. The latter is a tradeoff that people will need to understand if they statically link glibc, and it's something they already have to deal with if they use .\nImplementation instructions: Add an entry for glibc-based targets you want to support to (Same as for musl, but -, because is undesirable ().) Perhaps libm or libpthread will need to be linked in the same way. Set to in target specifications for glibc-based targets you want to support (in ). Test that it actually works, this will require some building of libstd with cargo overridden libc dependency.\nI'd be happy to submit the necessary changes, if you can outline what needs doing. () See the comment above.\nThanks!\nI have this successfully working; just waiting on a release of the libc crate containing .", "positive_passages": [{"docid": "doc-en-rust-e6ed2e5564a94a6e1bc77fb1dfafe4469e0fe0d969438656bece631e68052dab", "text": "#[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} // When building with crt-static, we get `gcc_eh` from the `libc` crate, since // glibc needs it, and needs it listed later on the linker command line. We // don't want to duplicate it here. 
#[cfg(all(target_os = \"linux\", target_env = \"gnu\", not(feature = \"llvm-libunwind\")))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} #[cfg(target_os = \"redox\")] #[link(name = \"gcc_eh\", kind = \"static-nobundle\", cfg(target_feature = \"crt-static\"))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))]", "commid": "rust_pr_77386.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. 
In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-9e467815931607ec7a17a3a21a91ca7d5c7b2cae5bf664a7c8bf1bc07f9770ea", "text": "- name: install MSYS2 run: src/ci/scripts/install-msys2.sh if: success() && !env.SKIP_JOB - name: install MSYS2 packages run: src/ci/scripts/install-msys2-packages.sh if: success() && !env.SKIP_JOB - name: install MinGW run: src/ci/scripts/install-mingw.sh if: success() && !env.SKIP_JOB", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. 
In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-82e51614b0413a4984b2d835bcd05dd04969ac90c8b75bf724a61f0f09a3efbc", "text": "displayName: Install msys2 condition: and(succeeded(), not(variables.SKIP_JOB)) - bash: src/ci/scripts/install-msys2-packages.sh displayName: Install msys2 packages condition: and(succeeded(), not(variables.SKIP_JOB)) - bash: src/ci/scripts/install-mingw.sh displayName: Install MinGW condition: and(succeeded(), not(variables.SKIP_JOB))", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. 
In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-2edbba3efef26f18ee349936e4a56cc032560c9579ebe118669ecacb5081c60c", "text": "run: src/ci/scripts/install-msys2.sh <<: *step - name: install MSYS2 packages run: src/ci/scripts/install-msys2-packages.sh <<: *step - name: install MinGW run: src/ci/scripts/install-mingw.sh <<: *step", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. 
In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-d10178388aa1c8283683c5dec762781d01812e08a026f43ca524d367aaecd20a", "text": " #!/bin/bash set -euo pipefail IFS=$'nt' source \"$(cd \"$(dirname \"$0\")\" && pwd)/../shared.sh\" if isWindows; then pacman -S --noconfirm --needed base-devel ca-certificates make diffutils tar binutils # Detect the native Python version installed on the agent. On GitHub # Actions, the C:hostedtoolcachewindowsPython directory contains a # subdirectory for each installed Python version. # # The -V flag of the sort command sorts the input by version number. native_python_version=\"$(ls /c/hostedtoolcache/windows/Python | sort -Vr | head -n 1)\" # Make sure we use the native python interpreter instead of some msys equivalent # one way or another. The msys interpreters seem to have weird path conversions # baked in which break LLVM's build system one way or another, so let's use the # native version which keeps everything as native as possible. python_home=\"/c/hostedtoolcache/windows/Python/${native_python_version}/x64\" cp \"${python_home}/python.exe\" \"${python_home}/python3.exe\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64Scripts\" fi ", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. 
Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-d40eaec0a74b3529b0d730cc78863e7219a38110f245200b22f7236ca2b78e73", "text": "#!/bin/bash # Download and install MSYS2, needed primarily for the test suite (run-make) but # also used by the MinGW toolchain for assembling things. # # FIXME: we should probe the default azure image and see if we can use the MSYS2 # toolchain there. (if there's even one there). For now though this gets the job # done. set -euo pipefail IFS=$'nt'", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-0594d473233460ab81a84b76ecf113288d0f1915abf241ae595d34670daf459d", "query": "On 2019-10-24 we had a CI outage due to the ca-certificates msys2 package being broken. The hack implemented (:heart:) is to download an older, known good package beforehand and install it after msys2 updated its packages. Once is fixed we should revert the hack, but I think at this point it's worth investing in vendoring msys2 as a whole.\nmsys2 is using and other tools from archlinux distro. 
Arch Linux provides per-day official repositories snapshots. User can \"pin\" to a specific date. I think it is possible to reuse that mechanism to vendoring msys2 as a whole, by providing a mirror of a specific date's version. See here for more descriptions .\nthat would be nice but msys doesn't remove old packages from their repo so the mirror would be enormous. Search for in , only those packages take dozens of GiB. Also extracting it from archive would make CI a bit faster because you won't be modifying thousands of small files over and over. On Linux that is not a problem but working through cygwin emulation a bit struggles on this part.\nNo update on this, planning to discuss what to vendor at the all hands.\nAt the last infra meeting we decided a good step is to move towards using the pre-installed msys2 packages. In theory this is as simple as removing the msys2 installation step and pointing things at the right location on the images in CI.", "positive_passages": [{"docid": "doc-en-rust-3642795ec531c306b07feeee17a3d3afcc245a9f38920b5e99448f5d15f0e4cf", "text": "source \"$(cd \"$(dirname \"$0\")\" && pwd)/../shared.sh\" if isWindows; then # Pre-followed the api/v2 URL to the CDN since the API can be a bit flakey curl -sSL https://packages.chocolatey.org/msys2.20190524.0.0.20191030.nupkg > msys2.nupkg curl -sSL https://packages.chocolatey.org/chocolatey-core.extension.1.3.5.1.nupkg > chocolatey-core.extension.nupkg choco install -s . 
msys2 --params=\"/InstallDir:$(ciCheckoutPath)/msys2 /NoPath\" -y --no-progress rm msys2.nupkg chocolatey-core.extension.nupkg mkdir -p \"$(ciCheckoutPath)/msys2/home/${USERNAME}\" ciCommandAddPath \"$(ciCheckoutPath)/msys2/usr/bin\" msys2Path=\"c:/msys64\" mkdir -p \"${msys2Path}/home/${USERNAME}\" ciCommandAddPath \"${msys2Path}/usr/bin\" echo \"switching shell to use our own bash\" ciCommandSetEnv CI_OVERRIDE_SHELL \"$(ciCheckoutPath)/msys2/usr/bin/bash.exe\" ciCommandSetEnv CI_OVERRIDE_SHELL \"${msys2Path}/usr/bin/bash.exe\" # Detect the native Python version installed on the agent. On GitHub # Actions, the C:hostedtoolcachewindowsPython directory contains a # subdirectory for each installed Python version. # # The -V flag of the sort command sorts the input by version number. native_python_version=\"$(ls /c/hostedtoolcache/windows/Python | sort -Vr | head -n 1)\" # Make sure we use the native python interpreter instead of some msys equivalent # one way or another. The msys interpreters seem to have weird path conversions # baked in which break LLVM's build system one way or another, so let's use the # native version which keeps everything as native as possible. python_home=\"/c/hostedtoolcache/windows/Python/${native_python_version}/x64\" cp \"${python_home}/python.exe\" \"${python_home}/python3.exe\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64\" ciCommandAddPath \"C:hostedtoolcachewindowsPython${native_python_version}x64Scripts\" fi", "commid": "rust_pr_73188.0"}], "negative_passages": []} {"query_id": "q-en-rust-aff97ec191aea99de5c89b95e89c19f74b815e377511c0f4750bd408cafd42ab", "query": "Hi, I'm building a project and I'm using the nightly channel. I was testing it with several builds between 2019-09-20 and 2019-10-27, all with the same results \u2013 when targetting native, the compilation succeeds, when targetting WASM, we see ICE. Unfortunately, it was hard for me to minimize the example. 
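The install scripts above select the newest pre-installed Python by piping directory names through `sort -Vr | head -n 1`. The trick in isolation, with made-up version strings (a sketch assuming GNU `sort`):

```shell
# GNU sort's -V flag compares strings as version numbers, so 3.10.1 ranks
# above 3.8.2; a plain lexicographic sort would wrongly rank 3.8.2 higher.
# -r reverses the order (newest first) and `head -n 1` keeps only that one.
printf '3.7.9\n3.10.1\n3.8.2\n' | sort -Vr | head -n 1
# prints: 3.10.1
```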
Fortunately, the codebase is pretty small. Here is the exact commit: After downloading it you can run and it should succeed. But if you run (which is 1-line wrapper over ), we get: After commenting out this line, everything should compile OK (WASM target should compile as well): EDIT The backtrace from the compiler is not very helpful: However, there is some additional info:\nThat code is a mess! After an hour of digging through the code and removing half of it - the code that triggers the error is: I try to produce a MCVE out of it currently\nYes, the code is in a very work-in-progress state, so there are a lot of places which should be refactored or even removed, sorry for that. The line that I mentioned in my original post calls the function that you have pointed to. The function alone compiles well to WASM. It does not compile when used as follow:\nSeems to be related to . Here's something small-ish: I may give it a shot later, but now I'm gonna head home :)\nthat is amazing! I've created yet shorter version (still, there may be things to simplify here):\nmodify labels: -O-wasm +F-typealiasimpl_trait\n< 50 loc :tada:\nthank you for tracking it down\ncould you please change the title of this issue to something different, e.g. \"ICE with impl Fn alias\"? It's not about WASM anymore ;)\nBecause a lot has changed, here's a summary: // compile-flags: -Z mir-opt-level=3 #![feature(box_syntax)] fn main() {", "commid": "rust_pr_68236.0"}], "negative_passages": []} {"query_id": "q-en-rust-aff97ec191aea99de5c89b95e89c19f74b815e377511c0f4750bd408cafd42ab", "query": "Hi, I'm building a project and I'm using the nightly channel. I was testing it with several builds between 2019-09-20 and 2019-10-27, all with the same results \u2013 when targetting native, the compilation succeeds, when targetting WASM, we see ICE. Unfortunately, it was hard for me to minimize the example. Fortunately, the codebase is pretty small. 
Here is the exact commit: After downloading it you can run and it should succeed. But if you run (which is 1-line wrapper over ), we get: After commenting out this line, everything should compile OK (WASM target should compile as well): EDIT The backtrace from the compiler is not very helpful: However, there is some additional info:\nThat code is a mess! After an hour of digging through the code and removing half of it - the code that triggers the error is: I try to produce a MCVE out of it currently\nYes, the code is in a very work-in-progress state, so there are a lot of places which should be refactored or even removed, sorry for that. The line that I mentioned in my original post calls the function that you have pointed to. The function alone compiles well to WASM. It does not compile when used as follow:\nSeems to be related to . Here's something small-ish: I may give it a shot later, but now I'm gonna head home :)\nthat is amazing! I've created yet shorter version (still, there may be things to simplify here):\nmodify labels: -O-wasm +F-typealiasimpl_trait\n< 50 loc :tada:\nthank you for tracking it down\ncould you please change the title of this issue to something different, e.g. \"ICE with impl Fn alias\"? It's not about WASM anymore ;)\nBecause a lot has changed, here's a summary: // build-pass trait AssociatedConstant { const DATA: (); } impl AssociatedConstant for F where F: FnOnce() -> T, T: AssociatedConstant, { const DATA: () = T::DATA; } impl AssociatedConstant for () { const DATA: () = (); } fn foo() -> impl AssociatedConstant { () } fn get_data(_: T) -> &'static () { &T::DATA } fn main() { get_data(foo); } ", "commid": "rust_pr_68236.0"}], "negative_passages": []} {"query_id": "q-en-rust-aff97ec191aea99de5c89b95e89c19f74b815e377511c0f4750bd408cafd42ab", "query": "Hi, I'm building a project and I'm using the nightly channel. 
I was testing it with several builds between 2019-09-20 and 2019-10-27, all with the same results \u2013 when targetting native, the compilation succeeds, when targetting WASM, we see ICE. Unfortunately, it was hard for me to minimize the example. Fortunately, the codebase is pretty small. Here is the exact commit: After downloading it you can run and it should succeed. But if you run (which is 1-line wrapper over ), we get: After commenting out this line, everything should compile OK (WASM target should compile as well): EDIT The backtrace from the compiler is not very helpful: However, there is some additional info:\nThat code is a mess! After an hour of digging through the code and removing half of it - the code that triggers the error is: I try to produce a MCVE out of it currently\nYes, the code is in a very work-in-progress state, so there are a lot of places which should be refactored or even removed, sorry for that. The line that I mentioned in my original post calls the function that you have pointed to. The function alone compiles well to WASM. It does not compile when used as follow:\nSeems to be related to . Here's something small-ish: I may give it a shot later, but now I'm gonna head home :)\nthat is amazing! I've created yet shorter version (still, there may be things to simplify here):\nmodify labels: -O-wasm +F-typealiasimpl_trait\n< 50 loc :tada:\nthank you for tracking it down\ncould you please change the title of this issue to something different, e.g. \"ICE with impl Fn alias\"? 
It's not about WASM anymore ;)\nBecause a lot has changed, here's a summary: // build-pass #![feature(type_alias_impl_trait)] use std::marker::PhantomData; /* copied Index and TryFrom for convinience (and simplicity) */ trait MyIndex { type O; fn my_index(self) -> Self::O; } trait MyFrom: Sized { type Error; fn my_from(value: T) -> Result; } /* MCVE starts here */ trait F {} impl F for () {} type DummyT = impl F; fn _dummy_t() -> DummyT {} struct Phantom1(PhantomData); struct Phantom2(PhantomData); struct Scope(Phantom2>); impl Scope { fn new() -> Self { unimplemented!() } } impl MyFrom> for Phantom1 { type Error = (); fn my_from(_: Phantom2) -> Result { unimplemented!() } } impl>>, U> MyIndex> for Scope { type O = T; fn my_index(self) -> Self::O { MyFrom::my_from(self.0).ok().unwrap() } } fn main() { let _pos: Phantom1> = Scope::new().my_index(); } ", "commid": "rust_pr_68236.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
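The ICEs collected above all involve aliases over `impl Fn` opaque types. For contrast, the underlying stable feature, returning an unnameable closure behind `impl Fn`, is unproblematic on its own; a minimal sketch (function names are illustrative):

```rust
// Returning a closure through an opaque `impl Fn` type. The ICEs tracked
// in this issue came from combining such opaque types with type aliases
// (`type_alias_impl_trait`) and trait resolution, not from this basic use.
fn adder(x: i32) -> impl Fn(i32) -> i32 {
    move |y| x + y
}

fn main() {
    let add3 = adder(3);
    println!("{}", add3(4)); // prints 7
}
```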
cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-b71eeedf3456a8e68482610fd4bd4c390497087b08e20938df17f67fc15cfc29", "text": "[[package]] name = \"jsonrpc-core\" version = \"13.1.0\" version = \"13.2.0\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"dd42951eb35079520ee29b7efbac654d85821b397ef88c8151600ef7e2d00217\" checksum = \"91d767c183a7e58618a609499d359ce3820700b3ebb4823a18c343b4a2a41a0d\" dependencies = [ \"futures\", \"log\",", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-fe27cf5b5c1de90933256cfe46f8a405946e8573ad277dbc962767d68904df70", "text": "[[package]] name = \"rls\" version = \"1.39.0\" version = \"1.40.0\" dependencies = [ \"cargo\", \"cargo_metadata 0.8.0\", \"clippy_lints\", \"crossbeam-channel\", \"difference\", \"env_logger 0.6.2\", \"env_logger 0.7.0\", \"failure\", \"futures\", \"heck\",", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-9b66978004e6c3dd8adc6b45dead2b7f5a46767f3c972662e0499cd5828ae748", "text": "\"num_cpus\", \"ordslice\", \"racer\", \"rand 0.6.1\", \"rand 0.7.0\", \"rayon\", \"regex\", \"rls-analysis\",", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-15089504226b79a5c4300aa64f93c6ac382a52ff85c27bc06f8b6ade9fa45739", "text": "\"rls-rustc\", \"rls-span\", \"rls-vfs\", \"rustc-serialize\", \"rustc-workspace-hack\", \"rustc_tools_util 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)\", \"rustfmt-nightly\",", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-064cb92de7efe8859fb5035f9bc2fa0b464e76363a03d3a3f96b7012eaee46a1", "text": "version = \"0.6.0\" dependencies = [ \"clippy_lints\", \"env_logger 0.6.2\", \"env_logger 0.7.0\", \"failure\", \"futures\", \"log\", \"rand 0.6.1\", \"rand 0.7.0\", \"rls-data\", \"rls-ipc\", \"serde\",", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-52ac8ddd14dd95fc60b7252e3183a4652a7af9e34665736a21f94a99cbe06e44", "text": "] [[package]] name = \"rustc-serialize\" version = \"0.3.24\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"dcf128d1287d2ea9d80910b5f1120d0b8eede3fbf1abe91c40d39ea7d51e6fda\" [[package]] name = \"rustc-std-workspace-alloc\" version = \"1.99.0\" dependencies = [", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-850c94e9c8ed398c6b33b6954bd2411490d72e4af922592db5032013124867f7", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool rls no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-6a9323dde171f77890d1d5cff7514ed82343d328d737f2e190b856c0f08b068e", "text": " Subproject commit a18df16181947edd5eb593ea0f2321e0035448ee Subproject commit 5db91c7b94ca81eead6b25bcf6196b869a44ece0 ", "commid": "rust_pr_65957.0"}], "negative_passages": []} {"query_id": "q-en-rust-0ba470772d9b9a7362d68a5614c6cf43d8a346aaed3be651a5f0a378dda192ef", "query": "Hi there! This is the second time I've compiled code with Rust ever, and my first time filing a bug report, so I may need some guidance if the info here isn't quite enough to work with :). Learning about println! in Rust by Example, I incorrectly entered the below code. I looked at the documentation and realized I had made a mistake, but the compiler still panicked and recommended I open a bug report. I really believe in Rust and want to see this project succeed, hopefully this information is helpful! I tried this code: println!(\"Pi is roughly: {:.*}, 3, 3.\") Expected to see this happen: \"Pi is roughly 3.142\" Instead, this happened: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', note: run with environment variable to display a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: rustc 1.38.0 ( 2019-09-23) running on x8664-unknown-linux-gnu :rustc 1.38.0 ( 2019-09-23) binary: rustc commit-hash: commit-date: 2019-09-23 host: x8664-unknown-linux-gnu release: 1.38.0 LLVM version: 9.0 Backtrace: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', stack backtrace: 0: backtrace::backtrace::libunwind::trace at 1: backtrace::backtrace::traceunsynchronized at 2: std::syscommon::backtrace::print at 3: std::syscommon::backtrace::print at 4: std::panicking::defaulthook::{{closure}} at 5: std::panicking::defaulthook at 6: rustc::util::common::panichook 7: std::panicking::rustpanicwithhook at 8: std::panicking::continuepanicfmt at 9: rustbeginunwind at 10: core::panicking::panicfmt at 11: core::panicking::panicboundscheck at 12: syntaxext::format::expandpreparsedformatargs 13: syntaxext::format::expandformatargsimpl 14: e.span_label( self.args[pos].span, \"this parameter corresponds to the precision flag\", ); if let Some(arg) = self.args.get(pos) { e.span_label( arg.span, \"this parameter corresponds to the precision flag\", ); } zero_based_note = true; } _ => {}", "commid": "rust_pr_66093.0"}], "negative_passages": []} {"query_id": "q-en-rust-0ba470772d9b9a7362d68a5614c6cf43d8a346aaed3be651a5f0a378dda192ef", "query": "Hi there! This is the second time I've compiled code with Rust ever, and my first time filing a bug report, so I may need some guidance if the info here isn't quite enough to work with :). Learning about println! in Rust by Example, I incorrectly entered the below code. I looked at the documentation and realized I had made a mistake, but the compiler still panicked and recommended I open a bug report. I really believe in Rust and want to see this project succeed, hopefully this information is helpful! 
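The fix in the passage above replaces direct indexing (`self.args[pos]`, which panics on an empty slice and produced the "index out of bounds" ICE) with `get`, which returns an `Option`. The pattern in isolation (a sketch, not the compiler's actual code):

```rust
fn describe_arg(args: &[&str], pos: usize) -> String {
    // `args[pos]` would panic with "index out of bounds: the len is 0 but
    // the index is 0" on an empty slice -- the exact ICE in the report.
    // `get` returns None instead, so the caller can degrade gracefully.
    match args.get(pos) {
        Some(arg) => format!("argument {}: {}", pos, arg),
        None => format!("no argument at position {}", pos),
    }
}

fn main() {
    let empty: Vec<&str> = Vec::new();
    println!("{}", describe_arg(&empty, 0));
}
```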
I tried this code: println!(\"Pi is roughly: {:.*}, 3, 3.\") Expected to see this happen: \"Pi is roughly 3.142\" Instead, this happened: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', note: run with environment variable to display a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: rustc 1.38.0 ( 2019-09-23) running on x8664-unknown-linux-gnu :rustc 1.38.0 ( 2019-09-23) binary: rustc commit-hash: commit-date: 2019-09-23 host: x8664-unknown-linux-gnu release: 1.38.0 LLVM version: 9.0 Backtrace: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', stack backtrace: 0: backtrace::backtrace::libunwind::trace at 1: backtrace::backtrace::traceunsynchronized at 2: std::syscommon::backtrace::print at 3: std::syscommon::backtrace::print at 4: std::panicking::defaulthook::{{closure}} at 5: std::panicking::defaulthook at 6: rustc::util::common::panichook 7: std::panicking::rustpanicwithhook at 8: std::panicking::continuepanicfmt at 9: rustbeginunwind at 10: core::panicking::panicfmt at 11: core::panicking::panicboundscheck at 12: syntaxext::format::expandpreparsedformatargs 13: syntaxext::format::expandformatargsimpl 14: // We used to ICE here because we tried to unconditionally access the first argument, which // doesn't exist. println!(\"{:.*}\"); //~^ ERROR 2 positional arguments in format string, but no arguments were given }", "commid": "rust_pr_66093.0"}], "negative_passages": []} {"query_id": "q-en-rust-0ba470772d9b9a7362d68a5614c6cf43d8a346aaed3be651a5f0a378dda192ef", "query": "Hi there! This is the second time I've compiled code with Rust ever, and my first time filing a bug report, so I may need some guidance if the info here isn't quite enough to work with :). Learning about println! in Rust by Example, I incorrectly entered the below code. 
I looked at the documentation and realized I had made a mistake, but the compiler still panicked and recommended I open a bug report. I really believe in Rust and want to see this project succeed, hopefully this information is helpful! I tried this code: println!(\"Pi is roughly: {:.*}, 3, 3.\") Expected to see this happen: \"Pi is roughly 3.142\" Instead, this happened: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', note: run with environment variable to display a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: rustc 1.38.0 ( 2019-09-23) running on x8664-unknown-linux-gnu :rustc 1.38.0 ( 2019-09-23) binary: rustc commit-hash: commit-date: 2019-09-23 host: x8664-unknown-linux-gnu release: 1.38.0 LLVM version: 9.0 Backtrace: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', stack backtrace: 0: backtrace::backtrace::libunwind::trace at 1: backtrace::backtrace::traceunsynchronized at 2: std::syscommon::backtrace::print at 3: std::syscommon::backtrace::print at 4: std::panicking::defaulthook::{{closure}} at 5: std::panicking::defaulthook at 6: rustc::util::common::panichook 7: std::panicking::rustpanicwithhook at 8: std::panicking::continuepanicfmt at 9: rustbeginunwind at 10: core::panicking::panicfmt at 11: core::panicking::panicboundscheck at 12: syntaxext::format::expandpreparsedformatargs 13: syntaxext::format::expandformatargsimpl 14: error: 2 positional arguments in format string, but no arguments were given --> $DIR/ifmt-bad-arg.rs:92:15 | LL | println!(\"{:.*}\"); | ^^--^ | | | this precision flag adds an extra required argument at position 0, which is why there are 2 arguments expected | = note: positional arguments are zero-based = note: for information about formatting flags, visit https://doc.rust-lang.org/std/fmt/index.html error[E0308]: mismatched types --> 
$DIR/ifmt-bad-arg.rs:78:32 |", "commid": "rust_pr_66093.0"}], "negative_passages": []} {"query_id": "q-en-rust-0ba470772d9b9a7362d68a5614c6cf43d8a346aaed3be651a5f0a378dda192ef", "query": "Hi there! This is the second time I've compiled code with Rust ever, and my first time filing a bug report, so I may need some guidance if the info here isn't quite enough to work with :). Learning about println! in Rust by Example, I incorrectly entered the below code. I looked at the documentation and realized I had made a mistake, but the compiler still panicked and recommended I open a bug report. I really believe in Rust and want to see this project succeed, hopefully this information is helpful! I tried this code: println!(\"Pi is roughly: {:.*}, 3, 3.\") Expected to see this happen: \"Pi is roughly 3.142\" Instead, this happened: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', note: run with environment variable to display a backtrace. error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: rustc 1.38.0 ( 2019-09-23) running on x8664-unknown-linux-gnu :rustc 1.38.0 ( 2019-09-23) binary: rustc commit-hash: commit-date: 2019-09-23 host: x8664-unknown-linux-gnu release: 1.38.0 LLVM version: 9.0 Backtrace: thread 'rustc' panicked at 'index out of bounds: the len is 0 but the index is 0', stack backtrace: 0: backtrace::backtrace::libunwind::trace at 1: backtrace::backtrace::traceunsynchronized at 2: std::syscommon::backtrace::print at 3: std::syscommon::backtrace::print at 4: std::panicking::defaulthook::{{closure}} at 5: std::panicking::defaulthook at 6: rustc::util::common::panichook 7: std::panicking::rustpanicwithhook at 8: std::panicking::continuepanicfmt at 9: rustbeginunwind at 10: core::panicking::panicfmt at 11: core::panicking::panicboundscheck at 12: syntaxext::format::expandpreparsedformatargs 13: syntaxext::format::expandformatargsimpl 14: error: aborting due to 35 previous errors error: aborting due to 36 previous errors For more information about this error, try `rustc --explain E0308`.", "commid": "rust_pr_66093.0"}], "negative_passages": []} {"query_id": "q-en-rust-81420c78bbb85d1b2e0bfdd61cdb5d0578e7c97937360f9d0d4ca1e9a2c565d7", "query": "When trying to compile the following code, a panic occurs on :\nAlso happens on nightly\nICE reproduces all the way back to 1.36.0; before that, it still ICEs but in a different place.\nMaximal reduction: ICE arises in: Blame: cc\nI suspect this is a duplicate of . 
But the test case is slightly different so I'll leave it open for now\ntriage: P-medium (via analogy with priority on ), removing nomination label.\nTriage: This is no longer ICE with the latest nightly:", "positive_passages": [{"docid": "doc-en-rust-e97862e80f899ec9185f223a75571a2d72d289c1c4c5750f74f2dc768588091a", "text": " use std::mem::size_of; struct Foo<'s> { //~ ERROR: parameter `'s` is never used array: [(); size_of::<&Self>()], //~^ ERROR: generic `Self` types are currently not permitted in anonymous constants } // The below is taken from https://github.com/rust-lang/rust/issues/66152#issuecomment-550275017 // as the root cause seems the same. const fn foo() -> usize { 0 } struct Bar<'a> { //~ ERROR: parameter `'a` is never used beta: [(); foo::<&'a ()>()], //~ ERROR: a non-static lifetime is not allowed in a `const` } fn main() {} ", "commid": "rust_pr_96730.0"}], "negative_passages": []} {"query_id": "q-en-rust-81420c78bbb85d1b2e0bfdd61cdb5d0578e7c97937360f9d0d4ca1e9a2c565d7", "query": "When trying to compile the following code, a panic occurs on :\nAlso happens on nightly\nICE reproduces all the way back to 1.36.0; before that, it still ICEs but in a different place.\nMaximal reduction: ICE arises in: Blame: cc\nI suspect this is a duplicate of . 
But the test case is slightly different so I'll leave it open for now\ntriage: P-medium (via analogy with priority on ), removing nomination label.\nTriage: This is no longer ICE with the latest nightly:", "positive_passages": [{"docid": "doc-en-rust-e4f5e9de171a36b455dba626d55ab6d0d972442dd4441f77f0e9030f7bc725e0", "text": " error[E0658]: a non-static lifetime is not allowed in a `const` --> $DIR/issue-64173-unused-lifetimes.rs:16:23 | LL | beta: [(); foo::<&'a ()>()], | ^^ | = note: see issue #76560 for more information = help: add `#![feature(generic_const_exprs)]` to the crate attributes to enable error: generic `Self` types are currently not permitted in anonymous constants --> $DIR/issue-64173-unused-lifetimes.rs:4:28 | LL | array: [(); size_of::<&Self>()], | ^^^^ error[E0392]: parameter `'s` is never used --> $DIR/issue-64173-unused-lifetimes.rs:3:12 | LL | struct Foo<'s> { | ^^ unused parameter | = help: consider removing `'s`, referring to it in a field, or using a marker such as `PhantomData` error[E0392]: parameter `'a` is never used --> $DIR/issue-64173-unused-lifetimes.rs:15:12 | LL | struct Bar<'a> { | ^^ unused parameter | = help: consider removing `'a`, referring to it in a field, or using a marker such as `PhantomData` error: aborting due to 4 previous errors Some errors have detailed explanations: E0392, E0658. For more information about an error, try `rustc --explain E0392`. 
", "commid": "rust_pr_96730.0"}], "negative_passages": []} {"query_id": "q-en-rust-4ad649da1d61a80b48d176e11fb0aa5fb9d0f0bc11f44964f4bb20ff0fdea2b4", "query": "It is linked to This is the tests we need to add (the list is non-exhaustive): [x] a UI test for each setting [x] click on element in the sidebar hidden by default to be sure it gets fully un-collapsed and scrolled to [x] make tests for each shortcut [x] make tests for source code navigation (through files in the first time should be enough) [ ] run all the previous tests in mobile mode\nI think this issue can now be closed. For the mobile tests, I also think we them where it was necessary.", "positive_passages": [{"docid": "doc-en-rust-acebd2d356ba0d55cf90d53727ab9127fccbf64c74807a5568cb102e418f61f3", "text": " // This test ensures that the \"Auto-hide item methods' documentation\" setting is working as // expected. define-function: ( \"check-setting\", (storage_value, setting_attribute_value, toggle_attribute_value), block { assert-local-storage: {\"rustdoc-auto-hide-method-docs\": |storage_value|} click: \"#settings-menu\" wait-for: \"#settings\" assert-property: (\"#auto-hide-method-docs\", {\"checked\": |setting_attribute_value|}) assert-attribute: (\".toggle.method-toggle\", {\"open\": |toggle_attribute_value|}) } ) goto: \"file://\" + |DOC_PATH| + \"/lib2/struct.Foo.html\" // We check that the setting is disabled by default. call-function: (\"check-setting\", { \"storage_value\": null, \"setting_attribute_value\": \"false\", \"toggle_attribute_value\": \"\", }) // Now we change its value. click: \"#auto-hide-method-docs\" assert-local-storage: {\"rustdoc-auto-hide-method-docs\": \"true\"} // We check that the changes were applied as expected. reload: call-function: (\"check-setting\", { \"storage_value\": \"true\", \"setting_attribute_value\": \"true\", \"toggle_attribute_value\": null, }) // And now we re-disable the setting. 
click: \"#auto-hide-method-docs\" assert-local-storage: {\"rustdoc-auto-hide-method-docs\": \"false\"} // And we check everything is back the way it was before. reload: call-function: (\"check-setting\", { \"storage_value\": \"false\", \"setting_attribute_value\": \"false\", \"toggle_attribute_value\": \"\", }) ", "commid": "rust_pr_109570"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-5cf0127f7fd466265f33ade06ea5c963345fa237cd541f225f10491c1d7f3908", "text": "use rustc_data_structures::fx::FxHashMap; use rustc_index::vec::IndexVec; use rustc::ty::layout::{ LayoutOf, TyLayout, LayoutError, HasTyCtxt, TargetDataLayout, HasDataLayout, LayoutOf, TyLayout, LayoutError, HasTyCtxt, TargetDataLayout, HasDataLayout, Size, }; use crate::rustc::ty::subst::Subst;", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-d6b7d04fee0f3cd2f61bf052e4b14100472a7de468aaaaaaf02479c332a63834", "text": "use crate::const_eval::error_to_const_error; use crate::transform::{MirPass, MirSource}; /// The maximum number of bytes that we'll allocate space for a return value. const MAX_ALLOC_LIMIT: u64 = 1024; pub struct ConstProp; impl<'tcx> MirPass<'tcx> for ConstProp {", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-c1d2797f5e0a912cd0d293cd0fdad1274dd743b2e1d0647c11eaf959138fd67f", "text": "ecx .layout_of(body.return_ty().subst(tcx, substs)) .ok() // Don't bother allocating memory for ZST types which have no values. .filter(|ret_layout| !ret_layout.is_zst()) // Don't bother allocating memory for ZST types which have no values // or for large values. .filter(|ret_layout| !ret_layout.is_zst() && ret_layout.size < Size::from_bytes(MAX_ALLOC_LIMIT)) .map(|ret_layout| ecx.allocate(ret_layout, MemoryKind::Stack)); ecx.push_stack_frame(", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-3abbdcdebdb75745c2635e225e6ef7e055fe3c77a62144d5c8909f50ccf27a98", "text": ") -> Option<()> { let span = source_info.span; // #66397: Don't try to eval into large places as that can cause an OOM if place_layout.size >= Size::from_bytes(MAX_ALLOC_LIMIT) { return None; } let overflow_check = self.tcx.sess.overflow_checks(); // Perform any special handling for specific Rvalue types.", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-ee49c55b0e6613225b8b877eaa75eeb5b5587617370a78996b0f956679259df0", "text": " // check-pass // only-x86_64 // Checks that the compiler does not actually try to allocate 4 TB during compilation and OOM crash. fn foo() -> [u8; 4 * 1024 * 1024 * 1024 * 1024] { unimplemented!() } fn main() { foo(); } ", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-2550961ff8fac6dc683f0437cd61e5d9c38f1568f0f3c74668ce389a308f3278", "query": "A MSVC x8664 bors job has failed on two unrelated PRs (, ) with the : In both cases, attempted to allocate several GB of memory while compiling the crate. One of these failures was in the job, and the other was in . This failure appears to be spurious: a previous version of passed the same job.\nI wonder if we could get a backtrace or something from this allocation -- it seems... suspicious that we're only sometimes OOMing. It's also interesting to note that this is almost exactly 4 GB in a single allocation, which is surprising generally ( is not a big crate -- 300 lines or so, with comments etc).\nOn my PR it was a different (almost) power of two: edit In both cases, the size of the allocation is 8 bytes less than a power of two.\nSo I minimized this to: on a recent nightly gives: It looks like the compiler is trying to allocate enough space to store a value of the return type. As long as I use a large enough value it reproduces reliably and it isn't specific to Windows: . I bisected this to and I'm guessing (cc is the root cause.\nYes, that seems likely. I will take a look at this tonight.\nshould we put on hold while this is sorted out?\nYeah, let's do that just to be safe.\ntriage: P-high. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-9faf1e6ca6dfca36947b5e6bdc946d39a150daadbc8d8960bbcb4640021077db", "text": " // check-pass // only-x86_64 // Checks that the compiler does not actually try to allocate 4 TB during compilation and OOM crash. fn main() { [0; 4 * 1024 * 1024 * 1024 * 1024]; } ", "commid": "rust_pr_66394.0"}], "negative_passages": []} {"query_id": "q-en-rust-e4dadc355bd87a465136e53a4eafefe0ad047116c4141b2562c0b5a0123bb91a", "query": "When using zipped iterators in a certain section of code, I hit an ICE. I've attached the project source -- sorry I could not produce a minimum example though I tried. Also, it's closed source, though I own it so can choose to provide the snapshot. Symptoms: Normally using zipped iterators doesn't cause an ICE, but in this case it is consistent (clean build, using , and ). Not zipping the iterators, and using prevents the ICE from occurring. This code fails to compile: This code compiles: Compilation error message: : 1.39 (stable): 1.40 (nightly): Backtrace: >), ) { if let Err(mut errors) = self.fulfillment_cx.borrow_mut().select_where_possible(self) { let result = self.fulfillment_cx.borrow_mut().select_where_possible(self); if let Err(mut errors) = result { mutate_fullfillment_errors(&mut errors); self.report_fulfillment_errors(&errors, self.inh.body_id, fallback_has_occurred); }", "commid": "rust_pr_66388.0"}], "negative_passages": []} {"query_id": "q-en-rust-e4dadc355bd87a465136e53a4eafefe0ad047116c4141b2562c0b5a0123bb91a", "query": "When using zipped iterators in a certain section of code, I hit an ICE. I've attached the project source -- sorry I could not produce a minimum example though I tried. Also, it's closed source, though I own it so can choose to provide the snapshot. Symptoms: Normally using zipped iterators doesn't cause an ICE, but in this case it is consistent (clean build, using , and ). Not zipping the iterators, and using prevents the ICE from occurring. 
This code fails to compile: This code compiles: Compilation error message: : 1.39 (stable): 1.40 (nightly): Backtrace: // #66353: ICE when trying to recover from incorrect associated type trait _Func { fn func(_: Self); } trait _A { type AssocT; } fn main() { _Func::< <() as _A>::AssocT >::func(()); //~^ ERROR the trait bound `(): _A` is not satisfied //~| ERROR the trait bound `(): _Func<_>` is not satisfied } ", "commid": "rust_pr_66388.0"}], "negative_passages": []} {"query_id": "q-en-rust-e4dadc355bd87a465136e53a4eafefe0ad047116c4141b2562c0b5a0123bb91a", "query": "When using zipped iterators in a certain section of code, I hit an ICE. I've attached the project source -- sorry I could not produce a minimum example though I tried. Also, it's closed source, though I own it so can choose to provide the snapshot. Symptoms: Normally using zipped iterators doesn't cause an ICE, but in this case it is consistent (clean build, using , and ). Not zipping the iterators, and using prevents the ICE from occurring. This code fails to compile: This code compiles: Compilation error message: : 1.39 (stable): 1.40 (nightly): Backtrace: error[E0277]: the trait bound `(): _A` is not satisfied --> $DIR/issue-66353.rs:12:14 | LL | _Func::< <() as _A>::AssocT >::func(()); | ^^^^^^^^^^^^^^^^^^ the trait `_A` is not implemented for `()` error[E0277]: the trait bound `(): _Func<_>` is not satisfied --> $DIR/issue-66353.rs:12:41 | LL | fn func(_: Self); | ----------------- required by `_Func::func` ... LL | _Func::< <() as _A>::AssocT >::func(()); | ^^ the trait `_Func<_>` is not implemented for `()` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_66388.0"}], "negative_passages": []} {"query_id": "q-en-rust-99d6276ef090487cc560df6ba2e84e8160867b5480ae60ced7e23214dab25919", "query": "For example: This gives you a warning that should be snake case in the match arm. 
But at that point you don't really have much choice about it. Variable name warnings should be where the variable is named, and it isn't really named there. (Maybe there's some syntax like but who is going to bother with that?) $DIR/issue-66362-no-snake-case-warning-for-field-puns.rs:7:9 | LL | lowerCamelCaseName: bool, | ^^^^^^^^^^^^^^^^^^ help: convert the identifier to snake case: `lower_camel_case_name` | note: lint level defined here --> $DIR/issue-66362-no-snake-case-warning-for-field-puns.rs:1:9 | LL | #![deny(non_snake_case)] | ^^^^^^^^^^^^^^ error: variable `lowerCamelCaseBinding` should have a snake case name --> $DIR/issue-66362-no-snake-case-warning-for-field-puns.rs:20:38 | LL | Foo::Good { snake_case_name: lowerCamelCaseBinding } => { } | ^^^^^^^^^^^^^^^^^^^^^ help: convert the identifier to snake case: `lower_camel_case_binding` error: variable `anotherLowerCamelCaseBinding` should have a snake case name --> $DIR/issue-66362-no-snake-case-warning-for-field-puns.rs:24:41 | LL | if let Foo::Good { snake_case_name: anotherLowerCamelCaseBinding } = b { } | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: convert the identifier to snake case: `another_lower_camel_case_binding` error: variable `yetAnotherLowerCamelCaseBinding` should have a snake case name --> $DIR/issue-66362-no-snake-case-warning-for-field-puns.rs:27:43 | LL | if let Foo::Bad { lowerCamelCaseName: yetAnotherLowerCamelCaseBinding } = b { } | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: convert the identifier to snake case: `yet_another_lower_camel_case_binding` error: aborting due to 4 previous errors ", "commid": "rust_pr_66660.0"}], "negative_passages": []} {"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. 
However has the implication that we will not be able to add non thread-safe methods to - e.g. in order to optimize thread-local wake-ups again. It might be interesting to see whether and support could be removed from the type. Unfortunately that is however a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and . While it had been expected for - given that s are used to wake up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g. in linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake up the local eventloop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optimization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example. Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. 
I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation. But on the same thread, we know that we don't need to wake up the eventloop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and not needs to be further optimized for any use-case. Rust's goal as a language is to enable zero cost abstractions. This requires in my opinion to not be opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project which needs to decide whether Rust's async/await support is good enough for them, and whether they have to apply workarounds or not. But it's not a discussion which will move the issue here any more forward, because for yet another project the outcome of the discussion might be different. 
PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for ease of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g. there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different than for a webserver based on tokio).\nThis is a trade off: Context is either Sync or it isn't, some users benefit from one choice (they can use references to Context as threadsafe) and some users benefit from the other choice (they can have nonthreadsafe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think it's a fair point that we didn't intentionally make context send and sync, and that this precludes adding more fields to context that are not send and sync, and that since context is just a buffer for future changes, it would probably have been better to make it non-threadsafe. But now it's a breaking change. 
If crater showed no regressions and there's no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to context (not waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about . And in order to do what you describe you have to store an and send it somewhere. Which is very doubtful to happen, due to the not very useful lifetime and due to already being an like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we can not add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allows for spawning futures (like Tokios ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU bound tasks. 
This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either. The point is that there's a trade off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be, its not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives lead me to believe that there may actually be a decent case to enable some form of -to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed, however the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wakebyref, and so it is possible to pass to another thread and wake it from that thread. 
This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait? At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument, Future::poll takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-9ef16e66879a654ef86eecc14ed9f4d9b998a77e3329b46560ba17bde9758bbe", "text": "// are contravariant while return-position lifetimes are // covariant). _marker: PhantomData &'a ()>, // Ensure `Context` is `!Send` and `!Sync` in order to allow // for future `!Send` and / or `!Sync` fields. _marker2: PhantomData<*mut ()>, } impl<'a> Context<'a> {", "commid": "rust_pr_95985.0"}], "negative_passages": []} {"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. However has the implication that we will not be able to add non thread-safe methods to - e.g. in order to optimize thread-local wake-ups again.
It might be interesting to see whether and support could be removed from the type. Unfortunately that is, however, a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and . While it had been expected for - given that s are used to wake up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g. in linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake up the local event loop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optimization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real-world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example. Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop.
To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation. But on the same thread, we know that we don't need to wake up the event loop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and does not need to be further optimized for any use-case. Rust's goal as a language is to enable zero cost abstractions. This requires, in my opinion, not being opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project which needs to decide whether Rust's async/await support is good enough for them, and whether they have to apply workarounds or not. But it's not a discussion which will move the issue here any more forward, because for yet another project the outcome of the discussion might be different. PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for ease of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g.
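The winit example above (cheap same-thread flag, expensive cross-thread message) can be sketched with the std `Wake` trait. Everything here is hypothetical except the standard-library APIs: `LoopWaker`, its fields, and the `mpsc` channel standing in for an event-loop proxy are invented for illustration.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::task::Wake;
use std::thread::{self, ThreadId};

// Hypothetical event-loop waker: cheap flag on the loop thread,
// message send (standing in for a winit proxy) from anywhere else.
struct LoopWaker {
    loop_thread: ThreadId,
    ready: AtomicBool,
    proxy: mpsc::Sender<()>,
}

impl Wake for LoopWaker {
    fn wake(self: Arc<Self>) {
        self.wake_by_ref();
    }
    fn wake_by_ref(self: &Arc<Self>) {
        if thread::current().id() == self.loop_thread {
            // Same thread: no cross-thread wakeup machinery needed.
            self.ready.store(true, Ordering::Relaxed);
        } else {
            // Other thread: post to the event loop (the expensive path).
            let _ = self.proxy.send(());
        }
    }
}

fn demo() -> (bool, bool) {
    let (tx, rx) = mpsc::channel();
    let waker = Arc::new(LoopWaker {
        loop_thread: thread::current().id(),
        ready: AtomicBool::new(false),
        proxy: tx,
    });
    waker.wake_by_ref(); // cheap path: just sets the flag
    let flagged = waker.ready.load(Ordering::Relaxed);
    let remote = waker.clone();
    thread::spawn(move || remote.wake_by_ref()).join().unwrap();
    let messaged = rx.try_recv().is_ok();
    (flagged, messaged)
}

fn main() {
    assert_eq!(demo(), (true, true));
    println!("ok");
}
```

This version stays sound even though the waker can cross threads, because the thread check happens at wake time; the invalidated optimization discussed in the thread assumed an un-cloned waker could never be called from another thread at all.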
there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different from those for a webserver based on tokio).\nThis is a trade-off: Context is either Sync or it isn't, some users benefit from one choice (they can use references to Context as thread-safe) and some users benefit from the other choice (they can have non-thread-safe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade-off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think it's a fair point that we didn't intentionally make context send and sync, and that this precludes adding more fields to context that are not send and sync, and that since context is just a buffer for future changes, it would probably have been better to make it non-thread-safe. But now it's a breaking change. If crater showed no regressions and there are no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to context (not waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about .
And in order to do what you describe you have to store an and send it somewhere. That is very doubtful to happen, due to the not very useful lifetime and due to already being an like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we cannot add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allows for spawning futures (like Tokio's ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU-bound tasks. This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either.
The point is that there's a trade-off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be; it's not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives lead me to believe that there may actually be a decent case to enable some form of -to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed; however, the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wake_by_ref, and so it is possible to pass to another thread and wake it from that thread. This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait?
At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument, Future::poll takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-8accce3db88399bb62ac1ead3ca55a20f04cabf97d000206d5e6fa27b0a27672", "text": "#[must_use] #[inline] pub const fn from_waker(waker: &'a Waker) -> Self { Context { waker, _marker: PhantomData } Context { waker, _marker: PhantomData, _marker2: PhantomData } } /// Returns a reference to the [`Waker`] for the current task.", "commid": "rust_pr_95985.0"}], "negative_passages": []} {"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. However has the implication that we will not be able to add non thread-safe methods to - e.g. in order to optimize thread-local wake-ups again. It might be interesting to see whether and support could be removed from the type. Unfortunately that is, however, a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and .
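The `_marker2: PhantomData<*mut ()>` line in the diff quoted above is the whole mechanism: a raw-pointer `PhantomData` is zero-sized but opts the containing struct out of the `Send` and `Sync` auto traits. A reduced sketch, where the `NotThreadSafe` type and helper are made up for illustration:

```rust
use std::marker::PhantomData;

// Illustrative reduction of the `Context` change quoted in the diff:
// a raw-pointer PhantomData opts the struct out of the Send and Sync
// auto traits without storing any data.
struct NotThreadSafe<'a> {
    value: &'a u32,
    // `*mut ()` is neither `Send` nor `Sync`, so neither is this struct.
    _marker: PhantomData<*mut ()>,
}

impl<'a> NotThreadSafe<'a> {
    fn new(value: &'a u32) -> Self {
        NotThreadSafe { value, _marker: PhantomData }
    }
    fn get(&self) -> u32 {
        *self.value
    }
}

fn assert_send<T: Send>(_: &T) {}

fn main() {
    let x = 5;
    let ctx = NotThreadSafe::new(&x);
    assert_eq!(ctx.get(), 5);
    // assert_send(&ctx); // <- would fail to compile: `*mut ()` is not `Send`
    assert_send(&x); // plain references to `u32` still cross threads fine
    println!("ok");
}
```

This is why the change leaves room for future non-thread-safe fields: downstream code can no longer assume `Context` crosses threads, so such fields can be added later without another breaking change.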
While it had been expected for - given that s are used to wake up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g. in linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake up the local event loop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optimization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real-world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example. Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation.
But on the same thread, we know that we don't need to wake up the event loop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and does not need to be further optimized for any use-case. Rust's goal as a language is to enable zero cost abstractions. This requires, in my opinion, not being opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project which needs to decide whether Rust's async/await support is good enough for them, and whether they have to apply workarounds or not. But it's not a discussion which will move the issue here any more forward, because for yet another project the outcome of the discussion might be different. PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for ease of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g.
there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different from those for a webserver based on tokio).\nThis is a trade-off: Context is either Sync or it isn't, some users benefit from one choice (they can use references to Context as thread-safe) and some users benefit from the other choice (they can have non-thread-safe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade-off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think it's a fair point that we didn't intentionally make context send and sync, and that this precludes adding more fields to context that are not send and sync, and that since context is just a buffer for future changes, it would probably have been better to make it non-thread-safe. But now it's a breaking change. If crater showed no regressions and there are no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to context (not waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about .
And in order to do what you describe you have to store an and send it somewhere. That is very doubtful to happen, due to the not very useful lifetime and due to already being an like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we cannot add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allows for spawning futures (like Tokio's ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU-bound tasks. This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either.
The point is that there's a trade-off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be; it's not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives lead me to believe that there may actually be a decent case to enable some form of -to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed; however, the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wake_by_ref, and so it is possible to pass to another thread and wake it from that thread. This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait?
At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument, Future::poll takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-876c353a5ebd9d6d5e46e7e1ae0e48bd6ecff32a9de5d4f6df58b8bbe1c0015e", "text": " use core::task::{Context, Poll, RawWaker, RawWakerVTable, Waker}; use core::task::{Poll, RawWaker, RawWakerVTable, Waker}; #[test] fn poll_const() {", "commid": "rust_pr_95985.0"}], "negative_passages": []} {"query_id": "q-en-rust-d71b16d4607174d57ba73e9d95f06f2b41e1ab72932f98df17b457c3b78e0dad", "query": "One issue that came up in the discussion is that the type implements . This might have been introduced accidentally. probably does not matter at all, given that users will only observe a by reference. However has the implication that we will not be able to add non thread-safe methods to - e.g. in order to optimize thread-local wake-ups again. It might be interesting to see whether and support could be removed from the type. Unfortunately that is, however, a breaking change - even though it is not likely that any code currently uses out of the direct path. In a similar fashion it had been observed in the implementation of () that the type is also and . While it had been expected for - given that s are used to wake up tasks from different threads, it might not have been for . One downside of s being is that it prevents optimizations. E.g.
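A cheap way to pin down the auto-trait claims being debated above is a compile-time assertion: an empty generic function bounded on `Send + Sync` fails to compile for any type that loses those traits. A minimal sketch, with invented helper names:

```rust
use std::task::Waker;

// Empty generic functions turn auto-trait questions into compile-time checks.
fn assert_send_sync<T: Send + Sync>() {}

fn check() -> bool {
    assert_send_sync::<Waker>(); // Waker is Send + Sync today
    assert_send_sync::<&Waker>(); // so shared references may cross threads
    // assert_send_sync::<std::task::Context<'_>>(); // fails once Context opts out
    true
}

fn main() {
    assert!(check());
    println!("ok");
}
```

Tests like this are how a change such as the one in the quoted diff gets noticed: code that relied on the old auto-trait impls stops compiling rather than silently misbehaving.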
in linked ticket the original implementation contained an optimization that while a was not d (and thereby equipped with a different vtable) it could wake up the local event loop again by just setting a non-synchronized boolean flag in the current thread. However given that could be transferred to another thread, and called from there even within the context - this optimization is invalid. Here it would also be interesting if the requirement could be removed. I expect the amount of real-world usages to be in the same ballpark as sending across threads - hopefully 0. But it's again a breaking change cc , , ,\nEven if this weren't a breaking change, I'd want to see some really good evidence these optimizations would matter for real-world code before choosing to make these types non-. I don't think getting rid of an atomic update in is a compelling example. Basically, in my view the current API is intentional and correct.\nwas introduced to leave room for future unforeseen requirements, and being restricts its usefulness as such, which is unfortunate. I can see a case for since we have , but it's difficult to imagine a case where it makes sense to be accessing a single from multiple threads concurrently.\nI have an example of where the current API is resulting in performance issues: I am trying to implement a combined Executor/Reactor around a winit event loop. To wake up the event loop from a different thread we have to post a message to the event loop which can be a non-trivial operation.
But on the same thread, we know that we don't need to wake up the event loop, so we can use much cheaper mechanisms like an atomic bool.\nBut that would be true if wakers were just also, which was an intentional part of the simplification from the previous design.\nI think the point is that being precludes the otherwise noninvasive restoration of a more efficient through an additional method on .\nI don't think it should be the judgement of the libs team (or any of us) to determine whether something is good enough and does not need to be further optimized for any use-case. Rust's goal as a language is to enable zero cost abstractions. This requires, in my opinion, not being opinionated on any design choices which have a performance/footprint impact. I think some of the decisions that have been taken in the async/await world however are opinionated, and will have an impact on use-cases that currently are still better implemented with different mechanisms. I do not really want to elaborate on anything more particular, because I fear this would bring the discussion back to arguing whether any X% performance degradation is still good enough. That is an OK decision to have for a software project which needs to decide whether Rust's async/await support is good enough for them, and whether they have to apply workarounds or not. But it's not a discussion which will move the issue here any more forward, because for yet another project the outcome of the discussion might be different. PS: I do think it's okay and good if libraries like tokio or async-std are being more opinionated about what threading models they support, and they might sacrifice performance gains in rare usage scenarios for ease of use. But and language features are different, and we expect people to use them also for niche use-cases (e.g.
there is certainly a lot of cool evaluation going on with the use of async/await in embedded contexts or kernels - where requirements might be very different from those for a webserver based on tokio).\nThis is a trade-off: Context is either Sync or it isn't, some users benefit from one choice (they can use references to Context as thread-safe) and some users benefit from the other choice (they can have non-thread-safe constructs inside the Context construct). Ultimately the libs team has to decide one way or the other on these points in the API where there is a trade-off between thread safety and not. However, this decision has already been made and it would be a breaking change to change it, so this discussion is moot.\nDo you have an example where this is actually done within the ecosystem, or a use-case for it? As far as I can tell, this would require spawning a thread using something like and capturing the from within a ?\nA common way a user could depend on a type being is to store it in an and send it across threads. This isn't very likely for Context but it is a very reasonable pattern for .\nI think it's a fair point that we didn't intentionally make context send and sync, and that this precludes adding more fields to context that are not send and sync, and that since context is just a buffer for future changes, it would probably have been better to make it non-thread-safe. But now it's a breaking change. If crater showed no regressions and there are no known patterns it nullifies, I would be open to changing this about Context personally, but I am somewhat doubtful the libs team as a whole would approve making a breaking change to support hypothetical future extensions. If anyone on the thread wants to pursue a change to context (not waker), they should get crater results as the next step of the conversation.\nThere is no discussion about being or not - it obviously has to be. The question is purely about .
And in order to do what you describe you have to store an and send it somewhere. That is very doubtful to happen, due to the not very useful lifetime and due to already being an like thing internally. As mentioned, the most likely way to see this behavior is some scoped thread API being used inside - or people doing some very advanced synchronization against an IO driver running in another thread. For all those there are certainly better ways than to rely on being . E.g. to return a flag from the synchronized section and call from the original thread when done. Or to and send the if there is doubt whether it needs to be persisted somewhere else. And yes, the impact on is even bigger. It was meant as an extension point. But we cannot add any functions to it which are not thread-safe. E.g. if we want to add methods which spawn another task on the same thread as the current executor thread, and which allows for spawning futures (like Tokio's ) - we would have an issue.\nWhy would these be \"certainly better\" than just calling from the scoped threads?\nTo be a little more expansive: the scoped threadpool you're talking about could very well not be scoped inside a poll method - rather, the waker could be cloned once and then owned by one thread and referenced by many other scoped threads in some sort of threadpool construct for CPU-bound tasks. This seems like a perfectly valid implementation which allows you to divide up the work among many threads without cloning the waker many times. This is potentially an optimization. I don't think this optimization is very important, but I don't think the optimizations allowed by making waker are very important either.
The point is that there's a trade-off between the optimizations allowed by assuming references to wakers can be shared across threads and the optimizations allowed by assuming they can't be; it's not the case that one side is inherently the \"zero cost\" side.\nWe discussed this at the recent Libs meeting and felt that deferring to would make sense here.\nThough I don't have the bandwidth to carry this, this definitely seems interesting. Seeing projects such as and closed-source initiatives lead me to believe that there may actually be a decent case to enable some form of -to avoid the synchronization overhead on single-threaded runtimes. I don't know if we should make this a priority, but at least we probably shouldn't close it just yet.\nI think bringing back would be a nice additional improvement. But for the moment it would be nice to just fix the general sync-ness, which blocks all other fixes and improvements.\nAs the person who originally introduced , it was very much a conscious decision to get rid of it and to make and . This was called out explicitly .\nThe cited discussion in the RFC justifies making Send, and predates the decision to (re)introduce . Making doesn't defeat the objectives discussed there. In particular, it does not impact ergonomics for the common case at all.\nThis was recently closed; however, the issue mentions both and . The recent change only affects . There is good reason that is , however after reading this issue I'm unsure that there are good reasons why is .\n(NOT A CONTRIBUTION) Waker supports wake_by_ref, and so it is possible to pass to another thread and wake it from that thread. This functionality would not be possible without Waker being Sync. Supporting wakers that are either not Send or not Sync will best be done by adding new APIs to Context and a new LocalWaker type.\nForgive me the naive question, but how does a type work with ? As I understand, requires , as a concrete type not a trait, so this would also require a trait?
At this point we have two completely incompatible async systems... Which is only useful if is not cheap to clone, and has the burden of another lifetime bound. It surprises me that the docs don't say anything about the intended cost of .\n(NOT A CONTRIBUTION) Future::poll does not take Waker as an argument; it takes Context. Context could have the ability to set a LocalWaker, so that an executor could set this. Reactors which operate on the same thread as the future that polls them could migrate to using the LocalWaker argument instead of Waker. Here is a pre-RFC that someone wrote with a possible API: Commentary like this is both factually wrong and not helpful for the mood of the thread.", "positive_passages": [{"docid": "doc-en-rust-7d0dcfed4463f882f27def797166c874a42323686ed9cb2c439f6f82ca71d80", "text": "static WAKER: Waker = unsafe { Waker::from_raw(VOID_WAKER) }; static CONTEXT: Context<'static> = Context::from_waker(&WAKER); static WAKER_REF: &'static Waker = CONTEXT.waker(); WAKER_REF.wake_by_ref(); WAKER.wake_by_ref(); }", "commid": "rust_pr_95985.0"}], "negative_passages": []} {"query_id": "q-en-rust-7f533f7f7641406ed74aea22c264ad9b9554f9fdc560a3dc927fc1c0830c750c", "query": "This works; let's consider adding a test for it since it's a bit of a special case. Reported to be working in by\ndo you mind if I add this? My laptop is no longer broken ^^ so I can actually get some work done without disappearing for a week at a time. I put this in a file called and ran , is that about the right approach?\nIt is, but I'd recommend using to make it go faster. Also use to just test that file. Also, I'd rename the test to not suggest ... something like .\nI ran about an hour ago and that worked fine (although it took a while ^^). However, since then I updated nightly and I'm now getting compile errors when building stage 1: What am I doing wrong? 
Full error: (cross posted from https://rust-)", "positive_passages": [{"docid": "doc-en-rust-37cd077b4a10670ee5b5aedb786861c29994e74419aa9e68e108268c98b6d627", "text": " // check-pass #![feature(const_if_match)] enum E { A, B, C } const fn f(e: E) -> usize { match e { _ => 0 } } fn main() { const X: usize = f(E::C); assert_eq!(X, 0); assert_eq!(f(E::A), 0); } ", "commid": "rust_pr_66786.0"}], "negative_passages": []} {"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-ea431610114ba5dccf9f4e91c5d7f9dd25f5b30f295f08a7e97e1cd93977e0f2", "text": "variant_index: VariantIdx, dest: PlaceTy<'tcx, M::PointerTag>, ) -> InterpResult<'tcx> { let variant_scalar = Scalar::from_u32(variant_index.as_u32()).into(); // Layout computation excludes uninhabited variants from consideration // therefore there's no way to represent those variants in the given layout. if dest.layout.for_variant(self, variant_index).abi.is_uninhabited() { throw_ub!(Unreachable); } match dest.layout.variants { layout::Variants::Single { index } => { if index != variant_index { throw_ub!(InvalidDiscriminant(variant_scalar)); } assert_eq!(index, variant_index); } layout::Variants::Multiple { discr_kind: layout::DiscriminantKind::Tag,", "commid": "rust_pr_66960.0"}], "negative_passages": []} {"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. 
I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-72e60209aeb8c12608cd98adfeb26a27dab5fa303fda88fc621f36621fde0452", "text": "discr_index, .. } => { if !dest.layout.ty.variant_range(*self.tcx).unwrap().contains(&variant_index) { throw_ub!(InvalidDiscriminant(variant_scalar)); } // No need to validate that the discriminant here because the // `TyLayout::for_variant()` call earlier already checks the variant is valid. let discr_val = dest.layout.ty.discriminant_for_variant(*self.tcx, variant_index).unwrap().val;", "commid": "rust_pr_66960.0"}], "negative_passages": []} {"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-fedbed21ca0fcd8a211a5cdb0109e51e2b4e21fe0c8dd1eab26daf9cfbcfe8ad", "text": "discr_index, .. } => { if !variant_index.as_usize() < dest.layout.ty.ty_adt_def().unwrap().variants.len() { throw_ub!(InvalidDiscriminant(variant_scalar)); } // No need to validate that the discriminant here because the // `TyLayout::for_variant()` call earlier already checks the variant is valid. 
if variant_index != dataful_variant { let variants_start = niche_variants.start().as_u32(); let variant_index_relative = variant_index.as_u32()", "commid": "rust_pr_66960.0"}], "negative_passages": []} {"query_id": "q-en-rust-01d2a6e0236cf8f43cd09d26f30dba7a41be7bb417992c9cbd42d0737301118d", "query": "I'm trying to build my crate on nightly, and I'm getting a compiler panic. I bisected several nightly builds to try and narrow down when the problem started. The last known good nightly was , and the first failing was The panic also occurs in . The crates causing the panic are available here:\ncc\ntriage: P-high, removing nomination label.\nMinimal repro:\nAs some info, I've narrowed down the offending commit to \u2013\u2013 most likely the changes in .", "positive_passages": [{"docid": "doc-en-rust-03704bbf2f682f0a4fbcb6335d6aa1e284ca6d1487ca81a289948d40340397f7", "text": " // build-pass // compile-flags: --crate-type lib // Regression test for ICE which occurred when const propagating an enum with three variants // one of which is uninhabited. pub enum ApiError {} #[allow(dead_code)] pub struct TokioError { b: bool, } pub enum Error { Api { source: ApiError, }, Ethereum, Tokio { source: TokioError, }, } struct Api; impl IntoError for Api { type Source = ApiError; fn into_error(self, error: Self::Source) -> Error { Error::Api { source: (|v| v)(error), } } } pub trait IntoError { /// The underlying error type Source; /// Combine the information to produce the error fn into_error(self, source: Self::Source) -> E; } ", "commid": "rust_pr_66960.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc // check-pass // ignore-emscripten no llvm_asm! 
support #![feature(llvm_asm)] pub fn boot(addr: Option) { unsafe { llvm_asm!(\"mov sp, $0\"::\"r\" (addr)); } } fn main() {} ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc // edition:2018 use std::sync::{Arc, Mutex}; pub async fn f(_: ()) {} pub async fn run() { let x: Arc> = unimplemented!(); f(*x.lock().unwrap()).await; } ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc // aux-build: issue_67893.rs // edition:2018 // dont-check-compiler-stderr // FIXME(#71222): Add above flag because of the difference of stderrs on some env. extern crate issue_67893; fn g(_: impl Send) {} fn main() { g(issue_67893::run()) //~^ ERROR: `std::sync::MutexGuard<'_, ()>` cannot be sent between threads safely } ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc #![feature(intrinsics)] extern \"C\" { pub static FOO: extern \"rust-intrinsic\" fn(); } fn main() { FOO() //~ ERROR: use of extern static is unsafe } ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. 
The problem occured when I started messing around with Arc error[E0133]: use of extern static is unsafe and requires unsafe function or block --> $DIR/issue-28575.rs:8:5 | LL | FOO() | ^^^ use of extern static | = note: extern statics are not controlled by the Rust type system: invalid data, aliasing violations or data races will cause undefined behavior error: aborting due to previous error For more information about this error, try `rustc --explain E0133`. ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc pub static TEST_STR: &'static str = \"Hello world\"; ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-126781cdf804036bb308f82da3f8cc9a4ef6f10fd31e0ff0fa44f6a847161e95", "query": "I hit this problem while trying to make a task joining function. The problem occured when I started messing around with Arc // aux-build: issue_24843.rs // check-pass extern crate issue_24843; static _TEST_STR_2: &'static str = &issue_24843::TEST_STR; fn main() {} ", "commid": "rust_pr_71182.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . 
/// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto::: private::Sealed + Sized { #[unstable(feature = \"float_approx_unchecked_to\", issue = \"67058\")] #[unstable(feature = \"convert_float_to_int\", issue = \"67057\")] #[doc(hidden)] unsafe fn approx_unchecked(self) -> Int; unsafe fn to_int_unchecked(self) -> Int; } macro_rules! impl_float_to_int {", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . /// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: for $Float { #[doc(hidden)] #[inline] unsafe fn approx_unchecked(self) -> $Int { crate::intrinsics::float_to_int_approx_unchecked(self) unsafe fn to_int_unchecked(self) -> $Int { #[cfg(bootstrap)] { crate::intrinsics::float_to_int_approx_unchecked(self) } #[cfg(not(bootstrap))] { crate::intrinsics::float_to_int_unchecked(self) } } } )+", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. 
fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . /// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto::) /// This is under stabilization at #[cfg(bootstrap)] pub fn float_to_int_approx_unchecked(value: Float) -> Int; /// Convert with LLVM\u2019s fptoui/fptosi, which may return undef for values out of range /// () /// /// Stabilized as `f32::to_int_unchecked` and `f64::to_int_unchecked`. #[cfg(not(bootstrap))] pub fn float_to_int_unchecked(value: Float) -> Int; /// Returns the number of bits set in an integer type `T` /// /// The stabilized versions of this intrinsic are available on the integer", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . 
/// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: #[unstable(feature = \"float_approx_unchecked_to\", issue = \"67058\")] #[stable(feature = \"float_approx_unchecked_to\", since = \"1.44.0\")] #[inline] pub unsafe fn approx_unchecked_to(self) -> Int pub unsafe fn to_int_unchecked(self) -> Int where Self: FloatToInt, { FloatToInt::::approx_unchecked(self) FloatToInt::::to_int_unchecked(self) } /// Raw transmutation to `u32`.", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . 
/// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: /// #![feature(float_approx_unchecked_to)] /// /// let value = 4.6_f32; /// let rounded = unsafe { value.approx_unchecked_to::() }; /// let rounded = unsafe { value.to_int_unchecked::() }; /// assert_eq!(rounded, 4); /// /// let value = -128.9_f32; /// let rounded = unsafe { value.approx_unchecked_to::() }; /// let rounded = unsafe { value.to_int_unchecked::() }; /// assert_eq!(rounded, std::i8::MIN); /// ``` ///", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . 
/// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: #[unstable(feature = \"float_approx_unchecked_to\", issue = \"67058\")] #[stable(feature = \"float_approx_unchecked_to\", since = \"1.44.0\")] #[inline] pub unsafe fn approx_unchecked_to(self) -> Int pub unsafe fn to_int_unchecked(self) -> Int where Self: FloatToInt, { FloatToInt::::approx_unchecked(self) FloatToInt::::to_int_unchecked(self) } /// Raw transmutation to `u64`.", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . /// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: \"float_to_int_approx_unchecked\" => { \"float_to_int_unchecked\" => { if float_type_width(arg_tys[0]).is_none() { span_invalid_monomorphization_error( tcx.sess, span, &format!( \"invalid monomorphization of `float_to_int_approx_unchecked` \"invalid monomorphization of `float_to_int_unchecked` intrinsic: expected basic float type, found `{}`\", arg_tys[0]", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. 
fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . /// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: \"invalid monomorphization of `float_to_int_approx_unchecked` \"invalid monomorphization of `float_to_int_unchecked` intrinsic: expected basic integer type, found `{}`\", ret_ty", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-6cf86ef85e3f58463df2f0afa844964d29923613a7cb40d6acfd7169cf886946", "query": "First discussed in issue As of Rust 1.39, casting a floating point number to an integer with is Undefined Behavior if the value is out of range. fixes this soundness hole by making \u201csaturate\u201d to the maximum or minimum value of the integer type (or zero for ), but has measurable negative performance impact in some benchmarks. There is some consensus in that thread for enabling saturation by default anyway, but provide an alternative for users who know through some other mean that their values are in range. PR adds that method to each of and . 
/// #![feature(floatapproxuncheckedto)] /// /// let value = 4.6f32; /// let rounded = unsafe { value.approxuncheckedto:: { (1, vec![param(0), param(0)], param(0)) } \"float_to_int_approx_unchecked\" => (2, vec![param(0)], param(1)), \"float_to_int_unchecked\" => (2, vec![param(0)], param(1)), \"assume\" => (0, vec![tcx.types.bool], tcx.mk_unit()), \"likely\" => (0, vec![tcx.types.bool], tcx.types.bool),", "commid": "rust_pr_70487.0"}], "negative_passages": []} {"query_id": "q-en-rust-3da03148d9aad50460710bedd7f4d95b136d23a7c68f1e478d39dc18d558dcdf", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri has failing tests. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-f0386f10cab418a34ef0a377c8c6eab0bf4b7bab0cfc5b219bb39a6895efd23a", "text": "\"rand 0.7.0\", \"rustc-workspace-hack\", \"rustc_version\", \"serde\", \"shell-escape\", \"vergen\", ]", "commid": "rust_pr_67147.0"}], "negative_passages": []} {"query_id": "q-en-rust-3da03148d9aad50460710bedd7f4d95b136d23a7c68f1e478d39dc18d558dcdf", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri has failing tests. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! 
cc the PR reviewer, and nominating for compiler team prioritization.", "positive_passages": [{"docid": "doc-en-rust-3074a20d9f00d0968d03284ada31b51a1376d435dc37bea62c39d5a757bfe36c", "text": " Subproject commit a0ba079b6af0f8c07c33dd8af72a51c997e58967 Subproject commit 048af409232fc2d7f8fbe5469080dc8bb702c498 ", "commid": "rust_pr_67147.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-c462aa0a510377d577b98749dab0e21987b55b65ae66b0413f50c3d1f425b66d", "text": "None => return Ok(()), }; // Render the replacements for each suggestion let suggestions = suggestion.splice_lines(&**sm); if suggestions.is_empty() { // Suggestions coming from macros can have malformed spans. This is a heavy handed // approach to avoid ICEs by ignoring the suggestion outright. return Ok(()); } let mut buffer = StyledBuffer::new(); // Render the suggestion message", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-84a76ef7e8a8e848cc47a3b8eafd85f47db370e72f7667ed4e9d18cdff0e9ad1", "text": "Some(Style::HeaderMsg), ); // Render the replacements for each suggestion let suggestions = suggestion.splice_lines(&**sm); let mut row_num = 2; let mut notice_capitalization = false; for (complete, parts, only_capitalization) in suggestions.iter().take(MAX_SUGGESTIONS) {", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-ab2eb3e72b10acdf772f8b5b5b61c143310789387fdfcc4515ae118ef6b81fa6", "text": "let show_underline = !(parts.len() == 1 && parts[0].snippet.trim() == complete.trim()) && complete.lines().count() == 1; let lines = sm.span_to_lines(parts[0].span).unwrap(); let lines = sm .span_to_lines(parts[0].span) .expect(\"span_to_lines failed when emitting suggestion\"); assert!(!lines.lines.is_empty());", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-89936a17ab6826cf7982a2f80a1621cf8670375728d09ac2fd5fefecd32b9eae", "text": "pub use emitter::ColorConfig; use log::debug; use Level::*; use emitter::{is_case_difference, Emitter, EmitterWriter};", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-7f5ee8f37285b464fd0a1a3b9ef1e9cf732200eb1397b2ed5160bb5e970785f6", "text": "self.substitutions .iter() .filter(|subst| { // Suggestions coming from macros can have malformed spans. This is a heavy // handed approach to avoid ICEs by ignoring the suggestion outright. let invalid = subst.parts.iter().any(|item| cm.is_valid_span(item.span).is_err()); if invalid { debug!(\"splice_lines: suggestion contains an invalid span: {:?}\", subst); } !invalid }) .cloned() .map(|mut substitution| { // Assumption: all spans are in the same file, and all spans", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-6871631a1062c0012e8bd0325b26d441a3fa2e9bb41164549c898da6a2312807", "text": "lo.line != hi.line } pub fn span_to_lines(&self, sp: Span) -> FileLinesResult { debug!(\"span_to_lines(sp={:?})\", sp); pub fn is_valid_span(&self, sp: Span) -> Result<(Loc, Loc), SpanLinesError> { let lo = self.lookup_char_pos(sp.lo()); debug!(\"span_to_lines: lo={:?}\", lo); let hi = self.lookup_char_pos(sp.hi()); debug!(\"span_to_lines: hi={:?}\", hi); if lo.file.start_pos != hi.file.start_pos { return Err(SpanLinesError::DistinctSources(DistinctSources { begin: (lo.file.name.clone(), lo.file.start_pos), end: (hi.file.name.clone(), hi.file.start_pos), })); } Ok((lo, hi)) } pub fn span_to_lines(&self, sp: Span) -> FileLinesResult { debug!(\"span_to_lines(sp={:?})\", sp); let (lo, hi) = self.is_valid_span(sp)?; assert!(hi.line >= lo.line); let mut lines = Vec::with_capacity(hi.line - lo.line + 1);", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-f6c09d2afd95bcfbd5d3a90596bfe8178701495aadbc5854d1cf8595064616dd", "query": "(reduced sample courtesy of Output:\ntriage: diagnostics produced prior to ICE seem like they probably help user fix problem (maybe)? So calling this P-medium. 
Removing nomination label.", "positive_passages": [{"docid": "doc-en-rust-c5af145fa1030826aead87a33062ddcf6a744ce9dddc92c6ab425308262aa776", "text": "LL | const MUTABLE_BEHIND_RAW: *mut i32 = &UnsafeCell::new(42) as *const _ as *mut _; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ thread 'rustc' panicked at 'no errors encountered even though `delay_span_bug` issued', src/librustc_errors/lib.rs:346:17 thread 'rustc' panicked at 'no errors encountered even though `delay_span_bug` issued', src/librustc_errors/lib.rs:356:17 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace error: internal compiler error: unexpected panic", "commid": "rust_pr_68256.0"}], "negative_passages": []} {"query_id": "q-en-rust-7fcad33d8025ec2655e17a2db32867a20c97e93666c0ba69cd95235a600df7ce", "query": "Creating a stack closure which references an owned pointer and then transferring ownership of the owned box before invoking the stack closure results in a crash. For example, in Rust 0.6, this will compile and the resulting program will then crash with a SEGV\nDoes it still crash on the branch?\nIt dumps core for me, on incoming: Adding an xfailed test. We want this to fail compilation, I guess.\nThis bug could be fixed by the proposed solution in this post: (namely, bullet 1, not 2)\nDup of and more specifically .", "positive_passages": [{"docid": "doc-en-rust-2677e6b41201c4791d4d5d3ed67949636389ec4a5c1404c9580048cdc76f02c3", "text": " //xfail-test // Creating a stack closure which references an owned pointer and then // transferring ownership of the owned box before invoking the stack // closure results in a crash. 
fn twice(x: ~uint) -> uint { *x * 2 } fn invoke(f : &fn() -> uint) { f(); } fn main() { let x : ~uint = ~9; let sq : &fn() -> uint = || { *x * *x }; twice(x); invoke(sq); } No newline at end of file", "commid": "rust_pr_6770"}], "negative_passages": []} {"query_id": "q-en-rust-ddc95b04ee35f65de570fc307a1f17f3c18cd20594dbb9ce6ea07e32f556e5f2", "query": "This code: Returns this error: Which is very unhelpful; it should at least in some way point to the line with the . This happens on rustc 1.40.0 ( 2019-12-16). For reference, this is the equivalent non-async error message (much more helpful): $DIR/issue-67765-async-diagnostic.rs:13:11 | LL | let b = &s[..]; | - `s` is borrowed here LL | LL | Err(b)?; | ^ returns a value referencing data owned by the current function error: aborting due to previous error For more information about this error, try `rustc --explain E0515`. ", "commid": "rust_pr_73806.0"}], "negative_passages": []} {"query_id": "q-en-rust-ddc95b04ee35f65de570fc307a1f17f3c18cd20594dbb9ce6ea07e32f556e5f2", "query": "This code: Returns this error: Which is very unhelpful; it should at least in some way point to the line with the . This happens on rustc 1.40.0 ( 2019-12-16). For reference, this is the equivalent non-async error message (much more helpful): $DIR/unnamed-closure-doesnt-life-long-enough-issue-67634.rs:2:49 error[E0373]: closure may outlive the current function, but it borrows `a`, which is owned by the current function --> $DIR/unnamed-closure-doesnt-life-long-enough-issue-67634.rs:2:44 | LL | [0].iter().flat_map(|a| [0].iter().map(|_| &a)); | - ^- ...but `a` will be dropped here, when the enclosing closure returns | | | | | `a` would have to be valid for `'_`... 
| has type `&i32` | ^^^ - `a` is borrowed here | | | may outlive borrowed value `a` | = note: functions cannot return a borrow to data owned within the function's scope, functions can only return borrows to data passed as arguments = note: to learn more, visit note: closure is returned here --> $DIR/unnamed-closure-doesnt-life-long-enough-issue-67634.rs:2:29 | LL | [0].iter().flat_map(|a| [0].iter().map(|_| &a)); | ^^^^^^^^^^^^^^^^^^^^^^ help: to force the closure to take ownership of `a` (and any other referenced variables), use the `move` keyword | LL | [0].iter().flat_map(|a| [0].iter().map(move |_| &a)); | ^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0597`. For more information about this error, try `rustc --explain E0373`. ", "commid": "rust_pr_73806.0"}], "negative_passages": []} {"query_id": "q-en-rust-ddc95b04ee35f65de570fc307a1f17f3c18cd20594dbb9ce6ea07e32f556e5f2", "query": "This code: Returns this error: Which is very unhelpful; it should at least in some way point to the line with the . This happens on rustc 1.40.0 ( 2019-12-16). For reference, this is the equivalent non-async error message (much more helpful): $DIR/return-disjoint-regions.rs:4:5 | LL | let y = &x; | -- `x` is borrowed here LL | (y, y) | ^^^^^^ returns a value referencing data owned by the current function error: aborting due to previous error For more information about this error, try `rustc --explain E0515`. ", "commid": "rust_pr_73806.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implemantions this does not work because it would require changing the trait definition too (which is often not possible). 
Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-d6006530f576be816ee609d5352413c32e1c401584a6a72bce70e0317997e7f6", "text": "use rustc_middle::ty::{self, Ty, TyCtxt}; use rustc_middle::{ hir::place::PlaceBase, mir::{self, ClearCrossCrate, Local, LocalDecl, LocalInfo, Location}, mir::{self, ClearCrossCrate, Local, LocalDecl, LocalInfo, LocalKind, Location}, }; use rustc_span::source_map::DesugaringKind; use rustc_span::symbol::{kw, Symbol};", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implementations this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. 
Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-6dd4822ae99f8d53fa48595da97e7d80e31f2467a8d613488eac7676ab571f26", "text": "match label { Some((true, err_help_span, suggested_code)) => { err.span_suggestion( err_help_span, &format!( \"consider changing this to be a mutable {}\", pointer_desc ), suggested_code, Applicability::MachineApplicable, ); let (is_trait_sig, local_trait) = self.is_error_in_trait(local); if !is_trait_sig { err.span_suggestion( err_help_span, &format!( \"consider changing this to be a mutable {}\", pointer_desc ), suggested_code, Applicability::MachineApplicable, ); } else if let Some(x) = local_trait { err.span_suggestion( x, &format!( \"consider changing that to be a mutable {}\", pointer_desc ), suggested_code, Applicability::MachineApplicable, ); } } Some((false, err_label_span, message)) => { err.span_label(err_label_span, &message);", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implementations this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. 
Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-afd691419b0c2bb1282cdd3d703b41fe9faabbd567966b8c5974e7978962ef97", "text": "err.buffer(&mut self.errors_buffer); } /// User cannot make signature of a trait mutable without changing the /// trait. So we find if this error belongs to a trait and if so we move /// suggestion to the trait or disable it if it is out of scope of this crate fn is_error_in_trait(&self, local: Local) -> (bool, Option) { if self.body.local_kind(local) != LocalKind::Arg { return (false, None); } let hir_map = self.infcx.tcx.hir(); let my_def = self.body.source.def_id(); let my_hir = hir_map.local_def_id_to_hir_id(my_def.as_local().unwrap()); let td = if let Some(a) = self.infcx.tcx.impl_of_method(my_def).and_then(|x| self.infcx.tcx.trait_id_of_impl(x)) { a } else { return (false, None); }; ( true, td.as_local().and_then(|tld| { let h = hir_map.local_def_id_to_hir_id(tld); match hir_map.find(h) { Some(Node::Item(hir::Item { kind: hir::ItemKind::Trait(_, _, _, _, items), .. })) => { let mut f_in_trait_opt = None; for hir::TraitItemRef { id: fi, kind: k, .. } in *items { let hi = fi.hir_id(); if !matches!(k, hir::AssocItemKind::Fn { .. }) { continue; } if hir_map.name(hi) != hir_map.name(my_hir) { continue; } f_in_trait_opt = Some(hi); break; } f_in_trait_opt.and_then(|f_in_trait| match hir_map.find(f_in_trait) { Some(Node::TraitItem(hir::TraitItem { kind: hir::TraitItemKind::Fn( hir::FnSig { decl: hir::FnDecl { inputs, .. }, .. }, _, ), .. })) => { let hir::Ty { span, .. 
} = inputs[local.index() - 1]; Some(span) } _ => None, }) } _ => None, } }), ) } // point to span of upvar making closure call require mutable borrow fn show_mutating_upvar( &self,", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implemantions this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-83684e5ac5e778e8524acfbbe193646454bcd2e903c6117218f5984e0623a3c3", "text": " use std::alloc::{GlobalAlloc, Layout}; struct Test(u32); unsafe impl GlobalAlloc for Test { unsafe fn alloc(&self, _layout: Layout) -> *mut u8 { self.0 += 1; //~ ERROR cannot assign 0 as *mut u8 } unsafe fn dealloc(&self, _ptr: *mut u8, _layout: Layout) { unimplemented!(); } } fn main() { } ", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. 
For trait implemantions this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-90333228226ca7e2beb5f88311fe94b126e596ae2d7c44dbf633f2d6f0d4662d", "text": " error[E0594]: cannot assign to `self.0` which is behind a `&` reference --> $DIR/issue-68049-1.rs:7:9 | LL | self.0 += 1; | ^^^^^^^^^^^ `self` is a `&` reference, so the data it refers to cannot be written error: aborting due to previous error For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implemantions this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. 
Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-ccfd68feb53099ed325b6b705412e1a0a1512f0d9f20168a526f880abc2e87ce", "text": " trait Hello { fn example(&self, input: &i32); // should suggest here } struct Test1(i32); impl Hello for Test1 { fn example(&self, input: &i32) { // should not suggest here *input = self.0; //~ ERROR cannot assign } } struct Test2(i32); impl Hello for Test2 { fn example(&self, input: &i32) { // should not suggest here self.0 += *input; //~ ERROR cannot assign } } fn main() { } ", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-eb54dff47d80294a1f419036c6a66320561f63e7909c935ed556015defe0e8e0", "query": "Summary: The compiler suggests changing to when trying to modify in a method. For trait implemantions this does not work because it would require changing the trait definition too (which is often not possible). Example: I was trying to implement the trait from the standard library for a custom type: () This of course doesn't work since you can't modify data behind a reference: However, I noticed that the help message suggests changing the to , which is incorrect since that would require changing the trait definition in the standard library too. 
Suggestion: I don't know if this is possible, but maybe this help message should only be shown for \"normal\" methods and hidden when implementing trait methods?\nMaybe only hide the suggestion if the trait comes from a foreign crate?\nOr suggest to change the trait (if it is in this crate) not change the function in the part.\nOne more idea: suggest ///", "positive_passages": [{"docid": "doc-en-rust-01782ad78f726019c8e51f66887c59d86418fceecbb1f690083c31aaba912c64", "text": " error[E0594]: cannot assign to `*input` which is behind a `&` reference --> $DIR/issue-68049-2.rs:9:7 | LL | fn example(&self, input: &i32); // should suggest here | ---- help: consider changing that to be a mutable reference: `&mut i32` ... LL | *input = self.0; | ^^^^^^^^^^^^^^^ `input` is a `&` reference, so the data it refers to cannot be written error[E0594]: cannot assign to `self.0` which is behind a `&` reference --> $DIR/issue-68049-2.rs:17:5 | LL | fn example(&self, input: &i32); // should suggest here | ----- help: consider changing that to be a mutable reference: `&mut self` ... LL | self.0 += *input; | ^^^^^^^^^^^^^^^^ `self` is a `&` reference, so the data it refers to cannot be written error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0594`. 
", "commid": "rust_pr_85100.0"}], "negative_passages": []} {"query_id": "q-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "query": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "positive_passages": [{"docid": "doc-en-rust-593177600d2ba67a9e05dfa75406f4bdadd50042d20e8e8eea0a1f8544466e16", "text": "self.sess.gated_spans.gate(sym::const_extern_fn, lo.to(self.token.span)); } let ext = self.parse_extern()?; self.bump(); // `fn` self.expect_keyword(kw::Fn)?; let header = FnHeader { unsafety,", "commid": "rust_pr_68073.0"}], "negative_passages": []} {"query_id": "q-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "query": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "positive_passages": [{"docid": "doc-en-rust-ac73cd55658f4738286ff4c668c03c0c81dd14360edc9c5ec169057c14b8302e", "text": " fn main() {} #[cfg(FALSE)] fn container() { const unsafe WhereIsFerris Now() {} //~^ ERROR expected one of `extern` or `fn` } ", "commid": "rust_pr_68073.0"}], "negative_passages": []} {"query_id": "q-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "query": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "positive_passages": [{"docid": "doc-en-rust-ae07bb08d697e95b96a20955f23b5ed7f611e7e0e38cefdff50d32c098054c3a", "text": " error: expected one of `extern` or `fn`, found `WhereIsFerris` --> $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier-2.rs:5:18 | LL | const unsafe WhereIsFerris Now() {} | ^^^^^^^^^^^^^ expected one of `extern` or `fn` error: aborting due to previous error ", "commid": 
"rust_pr_68073.0"}], "negative_passages": []} {"query_id": "q-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "query": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "positive_passages": [{"docid": "doc-en-rust-a1dae52acc3c2367bff88d7bdd2e939cd40fcb03345c040997d4d6f68e60e3a0", "text": " fn main() {} #[cfg(FALSE)] fn container() { const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } //~^ ERROR expected `fn` //~| ERROR `const extern fn` definitions are unstable } ", "commid": "rust_pr_68073.0"}], "negative_passages": []} {"query_id": "q-en-rust-4b896e75ea201d5026a72d90704af60b8d8f16ff87fb6c41e963080cb6fd89bb", "query": "The following program compiles, and prints :\nAs pointed out by on discord , this also works on stable:\nRegression introduced in 1.40.0, specifically this line:\nI have a fix in", "positive_passages": [{"docid": "doc-en-rust-f926511c9e4338b5ef9732fabdf16292a663578281b3edf680c727bc01e3d20c", "text": " error: expected `fn`, found `PUT_ANYTHING_YOU_WANT_HERE` --> $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier.rs:5:25 | LL | const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } | ^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `fn` error[E0658]: `const extern fn` definitions are unstable --> $DIR/issue-68062-const-extern-fns-dont-need-fn-specifier.rs:5:5 | LL | const extern \"Rust\" PUT_ANYTHING_YOU_WANT_HERE bug() -> usize { 1 } | ^^^^^^^^^^^^ | = note: for more information, see https://github.com/rust-lang/rust/issues/64926 = help: add `#![feature(const_extern_fn)]` to the crate attributes to enable error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. 
", "commid": "rust_pr_68073.0"}], "negative_passages": []} {"query_id": "q-en-rust-1facde79cdd2b633e7a785c990593f1196569deff25ee7f9a9662f976509c087", "query": "Hello, this is your friendly neighborhood mergebot. After merging PR rust-lang/rust, I observed that the tool miri no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great! cc the PR reviewer, and nominating for compiler team prioritization.\nhas landed, so a fix should be possible now. are you on this or should I try my luck?\nT-compiler triage: P-medium, removing nomination tag.", "positive_passages": [{"docid": "doc-en-rust-e052befbed65c0b839e91440c79396dff3284082d46d23b5cda4c61cc309263e", "text": " Subproject commit 4e44aa010c4c7d616182a3078cafb39da6f6c0a2 Subproject commit 6a0f14bef7784e57a57a996cae3f94dbd2490e7a ", "commid": "rust_pr_68258.0"}], "negative_passages": []} {"query_id": "q-en-rust-765193ceb53a379a0d4d07dbc8bad1d17219f6940ea138ec6c4e761475ad7ade", "query": "Hello, I'm in the process of upgrading Rust to 1.41.0 on Alpine Linux and it appears that a TEXTREL has been introduced in 1.41.0: The full build log is located here if it's of interest:\nThis apparently breaks Android builds, since that forbids text relocations\n+1 I am one of the unfortunate winner of a release-blocking CI build failure for Android now that we've upgraded to 1.41.0. We've disabled android-x86 builds for now :(\nI a minimal repro to my repository here: , in the hope that it can help debugging this (and because I was curious to dig into this myself :-) ). You can clone the repo, then In the container you can run . I bisected it using rustup and nightly releases, 2012-12-09 is good, 2019-12-10 is bad: When using nightly-2019-12-10:\nConveniently, no rollups! 
Auto merge of - tmandry:bump-compiler-builtins, r=alexcrichton Auto merge of - mark-i-m:fix-rustc-guide-2, r=ehuss Auto merge of - lqd:placeholder_loans, r=matthewjasper Auto merge of - estebank:issue-, r=eddyb Auto merge of - cjgillot:corrida, r=Mark-Simulacrum Auto merge of - GuillaumeGomez:move-clean-types, r=kinnison I would personally suspect . Can you try checking this via Specifically, comparing and ?\nWhile rustup-toolchain-install-master compiles ( ): I see that upgrades compiler-builtins. I had a look at the commits in compiler-builtins which touch , and I think we can likely narrow it down to At this point I'm too much out of my depth, this needs people more knowledgeable than me . I hope I could help debugging this, and will continue watching the issue as I'm learning a great deal just by investigating this. Besides the actual fix, there are two additional things I'd love to understand: 1) why doesn't detect this? It looks like the obj file is not marked in any way has having a TEXTREL, but maybe the inline asm from that commit defines rustprobestack in such a way that relocating it needs touching the .text section (wild guess) 2) how would we go at testing this? Where in the rust project would I add a test for this? And given that scanelf can't detect this, maybe a good way to \"integration test\" this would be to build a minimal rust static lib, and try to link it into a shared object with like I'm doing in the repro code. Do I understand correctly that \"all code produced by rustc should be relocatable\" is a guaranteed property (thus worth testing), or am I completely mistaken?\nMeanwhile, I can confirm that we got the right commit ():\ntriage: P-high. Assigning to self. Leaving nominated because I'd like to discuss this at today's meeting.\ncc who did the rollup of compiler-builtins, do we know which of the following PRs may have caused the problem rust-lang/compiler-builtins (seems pretty clearly ok) rust-lang/compiler-builtins (remapping ? 
seems ok) rust-lang/compiler-builtins (modernize test crate, seems ok) rust-lang/compiler-builtins (atomic intrinsics? seems ok) rust-lang/compiler-builtins (atomic intrinsics? seems ok) rust-lang/compiler-builtins (gitmodules? probably ok) rust-lang/compiler-builtins (bcmp? seems ok) rust-lang/compiler-builtins (eliminate __fltused on uefi targets?) rust-lang/compiler-builtins (FFI unsafe warnings for i128/u128) rust-lang/compiler-builtins rust-lang/compiler-builtins hypothesizes , do you know that this could be the cause,\nIt could be. Likely the difference is in the attributes of the probestack function (before that change it was a rust function, afterward it was defined in raw assembly). could you try adding the line around ? If that fixes it, it may break it in other environments (e.g. when generating code for a shared vs static library). We'd probably need to thread some new cfg flags through, but I'm not sure if that's actually possible. The other way is going back to defining the assembly in a naked function, but , prevented me from doing this the first time. cc\ncc (though they're on vacation this week I believe) on the inline assembly options as potentially relevant for RFC and if they have thoughts on how to fix it for this case\nI would agree with the conclusion that is the most likely candidate here for a cause of the regression. That being said I have no idea what TEXTREL is, what's happening there to cause it, or how we might fix this.\nLooking at the objdump output, you can see the difference between the 2019-12-09 and 2019-12-10 nightlies: The textrel happens because the has global visibility and can therefore (in theory) be overridden at runtime, e.g. with LDPRELOAD. This means that the dynamic linker will need to patch the .text section at load time to fix up the call instruction. 
As suggested, adding should fix the problem since the linker will then resolve references to the symbol within the shared library itself.\nThat's probably the correct fix then, I'm just not 100% sure it will work on all targets . If it fails we should see it in CI, though.\nWill this be released in a 1.41.1, or should distros just apply this downstream? We'd very much like Rust 1.41.0 without TEXTRELs on x86.\nAlso, thanks for the quick fix of course :)\nI would not currently wait for a 1.41.1 release. I hope that it will make it into 1.42 though.\nAlright, thanks for the info.\nThanks for fixing this! I verified that this solves the issue on our end, and opened a PR with a simple regression test. The test fails on i686-unknown-linux-gnu when I revert compiler-builtins to 0.1.24.\nI'm not sure this is the right place to bring it up, but without this fix it is impossible to deploy a rust static library to platforms that enforce properly-relocatable code (x86 android). It didn't make the cut for 1.41.1, is there any chance for 1.42? Can we help to make this happen? Thanks!\nis beta nominated, it will be backported soon and included in 1.42 release.\nFantastic, thanks. Do you think it would be worth it to also include the regression test () in the release?\nCCing the reviewers\nawesome, I am waiting for a release with this fix. 
Any idea when 1.42 is planned?\nStable releases usually occur every 6 weeks, so it should land in ~2 weeks.\nI've nominated the regression test as well!", "positive_passages": [{"docid": "doc-en-rust-1584534e611153751ebd32af438b2ba9ab5dfaea72dcc3268cd19c1faa4bf773", "text": "[[package]] name = \"compiler_builtins\" version = \"0.1.24\" version = \"0.1.25\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"b9975aefa63997ef75ca9cf013ff1bb81487aaa0b622c21053afd3b92979a7af\" checksum = \"438ac08ddc5efe81452f984a9e33ba425b00b31d1f48e6acd9e2210aa28cc52e\" dependencies = [ \"cc\", \"rustc-std-workspace-core\",", "commid": "rust_pr_69086"}], "negative_passages": []} {"query_id": "q-en-rust-ab7e8ba2706a81b15781413991b66afbebde03fbe1eecb7c72bf50a3b2ecbb63", "query": "Incorrect hint for variable with a similar name defined using raw identifiers () Errors: Should be $DIR/suggestion-raw-68962.rs:7:5 | LL | fina; | ^^^^ help: a local variable with a similar name exists: `r#final` error[E0425]: cannot find function `f` in this scope --> $DIR/suggestion-raw-68962.rs:10:5 | LL | fn r#fn() {} | --------- similarly named function `r#fn` defined here ... LL | f(); | ^ help: a function with a similar name exists: `r#fn` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_124510"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. 
I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-de68941d225417a9a68a5ec5b7051a59f803f996c563192137e49623f9be2fac", "text": "# Use LLVM libunwind as the implementation for Rust's unwinder. # Accepted values are 'in-tree' (formerly true), 'system' or 'no' (formerly false). # This option only applies for Linux and Fuchsia targets. # On Linux target, if crt-static is not enabled, 'no' means dynamic link to # `libgcc_s.so`, 'in-tree' means static link to the in-tree build of llvm libunwind # and 'system' means dynamic link to `libunwind.so`. If crt-static is enabled, # the behavior is depend on the libc. On musl target, 'no' and 'in-tree' both # means static link to the in-tree build of llvm libunwind, and 'system' means # static link to `libunwind.a` provided by system. Due to the limitation of glibc, # it must link to `libgcc_eh.a` to get a working output, and this option have no effect. #llvm-libunwind = 'no' # Enable Windows Control Flow Guard checks in the standard library.", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. 
I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-56c0e841f2fc20e688dee10b813baceccb19d80562ab5b106a651318f619d0e7", "text": "cc = \"1.0.67\" [features] # Only applies for Linux and Fuchsia targets # Static link to the in-tree build of llvm libunwind llvm-libunwind = [] # Only applies for Linux and Fuchsia targets # If crt-static is enabled, static link to `libunwind.a` provided by system # If crt-static is disabled, dynamic link to `libunwind.so` provided by system system-llvm-libunwind = []", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? 
And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-c18614354a9d3924db1652ee4edf892d2f29125c79caf543c63b2bcd0c1e209e", "text": "println!(\"cargo:rerun-if-changed=build.rs\"); let target = env::var(\"TARGET\").expect(\"TARGET was not set\"); if cfg!(feature = \"system-llvm-libunwind\") { if cfg!(target_os = \"linux\") && cfg!(feature = \"system-llvm-libunwind\") { // linking for Linux is handled in lib.rs return; }", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? 
And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-e41269dd2da32ce27e8eb961e06f7a1b80782eb453030a5fca5bc7eea538b0c3", "text": "pub fn compile() { let target = env::var(\"TARGET\").expect(\"TARGET was not set\"); let target_env = env::var(\"CARGO_CFG_TARGET_ENV\").unwrap(); let target_vendor = env::var(\"CARGO_CFG_TARGET_VENDOR\").unwrap(); let target_endian_little = env::var(\"CARGO_CFG_TARGET_ENDIAN\").unwrap() != \"big\"; let cfg = &mut cc::Build::new(); cfg.cpp(true); cfg.cpp_set_stdlib(None); cfg.warnings(false); let mut cc_cfg = cc::Build::new(); let mut cpp_cfg = cc::Build::new(); let root = Path::new(\"../../src/llvm-project/libunwind\"); // libunwind expects a __LITTLE_ENDIAN__ macro to be set for LE archs, cf. #65765 if target_endian_little { cfg.define(\"__LITTLE_ENDIAN__\", Some(\"1\")); cpp_cfg.cpp(true); cpp_cfg.cpp_set_stdlib(None); cpp_cfg.flag(\"-nostdinc++\"); cpp_cfg.flag(\"-fno-exceptions\"); cpp_cfg.flag(\"-fno-rtti\"); cpp_cfg.flag_if_supported(\"-fvisibility-global-new-delete-hidden\"); // Don't set this for clang // By default, Clang builds C code in GNU C17 mode. // By default, Clang builds C++ code according to the C++98 standard, // with many C++11 features accepted as extensions. 
if cpp_cfg.get_compiler().is_like_gnu() { cpp_cfg.flag(\"-std=c++11\"); cc_cfg.flag(\"-std=c99\"); } if target_env == \"msvc\" { // Don't pull in extra libraries on MSVC cfg.flag(\"/Zl\"); cfg.flag(\"/EHsc\"); cfg.define(\"_CRT_SECURE_NO_WARNINGS\", None); cfg.define(\"_LIBUNWIND_DISABLE_VISIBILITY_ANNOTATIONS\", None); } else if target.contains(\"x86_64-fortanix-unknown-sgx\") { cfg.cpp(false); cfg.static_flag(true); cfg.opt_level(3); cfg.flag(\"-nostdinc++\"); cfg.flag(\"-fno-exceptions\"); cfg.flag(\"-fno-rtti\"); cfg.flag(\"-fstrict-aliasing\"); cfg.flag(\"-funwind-tables\"); cfg.flag(\"-fvisibility=hidden\"); cfg.flag(\"-fno-stack-protector\"); cfg.flag(\"-ffreestanding\"); cfg.flag(\"-fexceptions\"); // easiest way to undefine since no API available in cc::Build to undefine cfg.flag(\"-U_FORTIFY_SOURCE\"); cfg.define(\"_FORTIFY_SOURCE\", \"0\"); cfg.flag_if_supported(\"-fvisibility-global-new-delete-hidden\"); if target.contains(\"x86_64-fortanix-unknown-sgx\") || target_env == \"musl\" { // use the same GCC C compiler command to compile C++ code so we do not need to setup the // C++ compiler env variables on the builders. // Don't set this for clang++, as clang++ is able to compile this without libc++. 
if cpp_cfg.get_compiler().is_like_gnu() { cpp_cfg.cpp(false); } } cfg.define(\"_LIBUNWIND_DISABLE_VISIBILITY_ANNOTATIONS\", None); cfg.define(\"RUST_SGX\", \"1\"); cfg.define(\"__NO_STRING_INLINES\", None); cfg.define(\"__NO_MATH_INLINES\", None); cfg.define(\"_LIBUNWIND_IS_BAREMETAL\", None); cfg.define(\"__LIBUNWIND_IS_NATIVE_ONLY\", None); cfg.define(\"NDEBUG\", None); } else { cfg.flag(\"-std=c99\"); cfg.flag(\"-std=c++11\"); cfg.flag(\"-nostdinc++\"); cfg.flag(\"-fno-exceptions\"); cfg.flag(\"-fno-rtti\"); for cfg in [&mut cc_cfg, &mut cpp_cfg].iter_mut() { cfg.warnings(false); cfg.flag(\"-fstrict-aliasing\"); cfg.flag(\"-funwind-tables\"); cfg.flag(\"-fvisibility=hidden\"); cfg.flag_if_supported(\"-fvisibility-global-new-delete-hidden\"); cfg.define(\"_LIBUNWIND_DISABLE_VISIBILITY_ANNOTATIONS\", None); cfg.include(root.join(\"include\")); cfg.cargo_metadata(false); if target.contains(\"x86_64-fortanix-unknown-sgx\") { cfg.static_flag(true); cfg.opt_level(3); cfg.flag(\"-fno-stack-protector\"); cfg.flag(\"-ffreestanding\"); cfg.flag(\"-fexceptions\"); // easiest way to undefine since no API available in cc::Build to undefine cfg.flag(\"-U_FORTIFY_SOURCE\"); cfg.define(\"_FORTIFY_SOURCE\", \"0\"); cfg.define(\"RUST_SGX\", \"1\"); cfg.define(\"__NO_STRING_INLINES\", None); cfg.define(\"__NO_MATH_INLINES\", None); cfg.define(\"_LIBUNWIND_IS_BAREMETAL\", None); cfg.define(\"__LIBUNWIND_IS_NATIVE_ONLY\", None); cfg.define(\"NDEBUG\", None); } } let mut unwind_sources = vec![ \"Unwind-EHABI.cpp\", \"Unwind-seh.cpp\", let mut c_sources = vec![ \"Unwind-sjlj.c\", \"UnwindLevel1-gcc-ext.c\", \"UnwindLevel1.c\", \"UnwindRegistersRestore.S\", \"UnwindRegistersSave.S\", \"libunwind.cpp\", ]; if target_vendor == \"apple\" { unwind_sources.push(\"Unwind_AppleExtras.cpp\"); } let cpp_sources = vec![\"Unwind-EHABI.cpp\", \"Unwind-seh.cpp\", \"libunwind.cpp\"]; let cpp_len = cpp_sources.len(); if target.contains(\"x86_64-fortanix-unknown-sgx\") { 
unwind_sources.push(\"UnwindRustSgx.c\"); c_sources.push(\"UnwindRustSgx.c\"); } let root = Path::new(\"../../src/llvm-project/libunwind\"); cfg.include(root.join(\"include\")); for src in unwind_sources { cfg.file(root.join(\"src\").join(src)); for src in c_sources { cc_cfg.file(root.join(\"src\").join(src).canonicalize().unwrap()); } if target_env == \"musl\" { // use the same C compiler command to compile C++ code so we do not need to setup the // C++ compiler env variables on the builders cfg.cpp(false); // linking for musl is handled in lib.rs cfg.cargo_metadata(false); println!(\"cargo:rustc-link-search=native={}\", env::var(\"OUT_DIR\").unwrap()); for src in cpp_sources { cpp_cfg.file(root.join(\"src\").join(src).canonicalize().unwrap()); } cfg.compile(\"unwind\"); let out_dir = env::var(\"OUT_DIR\").unwrap(); println!(\"cargo:rustc-link-search=native={}\", &out_dir); cpp_cfg.compile(\"unwind-cpp\"); let mut count = 0; for entry in std::fs::read_dir(&out_dir).unwrap() { let obj = entry.unwrap().path().canonicalize().unwrap(); if let Some(ext) = obj.extension() { if ext == \"o\" { cc_cfg.object(&obj); count += 1; } } } assert_eq!(cpp_len, count, \"Can't get object files from {:?}\", &out_dir); cc_cfg.compile(\"unwind\"); } }", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? 
And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-3916ad9bf5406bdfb0ef543138744e6f6c6f7d357db9c078cb8551c0562dae60", "text": "} #[cfg(target_env = \"musl\")] #[link(name = \"unwind\", kind = \"static\", cfg(target_feature = \"crt-static\"))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} cfg_if::cfg_if! { if #[cfg(all(feature = \"llvm-libunwind\", feature = \"system-llvm-libunwind\"))] { compile_error!(\"`llvm-libunwind` and `system-llvm-libunwind` cannot be enabled at the same time\"); } else if #[cfg(feature = \"llvm-libunwind\")] { #[link(name = \"unwind\", kind = \"static\")] extern \"C\" {} } else if #[cfg(feature = \"system-llvm-libunwind\")] { #[link(name = \"unwind\", kind = \"static-nobundle\", cfg(target_feature = \"crt-static\"))] #[link(name = \"unwind\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} } else { #[link(name = \"unwind\", kind = \"static\", cfg(target_feature = \"crt-static\"))] #[link(name = \"gcc_s\", cfg(not(target_feature = \"crt-static\")))] extern \"C\" {} } } // When building with crt-static, we get `gcc_eh` from the `libc` crate, since // glibc needs it, and needs it listed later on the linker command line. We", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-56c934d7dc147942e806ed6ec6cab7acf0786fd9a16eff40e5d19bc69f67fec3", "query": "Hello, I'm trying to build a pure-LLVM Rust (no linking to at all). Building rust with set as in returns failure since Clang (the C compiler) doesn't allow . Here's my : Here are the relevant bits of the log: Thanks. A complete log is available here:\nI think I may have found the culprit. 
I'm not well-versed in building rustc (I particularly don't know why rust can build its own LLVM instance, but I assume this is to accommodate systems where LLVM doesn't even exist?), but isn't the choice between and is pretty much irrelevant per libc? And by the way, using gcc as , while it does accept the command, it warns (instead of erroring out like clang-9) because isn't supported in the C compiler.", "positive_passages": [{"docid": "doc-en-rust-0a48d988f6e6b92c42c6b7f585971b39d94e2719475dbff2e4d8fdbc5aba2caa", "text": "extern \"C\" {} #[cfg(all(target_vendor = \"fortanix\", target_env = \"sgx\"))] #[link(name = \"unwind\", kind = \"static-nobundle\")] #[link(name = \"unwind\", kind = \"static\")] extern \"C\" {}", "commid": "rust_pr_84124"}], "negative_passages": []} {"query_id": "q-en-rust-60063ba8473934ee9bd7e4284a7cacba500fc872fd03304e003be29415f414cb", "query": "The page currently talks only about the use of move with closures. The keyword is also valid before an async block. For example: . This page could be updated to include that usage. $DIR/issue-69306.rs:5:28 | LL | impl S0 { | - this type parameter LL | const C: S0 = Self(0); | ^ expected type parameter `T`, found integer | = note: expected type parameter `T` found type `{integer}` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:5:23 | LL | impl S0 { | - this type parameter LL | const C: S0 = Self(0); | ^^^^^^^ expected `u8`, found type parameter `T` | = note: expected struct `S0` found struct `S0` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:10:14 | LL | impl S0 { | - this type parameter ... 
LL | Self(0); | ^ expected type parameter `T`, found integer | = note: expected type parameter `T` found type `{integer}` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:27:14 | LL | impl Foo for as Fun>::Out { | - this type parameter LL | fn foo() { LL | Self(0); | ^ expected type parameter `T`, found integer | = note: expected type parameter `T` found type `{integer}` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:33:32 | LL | impl S1 { | - this type parameter LL | const C: S1 = Self(0, 1); | ^ expected type parameter `T`, found integer | = note: expected type parameter `T` found type `{integer}` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:33:27 | LL | impl S1 { | - this type parameter LL | const C: S1 = Self(0, 1); | ^^^^^^^^^^ expected `u8`, found type parameter `T` | = note: expected struct `S1` found struct `S1` = help: type parameters must be constrained to match other types = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:41:14 | LL | impl S2 { | - expected type parameter LL | fn map(x: U) -> S2 { | - found type parameter LL | Self(x) | ^ expected type parameter `T`, found type parameter `U` | = note: expected type parameter `T` found type parameter `U` = note: a type parameter was expected, but a different one was found; you might be missing a type parameter or trait bound = note: for more information, visit 
https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error[E0308]: mismatched types --> $DIR/issue-69306.rs:41:9 | LL | impl S2 { | - found type parameter LL | fn map(x: U) -> S2 { | - ----- expected `S2` because of return type | | | expected type parameter LL | Self(x) | ^^^^^^^ expected type parameter `U`, found type parameter `T` | = note: expected struct `S2` found struct `S2` = note: a type parameter was expected, but a different one was found; you might be missing a type parameter or trait bound = note: for more information, visit https://doc.rust-lang.org/book/ch10-02-traits.html#traits-as-parameters error: aborting due to 8 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_69340"}], "negative_passages": []} {"query_id": "q-en-rust-a47ff1d9b323579687aefee333b8b1c2e9ec9428aaf2a497043da746043b8c33", "query": " $DIR/issue-70381.rs:4:16 | LL | println!(\"r\u00a1{}\") | ^^ error: aborting due to previous error ", "commid": "rust_pr_75972"}], "negative_passages": []} {"query_id": "q-en-rust-b9295e75379c7c0e02672585eca96a9c061c9f5f3eaf74294be0df058abba19e", "query": "https://crater- cc\nPossibly accidentally injected by Would be good to find out why this is happening, so let's ping cleanup and see if we can bisect / shrink the problem down to something usable for finding a fix.\nping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! 
<3 [instructions]: https://rustc-dev- cc\nFailed to confirm that is the cause: Seems like it regressed prior to\nFor code I used:\nmodify labels: -E-needs-bisection\nReduced example: () Errors:\nSplit the macro in two parts to make it easier to follow, and removed the special lifetime : () Error: I think we can now modify labels: -E-needs-mcve\nI believe that the most probable culprit is , cc\nis indeed the culprit, I can reproduce the bug on\nThis was . Assigning for now until we have a better understanding of what's going on.\nFixed in", "positive_passages": [{"docid": "doc-en-rust-1dd915ee8c081e115e02e1af3003243d6ee574b4cc9df46505dbbdcca33e7e1c", "text": "/// Checks whether the non-terminal may contain a single (non-keyword) identifier. fn may_be_ident(nt: &token::Nonterminal) -> bool { match *nt { token::NtItem(_) | token::NtBlock(_) | token::NtVis(_) => false, token::NtItem(_) | token::NtBlock(_) | token::NtVis(_) | token::NtLifetime(_) => false, _ => true, } }", "commid": "rust_pr_70768"}], "negative_passages": []} {"query_id": "q-en-rust-b9295e75379c7c0e02672585eca96a9c061c9f5f3eaf74294be0df058abba19e", "query": "https://crater- cc\nPossibly accidentally injected by Would be good to find out why this is happening, so let's ping cleanup and see if we can bisect / shrink the problem down to something usable for finding a fix.\nping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! 
<3 [instructions]: https://rustc-dev- cc\nFailed to confirm that is the cause: Seems like it regressed prior to\nFor code I used:\nmodify labels: -E-needs-bisection\nReduced example: () Errors:\nSplit the macro in two parts to make it easier to follow, and removed the special lifetime : () Error: I think we can now modify labels: -E-needs-mcve\nI believe that the most probable culprit is , cc\nis indeed the culprit, I can reproduce the bug on\nThis was . Assigning for now until we have a better understanding of what's going on.\nFixed in", "positive_passages": [{"docid": "doc-en-rust-94b9b18119ac32a927151a429f56c9611fb53e97472ec15862695502476e399d", "text": " // check-pass macro_rules! foo { ($(: $p:path)? $(: $l:lifetime)? ) => { bar! {$(: $p)? $(: $l)? } }; } macro_rules! bar { ($(: $p:path)? $(: $l:lifetime)? ) => {}; } foo! {: 'a } fn main() {} ", "commid": "rust_pr_70768"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. 
the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-444772a1b34f8a07b3322b39d89aa6e32c470c3fc35174b14a284e8b5d3860d0", "text": "}; let sp = cx.with_def_site_ctxt(sp); let e = match env::var(&var.as_str()) { Err(..) => { let value = env::var(&var.as_str()).ok().as_deref().map(Symbol::intern); cx.parse_sess.env_depinfo.borrow_mut().insert((Symbol::intern(&var), value)); let e = match value { None => { let lt = cx.lifetime(sp, Ident::new(kw::StaticLifetime, sp)); cx.expr_path(cx.path_all( sp,", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. 
the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-95790ad64670d72a0aa4c1d176eef3342fda7d60f721bcc337ff5bbb0fa4b244", "text": "))], )) } Ok(s) => cx.expr_call_global( Some(value) => cx.expr_call_global( sp, cx.std_path(&[sym::option, sym::Option, sym::Some]), vec![cx.expr_str(sp, Symbol::intern(&s))], vec![cx.expr_str(sp, value)], ), }; MacEager::expr(e)", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. 
the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-e63278a51455de8bae18f1753811d63bd0804540df459e6067309488a7598afd", "text": "} let sp = cx.with_def_site_ctxt(sp); let e = match env::var(&*var.as_str()) { Err(_) => { let value = env::var(&*var.as_str()).ok().as_deref().map(Symbol::intern); cx.parse_sess.env_depinfo.borrow_mut().insert((var, value)); let e = match value { None => { cx.span_err(sp, &msg.as_str()); return DummyResult::any(sp); } Ok(s) => cx.expr_str(sp, Symbol::intern(&s)), Some(value) => cx.expr_str(sp, value), }; MacEager::expr(e) }", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. 
the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-2d7b05d95f16a7d93047a0255ab045d5755fd1fb2bbec31decdc017059bd8f51", "text": "#![feature(bool_to_option)] #![feature(crate_visibility_modifier)] #![feature(decl_macro)] #![feature(inner_deref)] #![feature(nll)] #![feature(or_patterns)] #![feature(proc_macro_internals)]", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-1b9c871e4b15a7cbe5af2686915a55091110e6186981f45d199ae0e075a44dcd", "text": "filename.to_string().replace(\" \", \" \") } // Makefile comments only need escaping newlines and ``. // The result can be unescaped by anything that can unescape `escape_default` and friends. 
fn escape_dep_env(symbol: Symbol) -> String { let s = symbol.as_str(); let mut escaped = String::with_capacity(s.len()); for c in s.chars() { match c { 'n' => escaped.push_str(r\"n\"), 'r' => escaped.push_str(r\"r\"), '' => escaped.push_str(r\"\"), _ => escaped.push(c), } } escaped } fn write_out_deps( sess: &Session, boxed_resolver: &Steal>>,", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-dd546358021fda9ea3890c3dc621cf8e5b440b890b8d527561a4ba293dda2d34", "text": "for path in files { writeln!(file, \"{}:\", path)?; } // Emit special comments with information about accessed environment variables. 
let env_depinfo = sess.parse_sess.env_depinfo.borrow(); if !env_depinfo.is_empty() { let mut envs: Vec<_> = env_depinfo .iter() .map(|(k, v)| (escape_dep_env(*k), v.map(escape_dep_env))) .collect(); envs.sort_unstable(); writeln!(file)?; for (k, v) in envs { write!(file, \"# env-dep:{}\", k)?; if let Some(v) = v { write!(file, \"={}\", v)?; } writeln!(file)?; } } Ok(()) })();", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-9953e9e87255b1000e5fefe0d41b015ee2292db282f830b23e2961513b86d685", "text": "pub symbol_gallery: SymbolGallery, /// The parser has reached `Eof` due to an unclosed brace. Used to silence unnecessary errors. pub reached_eof: Lock, /// Environment variables accessed during the build and their values when they exist. 
pub env_depinfo: Lock)>>, } impl ParseSess {", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-a46acd8c00c94a6d9fea43b2a56bc67e2ce213cb123dc5535ec17b7609dfb04a", "text": "gated_spans: GatedSpans::default(), symbol_gallery: SymbolGallery::default(), reached_eof: Lock::new(false), env_depinfo: Default::default(), } }", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. 
Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-c02149744916cd57b94551de8dfecb6b3af7df0f11235f7ba071cef30cba28e0", "text": " -include ../../run-make-fulldeps/tools.mk all: EXISTING_ENV=1 EXISTING_OPT_ENV=1 $(RUSTC) --emit dep-info main.rs $(CGREP) \"# env-dep:EXISTING_ENV=1\" < $(TMPDIR)/main.d $(CGREP) \"# env-dep:EXISTING_OPT_ENV=1\" < $(TMPDIR)/main.d $(CGREP) \"# env-dep:NONEXISTENT_OPT_ENV\" < $(TMPDIR)/main.d $(CGREP) \"# env-dep:ESCAPEnESCAPE\" < $(TMPDIR)/main.d ", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-de944dfea6367df91708bcfc224cd591548c0516bc5aff2571a0de3258f36a10", "query": "This is still in 's build script: But the use site has been moved here: This means that won't be rebuilt when that env var changes. For the record, I haven't seen a bug from this, was just curious where may be used and spotted the mismatch. cc\nFound another one: is used in but it's still mentioned by . cc Could we automate ensuring this? e.g. by searching for in tidy.\nYeah, we can try I guess. Not sure :) Maybe the right solution is that Cargo would, other than some whitelist, remove unknown variables from rustc's environment -- and the whitelist can be expanded via the build script env-tracking.\nhmm I wonder if abusing the file format (from ) or encoding this extra info in a comment, would be a good idea.\nYou mean such that if you use we automatically tell Cargo to rebuild on changes? (That's ). 
But yeah, I think that would be good. Ideally this would be \"well-integrated\" vs. the current state of everyone having to remember to do it.", "positive_passages": [{"docid": "doc-en-rust-4182df688e093f032bf4b478db19bcb923d5339839fb43c9b82bd03e42bda5a5", "text": " fn main() { env!(\"EXISTING_ENV\"); option_env!(\"EXISTING_OPT_ENV\"); option_env!(\"NONEXISTENT_OPT_ENV\"); option_env!(\"ESCAPEnESCAPE\"); } ", "commid": "rust_pr_71858"}], "negative_passages": []} {"query_id": "q-en-rust-67a247e00e15c5239ee3b914ddb1939f490c3550a1fd2954f328db1834e4f4b4", "query": "Sorry for the vague title. () bug!(), }; let param_env = tcx.param_env(source.def_id()); for (local, decl) in body.local_decls.iter_enumerated() { // Ignore locals which are internal or not live if !live_locals.contains(local) || decl.internal { continue; } let decl_ty = tcx.normalize_erasing_regions(param_env, decl.ty); // Sanity check that typeck knows about the type of locals which are // live across a suspension point if !allowed.contains(&decl.ty) && !allowed_upvars.contains(&decl.ty) { if !allowed.contains(&decl_ty) && !allowed_upvars.contains(&decl_ty) { span_bug!( body.span, \"Broken MIR: generator contains type {} in MIR, ", "commid": "rust_pr_70957"}], "negative_passages": []} {"query_id": "q-en-rust-67a247e00e15c5239ee3b914ddb1939f490c3550a1fd2954f328db1834e4f4b4", "query": "Sorry for the vague title. () // check-pass // edition:2018 // compile-flags: --crate-type=lib pub async fn test() { const C: usize = 4; foo(&mut [0u8; C]).await; } async fn foo(_: &mut [u8]) {} ", "commid": "rust_pr_70957"}], "negative_passages": []} {"query_id": "q-en-rust-0fdac81316b57566ca06965d9e5a48c18cdbdaad09c86180ed1adcf0e5768e33", "query": "Especially the fact that the lines are broken up seems like a bug. 
() Errors: modify labels: A-diagnostics, C-bug, D-papercut, T-compiler\nThere are two things here have a label that has embedded newlines which causes the broken down ASCII art should find a way to reverse the order in which multiline spans are sorted, but this will require handling multiple labels. The current order is optimized for not having to tweak the label positions, but when no labels are involved, we could do much better.\nSo, using raw text from the program as part of our label feels like a hack. We should try to come up with another way of getting a name (and falling back to a description like \"async block\" or something if that doesn't work). Failing a good way of doing that, I guess we can scan the program text for newlines and only use it if there aren't any..\nAgree, we should never be using snippets in labels. The change needs to happen in You have the which lets you get the and from there use an appropriate description. I think there is an open PR that adds description to all variants.", "positive_passages": [{"docid": "doc-en-rust-516bb9c64f74a17cab2ab69b406dda5a30eae5475bffb59b6798d515581cf90d", "text": "format!(\"does not implement `{}`\", trait_ref.print_only_trait_path()) }; let mut explain_yield = |interior_span: Span, yield_span: Span, scope_span: Option| { let mut span = MultiSpan::from_span(yield_span); if let Ok(snippet) = source_map.span_to_snippet(interior_span) { span.push_span_label( yield_span, format!(\"{} occurs here, with `{}` maybe used later\", await_or_yield, snippet), ); // If available, use the scope span to annotate the drop location. 
if let Some(scope_span) = scope_span { span.push_span_label( source_map.end_point(scope_span), format!(\"`{}` is later dropped here\", snippet), ); let mut explain_yield = |interior_span: Span, yield_span: Span, scope_span: Option| { let mut span = MultiSpan::from_span(yield_span); if let Ok(snippet) = source_map.span_to_snippet(interior_span) { // #70935: If snippet contains newlines, display \"the value\" instead // so that we do not emit complex diagnostics. let snippet = &format!(\"`{}`\", snippet); let snippet = if snippet.contains('n') { \"the value\" } else { snippet }; // The multispan can be complex here, like: // note: future is not `Send` as this value is used across an await // --> $DIR/issue-70935-complex-spans.rs:13:9 // | // LL | baz(|| async{ // | __________^___- // | | _________| // | || // LL | || foo(tx.clone()); // LL | || }).await; // | || - ^- value is later dropped here // | ||_________|______| // | |__________| await occurs here, with value maybe used later // | has type `closure` which is not `Send` // // So, detect it and separate into some notes, like: // // note: future is not `Send` as this value is used across an await // --> $DIR/issue-70935-complex-spans.rs:13:9 // | // LL | / baz(|| async{ // LL | | foo(tx.clone()); // LL | | }).await; // | |________________^ first, await occurs here, with the value maybe used later... // note: the value is later dropped here // --> $DIR/issue-70935-complex-spans.rs:15:17 // | // LL | }).await; // | ^ // // If available, use the scope span to annotate the drop location. 
if let Some(scope_span) = scope_span { let scope_span = source_map.end_point(scope_span); let is_overlapped = yield_span.overlaps(scope_span) || yield_span.overlaps(interior_span); if is_overlapped { span.push_span_label( yield_span, format!( \"first, {} occurs here, with {} maybe used later...\", await_or_yield, snippet ), ); err.span_note( span, &format!( \"{} {} as this value is used across {}\", future_or_generator, trait_explanation, an_await_or_yield ), ); if source_map.is_multiline(interior_span) { err.span_note( scope_span, &format!(\"{} is later dropped here\", snippet), ); err.span_note( interior_span, &format!( \"this has type `{}` which {}\", target_ty, trait_explanation ), ); } else { let mut span = MultiSpan::from_span(scope_span); span.push_span_label( interior_span, format!(\"has type `{}` which {}\", target_ty, trait_explanation), ); err.span_note(span, &format!(\"{} is later dropped here\", snippet)); } } else { span.push_span_label( yield_span, format!( \"{} occurs here, with {} maybe used later\", await_or_yield, snippet ), ); span.push_span_label( scope_span, format!(\"{} is later dropped here\", snippet), ); span.push_span_label( interior_span, format!(\"has type `{}` which {}\", target_ty, trait_explanation), ); err.span_note( span, &format!( \"{} {} as this value is used across {}\", future_or_generator, trait_explanation, an_await_or_yield ), ); } } else { span.push_span_label( yield_span, format!( \"{} occurs here, with {} maybe used later\", await_or_yield, snippet ), ); span.push_span_label( interior_span, format!(\"has type `{}` which {}\", target_ty, trait_explanation), ); err.span_note( span, &format!( \"{} {} as this value is used across {}\", future_or_generator, trait_explanation, an_await_or_yield ), ); } } } span.push_span_label( interior_span, format!(\"has type `{}` which {}\", target_ty, trait_explanation), ); err.span_note( span, &format!( \"{} {} as this value is used across {}\", future_or_generator, trait_explanation, 
an_await_or_yield ), ); }; }; match interior_or_upvar_span { GeneratorInteriorOrUpvar::Interior(interior_span) => { if let Some((scope_span, yield_span, expr, from_awaited_ty)) = interior_extra_info {", "commid": "rust_pr_75020"}], "negative_passages": []} {"query_id": "q-en-rust-0fdac81316b57566ca06965d9e5a48c18cdbdaad09c86180ed1adcf0e5768e33", "query": "Especially the fact that the lines are broken up seems like a bug. () Errors: modify labels: A-diagnostics, C-bug, D-papercut, T-compiler\nThere are two things here have a label that has embedded newlines which causes the broken down ASCII art should find a way to reverse the order in which multiline spans are sorted, but this will require handling multiple labels. The current order is optimized for not having to tweak the label positions, but when no labels are involved, we could do much better.\nSo, using raw text from the program as part of our label feels like a hack. We should try to come up with another way of getting a name (and falling back to a description like \"async block\" or something if that doesn't work). Failing a good way of doing that, I guess we can scan the program text for newlines and only use it if there aren't any..\nAgree, we should never be using snippets in labels. The change needs to happen in You have the which lets you get the and from there use an appropriate description. I think there is an open PR that adds description to all variants.", "positive_passages": [{"docid": "doc-en-rust-aebc34780163dcbb8e687f4f1dc5c07bef7611033fe0a6476d3555992f3994e2", "text": " // edition:2018 // #70935: Check if we do not emit snippet // with newlines which lead complex diagnostics. 
use std::future::Future; async fn baz(_c: impl FnMut() -> T) where T: Future { } fn foo(tx: std::sync::mpsc::Sender) -> impl Future + Send { //~^ ERROR: future cannot be sent between threads safely async move { baz(|| async{ foo(tx.clone()); }).await; } } fn bar(_s: impl Future + Send) { } fn main() { let (tx, _rx) = std::sync::mpsc::channel(); bar(foo(tx)); } ", "commid": "rust_pr_75020"}], "negative_passages": []} {"query_id": "q-en-rust-0fdac81316b57566ca06965d9e5a48c18cdbdaad09c86180ed1adcf0e5768e33", "query": "Especially the fact that the lines are broken up seems like a bug. () Errors: modify labels: A-diagnostics, C-bug, D-papercut, T-compiler\nThere are two things here have a label that has embedded newlines which causes the broken down ASCII art should find a way to reverse the order in which multiline spans are sorted, but this will require handling multiple labels. The current order is optimized for not having to tweak the label positions, but when no labels are involved, we could do much better.\nSo, using raw text from the program as part of our label feels like a hack. We should try to come up with another way of getting a name (and falling back to a description like \"async block\" or something if that doesn't work). Failing a good way of doing that, I guess we can scan the program text for newlines and only use it if there aren't any..\nAgree, we should never be using snippets in labels. The change needs to happen in You have the which lets you get the and from there use an appropriate description. 
I think there is an open PR that adds description to all variants.", "positive_passages": [{"docid": "doc-en-rust-65d8b0e4c7c6d94a7f043be70a945f2cca58f36c7b7e6503db6d1ec827f2f466", "text": " error: future cannot be sent between threads safely --> $DIR/issue-70935-complex-spans.rs:10:45 | LL | fn foo(tx: std::sync::mpsc::Sender) -> impl Future + Send { | ^^^^^^^^^^^^^^^^^^ future created by async block is not `Send` | = help: the trait `Sync` is not implemented for `Sender` note: future is not `Send` as this value is used across an await --> $DIR/issue-70935-complex-spans.rs:13:9 | LL | / baz(|| async{ LL | | foo(tx.clone()); LL | | }).await; | |________________^ first, await occurs here, with the value maybe used later... note: the value is later dropped here --> $DIR/issue-70935-complex-spans.rs:15:17 | LL | }).await; | ^ note: this has type `[closure@$DIR/issue-70935-complex-spans.rs:13:13: 15:10]` which is not `Send` --> $DIR/issue-70935-complex-spans.rs:13:13 | LL | baz(|| async{ | _____________^ LL | | foo(tx.clone()); LL | | }).await; | |_________^ error: aborting due to previous error ", "commid": "rust_pr_75020"}], "negative_passages": []} {"query_id": "q-en-rust-0fdac81316b57566ca06965d9e5a48c18cdbdaad09c86180ed1adcf0e5768e33", "query": "Especially the fact that the lines are broken up seems like a bug. () Errors: modify labels: A-diagnostics, C-bug, D-papercut, T-compiler\nThere are two things here have a label that has embedded newlines which causes the broken down ASCII art should find a way to reverse the order in which multiline spans are sorted, but this will require handling multiple labels. The current order is optimized for not having to tweak the label positions, but when no labels are involved, we could do much better.\nSo, using raw text from the program as part of our label feels like a hack. We should try to come up with another way of getting a name (and falling back to a description like \"async block\" or something if that doesn't work). 
Failing a good way of doing that, I guess we can scan the program text for newlines and only use it if there aren't any..\nAgree, we should never be using snippets in labels. The change needs to happen in You have the which lets you get the and from there use an appropriate description. I think there is an open PR that adds description to all variants.", "positive_passages": [{"docid": "doc-en-rust-9b034831e0558c47baf7b49788e1a2e58ac0edb7fcfadb441cfb2e80772cb918", "text": "--> $DIR/issue-65436-raw-ptr-not-send.rs:14:9 | LL | bar(Foo(std::ptr::null())).await; | ^^^^^^^^----------------^^^^^^^^- `std::ptr::null()` is later dropped here | | | | | has type `*const u8` which is not `Send` | await occurs here, with `std::ptr::null()` maybe used later | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ first, await occurs here, with `std::ptr::null()` maybe used later... note: `std::ptr::null()` is later dropped here --> $DIR/issue-65436-raw-ptr-not-send.rs:14:41 | LL | bar(Foo(std::ptr::null())).await; | ---------------- ^ | | | has type `*const u8` which is not `Send` help: consider moving this into a `let` binding to create a shorter lived borrow --> $DIR/issue-65436-raw-ptr-not-send.rs:14:13 |", "commid": "rust_pr_75020"}], "negative_passages": []} {"query_id": "q-en-rust-fb572e8faa63661a51b3c673e66305ab169f5eeb5dbce8ba3c737203bd62ec5f", "query": " $DIR/issue-63322-forbid-dyn.rs:1:12 | LL | #![feature(const_generics)] | ^^^^^^^^^^^^^^ | = note: `#[warn(incomplete_features)]` on by default error[E0741]: `&'static (dyn A + 'static)` must be annotated with `#[derive(PartialEq, Eq)]` to be used as the type of a const parameter --> $DIR/issue-63322-forbid-dyn.rs:8:18 | LL | fn test() { | ^^^^^^^^^^^^^^ `&'static (dyn A + 'static)` doesn't derive both `PartialEq` and `Eq` error: aborting due to previous error; 1 warning emitted For more information about this error, try `rustc --explain E0741`. 
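The fallback the thread converges on — scan the snippet for newlines and substitute a generic description when any are found, which is what the `contains` check in the quoted diff implements — can be sketched as a freestanding helper. The function name here is hypothetical, not rustc's actual API:

```rust
// Hypothetical helper, not rustc's API: fall back to a generic label when
// the source snippet spans multiple lines, so embedded newlines never end
// up inside a diagnostic label and break the ASCII-art span rendering.
fn label_snippet(snippet: &str) -> String {
    if snippet.contains('\n') {
        "the value".to_string()
    } else {
        format!("`{}`", snippet)
    }
}

fn main() {
    assert_eq!(label_snippet("std::ptr::null()"), "`std::ptr::null()`");
    assert_eq!(label_snippet("baz(|| async {\n foo();\n})"), "the value");
}
```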
", "commid": "rust_pr_71038"}], "negative_passages": []} {"query_id": "q-en-rust-fb572e8faa63661a51b3c673e66305ab169f5eeb5dbce8ba3c737203bd62ec5f", "query": " $DIR/issue-70972-dyn-trait.rs:6:9 | LL | F => panic!(), | ^ error: aborting due to previous error ", "commid": "rust_pr_71038"}], "negative_passages": []} {"query_id": "q-en-rust-255ae6bf85fd7505ca3a5cbd14a5bbbd74591063027b760d93f9eab13d54859a", "query": "These include: The reasoning for all of these extensions is that they all have wide use not just in GitHub, but in many applications of Markdown.\nI'm not sure if automatically creating links from bare domain names is a good idea, so if there's an option to disable that specifically I'd suggest it. The rest all seems fine to me.\nJust like said for Autolinks. I'm also not sure to see the point of the task list one. However, adding strikethrough would be nice.\nTasklist would be useful for things like this: [x] Implemented thing [x] Some other implemented thing [ ] Unimplemented thing [ ] Some other unimplemented thing or possibly in the crate root [x] Webhooks [x] Websockets [ ] Websites [ ] Other\nAnother noteworthy extension to implement is . This would prevent the following tags from being used for safety reasons: - - - - -\nWe decided in not to add this. Instead rustdoc will now suggest to turn raw links into automatic links (). So I think the only thing left is adding task list parsing.", "positive_passages": [{"docid": "doc-en-rust-39565ca976f806c69887d6f3d87a9c6020c82eaf6ee3bfada3837329bb8246de", "text": "See the specification for the [GitHub Tables extension][tables] for more details on the exact syntax supported. ### Task lists Task lists can be used as a checklist of items that have been completed. Example: ```md - [x] Complete task - [ ] IncComplete task ``` This will render as

    See the specification for the [task list extension] for more details.
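As a rough illustration of the HTML shape the extension produces — note this is not rustdoc's renderer (rustdoc delegates to pulldown-cmark with the task-list option enabled); the helper below is a hand-rolled approximation for a single list item:

```rust
// Minimal sketch only: map one GFM task-list item to a disabled-checkbox
// <li>. Real renderers handle indentation, nesting, and escaping too.
fn render_task_item(line: &str) -> Option<String> {
    let rest = line.trim_start().strip_prefix("- ")?;
    let (checked, text) = if let Some(t) = rest.strip_prefix("[x] ") {
        (true, t)
    } else if let Some(t) = rest.strip_prefix("[ ] ") {
        (false, t)
    } else {
        return None; // not a task-list item, just a plain bullet
    };
    let state = if checked { " checked" } else { "" };
    Some(format!(
        "<li><input type=\"checkbox\"{} disabled/>{}</li>",
        state, text
    ))
}

fn main() {
    assert_eq!(
        render_task_item("- [x] Complete task").unwrap(),
        "<li><input type=\"checkbox\" checked disabled/>Complete task</li>"
    );
    assert!(render_task_item("- plain item").is_none());
}
```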
    [`backtrace`]: https://docs.rs/backtrace/0.3.50/backtrace/ [commonmark markdown specification]: https://commonmark.org/ [commonmark quick reference]: https://commonmark.org/help/", "commid": "rust_pr_81766"}], "negative_passages": []} {"query_id": "q-en-rust-255ae6bf85fd7505ca3a5cbd14a5bbbd74591063027b760d93f9eab13d54859a", "query": "These include: The reasoning for all of these extensions is that they all have wide use not just in GitHub, but in many applications of Markdown.\nI'm not sure if automatically creating links from bare domain names is a good idea, so if there's an option to disable that specifically I'd suggest it. The rest all seems fine to me.\nJust like said for Autolinks. I'm also not sure to see the point of the task list one. However, adding strikethrough would be nice.\nTasklist would be useful for things like this: [x] Implemented thing [x] Some other implemented thing [ ] Unimplemented thing [ ] Some other unimplemented thing or possibly in the crate root [x] Webhooks [x] Websockets [ ] Websites [ ] Other\nAnother noteworthy extension to implement is . This would prevent the following tags from being used for safety reasons: - - - - -\nWe decided in not to add this. Instead rustdoc will now suggest to turn raw links into automatic links (). 
So I think the only thing left is adding task list parsing.", "positive_passages": [{"docid": "doc-en-rust-194a6d9038fa05289cce313e2bccb2a420f8b7309d84bbe0f03923c76ec186c2", "text": "[`std::env`]: https://doc.rust-lang.org/stable/std/env/index.html#functions [strikethrough]: https://github.github.com/gfm/#strikethrough-extension- [tables]: https://github.github.com/gfm/#tables-extension- [task list extension]: https://github.github.com/gfm/#task-list-items-extension- ", "commid": "rust_pr_81766"}], "negative_passages": []} {"query_id": "q-en-rust-255ae6bf85fd7505ca3a5cbd14a5bbbd74591063027b760d93f9eab13d54859a", "query": "These include: The reasoning for all of these extensions is that they all have wide use not just in GitHub, but in many applications of Markdown.\nI'm not sure if automatically creating links from bare domain names is a good idea, so if there's an option to disable that specifically I'd suggest it. The rest all seems fine to me.\nJust like said for Autolinks. I'm also not sure to see the point of the task list one. However, adding strikethrough would be nice.\nTasklist would be useful for things like this: [x] Implemented thing [x] Some other implemented thing [ ] Unimplemented thing [ ] Some other unimplemented thing or possibly in the crate root [x] Webhooks [x] Websockets [ ] Websites [ ] Other\nAnother noteworthy extension to implement is . This would prevent the following tags from being used for safety reasons: - - - - -\nWe decided in not to add this. Instead rustdoc will now suggest to turn raw links into automatic links (). So I think the only thing left is adding task list parsing.", "positive_passages": [{"docid": "doc-en-rust-89986d80e445dd0887033a387e71a27f5bd4a9d697ebef448fb73ec725f77e7e", "text": "/// Options for rendering Markdown in the main body of documentation. 
pub(crate) fn opts() -> Options { Options::ENABLE_TABLES | Options::ENABLE_FOOTNOTES | Options::ENABLE_STRIKETHROUGH Options::ENABLE_TABLES | Options::ENABLE_FOOTNOTES | Options::ENABLE_STRIKETHROUGH | Options::ENABLE_TASKLISTS } /// A subset of [`opts()`] used for rendering summaries.", "commid": "rust_pr_81766"}], "negative_passages": []} {"query_id": "q-en-rust-255ae6bf85fd7505ca3a5cbd14a5bbbd74591063027b760d93f9eab13d54859a", "query": "These include: The reasoning for all of these extensions is that they all have wide use not just in GitHub, but in many applications of Markdown.\nI'm not sure if automatically creating links from bare domain names is a good idea, so if there's an option to disable that specifically I'd suggest it. The rest all seems fine to me.\nJust like said for Autolinks. I'm also not sure to see the point of the task list one. However, adding strikethrough would be nice.\nTasklist would be useful for things like this: [x] Implemented thing [x] Some other implemented thing [ ] Unimplemented thing [ ] Some other unimplemented thing or possibly in the crate root [x] Webhooks [x] Websockets [ ] Websites [ ] Other\nAnother noteworthy extension to implement is . This would prevent the following tags from being used for safety reasons: - - - - -\nWe decided in not to add this. Instead rustdoc will now suggest to turn raw links into automatic links (). So I think the only thing left is adding task list parsing.", "positive_passages": [{"docid": "doc-en-rust-f1385cd613cd783251568d380fb6b18f19b2ffa83d08ec11c6afa635d0a8085f", "text": " // ignore-tidy-linelength // FIXME: this doesn't test as much as I'd like; ideally it would have these query too: // has task_lists/index.html '//li/input[@type=\"checkbox\" and disabled]/following-sibling::text()' 'a' // has task_lists/index.html '//li/input[@type=\"checkbox\"]/following-sibling::text()' 'b' // Unfortunately that requires LXML, because the built-in xml module doesn't support all of xpath. 
// @has task_lists/index.html '//ul/li/input[@type=\"checkbox\"]' '' // @has task_lists/index.html '//ul/li/input[@disabled]' '' // @has task_lists/index.html '//ul/li' 'a' // @has task_lists/index.html '//ul/li' 'b' //! This tests 'task list' support, a common markdown extension. //! - [ ] a //! - [x] b ", "commid": "rust_pr_81766"}], "negative_passages": []} {"query_id": "q-en-rust-173916e9452c90eac83b9e2fee9f1021cbd99e02cfcbf5079368a25ebdf4ace5", "query": "In this example: rustc throws a warning: Removing the parentheses as the message directs causes errors: It seems the warning is wrong, unless this is expected to compile $DIR/issue-71363.rs:6:6 | 6 | impl std::error::Error for MyError {} | ^^^^^^^^^^^^^^^^^ `MyError` cannot be formatted with the default formatter | = help: the trait `std::fmt::Display` is not implemented for `MyError` = note: in format strings you may be able to use `{:?}` (or {:#?} for pretty-print) instead note: required by a bound in `std::error::Error` error[E0277]: `MyError` doesn't implement `Debug` --> $DIR/issue-71363.rs:6:6 | 6 | impl std::error::Error for MyError {} | ^^^^^^^^^^^^^^^^^ `MyError` cannot be formatted using `{:?}` | = help: the trait `Debug` is not implemented for `MyError` = note: add `#[derive(Debug)]` to `MyError` or manually `impl Debug for MyError` note: required by a bound in `std::error::Error` help: consider annotating `MyError` with `#[derive(Debug)]` | 5 | #[derive(Debug)] | error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0277`. 
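The `E0277` errors for `MyError` in this entry stem from `std::error::Error` having `Debug + Display` as supertraits, so both must be satisfied before the `impl Error` is accepted. A minimal compiling version — the `Display` message string is an assumption, since the original report elides it:

```rust
use std::error::Error;
use std::fmt;

// `Error: Debug + Display`, so both bounds must hold.
#[derive(Debug)]
struct MyError;

impl fmt::Display for MyError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Message text is an assumption; the issue does not show one.
        write!(f, "my error")
    }
}

impl Error for MyError {}

fn main() {
    let e: Box<dyn Error> = Box::new(MyError);
    assert_eq!(e.to_string(), "my error"); // via Display
    assert_eq!(format!("{:?}", e), "MyError"); // via derived Debug
}
```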
", "commid": "rust_pr_97504"}], "negative_passages": []} {"query_id": "q-en-rust-0a83cfc50ccc7a7796089f355f5c7e9d5f34f5a9518ef005b13e27e8abd3782e", "query": " $DIR/issue-71381.rs:13:61 | LL | pub fn call_me(&self) { | ^^^^^^^^^^^^^^^^^^^^^^^^^^ error: using function pointers as const generic parameters is forbidden --> $DIR/issue-71381.rs:21:19 | LL | const FN: unsafe extern \"C\" fn(Args), | ^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors ", "commid": "rust_pr_73795"}], "negative_passages": []} {"query_id": "q-en-rust-0a83cfc50ccc7a7796089f355f5c7e9d5f34f5a9518ef005b13e27e8abd3782e", "query": " $DIR/issue-71382.rs:15:23 | LL | fn test(&self) { | ^^^^ error: aborting due to previous error ", "commid": "rust_pr_73795"}], "negative_passages": []} {"query_id": "q-en-rust-0a83cfc50ccc7a7796089f355f5c7e9d5f34f5a9518ef005b13e27e8abd3782e", "query": " $DIR/issue-71611.rs:4:21 | LL | fn func(outer: A) { | ^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_73795"}], "negative_passages": []} {"query_id": "q-en-rust-0a83cfc50ccc7a7796089f355f5c7e9d5f34f5a9518ef005b13e27e8abd3782e", "query": " $DIR/issue-72352.rs:6:42 | LL | unsafe fn unsafely_do_the_thing usize>(ptr: *const i8) -> usize { | ^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_73795"}], "negative_passages": []} {"query_id": "q-en-rust-0e230200f0c648868a9765bc7839e784d61b08789d58337f2723f7069af2f9f9", "query": "Given this code (): I expected that would indicate that is not function but is. Instead, I got: A diagnostic along the lines of isn't a function, but is would be helpful. $DIR/issue-71406.rs:4:26 | LL | let (tx, rx) = mpsc::channel::new(1); | ^^^^^^^ expected type, found function `channel` in `mpsc` error: aborting due to previous error For more information about this error, try `rustc --explain E0433`. 
", "commid": "rust_pr_71419"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... Previous error message: I used a slightly modified variant (just removed the IIRC): let ty = relation.relate(&a.ty, &b.ty)?; let substs = relation.relate(&a.substs, &b.substs)?; let ty = relation.relate_with_variance(ty::Invariant, &a.ty, &b.ty)?; let substs = relation.relate_with_variance(ty::Invariant, &a.substs, &b.substs)?; Ok(ty::ExistentialProjection { item_def_id: a.item_def_id, substs, ty }) } }", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. 
This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... Previous error message: I used a slightly modified variant (just removed the IIRC): _>; | ^^^^^^^^^^^^^^^^^^^^^^^ expected trait object `dyn std::ops::Fn`, found closure | = note: expected struct `std::boxed::Box _>` = note: expected struct `std::boxed::Box u8>` found struct `std::boxed::Box<[closure@$DIR/coerce-expect-unsized-ascribed.rs:26:22: 26:35]>` error: aborting due to 14 previous errors", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? 
Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... Previous error message: I used a slightly modified variant (just removed the IIRC): error[E0277]: the size for values of type `dyn std::iter::Iterator` cannot be known at compilation time error[E0277]: the size for values of type `dyn std::iter::Iterator` cannot be known at compilation time --> $DIR/issue-20605.rs:2:17 | LL | for item in *things { *item = 0 } | ^^^^^^^ doesn't have a size known at compile-time | = help: the trait `std::marker::Sized` is not implemented for `dyn std::iter::Iterator` = help: the trait `std::marker::Sized` is not implemented for `dyn std::iter::Iterator` = note: to learn more, visit = note: required by `std::iter::IntoIterator::into_iter`", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? 
Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... Previous error message: I used a slightly modified variant (just removed the IIRC): error: lifetime may not live long enough --> $DIR/variance-associated-types2.rs:13:12 | LL | fn take<'a>(_: &'a u32) { | -- lifetime `'a` defined here LL | let _: Box> = make(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ type annotation requires that `'a` must outlive `'static` | = help: consider replacing `'a` with `'static` error: aborting due to previous error ", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? 
ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... Previous error message: I used a slightly modified variant (just removed the IIRC): // Test that dyn Foo is invariant with respect to T. // Failure to enforce invariance here can be weaponized, see #71550 for details. trait Foo { type Bar; } fn make() -> Box> { panic!() } fn take<'a>(_: &'a u32) { let _: Box> = make(); //~^ ERROR mismatched types [E0308] } fn main() {} ", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-7a2bb4e2913529fc62dd03b19a9600344ee6b75ec62f3dd5e24cb10ce7396329", "query": "IIUC currently a trait object is always covariant in its associated types, regardless of their positions. This makes it possible to pass a non-static reference to a function expecting a static reference:\nWith rustc 1.0.0 ..= 1.16.0, the following error was emitted:\n-- this looks plausibly similar to , but I don't know enough to be certain.\nOh wow how are these not invariant? Cc\nAssigning as and removing .\nJust in case, in my last action I've removed by accident and it back :).\nWait, these are supposed to be invariant. This is definitely a bug. It would be great to figure out just which nightly regressed this, maybe somebody can run cargo bisect? cleanup\nEr, I guess I meant this? ping cleanup\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nsearched toolchains nightly-2016-06-01 through nightly-2020-01-01 regression in nightly-2017-03-12 ... 
Previous error message: I used a slightly modified variant (just removed the IIRC): error[E0308]: mismatched types --> $DIR/variance-associated-types2.rs:13:42 | LL | let _: Box> = make(); | ^^^^^^ lifetime mismatch | = note: expected trait object `dyn Foo` found trait object `dyn Foo` note: the lifetime `'a` as defined on the function body at 12:9... --> $DIR/variance-associated-types2.rs:12:9 | LL | fn take<'a>(_: &'a u32) { | ^^ = note: ...does not necessarily outlive the static lifetime error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_71896"}], "negative_passages": []} {"query_id": "q-en-rust-2ac637c3b15838444a0ed3b9181ef73a0fd484302b00fd09f4532518396f76cb", "query": "despite it being usable in . $DIR/E0308-2.rs:9:6 | LL | impl Eq for &dyn DynEq {} | ^^ lifetime mismatch | = note: expected trait `std::cmp::PartialEq` found trait `std::cmp::PartialEq` note: the lifetime `'_` as defined on the impl at 9:13... --> $DIR/E0308-2.rs:9:13 | LL | impl Eq for &dyn DynEq {} | ^ = note: ...does not necessarily outlive the static lifetime error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_74698"}], "negative_passages": []} {"query_id": "q-en-rust-c0a929f18bcfc09b4140d3b23a5400af201e9bb77c0ec58f078f31dbf3455980", "query": "and all interfaces directly with the file-system should probably use rather than trying to coerce to handle these cases. 
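A hedged sketch of the variance distinction at play: shrinking a lifetime is legal for plain references, which are covariant in their lifetime, while the bug discussed above let the same shrinking leak through a trait object's associated type, which must be invariant:

```rust
// Ordinary references ARE covariant in their lifetime: a longer-lived
// borrow may be used where a shorter-lived one is expected.
fn shorten<'a>(r: &'static u32) -> &'a u32 {
    r // accepted: 'static outlives 'a
}

// By contrast, `dyn Foo<Bar = T>` is invariant in T since the fix quoted
// above, so the analogous coercion inside `take()` is rejected with E0308.
fn main() {
    static X: u32 = 41;
    let r: &u32 = shorten(&X);
    assert_eq!(*r, 41);
}
```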
(Similar issue to .)

nominating feature-complete

Accepted for backwards-compatible

My last referenced commit is a month old, but I'm still working on this issue (currently finishing up the support for Windows paths).

The issues here of dealing with filesystems that are not utf-8 seem related to , at least tangentially.

// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! Cross-platform file path handling (re-write)

use container::Container;
use c_str::CString;
use clone::Clone;
use iter::Iterator;
use option::{Option, None, Some};
use str;
use str::StrSlice;
use vec;
use vec::{CopyableVector, OwnedCopyableVector, OwnedVector};
use vec::{ImmutableEqVector, ImmutableVector};

pub mod posix;
pub mod windows;

/// Typedef for POSIX file paths.
/// See `posix::Path` for more info.
pub type PosixPath = posix::Path;

/// Typedef for Windows file paths.
/// See `windows::Path` for more info.
pub type WindowsPath = windows::Path;

/// Typedef for the platform-native path type
#[cfg(unix)]
pub type Path = PosixPath;

/// Typedef for the platform-native path type
#[cfg(windows)]
pub type Path = WindowsPath;

/// Typedef for the POSIX path component iterator.
/// See `posix::ComponentIter` for more info.
pub type PosixComponentIter<'self> = posix::ComponentIter<'self>;

// /// Typedef for the Windows path component iterator.
// /// See `windows::ComponentIter` for more info.
// pub type WindowsComponentIter<'self> = windows::ComponentIter<'self>; /// Typedef for the platform-native component iterator #[cfg(unix)] pub type ComponentIter<'self> = PosixComponentIter<'self>; // /// Typedef for the platform-native component iterator //#[cfg(windows)] //pub type ComponentIter<'self> = WindowsComponentIter<'self>; // Condition that is raised when a NUL is found in a byte vector given to a Path function condition! { // this should be a &[u8] but there's a lifetime issue null_byte: ~[u8] -> ~[u8]; } /// A trait that represents the generic operations available on paths pub trait GenericPath: Clone + GenericPathUnsafe { /// Creates a new Path from a byte vector. /// The resulting Path will always be normalized. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. /// /// See individual Path impls for additional restrictions. #[inline] fn from_vec(path: &[u8]) -> Self { if contains_nul(path) { let path = self::null_byte::cond.raise(path.to_owned()); assert!(!contains_nul(path)); unsafe { GenericPathUnsafe::from_vec_unchecked(path) } } else { unsafe { GenericPathUnsafe::from_vec_unchecked(path) } } } /// Creates a new Path from a string. /// The resulting Path will always be normalized. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. #[inline] fn from_str(path: &str) -> Self { let v = path.as_bytes(); if contains_nul(v) { GenericPath::from_vec(path.as_bytes()) // let from_vec handle the condition } else { unsafe { GenericPathUnsafe::from_str_unchecked(path) } } } /// Creates a new Path from a CString. /// The resulting Path will always be normalized. /// /// See individual Path impls for potential restrictions. #[inline] fn from_c_str(path: CString) -> Self { // CStrings can't contain NULs unsafe { GenericPathUnsafe::from_vec_unchecked(path.as_bytes()) } } /// Returns the path as a string, if possible. /// If the path is not representable in utf-8, this returns None. 
#[inline] fn as_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.as_vec()) } /// Returns the path as a byte vector fn as_vec<'a>(&'a self) -> &'a [u8]; /// Returns the directory component of `self`, as a byte vector (with no trailing separator). /// If `self` has no directory component, returns ['.']. fn dirname<'a>(&'a self) -> &'a [u8]; /// Returns the directory component of `self`, as a string, if possible. /// See `dirname` for details. #[inline] fn dirname_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.dirname()) } /// Returns the file component of `self`, as a byte vector. /// If `self` represents the root of the file hierarchy, returns the empty vector. /// If `self` is \".\", returns the empty vector. fn filename<'a>(&'a self) -> &'a [u8]; /// Returns the file component of `self`, as a string, if possible. /// See `filename` for details. #[inline] fn filename_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.filename()) } /// Returns the stem of the filename of `self`, as a byte vector. /// The stem is the portion of the filename just before the last '.'. /// If there is no '.', the entire filename is returned. fn filestem<'a>(&'a self) -> &'a [u8] { let name = self.filename(); let dot = '.' as u8; match name.rposition_elem(&dot) { None | Some(0) => name, Some(1) if name == bytes!(\"..\") => name, Some(pos) => name.slice_to(pos) } } /// Returns the stem of the filename of `self`, as a string, if possible. /// See `filestem` for details. #[inline] fn filestem_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.filestem()) } /// Returns the extension of the filename of `self`, as an optional byte vector. /// The extension is the portion of the filename just after the last '.'. /// If there is no extension, None is returned. /// If the filename ends in '.', the empty vector is returned. fn extension<'a>(&'a self) -> Option<&'a [u8]> { let name = self.filename(); let dot = '.' 
as u8; match name.rposition_elem(&dot) { None | Some(0) => None, Some(1) if name == bytes!(\"..\") => None, Some(pos) => Some(name.slice_from(pos+1)) } } /// Returns the extension of the filename of `self`, as a string, if possible. /// See `extension` for details. #[inline] fn extension_str<'a>(&'a self) -> Option<&'a str> { self.extension().and_then(|v| str::from_utf8_slice_opt(v)) } /// Replaces the directory portion of the path with the given byte vector. /// If `self` represents the root of the filesystem hierarchy, the last path component /// of the given byte vector becomes the filename. /// /// # Failure /// /// Raises the `null_byte` condition if the dirname contains a NUL. #[inline] fn set_dirname(&mut self, dirname: &[u8]) { if contains_nul(dirname) { let dirname = self::null_byte::cond.raise(dirname.to_owned()); assert!(!contains_nul(dirname)); unsafe { self.set_dirname_unchecked(dirname) } } else { unsafe { self.set_dirname_unchecked(dirname) } } } /// Replaces the directory portion of the path with the given string. /// See `set_dirname` for details. #[inline] fn set_dirname_str(&mut self, dirname: &str) { if contains_nul(dirname.as_bytes()) { self.set_dirname(dirname.as_bytes()) // triggers null_byte condition } else { unsafe { self.set_dirname_str_unchecked(dirname) } } } /// Replaces the filename portion of the path with the given byte vector. /// If the replacement name is [], this is equivalent to popping the path. /// /// # Failure /// /// Raises the `null_byte` condition if the filename contains a NUL. #[inline] fn set_filename(&mut self, filename: &[u8]) { if contains_nul(filename) { let filename = self::null_byte::cond.raise(filename.to_owned()); assert!(!contains_nul(filename)); unsafe { self.set_filename_unchecked(filename) } } else { unsafe { self.set_filename_unchecked(filename) } } } /// Replaces the filename portion of the path with the given string. /// See `set_filename` for details. 
#[inline] fn set_filename_str(&mut self, filename: &str) { if contains_nul(filename.as_bytes()) { self.set_filename(filename.as_bytes()) // triggers null_byte condition } else { unsafe { self.set_filename_str_unchecked(filename) } } } /// Replaces the filestem with the given byte vector. /// If there is no extension in `self` (or `self` has no filename), this is equivalent /// to `set_filename`. Otherwise, if the given byte vector is [], the extension (including /// the preceding '.') becomes the new filename. /// /// # Failure /// /// Raises the `null_byte` condition if the filestem contains a NUL. fn set_filestem(&mut self, filestem: &[u8]) { // borrowck is being a pain here let val = { let name = self.filename(); if !name.is_empty() { let dot = '.' as u8; match name.rposition_elem(&dot) { None | Some(0) => None, Some(idx) => { let mut v; if contains_nul(filestem) { let filestem = self::null_byte::cond.raise(filestem.to_owned()); assert!(!contains_nul(filestem)); v = vec::with_capacity(filestem.len() + name.len() - idx); v.push_all(filestem); } else { v = vec::with_capacity(filestem.len() + name.len() - idx); v.push_all(filestem); } v.push_all(name.slice_from(idx)); Some(v) } } } else { None } }; match val { None => self.set_filename(filestem), Some(v) => unsafe { self.set_filename_unchecked(v) } } } /// Replaces the filestem with the given string. /// See `set_filestem` for details. #[inline] fn set_filestem_str(&mut self, filestem: &str) { self.set_filestem(filestem.as_bytes()) } /// Replaces the extension with the given byte vector. /// If there is no extension in `self`, this adds one. /// If the given byte vector is [], this removes the extension. /// If `self` has no filename, this is a no-op. /// /// # Failure /// /// Raises the `null_byte` condition if the extension contains a NUL. fn set_extension(&mut self, extension: &[u8]) { // borrowck causes problems here too let val = { let name = self.filename(); if !name.is_empty() { let dot = '.' 
as u8; match name.rposition_elem(&dot) { None | Some(0) => { if extension.is_empty() { None } else { let mut v; if contains_nul(extension) { let extension = self::null_byte::cond.raise(extension.to_owned()); assert!(!contains_nul(extension)); v = vec::with_capacity(name.len() + extension.len() + 1); v.push_all(name); v.push(dot); v.push_all(extension); } else { v = vec::with_capacity(name.len() + extension.len() + 1); v.push_all(name); v.push(dot); v.push_all(extension); } Some(v) } } Some(idx) => { if extension.is_empty() { Some(name.slice_to(idx).to_owned()) } else { let mut v; if contains_nul(extension) { let extension = self::null_byte::cond.raise(extension.to_owned()); assert!(!contains_nul(extension)); v = vec::with_capacity(idx + extension.len() + 1); v.push_all(name.slice_to(idx+1)); v.push_all(extension); } else { v = vec::with_capacity(idx + extension.len() + 1); v.push_all(name.slice_to(idx+1)); v.push_all(extension); } Some(v) } } } } else { None } }; match val { None => (), Some(v) => unsafe { self.set_filename_unchecked(v) } } } /// Replaces the extension with the given string. /// See `set_extension` for details. #[inline] fn set_extension_str(&mut self, extension: &str) { self.set_extension(extension.as_bytes()) } /// Returns a new Path constructed by replacing the dirname with the given byte vector. /// See `set_dirname` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the dirname contains a NUL. #[inline] fn with_dirname(&self, dirname: &[u8]) -> Self { let mut p = self.clone(); p.set_dirname(dirname); p } /// Returns a new Path constructed by replacing the dirname with the given string. /// See `set_dirname` for details. #[inline] fn with_dirname_str(&self, dirname: &str) -> Self { let mut p = self.clone(); p.set_dirname_str(dirname); p } /// Returns a new Path constructed by replacing the filename with the given byte vector. /// See `set_filename` for details. 
/// /// # Failure /// /// Raises the `null_byte` condition if the filename contains a NUL. #[inline] fn with_filename(&self, filename: &[u8]) -> Self { let mut p = self.clone(); p.set_filename(filename); p } /// Returns a new Path constructed by replacing the filename with the given string. /// See `set_filename` for details. #[inline] fn with_filename_str(&self, filename: &str) -> Self { let mut p = self.clone(); p.set_filename_str(filename); p } /// Returns a new Path constructed by setting the filestem to the given byte vector. /// See `set_filestem` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the filestem contains a NUL. #[inline] fn with_filestem(&self, filestem: &[u8]) -> Self { let mut p = self.clone(); p.set_filestem(filestem); p } /// Returns a new Path constructed by setting the filestem to the given string. /// See `set_filestem` for details. #[inline] fn with_filestem_str(&self, filestem: &str) -> Self { let mut p = self.clone(); p.set_filestem_str(filestem); p } /// Returns a new Path constructed by setting the extension to the given byte vector. /// See `set_extension` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the extension contains a NUL. #[inline] fn with_extension(&self, extension: &[u8]) -> Self { let mut p = self.clone(); p.set_extension(extension); p } /// Returns a new Path constructed by setting the extension to the given string. /// See `set_extension` for details. #[inline] fn with_extension_str(&self, extension: &str) -> Self { let mut p = self.clone(); p.set_extension_str(extension); p } /// Returns the directory component of `self`, as a Path. /// If `self` represents the root of the filesystem hierarchy, returns `self`. fn dir_path(&self) -> Self { // self.dirname() returns a NUL-free vector unsafe { GenericPathUnsafe::from_vec_unchecked(self.dirname()) } } /// Returns the file component of `self`, as a relative Path. 
/// If `self` represents the root of the filesystem hierarchy, returns None. fn file_path(&self) -> Option { // self.filename() returns a NUL-free vector match self.filename() { [] => None, v => Some(unsafe { GenericPathUnsafe::from_vec_unchecked(v) }) } } /// Pushes a path (as a byte vector) onto `self`. /// If the argument represents an absolute path, it replaces `self`. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. #[inline] fn push(&mut self, path: &[u8]) { if contains_nul(path) { let path = self::null_byte::cond.raise(path.to_owned()); assert!(!contains_nul(path)); unsafe { self.push_unchecked(path) } } else { unsafe { self.push_unchecked(path) } } } /// Pushes a path (as a string) onto `self. /// See `push` for details. #[inline] fn push_str(&mut self, path: &str) { if contains_nul(path.as_bytes()) { self.push(path.as_bytes()) // triggers null_byte condition } else { unsafe { self.push_str_unchecked(path) } } } /// Pushes a Path onto `self`. /// If the argument represents an absolute path, it replaces `self`. #[inline] fn push_path(&mut self, path: &Self) { self.push(path.as_vec()) } /// Pops the last path component off of `self` and returns it. /// If `self` represents the root of the file hierarchy, None is returned. fn pop_opt(&mut self) -> Option<~[u8]>; /// Pops the last path component off of `self` and returns it as a string, if possible. /// `self` will still be modified even if None is returned. /// See `pop_opt` for details. #[inline] fn pop_opt_str(&mut self) -> Option<~str> { self.pop_opt().and_then(|v| str::from_utf8_owned_opt(v)) } /// Returns a new Path constructed by joining `self` with the given path (as a byte vector). /// If the given path is absolute, the new Path will represent just that. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. 
#[inline] fn join(&self, path: &[u8]) -> Self { let mut p = self.clone(); p.push(path); p } /// Returns a new Path constructed by joining `self` with the given path (as a string). /// See `join` for details. #[inline] fn join_str(&self, path: &str) -> Self { let mut p = self.clone(); p.push_str(path); p } /// Returns a new Path constructed by joining `self` with the given path. /// If the given path is absolute, the new Path will represent just that. #[inline] fn join_path(&self, path: &Self) -> Self { let mut p = self.clone(); p.push_path(path); p } /// Returns whether `self` represents an absolute path. /// An absolute path is defined as one that, when joined to another path, will /// yield back the same absolute path. fn is_absolute(&self) -> bool; /// Returns whether `self` is equal to, or is an ancestor of, the given path. /// If both paths are relative, they are compared as though they are relative /// to the same parent path. fn is_ancestor_of(&self, other: &Self) -> bool; /// Returns the Path that, were it joined to `base`, would yield `self`. /// If no such path exists, None is returned. /// If `self` is absolute and `base` is relative, or on Windows if both /// paths refer to separate drives, an absolute path is returned. fn path_relative_from(&self, base: &Self) -> Option; } /// A trait that represents the unsafe operations on GenericPaths pub trait GenericPathUnsafe { /// Creates a new Path from a byte vector without checking for null bytes. /// The resulting Path will always be normalized. unsafe fn from_vec_unchecked(path: &[u8]) -> Self; /// Creates a new Path from a str without checking for null bytes. /// The resulting Path will always be normalized. #[inline] unsafe fn from_str_unchecked(path: &str) -> Self { GenericPathUnsafe::from_vec_unchecked(path.as_bytes()) } /// Replaces the directory portion of the path with the given byte vector without /// checking for null bytes. /// See `set_dirname` for details. 
    unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]);

    /// Replaces the directory portion of the path with the given str without
    /// checking for null bytes.
    /// See `set_dirname_str` for details.
    #[inline]
    unsafe fn set_dirname_str_unchecked(&mut self, dirname: &str) {
        self.set_dirname_unchecked(dirname.as_bytes())
    }

    /// Replaces the filename portion of the path with the given byte vector without
    /// checking for null bytes.
    /// See `set_filename` for details.
    unsafe fn set_filename_unchecked(&mut self, filename: &[u8]);

    /// Replaces the filename portion of the path with the given str without
    /// checking for null bytes.
    /// See `set_filename_str` for details.
    #[inline]
    unsafe fn set_filename_str_unchecked(&mut self, filename: &str) {
        self.set_filename_unchecked(filename.as_bytes())
    }

    /// Pushes a byte vector onto `self` without checking for null bytes.
    /// See `push` for details.
    unsafe fn push_unchecked(&mut self, path: &[u8]);

    /// Pushes a str onto `self` without checking for null bytes.
    /// See `push_str` for details.
    #[inline]
    unsafe fn push_str_unchecked(&mut self, path: &str) {
        self.push_unchecked(path.as_bytes())
    }
}

#[inline(always)]
fn contains_nul(v: &[u8]) -> bool {
    v.iter().any(|&x| x == 0)
}
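As an aside, the filestem/extension rules implemented by `GenericPath::filestem` and `GenericPath::extension` above (split the filename at the last `.`, with special cases for a name with no dot, a leading dot, and `..`) can be condensed into a small modern-Rust sketch. The `split_name` helper below is illustrative, not part of the patch.

```rust
// Sketch of the stem/extension split from the GenericPath trait:
// returns (filestem, Some(extension)) following the same match arms
// as the original `filestem`/`extension` methods.
fn split_name(name: &[u8]) -> (&[u8], Option<&[u8]>) {
    match name.iter().rposition(|&b| b == b'.') {
        // No dot, or the dot is the first byte (".bar"): the whole
        // name is the stem and there is no extension.
        None | Some(0) => (name, None),
        // ".." is a path component, not a name with an extension.
        Some(1) if name == b".." => (name, None),
        // Otherwise split at the last dot.
        Some(pos) => (&name[..pos], Some(&name[pos + 1..])),
    }
}

fn main() {
    assert_eq!(split_name(b"there.txt"), (&b"there"[..], Some(&b"txt"[..])));
    assert_eq!(split_name(b".bar"), (&b".bar"[..], None));
    assert_eq!(split_name(b"..bar"), (&b"."[..], Some(&b"bar"[..])));
    assert_eq!(split_name(b".."), (&b".."[..], None));
    assert_eq!(split_name(b"there"), (&b"there"[..], None));
    println!("ok");
}
```

These cases match the behavior exercised by the `test_components` tests further down (e.g. `filestem("..bar") == "."` and `extension("..bar") == Some("bar")`).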
// Copyright 2013 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.

//! POSIX file path handling

use container::Container;
use c_str::{CString, ToCStr};
use clone::Clone;
use cmp::Eq;
use from_str::FromStr;
use iter::{AdditiveIterator, Extendable, Iterator};
use option::{Option, None, Some};
use str;
use str::Str;
use util;
use vec;
use vec::CopyableVector;
use vec::{Vector, VectorVector};

use super::{GenericPath, GenericPathUnsafe};

/// Iterator that yields successive components of a Path
pub type ComponentIter<'self> = vec::SplitIterator<'self, u8>;

/// Represents a POSIX file path
#[deriving(Clone, DeepClone)]
pub struct Path {
    priv repr: ~[u8], // assumed to never be empty or contain NULs
    priv sepidx: Option<uint> // index of the final separator in repr
}

/// The standard path separator character
pub static sep: u8 = '/' as u8;

/// Returns whether the given byte is a path separator
#[inline]
pub fn is_sep(u: &u8) -> bool {
    *u == sep
}

impl Eq for Path {
    #[inline]
    fn eq(&self, other: &Path) -> bool {
        self.repr == other.repr
    }
}

impl FromStr for Path {
    fn from_str(s: &str) -> Option<Path> {
        let v = s.as_bytes();
        if contains_nul(v) {
            None
        } else {
            Some(unsafe { GenericPathUnsafe::from_vec_unchecked(v) })
        }
    }
}

impl ToCStr for Path {
    #[inline]
    fn to_c_str(&self)
-> CString { // The Path impl guarantees no internal NUL unsafe { self.as_vec().to_c_str_unchecked() } } #[inline] unsafe fn to_c_str_unchecked(&self) -> CString { self.as_vec().to_c_str_unchecked() } } impl GenericPathUnsafe for Path { unsafe fn from_vec_unchecked(path: &[u8]) -> Path { let path = Path::normalize(path); assert!(!path.is_empty()); let idx = path.rposition_elem(&sep); Path{ repr: path, sepidx: idx } } unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]) { match self.sepidx { None if bytes!(\".\") == self.repr || bytes!(\"..\") == self.repr => { self.repr = Path::normalize(dirname); } None => { let mut v = vec::with_capacity(dirname.len() + self.repr.len() + 1); v.push_all(dirname); v.push(sep); v.push_all(self.repr); self.repr = Path::normalize(v); } Some(0) if self.repr.len() == 1 && self.repr[0] == sep => { self.repr = Path::normalize(dirname); } Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => { self.repr = Path::normalize(dirname); } Some(idx) if dirname.is_empty() => { let v = Path::normalize(self.repr.slice_from(idx+1)); self.repr = v; } Some(idx) => { let mut v = vec::with_capacity(dirname.len() + self.repr.len() - idx); v.push_all(dirname); v.push_all(self.repr.slice_from(idx)); self.repr = Path::normalize(v); } } self.sepidx = self.repr.rposition_elem(&sep); } unsafe fn set_filename_unchecked(&mut self, filename: &[u8]) { match self.sepidx { None if bytes!(\"..\") == self.repr => { let mut v = vec::with_capacity(3 + filename.len()); v.push_all(dot_dot_static); v.push(sep); v.push_all(filename); self.repr = Path::normalize(v); } None => { self.repr = Path::normalize(filename); } Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => { let mut v = vec::with_capacity(self.repr.len() + 1 + filename.len()); v.push_all(self.repr); v.push(sep); v.push_all(filename); self.repr = Path::normalize(v); } Some(idx) => { let mut v = vec::with_capacity(idx + 1 + filename.len()); v.push_all(self.repr.slice_to(idx+1)); 
v.push_all(filename); self.repr = Path::normalize(v); } } self.sepidx = self.repr.rposition_elem(&sep); } unsafe fn push_unchecked(&mut self, path: &[u8]) { if !path.is_empty() { if path[0] == sep { self.repr = Path::normalize(path); } else { let mut v = vec::with_capacity(self.repr.len() + path.len() + 1); v.push_all(self.repr); v.push(sep); v.push_all(path); self.repr = Path::normalize(v); } self.sepidx = self.repr.rposition_elem(&sep); } } } impl GenericPath for Path { #[inline] fn as_vec<'a>(&'a self) -> &'a [u8] { self.repr.as_slice() } fn dirname<'a>(&'a self) -> &'a [u8] { match self.sepidx { None if bytes!(\"..\") == self.repr => self.repr.as_slice(), None => dot_static, Some(0) => self.repr.slice_to(1), Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => self.repr.as_slice(), Some(idx) => self.repr.slice_to(idx) } } fn filename<'a>(&'a self) -> &'a [u8] { match self.sepidx { None if bytes!(\".\") == self.repr || bytes!(\"..\") == self.repr => &[], None => self.repr.as_slice(), Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => &[], Some(idx) => self.repr.slice_from(idx+1) } } fn pop_opt(&mut self) -> Option<~[u8]> { match self.sepidx { None if bytes!(\".\") == self.repr => None, None => { let mut v = ~['.' 
as u8]; util::swap(&mut v, &mut self.repr); self.sepidx = None; Some(v) } Some(0) if bytes!(\"/\") == self.repr => None, Some(idx) => { let v = self.repr.slice_from(idx+1).to_owned(); if idx == 0 { self.repr.truncate(idx+1); } else { self.repr.truncate(idx); } self.sepidx = self.repr.rposition_elem(&sep); Some(v) } } } #[inline] fn is_absolute(&self) -> bool { self.repr[0] == sep } fn is_ancestor_of(&self, other: &Path) -> bool { if self.is_absolute() != other.is_absolute() { false } else { let mut ita = self.component_iter(); let mut itb = other.component_iter(); if bytes!(\".\") == self.repr { return itb.next() != Some(bytes!(\"..\")); } loop { match (ita.next(), itb.next()) { (None, _) => break, (Some(a), Some(b)) if a == b => { loop }, (Some(a), _) if a == bytes!(\"..\") => { // if ita contains only .. components, it's an ancestor return ita.all(|x| x == bytes!(\"..\")); } _ => return false } } true } } fn path_relative_from(&self, base: &Path) -> Option { if self.is_absolute() != base.is_absolute() { if self.is_absolute() { Some(self.clone()) } else { None } } else { let mut ita = self.component_iter(); let mut itb = base.component_iter(); let mut comps = ~[]; loop { match (ita.next(), itb.next()) { (None, None) => break, (Some(a), None) => { comps.push(a); comps.extend(&mut ita); break; } (None, _) => comps.push(dot_dot_static), (Some(a), Some(b)) if comps.is_empty() && a == b => (), (Some(a), Some(b)) if b == bytes!(\".\") => comps.push(a), (Some(_), Some(b)) if b == bytes!(\"..\") => return None, (Some(a), Some(_)) => { comps.push(dot_dot_static); for _ in itb { comps.push(dot_dot_static); } comps.push(a); comps.extend(&mut ita); break; } } } Some(Path::new(comps.connect_vec(&sep))) } } } impl Path { /// Returns a new Path from a byte vector /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. 
#[inline] pub fn new(v: &[u8]) -> Path { GenericPath::from_vec(v) } /// Returns a new Path from a string /// /// # Failure /// /// Raises the `null_byte` condition if the str contains a NUL. #[inline] pub fn from_str(s: &str) -> Path { GenericPath::from_str(s) } /// Converts the Path into an owned byte vector pub fn into_vec(self) -> ~[u8] { self.repr } /// Converts the Path into an owned string, if possible pub fn into_str(self) -> Option<~str> { str::from_utf8_owned_opt(self.repr) } /// Returns a normalized byte vector representation of a path, by removing all empty /// components, and unnecessary . and .. components. pub fn normalize+CopyableVector>(v: V) -> ~[u8] { // borrowck is being very picky let val = { let is_abs = !v.as_slice().is_empty() && v.as_slice()[0] == sep; let v_ = if is_abs { v.as_slice().slice_from(1) } else { v.as_slice() }; let comps = normalize_helper(v_, is_abs); match comps { None => None, Some(comps) => { if is_abs && comps.is_empty() { Some(~[sep]) } else { let n = if is_abs { comps.len() } else { comps.len() - 1} + comps.iter().map(|v| v.len()).sum(); let mut v = vec::with_capacity(n); let mut it = comps.move_iter(); if !is_abs { match it.next() { None => (), Some(comp) => v.push_all(comp) } } for comp in it { v.push(sep); v.push_all(comp); } Some(v) } } } }; match val { None => v.into_owned(), Some(val) => val } } /// Returns an iterator that yields each component of the path in turn. /// Does not distinguish between absolute and relative paths, e.g. /// /a/b/c and a/b/c yield the same set of components. /// A path of \"/\" yields no components. A path of \".\" yields one component. 
pub fn component_iter<'a>(&'a self) -> ComponentIter<'a> { let v = if self.repr[0] == sep { self.repr.slice_from(1) } else { self.repr.as_slice() }; let mut ret = v.split_iter(is_sep); if v.is_empty() { // consume the empty \"\" component ret.next(); } ret } } // None result means the byte vector didn't need normalizing fn normalize_helper<'a>(v: &'a [u8], is_abs: bool) -> Option<~[&'a [u8]]> { if is_abs && v.as_slice().is_empty() { return None; } let mut comps: ~[&'a [u8]] = ~[]; let mut n_up = 0u; let mut changed = false; for comp in v.split_iter(is_sep) { if comp.is_empty() { changed = true } else if comp == bytes!(\".\") { changed = true } else if comp == bytes!(\"..\") { if is_abs && comps.is_empty() { changed = true } else if comps.len() == n_up { comps.push(dot_dot_static); n_up += 1 } else { comps.pop_opt(); changed = true } } else { comps.push(comp) } } if changed { if comps.is_empty() && !is_abs { if v == bytes!(\".\") { return None; } comps.push(dot_static); } Some(comps) } else { None } } // FIXME (#8169): Pull this into parent module once visibility works #[inline(always)] fn contains_nul(v: &[u8]) -> bool { v.iter().any(|&x| x == 0) } static dot_static: &'static [u8] = &'static ['.' as u8]; static dot_dot_static: &'static [u8] = &'static ['.' as u8, '.' as u8]; #[cfg(test)] mod tests { use super::*; use option::{Some, None}; use iter::Iterator; use str; use vec::Vector; macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_str(), Some($exp)); } ); (v: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_vec(), $exp); } ) ) macro_rules! 
b( ($($arg:expr),+) => ( bytes!($($arg),+) ) ) #[test] fn test_paths() { t!(v: Path::new([]), b!(\".\")); t!(v: Path::new(b!(\"/\")), b!(\"/\")); t!(v: Path::new(b!(\"a/b/c\")), b!(\"a/b/c\")); t!(v: Path::new(b!(\"a/b/c\", 0xff)), b!(\"a/b/c\", 0xff)); t!(v: Path::new(b!(0xff, \"/../foo\", 0x80)), b!(\"foo\", 0x80)); let p = Path::new(b!(\"a/b/c\", 0xff)); assert_eq!(p.as_str(), None); t!(s: Path::from_str(\"\"), \".\"); t!(s: Path::from_str(\"/\"), \"/\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"hi/\"), \"hi\"); t!(s: Path::from_str(\"/lib\"), \"/lib\"); t!(s: Path::from_str(\"/lib/\"), \"/lib\"); t!(s: Path::from_str(\"hi/there\"), \"hi/there\"); t!(s: Path::from_str(\"hi/there.txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/there/\"), \"hi/there\"); t!(s: Path::from_str(\"hi/../there\"), \"there\"); t!(s: Path::from_str(\"../hi/there\"), \"../hi/there\"); t!(s: Path::from_str(\"/../hi/there\"), \"/hi/there\"); t!(s: Path::from_str(\"foo/..\"), \".\"); t!(s: Path::from_str(\"/foo/..\"), \"/\"); t!(s: Path::from_str(\"/foo/../..\"), \"/\"); t!(s: Path::from_str(\"/foo/../../bar\"), \"/bar\"); t!(s: Path::from_str(\"/./hi/./there/.\"), \"/hi/there\"); t!(s: Path::from_str(\"/./hi/./there/./..\"), \"/hi\"); t!(s: Path::from_str(\"foo/../..\"), \"..\"); t!(s: Path::from_str(\"foo/../../..\"), \"../..\"); t!(s: Path::from_str(\"foo/../../bar\"), \"../bar\"); assert_eq!(Path::new(b!(\"foo/bar\")).into_vec(), b!(\"foo/bar\").to_owned()); assert_eq!(Path::new(b!(\"/foo/../../bar\")).into_vec(), b!(\"/bar\").to_owned()); assert_eq!(Path::from_str(\"foo/bar\").into_str(), Some(~\"foo/bar\")); assert_eq!(Path::from_str(\"/foo/../../bar\").into_str(), Some(~\"/bar\")); let p = Path::new(b!(\"foo/bar\", 0x80)); assert_eq!(p.as_str(), None); assert_eq!(Path::new(b!(\"foo\", 0xff, \"/bar\")).into_str(), None); } #[test] fn test_null_byte() { use path2::null_byte::cond; let mut handled = false; let mut p = do cond.trap(|v| { handled = true; 
assert_eq!(v.as_slice(), b!(\"foo/bar\", 0)); (b!(\"/bar\").to_owned()) }).inside { Path::new(b!(\"foo/bar\", 0)) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"/bar\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.set_filename(b!(\"f\", 0, \"o\")) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"/foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"null/\", 0, \"/byte\")); (b!(\"null/byte\").to_owned()) }).inside { p.set_dirname(b!(\"null/\", 0, \"/byte\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"null/byte/foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.push(b!(\"f\", 0, \"o\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"null/byte/foo/foo\")); } #[test] fn test_null_byte_fail() { use path2::null_byte::cond; use task; macro_rules! t( ($name:expr => $code:block) => ( { let mut t = task::task(); t.supervised(); t.name($name); let res = do t.try $code; assert!(res.is_err()); } ) ) t!(~\"new() w/nul\" => { do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { Path::new(b!(\"foo/bar\", 0)) }; }) t!(~\"set_filename w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_filename(b!(\"foo\", 0)) }; }) t!(~\"set_dirname w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_dirname(b!(\"foo\", 0)) }; }) t!(~\"push w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.push(b!(\"foo\", 0)) }; }) } #[test] fn test_components() { macro_rules! 
t( (s: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::from_str($path); assert_eq!(path.$op(), ($exp).as_bytes()); } ); (s: $path:expr, $op:ident, $exp:expr, opt) => ( { let path = Path::from_str($path); let left = path.$op().map(|&x| str::from_utf8_slice(x)); assert_eq!(left, $exp); } ); (v: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::new($path); assert_eq!(path.$op(), $exp); } ) ) t!(v: b!(\"a/b/c\"), filename, b!(\"c\")); t!(v: b!(\"a/b/c\", 0xff), filename, b!(\"c\", 0xff)); t!(v: b!(\"a/b\", 0xff, \"/c\"), filename, b!(\"c\")); t!(s: \"a/b/c\", filename, \"c\"); t!(s: \"/a/b/c\", filename, \"c\"); t!(s: \"a\", filename, \"a\"); t!(s: \"/a\", filename, \"a\"); t!(s: \".\", filename, \"\"); t!(s: \"/\", filename, \"\"); t!(s: \"..\", filename, \"\"); t!(s: \"../..\", filename, \"\"); t!(v: b!(\"a/b/c\"), dirname, b!(\"a/b\")); t!(v: b!(\"a/b/c\", 0xff), dirname, b!(\"a/b\")); t!(v: b!(\"a/b\", 0xff, \"/c\"), dirname, b!(\"a/b\", 0xff)); t!(s: \"a/b/c\", dirname, \"a/b\"); t!(s: \"/a/b/c\", dirname, \"/a/b\"); t!(s: \"a\", dirname, \".\"); t!(s: \"/a\", dirname, \"/\"); t!(s: \".\", dirname, \".\"); t!(s: \"/\", dirname, \"/\"); t!(s: \"..\", dirname, \"..\"); t!(s: \"../..\", dirname, \"../..\"); t!(v: b!(\"hi/there.txt\"), filestem, b!(\"there\")); t!(v: b!(\"hi/there\", 0x80, \".txt\"), filestem, b!(\"there\", 0x80)); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), filestem, b!(\"there\")); t!(s: \"hi/there.txt\", filestem, \"there\"); t!(s: \"hi/there\", filestem, \"there\"); t!(s: \"there.txt\", filestem, \"there\"); t!(s: \"there\", filestem, \"there\"); t!(s: \".\", filestem, \"\"); t!(s: \"/\", filestem, \"\"); t!(s: \"foo/.bar\", filestem, \".bar\"); t!(s: \".bar\", filestem, \".bar\"); t!(s: \"..bar\", filestem, \".\"); t!(s: \"hi/there..txt\", filestem, \"there.\"); t!(s: \"..\", filestem, \"\"); t!(s: \"../..\", filestem, \"\"); t!(v: b!(\"hi/there.txt\"), extension, Some(b!(\"txt\"))); t!(v: b!(\"hi/there\", 0x80, \".txt\"), 
extension, Some(b!(\"txt\"))); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), extension, Some(b!(\"t\", 0x80, \"xt\"))); t!(v: b!(\"hi/there\"), extension, None); t!(v: b!(\"hi/there\", 0x80), extension, None); t!(s: \"hi/there.txt\", extension, Some(\"txt\"), opt); t!(s: \"hi/there\", extension, None, opt); t!(s: \"there.txt\", extension, Some(\"txt\"), opt); t!(s: \"there\", extension, None, opt); t!(s: \".\", extension, None, opt); t!(s: \"/\", extension, None, opt); t!(s: \"foo/.bar\", extension, None, opt); t!(s: \".bar\", extension, None, opt); t!(s: \"..bar\", extension, Some(\"bar\"), opt); t!(s: \"hi/there..txt\", extension, Some(\"txt\"), opt); t!(s: \"..\", extension, None, opt); t!(s: \"../..\", extension, None, opt); } #[test] fn test_push() { macro_rules! t( (s: $path:expr, $join:expr) => ( { let path = ($path); let join = ($join); let mut p1 = Path::from_str(path); let p2 = p1.clone(); p1.push_str(join); assert_eq!(p1, p2.join_str(join)); } ) ) t!(s: \"a/b/c\", \"..\"); t!(s: \"/a/b/c\", \"d\"); t!(s: \"a/b\", \"c/d\"); t!(s: \"a/b\", \"/c/d\"); } #[test] fn test_push_path() { macro_rules! t( (s: $path:expr, $push:expr, $exp:expr) => ( { let mut p = Path::from_str($path); let push = Path::from_str($push); p.push_path(&push); assert_eq!(p.as_str(), Some($exp)); } ) ) t!(s: \"a/b/c\", \"d\", \"a/b/c/d\"); t!(s: \"/a/b/c\", \"d\", \"/a/b/c/d\"); t!(s: \"a/b\", \"c/d\", \"a/b/c/d\"); t!(s: \"a/b\", \"/c/d\", \"/c/d\"); t!(s: \"a/b\", \".\", \"a/b\"); t!(s: \"a/b\", \"../c\", \"a/c\"); } #[test] fn test_pop() { macro_rules! 
t( (s: $path:expr, $left:expr, $right:expr) => ( { let mut p = Path::from_str($path); let file = p.pop_opt_str(); assert_eq!(p.as_str(), Some($left)); assert_eq!(file.map(|s| s.as_slice()), $right); } ); (v: [$($path:expr),+], [$($left:expr),+], Some($($right:expr),+)) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file.map(|v| v.as_slice()), Some(b!($($right),+))); } ); (v: [$($path:expr),+], [$($left:expr),+], None) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file, None); } ) ) t!(v: [\"a/b/c\"], [\"a/b\"], Some(\"c\")); t!(v: [\"a\"], [\".\"], Some(\"a\")); t!(v: [\".\"], [\".\"], None); t!(v: [\"/a\"], [\"/\"], Some(\"a\")); t!(v: [\"/\"], [\"/\"], None); t!(v: [\"a/b/c\", 0x80], [\"a/b\"], Some(\"c\", 0x80)); t!(v: [\"a/b\", 0x80, \"/c\"], [\"a/b\", 0x80], Some(\"c\")); t!(v: [0xff], [\".\"], Some(0xff)); t!(v: [\"/\", 0xff], [\"/\"], Some(0xff)); t!(s: \"a/b/c\", \"a/b\", Some(\"c\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \".\", \".\", None); t!(s: \"/a\", \"/\", Some(\"a\")); t!(s: \"/\", \"/\", None); assert_eq!(Path::new(b!(\"foo/bar\", 0x80)).pop_opt_str(), None); assert_eq!(Path::new(b!(\"foo\", 0x80, \"/bar\")).pop_opt_str(), Some(~\"bar\")); } #[test] fn test_join() { t!(v: Path::new(b!(\"a/b/c\")).join(b!(\"..\")), b!(\"a/b\")); t!(v: Path::new(b!(\"/a/b/c\")).join(b!(\"d\")), b!(\"/a/b/c/d\")); t!(v: Path::new(b!(\"a/\", 0x80, \"/c\")).join(b!(0xff)), b!(\"a/\", 0x80, \"/c/\", 0xff)); t!(s: Path::from_str(\"a/b/c\").join_str(\"..\"), \"a/b\"); t!(s: Path::from_str(\"/a/b/c\").join_str(\"d\"), \"/a/b/c/d\"); t!(s: Path::from_str(\"a/b\").join_str(\"c/d\"), \"a/b/c/d\"); t!(s: Path::from_str(\"a/b\").join_str(\"/c/d\"), \"/c/d\"); t!(s: Path::from_str(\".\").join_str(\"a/b\"), \"a/b\"); t!(s: Path::from_str(\"/\").join_str(\"a/b\"), \"/a/b\"); } #[test] fn test_join_path() { macro_rules! 
t( (s: $path:expr, $join:expr, $exp:expr) => ( { let path = Path::from_str($path); let join = Path::from_str($join); let res = path.join_path(&join); assert_eq!(res.as_str(), Some($exp)); } ) ) t!(s: \"a/b/c\", \"..\", \"a/b\"); t!(s: \"/a/b/c\", \"d\", \"/a/b/c/d\"); t!(s: \"a/b\", \"c/d\", \"a/b/c/d\"); t!(s: \"a/b\", \"/c/d\", \"/c/d\"); t!(s: \".\", \"a/b\", \"a/b\"); t!(s: \"/\", \"a/b\", \"/a/b\"); } #[test] fn test_with_helpers() { t!(v: Path::new(b!(\"a/b/c\")).with_dirname(b!(\"d\")), b!(\"d/c\")); t!(v: Path::new(b!(\"a/b/c\")).with_dirname(b!(\"d/e\")), b!(\"d/e/c\")); t!(v: Path::new(b!(\"a/\", 0x80, \"b/c\")).with_dirname(b!(0xff)), b!(0xff, \"/c\")); t!(v: Path::new(b!(\"a/b/\", 0x80)).with_dirname(b!(\"/\", 0xcd)), b!(\"/\", 0xcd, \"/\", 0x80)); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"d\"), \"d/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"d/e\"), \"d/e/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"\"), \"c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"/\"), \"/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\".\"), \"c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"..\"), \"../c\"); t!(s: Path::from_str(\"/\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"/\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"/foo\").with_dirname_str(\"bar\"), \"bar/foo\"); t!(s: Path::from_str(\"..\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"../..\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"..\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"../..\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"foo\").with_dirname_str(\"..\"), \"../foo\"); t!(s: Path::from_str(\"foo\").with_dirname_str(\"../..\"), \"../../foo\"); t!(v: Path::new(b!(\"a/b/c\")).with_filename(b!(\"d\")), b!(\"a/b/d\")); t!(v: Path::new(b!(\"a/b/c\", 0xff)).with_filename(b!(0x80)), b!(\"a/b/\", 0x80)); t!(v: Path::new(b!(\"/\", 0xff, \"/foo\")).with_filename(b!(0xcd)), 
b!(\"/\", 0xff, \"/\", 0xcd)); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"d\"), \"a/b/d\"); t!(s: Path::from_str(\".\").with_filename_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"/a/b/c\").with_filename_str(\"d\"), \"/a/b/d\"); t!(s: Path::from_str(\"/\").with_filename_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\"foo\").with_filename_str(\"bar\"), \"bar\"); t!(s: Path::from_str(\"/\").with_filename_str(\"foo/\"), \"/foo\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"foo/\"), \"/foo\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"\"), \"a/b\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\".\"), \"a/b\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"..\"), \"a\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"\"), \"/\"); t!(s: Path::from_str(\"foo\").with_filename_str(\"\"), \".\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"d/e\"), \"a/b/d/e\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"/d\"), \"a/b/d\"); t!(s: Path::from_str(\"..\").with_filename_str(\"foo\"), \"../foo\"); t!(s: Path::from_str(\"../..\").with_filename_str(\"foo\"), \"../../foo\"); t!(s: Path::from_str(\"..\").with_filename_str(\"\"), \"..\"); t!(s: Path::from_str(\"../..\").with_filename_str(\"\"), \"../..\"); t!(v: Path::new(b!(\"hi/there\", 0x80, \".txt\")).with_filestem(b!(0xff)), b!(\"hi/\", 0xff, \".txt\")); t!(v: Path::new(b!(\"hi/there.txt\", 0x80)).with_filestem(b!(0xff)), b!(\"hi/\", 0xff, \".txt\", 0x80)); t!(v: Path::new(b!(\"hi/there\", 0xff)).with_filestem(b!(0x80)), b!(\"hi/\", 0x80)); t!(v: Path::new(b!(\"hi\", 0x80, \"/there\")).with_filestem([]), b!(\"hi\", 0x80)); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"here\"), \"hi/here.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"\"), \"hi/.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\".\"), \"hi/..txt\"); t!(s: 
Path::from_str(\"hi/there.txt\").with_filestem_str(\"..\"), \"hi/...txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"/\"), \"hi/.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"foo/bar\"), \"hi/foo/bar.txt\"); t!(s: Path::from_str(\"hi/there.foo.txt\").with_filestem_str(\"here\"), \"hi/here.txt\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"here\"), \"hi/here\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"\"), \"hi\"); t!(s: Path::from_str(\"hi\").with_filestem_str(\"\"), \".\"); t!(s: Path::from_str(\"/hi\").with_filestem_str(\"\"), \"/\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"..\"), \".\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\".\"), \"hi\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"foo\"), \"hi/foo.\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"\"), \"hi\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\".\"), \".\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"..\"), \"hi/...\"); t!(s: Path::from_str(\"/\").with_filestem_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\".\").with_filestem_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"hi/there..\").with_filestem_str(\"here\"), \"hi/here.\"); t!(s: Path::from_str(\"hi/there..\").with_filestem_str(\"\"), \"hi\"); t!(v: Path::new(b!(\"hi/there\", 0x80, \".txt\")).with_extension(b!(\"exe\")), b!(\"hi/there\", 0x80, \".exe\")); t!(v: Path::new(b!(\"hi/there.txt\", 0x80)).with_extension(b!(0xff)), b!(\"hi/there.\", 0xff)); t!(v: Path::new(b!(\"hi/there\", 0x80)).with_extension(b!(0xff)), b!(\"hi/there\", 0x80, \".\", 0xff)); t!(v: Path::new(b!(\"hi/there.\", 0xff)).with_extension([]), b!(\"hi/there\")); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\"exe\"), \"hi/there.exe\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\"\"), \"hi/there\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\".\"), \"hi/there..\"); t!(s: 
Path::from_str(\"hi/there.txt\").with_extension_str(\"..\"), \"hi/there...\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\"txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\".\"), \"hi/there..\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\"..\"), \"hi/there...\"); t!(s: Path::from_str(\"hi/there.\").with_extension_str(\"txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/.foo\").with_extension_str(\"txt\"), \"hi/.foo.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\".foo\"), \"hi/there..foo\"); t!(s: Path::from_str(\"/\").with_extension_str(\"txt\"), \"/\"); t!(s: Path::from_str(\"/\").with_extension_str(\".\"), \"/\"); t!(s: Path::from_str(\"/\").with_extension_str(\"..\"), \"/\"); t!(s: Path::from_str(\".\").with_extension_str(\"txt\"), \".\"); } #[test] fn test_setters() { macro_rules! t( (s: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::from_str(path); p1.$set(arg); let p2 = Path::from_str(path); assert_eq!(p1, p2.$with(arg)); } ); (v: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::new(path); p1.$set(arg); let p2 = Path::new(path); assert_eq!(p1, p2.$with(arg)); } ) ) t!(v: b!(\"a/b/c\"), set_dirname, with_dirname, b!(\"d\")); t!(v: b!(\"a/b/c\"), set_dirname, with_dirname, b!(\"d/e\")); t!(v: b!(\"a/\", 0x80, \"/c\"), set_dirname, with_dirname, b!(0xff)); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"d\"); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"d/e\"); t!(s: \"/\", set_dirname_str, with_dirname_str, \"foo\"); t!(s: \"/foo\", set_dirname_str, with_dirname_str, \"bar\"); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"\"); t!(s: \"../..\", set_dirname_str, with_dirname_str, \"x\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"../..\"); t!(v: b!(\"a/b/c\"), set_filename, with_filename, b!(\"d\")); t!(v: b!(\"/\"), set_filename, 
with_filename, b!(\"foo\")); t!(v: b!(0x80), set_filename, with_filename, b!(0xff)); t!(s: \"a/b/c\", set_filename_str, with_filename_str, \"d\"); t!(s: \"/\", set_filename_str, with_filename_str, \"foo\"); t!(s: \".\", set_filename_str, with_filename_str, \"foo\"); t!(s: \"a/b\", set_filename_str, with_filename_str, \"\"); t!(s: \"a\", set_filename_str, with_filename_str, \"\"); t!(v: b!(\"hi/there.txt\"), set_filestem, with_filestem, b!(\"here\")); t!(v: b!(\"hi/there\", 0x80, \".txt\"), set_filestem, with_filestem, b!(\"here\", 0xff)); t!(s: \"hi/there.txt\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there.\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there.txt\", set_filestem_str, with_filestem_str, \"\"); t!(s: \"hi/there\", set_filestem_str, with_filestem_str, \"\"); t!(v: b!(\"hi/there.txt\"), set_extension, with_extension, b!(\"exe\")); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), set_extension, with_extension, b!(\"exe\", 0xff)); t!(s: \"hi/there.txt\", set_extension_str, with_extension_str, \"exe\"); t!(s: \"hi/there.\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hi/there\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hi/there.txt\", set_extension_str, with_extension_str, \"\"); t!(s: \"hi/there\", set_extension_str, with_extension_str, \"\"); t!(s: \".\", set_extension_str, with_extension_str, \"txt\"); } #[test] fn test_getters() { macro_rules! 
t( (s: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename_str(), $filename); assert_eq!(path.dirname_str(), $dirname); assert_eq!(path.filestem_str(), $filestem); assert_eq!(path.extension_str(), $ext); } ); (v: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename(), $filename); assert_eq!(path.dirname(), $dirname); assert_eq!(path.filestem(), $filestem); assert_eq!(path.extension(), $ext); } ) ) t!(v: Path::new(b!(\"a/b/c\")), b!(\"c\"), b!(\"a/b\"), b!(\"c\"), None); t!(v: Path::new(b!(\"a/b/\", 0xff)), b!(0xff), b!(\"a/b\"), b!(0xff), None); t!(v: Path::new(b!(\"hi/there.\", 0xff)), b!(\"there.\", 0xff), b!(\"hi\"), b!(\"there\"), Some(b!(0xff))); t!(s: Path::from_str(\"a/b/c\"), Some(\"c\"), Some(\"a/b\"), Some(\"c\"), None); t!(s: Path::from_str(\".\"), Some(\"\"), Some(\".\"), Some(\"\"), None); t!(s: Path::from_str(\"/\"), Some(\"\"), Some(\"/\"), Some(\"\"), None); t!(s: Path::from_str(\"..\"), Some(\"\"), Some(\"..\"), Some(\"\"), None); t!(s: Path::from_str(\"../..\"), Some(\"\"), Some(\"../..\"), Some(\"\"), None); t!(s: Path::from_str(\"hi/there.txt\"), Some(\"there.txt\"), Some(\"hi\"), Some(\"there\"), Some(\"txt\")); t!(s: Path::from_str(\"hi/there\"), Some(\"there\"), Some(\"hi\"), Some(\"there\"), None); t!(s: Path::from_str(\"hi/there.\"), Some(\"there.\"), Some(\"hi\"), Some(\"there\"), Some(\"\")); t!(s: Path::from_str(\"hi/.there\"), Some(\".there\"), Some(\"hi\"), Some(\".there\"), None); t!(s: Path::from_str(\"hi/..there\"), Some(\"..there\"), Some(\"hi\"), Some(\".\"), Some(\"there\")); t!(s: Path::new(b!(\"a/b/\", 0xff)), None, Some(\"a/b\"), None, None); t!(s: Path::new(b!(\"a/b/\", 0xff, \".txt\")), None, Some(\"a/b\"), None, Some(\"txt\")); t!(s: Path::new(b!(\"a/b/c.\", 0x80)), None, Some(\"a/b\"), Some(\"c\"), None); t!(s: Path::new(b!(0xff, \"/b\")), Some(\"b\"), None, Some(\"b\"), None); } #[test] fn 
test_dir_file_path() { t!(v: Path::new(b!(\"hi/there\", 0x80)).dir_path(), b!(\"hi\")); t!(v: Path::new(b!(\"hi\", 0xff, \"/there\")).dir_path(), b!(\"hi\", 0xff)); t!(s: Path::from_str(\"hi/there\").dir_path(), \"hi\"); t!(s: Path::from_str(\"hi\").dir_path(), \".\"); t!(s: Path::from_str(\"/hi\").dir_path(), \"/\"); t!(s: Path::from_str(\"/\").dir_path(), \"/\"); t!(s: Path::from_str(\"..\").dir_path(), \"..\"); t!(s: Path::from_str(\"../..\").dir_path(), \"../..\"); macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = $path; let left = path.and_then_ref(|p| p.as_str()); assert_eq!(left, $exp); } ); (v: $path:expr, $exp:expr) => ( { let path = $path; let left = path.map(|p| p.as_vec()); assert_eq!(left, $exp); } ) ) t!(v: Path::new(b!(\"hi/there\", 0x80)).file_path(), Some(b!(\"there\", 0x80))); t!(v: Path::new(b!(\"hi\", 0xff, \"/there\")).file_path(), Some(b!(\"there\"))); t!(s: Path::from_str(\"hi/there\").file_path(), Some(\"there\")); t!(s: Path::from_str(\"hi\").file_path(), Some(\"hi\")); t!(s: Path::from_str(\".\").file_path(), None); t!(s: Path::from_str(\"/\").file_path(), None); t!(s: Path::from_str(\"..\").file_path(), None); t!(s: Path::from_str(\"../..\").file_path(), None); } #[test] fn test_is_absolute() { assert_eq!(Path::from_str(\"a/b/c\").is_absolute(), false); assert_eq!(Path::from_str(\"/a/b/c\").is_absolute(), true); assert_eq!(Path::from_str(\"a\").is_absolute(), false); assert_eq!(Path::from_str(\"/a\").is_absolute(), true); assert_eq!(Path::from_str(\".\").is_absolute(), false); assert_eq!(Path::from_str(\"/\").is_absolute(), true); assert_eq!(Path::from_str(\"..\").is_absolute(), false); assert_eq!(Path::from_str(\"../..\").is_absolute(), false); } #[test] fn test_is_ancestor_of() { macro_rules! 
t( (s: $path:expr, $dest:expr, $exp:expr) => ( { let path = Path::from_str($path); let dest = Path::from_str($dest); assert_eq!(path.is_ancestor_of(&dest), $exp); } ) ) t!(s: \"a/b/c\", \"a/b/c/d\", true); t!(s: \"a/b/c\", \"a/b/c\", true); t!(s: \"a/b/c\", \"a/b\", false); t!(s: \"/a/b/c\", \"/a/b/c\", true); t!(s: \"/a/b\", \"/a/b/c\", true); t!(s: \"/a/b/c/d\", \"/a/b/c\", false); t!(s: \"/a/b\", \"a/b/c\", false); t!(s: \"a/b\", \"/a/b/c\", false); t!(s: \"a/b/c\", \"a/b/d\", false); t!(s: \"../a/b/c\", \"a/b/c\", false); t!(s: \"a/b/c\", \"../a/b/c\", false); t!(s: \"a/b/c\", \"a/b/cd\", false); t!(s: \"a/b/cd\", \"a/b/c\", false); t!(s: \"../a/b\", \"../a/b/c\", true); t!(s: \".\", \"a/b\", true); t!(s: \".\", \".\", true); t!(s: \"/\", \"/\", true); t!(s: \"/\", \"/a/b\", true); t!(s: \"..\", \"a/b\", true); t!(s: \"../..\", \"a/b\", true); } #[test] fn test_path_relative_from() { macro_rules! t( (s: $path:expr, $other:expr, $exp:expr) => ( { let path = Path::from_str($path); let other = Path::from_str($other); let res = path.path_relative_from(&other); assert_eq!(res.and_then_ref(|x| x.as_str()), $exp); } ) ) t!(s: \"a/b/c\", \"a/b\", Some(\"c\")); t!(s: \"a/b/c\", \"a/b/d\", Some(\"../c\")); t!(s: \"a/b/c\", \"a/b/c/d\", Some(\"..\")); t!(s: \"a/b/c\", \"a/b/c\", Some(\".\")); t!(s: \"a/b/c\", \"a/b/c/d/e\", Some(\"../..\")); t!(s: \"a/b/c\", \"a/d/e\", Some(\"../../b/c\")); t!(s: \"a/b/c\", \"d/e/f\", Some(\"../../../a/b/c\")); t!(s: \"a/b/c\", \"/a/b/c\", None); t!(s: \"/a/b/c\", \"a/b/c\", Some(\"/a/b/c\")); t!(s: \"/a/b/c\", \"/a/b/c/d\", Some(\"..\")); t!(s: \"/a/b/c\", \"/a/b\", Some(\"c\")); t!(s: \"/a/b/c\", \"/a/b/c/d/e\", Some(\"../..\")); t!(s: \"/a/b/c\", \"/a/d/e\", Some(\"../../b/c\")); t!(s: \"/a/b/c\", \"/d/e/f\", Some(\"../../../a/b/c\")); t!(s: \"hi/there.txt\", \"hi/there\", Some(\"../there.txt\")); t!(s: \".\", \"a\", Some(\"..\")); t!(s: \".\", \"a/b\", Some(\"../..\")); t!(s: \".\", \".\", Some(\".\")); t!(s: \"a\", \".\", 
Some(\"a\")); t!(s: \"a/b\", \".\", Some(\"a/b\")); t!(s: \"..\", \".\", Some(\"..\")); t!(s: \"a/b/c\", \"a/b/c\", Some(\".\")); t!(s: \"/a/b/c\", \"/a/b/c\", Some(\".\")); t!(s: \"/\", \"/\", Some(\".\")); t!(s: \"/\", \".\", Some(\"/\")); t!(s: \"../../a\", \"b\", Some(\"../../../a\")); t!(s: \"a\", \"../../b\", None); t!(s: \"../../a\", \"../../b\", Some(\"../a\")); t!(s: \"../../a\", \"../../a/b\", Some(\"..\")); t!(s: \"../../a/b\", \"../../a\", Some(\"b\")); } #[test] fn test_component_iter() { macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = Path::from_str($path); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; let exps = exp.iter().map(|x| x.as_bytes()).to_owned_vec(); assert_eq!(comps, exps); } ); (v: [$($arg:expr),+], [$([$($exp:expr),*]),*]) => ( { let path = Path::new(b!($($arg),+)); let comps = path.component_iter().to_owned_vec(); let exp: &[&[u8]] = [$(b!($($exp),*)),*]; assert_eq!(comps.as_slice(), exp); } ) ) t!(v: [\"a/b/c\"], [[\"a\"], [\"b\"], [\"c\"]]); t!(v: [\"/\", 0xff, \"/a/\", 0x80], [[0xff], [\"a\"], [0x80]]); t!(v: [\"../../foo\", 0xcd, \"bar\"], [[\"..\"], [\"..\"], [\"foo\", 0xcd, \"bar\"]]); t!(s: \"a/b/c\", [\"a\", \"b\", \"c\"]); t!(s: \"a/b/d\", [\"a\", \"b\", \"d\"]); t!(s: \"a/b/cd\", [\"a\", \"b\", \"cd\"]); t!(s: \"/a/b/c\", [\"a\", \"b\", \"c\"]); t!(s: \"a\", [\"a\"]); t!(s: \"/a\", [\"a\"]); t!(s: \"/\", []); t!(s: \".\", [\".\"]); t!(s: \"..\", [\"..\"]); t!(s: \"../..\", [\"..\", \"..\"]); t!(s: \"../../foo\", [\"..\", \"..\", \"foo\"]); } } ", "commid": "rust_pr_9655.0"}], "negative_passages": []} {"query_id": "q-en-rust-c0a929f18bcfc09b4140d3b23a5400af201e9bb77c0ec58f078f31dbf3455980", "query": "and all interfaces directly with the file-system should probably use rather than trying to coerce to handle these cases. 
(Similar issue to .)\nnominating feature-complete\nAccepted for backwards-compatible\nMy last referenced commit is a month old, but I'm still working on this issue (currently finishing up the support for Windows paths).\nThe issues here of dealing with filesystems that are not utf8 seem related to , at least tangentially.", "positive_passages": [{"docid": "doc-en-rust-5c403273909425ec01c762dcda1727b548953d9bdc8d4d969c4ee3927be2404c", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! Windows file path handling use ascii::AsciiCast; use c_str::{CString, ToCStr}; use cast; use cmp::Eq; use from_str::FromStr; use iter::{AdditiveIterator, Extendable, Iterator}; use option::{Option, Some, None}; use str; use str::{OwnedStr, Str, StrVector}; use util; use vec::Vector; use super::{GenericPath, GenericPathUnsafe}; /// Iterator that yields successive components of a Path pub type ComponentIter<'self> = str::CharSplitIterator<'self, char>; /// Represents a Windows path // Notes for Windows path impl: // The MAX_PATH is 260, but 253 is the practical limit due to some API bugs // See http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247.aspx for good information // about windows paths. // That same page puts a bunch of restrictions on allowed characters in a path. // `\foo.txt` means \"relative to current drive\", but will not be considered to be absolute here // as `\u2203P | P.join(\"\\foo.txt\") != \"\\foo.txt\"`. // `C:` is interesting, that means \"the current directory on drive C\". // Long absolute paths need to have \\?\ prefix (or, for UNC, \\?\UNC\). I think that can be // ignored for now, though, and only added in a hypothetical .to_pwstr() function.
// However, if a path is parsed that has \\?\, this needs to be preserved as it disables the // processing of \".\" and \"..\" components and / as a separator. // Experimentally, \\?\foo is not the same thing as \foo. // Also, \\foo is not valid either (certainly not equivalent to \foo). // Similarly, C:\\Users is not equivalent to C:\Users, although C:\Users\foo is equivalent // to C:\Users\\foo. In fact the command prompt treats C:\\foo\bar as UNC path. But it might be // best to just ignore that and normalize it to C:\foo\bar. // // Based on all this, I think the right approach is to do the following: // * Require valid utf-8 paths. Windows API may use WCHARs, but we don't, and utf-8 is convertible // to UTF-16 anyway (though does Windows use UTF-16 or UCS-2? Not sure). // * Parse the prefixes \\?\UNC\, \\?\, and \\.\ explicitly. // * If \\?\UNC\, treat following two path components as server\share. Don't error for missing // server\share. // * If \\?\, parse disk from following component, if present. Don't error for missing disk. // * If \\.\, treat rest of path as just regular components. I don't know how . and .. are handled // here, they probably aren't, but I'm not going to worry about that. // * Else if starts with \\, treat following two components as server\share. Don't error for missing // server\share. // * Otherwise, attempt to parse drive from start of path. // // The only error condition imposed here is valid utf-8. All other invalid paths are simply // preserved by the data structure; let the Windows API error out on them.
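The prefix-parsing rules listed in the notes above can be sketched as a standalone function. This is an illustrative modern-Rust sketch, not the module's own parser: `PrefixKind` and `classify_prefix` are invented names, and for simplicity a verbatim disk prefix (`\\?\C:\`) is lumped under `Verbatim` rather than given its own variant as `std::path::Prefix` does today.

```rust
/// Illustrative prefix kinds, loosely mirroring the design notes above.
#[derive(Debug, PartialEq)]
enum PrefixKind {
    VerbatimUnc, // \\?\UNC\server\share
    Verbatim,    // \\?\ (a following C:\ is not split out here, for simplicity)
    DeviceNs,    // \\.\
    Unc,         // \\server\share
    Disk,        // C:
    Relative,    // no prefix at all
}

fn classify_prefix(path: &str) -> PrefixKind {
    if let Some(rest) = path.strip_prefix(r"\\?\") {
        // "If \\?\UNC\, treat following two path components as server\share."
        if rest.starts_with(r"UNC\") {
            PrefixKind::VerbatimUnc
        } else {
            PrefixKind::Verbatim
        }
    } else if path.starts_with(r"\\.\") {
        PrefixKind::DeviceNs
    } else if path.starts_with(r"\\") {
        // "Else if starts with \\, treat following two components as server\share."
        PrefixKind::Unc
    } else {
        // "Otherwise, attempt to parse drive from start of path."
        let b = path.as_bytes();
        if b.len() >= 2 && b[0].is_ascii_alphabetic() && b[1] == b':' {
            PrefixKind::Disk
        } else {
            PrefixKind::Relative
        }
    }
}

fn main() {
    assert_eq!(classify_prefix(r"\\?\UNC\server\share\x"), PrefixKind::VerbatimUnc);
    assert_eq!(classify_prefix(r"\\?\C:\Users"), PrefixKind::Verbatim);
    assert_eq!(classify_prefix(r"\\.\COM1"), PrefixKind::DeviceNs);
    assert_eq!(classify_prefix(r"\\server\share"), PrefixKind::Unc);
    assert_eq!(classify_prefix(r"C:\Users"), PrefixKind::Disk);
    assert_eq!(classify_prefix(r"foo\bar"), PrefixKind::Relative);
}
```

Note that, as in the design notes, classification never fails: a malformed path simply falls through to `Disk` or `Relative`, leaving it to the Windows API to reject.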
#[deriving(Clone, DeepClone)] pub struct Path { priv repr: ~str, // assumed to never be empty priv prefix: Option, priv sepidx: Option // index of the final separator in the non-prefix portion of repr } impl Eq for Path { #[inline] fn eq(&self, other: &Path) -> bool { self.repr == other.repr } } impl FromStr for Path { fn from_str(s: &str) -> Option { if contains_nul(s.as_bytes()) { None } else { Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) }) } } } impl ToCStr for Path { #[inline] fn to_c_str(&self) -> CString { // The Path impl guarantees no embedded NULs unsafe { self.as_vec().to_c_str_unchecked() } } #[inline] unsafe fn to_c_str_unchecked(&self) -> CString { self.as_vec().to_c_str_unchecked() } } impl GenericPathUnsafe for Path { /// See `GenericPathUnsafe::from_vec_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. #[inline] unsafe fn from_vec_unchecked(path: &[u8]) -> Path { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition GenericPathUnsafe::from_str_unchecked(path) } else { GenericPathUnsafe::from_str_unchecked(cast::transmute(path)) } } #[inline] unsafe fn from_str_unchecked(path: &str) -> Path { let (prefix, path) = Path::normalize_(path); assert!(!path.is_empty()); let mut ret = Path{ repr: path, prefix: prefix, sepidx: None }; ret.update_sepidx(); ret } /// See `GenericPathUnsafe::set_dirname_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. 
#[inline] unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]) { if !str::is_utf8(dirname) { let dirname = str::from_utf8(dirname); // triggers not_utf8 condition self.set_dirname_str_unchecked(dirname); } else { self.set_dirname_str_unchecked(cast::transmute(dirname)) } } unsafe fn set_dirname_str_unchecked(&mut self, dirname: &str) { match self.sepidx_or_prefix_len() { None if \".\" == self.repr || \"..\" == self.repr => { self.update_normalized(dirname); } None => { let mut s = str::with_capacity(dirname.len() + self.repr.len() + 1); s.push_str(dirname); s.push_char(sep); s.push_str(self.repr); self.update_normalized(s); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { self.update_normalized(dirname); } Some((_,idxa,end)) if dirname.is_empty() => { let (prefix, path) = Path::normalize_(self.repr.slice(idxa,end)); self.repr = path; self.prefix = prefix; self.update_sepidx(); } Some((idxb,idxa,end)) => { let idx = if dirname.ends_with(\"\\\") { idxa } else { let prefix = parse_prefix(dirname); if prefix == Some(DiskPrefix) && prefix_len(prefix) == dirname.len() { idxa } else { idxb } }; let mut s = str::with_capacity(dirname.len() + end - idx); s.push_str(dirname); s.push_str(self.repr.slice(idx,end)); self.update_normalized(s); } } } /// See `GenericPathUnsafe::set_filename_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8.
#[inline] unsafe fn set_filename_unchecked(&mut self, filename: &[u8]) { if !str::is_utf8(filename) { let filename = str::from_utf8(filename); // triggers not_utf8 condition self.set_filename_str_unchecked(filename) } else { self.set_filename_str_unchecked(cast::transmute(filename)) } } unsafe fn set_filename_str_unchecked(&mut self, filename: &str) { match self.sepidx_or_prefix_len() { None if \"..\" == self.repr => { let mut s = str::with_capacity(3 + filename.len()); s.push_str(\"..\"); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } None => { self.update_normalized(filename); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { let mut s = str::with_capacity(end + 1 + filename.len()); s.push_str(self.repr.slice_to(end)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } Some((idxb,idxa,_)) if self.prefix == Some(DiskPrefix) && idxa == self.prefix_len() => { let mut s = str::with_capacity(idxb + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_str(filename); self.update_normalized(s); } Some((idxb,_,_)) => { let mut s = str::with_capacity(idxb + 1 + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } } } /// See `GenericPathUnsafe::push_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. unsafe fn push_unchecked(&mut self, path: &[u8]) { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition self.push_str_unchecked(path); } else { self.push_str_unchecked(cast::transmute(path)); } } /// See `GenericPathUnsafe::push_str_unchecked`. /// /// Concatenating two Windows Paths is rather complicated. /// For the most part, it will behave as expected, except in the case of /// pushing a volume-relative path, e.g. `C:foo.txt`. Because we have no /// concept of per-volume cwds like Windows does, we can't behave exactly /// like Windows will. 
Instead, if the receiver is an absolute path on /// the same volume as the new path, it will be treated as the cwd that /// the new path is relative to. Otherwise, the new path will be treated /// as if it were absolute and will replace the receiver outright. unsafe fn push_str_unchecked(&mut self, path: &str) { fn is_vol_abs(path: &str, prefix: Option) -> bool { // assume prefix is Some(DiskPrefix) let rest = path.slice_from(prefix_len(prefix)); !rest.is_empty() && rest[0].is_ascii() && is_sep2(rest[0] as char) } fn shares_volume(me: &Path, path: &str) -> bool { // path is assumed to have a prefix of Some(DiskPrefix) match me.prefix { Some(DiskPrefix) => me.repr[0] == path[0].to_ascii().to_upper().to_byte(), Some(VerbatimDiskPrefix) => me.repr[4] == path[0].to_ascii().to_upper().to_byte(), _ => false } } fn is_sep_(prefix: Option, u: u8) -> bool { u.is_ascii() && if prefix_is_verbatim(prefix) { is_sep(u as char) } else { is_sep2(u as char) } } fn replace_path(me: &mut Path, path: &str, prefix: Option) { let newpath = Path::normalize__(path, prefix); me.repr = match newpath { Some(p) => p, None => path.to_owned() }; me.prefix = prefix; me.update_sepidx(); } fn append_path(me: &mut Path, path: &str) { // appends a path that has no prefix // if me is verbatim, we need to pre-normalize the new path let path_ = if me.is_verbatim() { Path::normalize__(path, None) } else { None }; let pathlen = path_.map_default(path.len(), |p| p.len()); let mut s = str::with_capacity(me.repr.len() + 1 + pathlen); s.push_str(me.repr); let plen = me.prefix_len(); if !(me.repr.len() > plen && me.repr[me.repr.len()-1] == sep as u8) { s.push_char(sep); } match path_ { None => s.push_str(path), Some(p) => s.push_str(p) }; me.update_normalized(s) } if !path.is_empty() { let prefix = parse_prefix(path); match prefix { Some(DiskPrefix) if !is_vol_abs(path, prefix) && shares_volume(self, path) => { // cwd-relative path, self is on the same volume append_path(self, 
                            path.slice_from(prefix_len(prefix)));
            }
            Some(_) => {
                // absolute path, or cwd-relative and self is not same volume
                replace_path(self, path, prefix);
            }
            None if !path.is_empty() && is_sep_(self.prefix, path[0]) => {
                // volume-relative path
                if self.prefix().is_some() {
                    // truncate self down to the prefix, then append
                    let n = self.prefix_len();
                    self.repr.truncate(n);
                    append_path(self, path);
                } else {
                    // we have no prefix, so nothing to be relative to
                    replace_path(self, path, prefix);
                }
            }
            None => {
                // relative path
                append_path(self, path);
            }
        }
    }
}
}
impl GenericPath for Path {
    /// See `GenericPath::as_str` for info.
    /// Always returns a `Some` value.
    #[inline]
    fn as_str<'a>(&'a self) -> Option<&'a str> { Some(self.repr.as_slice()) }
    #[inline]
    fn as_vec<'a>(&'a self) -> &'a [u8] { self.repr.as_bytes() }
    #[inline]
    fn dirname<'a>(&'a self) -> &'a [u8] { self.dirname_str().unwrap().as_bytes() }
    /// See `GenericPath::dirname_str` for info.
    /// Always returns a `Some` value.
    fn dirname_str<'a>(&'a self) -> Option<&'a str> {
        Some(match self.sepidx_or_prefix_len() {
            None if \"..\" == self.repr => self.repr.as_slice(),
            None => \".\",
            Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => {
                self.repr.as_slice()
            }
            Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\\\" => {
                self.repr.as_slice()
            }
            Some((0,idxa,_)) => self.repr.slice_to(idxa),
            Some((idxb,idxa,_)) => {
                match self.prefix {
                    Some(DiskPrefix) | Some(VerbatimDiskPrefix) if idxb == self.prefix_len() => {
                        self.repr.slice_to(idxa)
                    }
                    _ => self.repr.slice_to(idxb)
                }
            }
        })
    }
    #[inline]
    fn filename<'a>(&'a self) -> &'a [u8] { self.filename_str().unwrap().as_bytes() }
    /// See `GenericPath::filename_str` for info.
    /// Always returns a `Some` value.
    fn filename_str<'a>(&'a self) -> Option<&'a str> {
        Some(match self.sepidx_or_prefix_len() {
            None if \".\" == self.repr || \"..\" == self.repr => \"\",
            None => self.repr.as_slice(),
            Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => \"\",
            Some((_,idxa,end)) => self.repr.slice(idxa, end)
        })
    }
    /// See `GenericPath::filestem_str` for info.
    /// Always returns a `Some` value.
    #[inline]
    fn filestem_str<'a>(&'a self) -> Option<&'a str> {
        // filestem() returns a byte vector that's guaranteed valid UTF-8
        Some(unsafe { cast::transmute(self.filestem()) })
    }
    #[inline]
    fn extension_str<'a>(&'a self) -> Option<&'a str> {
        // extension() returns a byte vector that's guaranteed valid UTF-8
        self.extension().map_move(|v| unsafe { cast::transmute(v) })
    }
    fn dir_path(&self) -> Path {
        unsafe { GenericPathUnsafe::from_str_unchecked(self.dirname_str().unwrap()) }
    }
    fn file_path(&self) -> Option<Path> {
        match self.filename_str() {
            None | Some(\"\") => None,
            Some(s) => Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) })
        }
    }
    #[inline]
    fn push_path(&mut self, path: &Path) {
        self.push_str(path.as_str().unwrap())
    }
    #[inline]
    fn pop_opt(&mut self) -> Option<~[u8]> {
        self.pop_opt_str().map_move(|s| s.into_bytes())
    }
    fn pop_opt_str(&mut self) -> Option<~str> {
        match self.sepidx_or_prefix_len() {
            None if \".\" == self.repr => None,
            None => {
                let mut s = ~\".\";
                util::swap(&mut s, &mut self.repr);
                self.sepidx = None;
                Some(s)
            }
            Some((idxb,idxa,end)) if idxb == idxa && idxb == end => None,
            Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\\\" => None,
            Some((idxb,idxa,end)) => {
                let s = self.repr.slice(idxa, end).to_owned();
                let trunc = match self.prefix {
                    Some(DiskPrefix) | Some(VerbatimDiskPrefix) | None => {
                        let plen = self.prefix_len();
                        if idxb == plen { idxa } else { idxb }
                    }
                    _ => idxb
                };
                self.repr.truncate(trunc);
                self.update_sepidx();
                Some(s)
            }
        }
    }
    /// See `GenericPath::is_absolute` for info.
    ///
    /// A Windows Path is considered absolute only if it has a non-volume prefix,
    /// or if it has a volume prefix and the path starts with '\'.
    /// A path of `\foo` is not considered absolute because it's actually
    /// relative to the \"current volume\". A separate method `Path::is_vol_relative`
    /// is provided to indicate this case. Similarly a path of `C:foo` is not
    /// considered absolute because it's relative to the cwd on volume C:. A
    /// separate method `Path::is_cwd_relative` is provided to indicate this case.
    #[inline]
    fn is_absolute(&self) -> bool {
        match self.prefix {
            Some(DiskPrefix) => {
                let rest = self.repr.slice_from(self.prefix_len());
                rest.len() > 0 && rest[0] == sep as u8
            }
            Some(_) => true,
            None => false
        }
    }
    fn is_ancestor_of(&self, other: &Path) -> bool {
        if !self.equiv_prefix(other) {
            false
        } else if self.is_absolute() != other.is_absolute() ||
                  self.is_vol_relative() != other.is_vol_relative() {
            false
        } else {
            let mut ita = self.component_iter();
            let mut itb = other.component_iter();
            if \".\" == self.repr {
                return itb.next() != Some(\"..\");
            }
            loop {
                match (ita.next(), itb.next()) {
                    (None, _) => break,
                    (Some(a), Some(b)) if a == b => { loop },
                    (Some(a), _) if a == \"..\" => {
                        // if ita contains only .. components, it's an ancestor
                        return ita.all(|x| x == \"..\");
                    }
                    _ => return false
                }
            }
            true
        }
    }
    fn path_relative_from(&self, base: &Path) -> Option<Path> {
        fn comp_requires_verbatim(s: &str) -> bool {
            s == \".\" || s == \"..\" || s.contains_char(sep2)
        }
        if !self.equiv_prefix(base) {
            // prefixes differ
            if self.is_absolute() {
                Some(self.clone())
            } else if self.prefix == Some(DiskPrefix) && base.prefix == Some(DiskPrefix) {
                // both drives, drive letters must differ or they'd be equiv
                Some(self.clone())
            } else {
                None
            }
        } else if self.is_absolute() != base.is_absolute() {
            if self.is_absolute() { Some(self.clone()) } else { None }
        } else if self.is_vol_relative() != base.is_vol_relative() {
            if self.is_vol_relative() { Some(self.clone()) } else { None }
        } else {
            let mut ita = self.component_iter();
            let mut itb = base.component_iter();
            let mut comps = ~[];
            let a_verb = self.is_verbatim();
            let b_verb = base.is_verbatim();
            loop {
                match (ita.next(), itb.next()) {
                    (None, None) => break,
                    (Some(a), None) if a_verb && comp_requires_verbatim(a) => {
                        return Some(self.clone())
                    }
                    (Some(a), None) => {
                        comps.push(a);
                        if !a_verb {
                            comps.extend(&mut ita);
                            break;
                        }
                    }
                    (None, _) => comps.push(\"..\"),
                    (Some(a), Some(b)) if comps.is_empty() && a == b => (),
                    (Some(a), Some(b)) if !b_verb && b == \".\" => {
                        if a_verb && comp_requires_verbatim(a) {
                            return Some(self.clone())
                        } else { comps.push(a) }
                    }
                    (Some(_), Some(b)) if !b_verb && b == \"..\" => return None,
                    (Some(a), Some(_)) if a_verb && comp_requires_verbatim(a) => {
                        return Some(self.clone())
                    }
                    (Some(a), Some(_)) => {
                        comps.push(\"..\");
                        for _ in itb {
                            comps.push(\"..\");
                        }
                        comps.push(a);
                        if !a_verb {
                            comps.extend(&mut ita);
                            break;
                        }
                    }
                }
            }
            Some(Path::from_str(comps.connect(\"\\\")))
        }
    }
}
impl Path {
    /// Returns a new Path from a byte vector
    ///
    /// # Failure
    ///
    /// Raises the `null_byte` condition if the vector contains a NUL.
    /// Raises the `str::not_utf8` condition if invalid UTF-8.
    #[inline]
    pub fn new(v: &[u8]) -> Path { GenericPath::from_vec(v) }
    /// Returns a new Path from a string
    ///
    /// # Failure
    ///
    /// Raises the `null_byte` condition if the vector contains a NUL.
    #[inline]
    pub fn from_str(s: &str) -> Path { GenericPath::from_str(s) }
    /// Converts the Path into an owned byte vector
    pub fn into_vec(self) -> ~[u8] { self.repr.into_bytes() }
    /// Converts the Path into an owned string
    /// Returns an Option for compatibility with posix::Path, but the
    /// return value will always be Some.
    pub fn into_str(self) -> Option<~str> { Some(self.repr) }
    /// Returns a normalized string representation of a path, by removing all empty
    /// components, and unnecessary . and .. components.
    pub fn normalize<S: Str>(s: S) -> ~str {
        let (_, path) = Path::normalize_(s);
        path
    }
    /// Returns an iterator that yields each component of the path in turn.
    /// Does not yield the path prefix (including server/share components in UNC paths).
    /// Does not distinguish between volume-relative and relative paths, e.g.
    /// \a\b\c and a\b\c.
    /// Does not distinguish between absolute and cwd-relative paths, e.g.
    /// C:\foo and C:foo.
    pub fn component_iter<'a>(&'a self) -> ComponentIter<'a> {
        let s = match self.prefix {
            Some(_) => {
                let plen = self.prefix_len();
                if self.repr.len() > plen && self.repr[plen] == sep as u8 {
                    self.repr.slice_from(plen+1)
                } else {
                    self.repr.slice_from(plen)
                }
            }
            None if self.repr[0] == sep as u8 => self.repr.slice_from(1),
            None => self.repr.as_slice()
        };
        let ret = s.split_terminator_iter(sep);
        ret
    }
    /// Returns whether the path is considered \"volume-relative\", which means a path
    /// that looks like \"\foo\". Paths of this form are relative to the current volume,
    /// but absolute within that volume.
    #[inline]
    pub fn is_vol_relative(&self) -> bool {
        self.prefix.is_none() && self.repr[0] == sep as u8
    }
    /// Returns whether the path is considered \"cwd-relative\", which means a path
    /// with a volume prefix that is not absolute. These look like \"C:foo.txt\".
    /// Paths of this form are relative to the cwd on the given volume.
    #[inline]
    pub fn is_cwd_relative(&self) -> bool {
        self.prefix == Some(DiskPrefix) && !self.is_absolute()
    }
    /// Returns the PathPrefix for this Path
    #[inline]
    pub fn prefix(&self) -> Option<PathPrefix> { self.prefix }
    /// Returns whether the prefix is a verbatim prefix, i.e. \\?\
    #[inline]
    pub fn is_verbatim(&self) -> bool {
        prefix_is_verbatim(self.prefix)
    }
    fn equiv_prefix(&self, other: &Path) -> bool {
        match (self.prefix, other.prefix) {
            (Some(DiskPrefix), Some(VerbatimDiskPrefix)) => {
                self.is_absolute() &&
                    self.repr[0].to_ascii().eq_ignore_case(other.repr[4].to_ascii())
            }
            (Some(VerbatimDiskPrefix), Some(DiskPrefix)) => {
                other.is_absolute() &&
                    self.repr[4].to_ascii().eq_ignore_case(other.repr[0].to_ascii())
            }
            (Some(VerbatimDiskPrefix), Some(VerbatimDiskPrefix)) => {
                self.repr[4].to_ascii().eq_ignore_case(other.repr[4].to_ascii())
            }
            (Some(UNCPrefix(_,_)), Some(VerbatimUNCPrefix(_,_))) => {
                self.repr.slice(2, self.prefix_len()) ==
                    other.repr.slice(8, other.prefix_len())
            }
            (Some(VerbatimUNCPrefix(_,_)), Some(UNCPrefix(_,_))) => {
                self.repr.slice(8, self.prefix_len()) ==
                    other.repr.slice(2, other.prefix_len())
            }
            (None, None) => true,
            (a, b) if a == b => {
                self.repr.slice_to(self.prefix_len()) ==
                    other.repr.slice_to(other.prefix_len())
            }
            _ => false
        }
    }
    fn normalize_<S: Str>(s: S) -> (Option<PathPrefix>, ~str) {
        // make borrowck happy
        let (prefix, val) = {
            let prefix = parse_prefix(s.as_slice());
            let path = Path::normalize__(s.as_slice(), prefix);
            (prefix, path)
        };
        (prefix, match val { None => s.into_owned(), Some(val) => val })
    }
    fn normalize__(s: &str, prefix: Option<PathPrefix>) -> Option<~str> {
        if prefix_is_verbatim(prefix) {
            // don't do any normalization
            match prefix {
                Some(VerbatimUNCPrefix(x, 0)) if s.len() == 8 + x => {
                    // the server component has no trailing '\'
                    let mut s = s.into_owned();
                    s.push_char(sep);
                    Some(s)
                }
                _ => None
            }
        } else {
            let (is_abs, comps) = normalize_helper(s, prefix);
            let mut comps = comps;
            match
            (comps.is_some(), prefix) {
                (false, Some(DiskPrefix)) => {
                    if s[0] >= 'a' as u8 && s[0] <= 'z' as u8 {
                        comps = Some(~[]);
                    }
                }
                (false, Some(VerbatimDiskPrefix)) => {
                    if s[4] >= 'a' as u8 && s[4] <= 'z' as u8 {
                        comps = Some(~[]);
                    }
                }
                _ => ()
            }
            match comps {
                None => None,
                Some(comps) => {
                    if prefix.is_some() && comps.is_empty() {
                        match prefix.unwrap() {
                            DiskPrefix => {
                                let len = prefix_len(prefix) + is_abs as uint;
                                let mut s = s.slice_to(len).to_owned();
                                s[0] = s[0].to_ascii().to_upper().to_byte();
                                if is_abs {
                                    s[2] = sep as u8; // normalize C:/ to C:\
                                }
                                Some(s)
                            }
                            VerbatimDiskPrefix => {
                                let len = prefix_len(prefix) + is_abs as uint;
                                let mut s = s.slice_to(len).to_owned();
                                s[4] = s[4].to_ascii().to_upper().to_byte();
                                Some(s)
                            }
                            _ => {
                                let plen = prefix_len(prefix);
                                if s.len() > plen {
                                    Some(s.slice_to(plen).to_owned())
                                } else { None }
                            }
                        }
                    } else if is_abs && comps.is_empty() {
                        Some(str::from_char(sep))
                    } else {
                        let prefix_ = s.slice_to(prefix_len(prefix));
                        let n = prefix_.len() +
                                if is_abs { comps.len() } else { comps.len() - 1 } +
                                comps.iter().map(|v| v.len()).sum();
                        let mut s = str::with_capacity(n);
                        match prefix {
                            Some(DiskPrefix) => {
                                s.push_char(prefix_[0].to_ascii().to_upper().to_char());
                                s.push_char(':');
                            }
                            Some(VerbatimDiskPrefix) => {
                                s.push_str(prefix_.slice_to(4));
                                s.push_char(prefix_[4].to_ascii().to_upper().to_char());
                                s.push_str(prefix_.slice_from(5));
                            }
                            Some(UNCPrefix(a,b)) => {
                                s.push_str(\"\\\\\");
                                s.push_str(prefix_.slice(2, a+2));
                                s.push_char(sep);
                                s.push_str(prefix_.slice(3+a, 3+a+b));
                            }
                            Some(_) => s.push_str(prefix_),
                            None => ()
                        }
                        let mut it = comps.move_iter();
                        if !is_abs {
                            match it.next() {
                                None => (),
                                Some(comp) => s.push_str(comp)
                            }
                        }
                        for comp in it {
                            s.push_char(sep);
                            s.push_str(comp);
                        }
                        Some(s)
                    }
                }
            }
        }
    }
    fn update_sepidx(&mut self) {
        let s = if self.has_nonsemantic_trailing_slash() {
            self.repr.slice_to(self.repr.len()-1)
        } else { self.repr.as_slice() };
        let idx = s.rfind(if !prefix_is_verbatim(self.prefix) { is_sep2 }
                          else { is_sep });
        let prefixlen =
                        self.prefix_len();
        self.sepidx = idx.and_then(|x| if x < prefixlen { None } else { Some(x) });
    }
    fn prefix_len(&self) -> uint { prefix_len(self.prefix) }
    // Returns a tuple (before, after, end) where before is the index of the separator
    // and after is the index just after the separator.
    // end is the length of the string, normally, or the index of the final character if it is
    // a non-semantic trailing separator in a verbatim string.
    // If the prefix is considered the separator, before and after are the same.
    fn sepidx_or_prefix_len(&self) -> Option<(uint,uint,uint)> {
        match self.sepidx {
            None => match self.prefix_len() { 0 => None, x => Some((x,x,self.repr.len())) },
            Some(x) => {
                if self.has_nonsemantic_trailing_slash() {
                    Some((x,x+1,self.repr.len()-1))
                } else {
                    Some((x,x+1,self.repr.len()))
                }
            }
        }
    }
    fn has_nonsemantic_trailing_slash(&self) -> bool {
        self.is_verbatim() && self.repr.len() > self.prefix_len()+1 &&
            self.repr[self.repr.len()-1] == sep as u8
    }
    fn update_normalized<S: Str>(&mut self, s: S) {
        let (prefix, path) = Path::normalize_(s);
        self.repr = path;
        self.prefix = prefix;
        self.update_sepidx();
    }
}
/// The standard path separator character
pub static sep: char = '\\';
/// The alternative path separator character
pub static sep2: char = '/';
/// Returns whether the given byte is a path separator.
/// Only allows the primary separator '\'; use is_sep2 to allow '/'.
#[inline]
pub fn is_sep(c: char) -> bool {
    c == sep
}
/// Returns whether the given byte is a path separator.
/// Allows both the primary separator '\' and the alternative separator '/'.
#[inline]
pub fn is_sep2(c: char) -> bool {
    c == sep || c == sep2
}
/// Prefix types for Path
#[deriving(Eq, Clone, DeepClone)]
pub enum PathPrefix {
    /// Prefix `\\?\`, uint is the length of the following component
    VerbatimPrefix(uint),
    /// Prefix `\\?\UNC\`, uints are the lengths of the UNC components
    VerbatimUNCPrefix(uint, uint),
    /// Prefix `\\?\C:\` (for any alphabetic character)
    VerbatimDiskPrefix,
    /// Prefix `\\.\`, uint is the length of the following component
    DeviceNSPrefix(uint),
    /// UNC prefix `\\server\share`, uints are the lengths of the server/share
    UNCPrefix(uint, uint),
    /// Prefix `C:` for any alphabetic character
    DiskPrefix
}
/// Internal function; only public for tests. Don't use.
// FIXME (#8169): Make private once visibility is fixed
pub fn parse_prefix<'a>(mut path: &'a str) -> Option<PathPrefix> {
    if path.starts_with(\"\\\\\") {
        // \\
        path = path.slice_from(2);
        if path.starts_with(\"?\\\") {
            // \\?\
            path = path.slice_from(2);
            if path.starts_with(\"UNC\\\") {
                // \\?\UNC\server\share
                path = path.slice_from(4);
                let (idx_a, idx_b) = match parse_two_comps(path, is_sep) {
                    Some(x) => x,
                    None => (path.len(), 0)
                };
                return Some(VerbatimUNCPrefix(idx_a, idx_b));
            } else {
                // \\?\path
                let idx = path.find('\\');
                if idx == Some(2) && path[1] == ':' as u8 {
                    let c = path[0];
                    if c.is_ascii() && ::char::is_alphabetic(c as char) {
                        // \\?\C:\ path
                        return Some(VerbatimDiskPrefix);
                    }
                }
                let idx = idx.unwrap_or(path.len());
                return Some(VerbatimPrefix(idx));
            }
        } else if path.starts_with(\".\\\") {
            // \\.\path
            path = path.slice_from(2);
            let idx = path.find('\\').unwrap_or(path.len());
            return Some(DeviceNSPrefix(idx));
        }
        match parse_two_comps(path, is_sep2) {
            Some((idx_a, idx_b)) if idx_a > 0 && idx_b > 0 => {
                // \\server\share
                return Some(UNCPrefix(idx_a, idx_b));
            }
            _ => ()
        }
    } else if path.len() > 1 && path[1] == ':' as u8 {
        // C:
        let c = path[0];
        if c.is_ascii() && ::char::is_alphabetic(c as char) {
            return Some(DiskPrefix);
        }
    }
    return None;

    fn parse_two_comps<'a>(mut path: &'a str, f: &fn(char)->bool) -> Option<(uint, uint)> {
        let
            idx_a = match path.find(|x| f(x)) {
            None => return None,
            Some(x) => x
        };
        path = path.slice_from(idx_a+1);
        let idx_b = path.find(f).unwrap_or(path.len());
        Some((idx_a, idx_b))
    }
}
// None result means the string didn't need normalizing
fn normalize_helper<'a>(s: &'a str, prefix: Option<PathPrefix>) -> (bool,Option<~[&'a str]>) {
    let f = if !prefix_is_verbatim(prefix) { is_sep2 } else { is_sep };
    let is_abs = s.len() > prefix_len(prefix) && f(s.char_at(prefix_len(prefix)));
    let s_ = s.slice_from(prefix_len(prefix));
    let s_ = if is_abs { s_.slice_from(1) } else { s_ };

    if is_abs && s_.is_empty() {
        return (is_abs, match prefix {
            Some(DiskPrefix) | None => (if is_sep(s.char_at(prefix_len(prefix))) { None }
                                        else { Some(~[]) }),
            Some(_) => Some(~[]), // need to trim the trailing separator
        });
    }

    let mut comps: ~[&'a str] = ~[];
    let mut n_up = 0u;
    let mut changed = false;
    for comp in s_.split_iter(f) {
        if comp.is_empty() { changed = true }
        else if comp == \".\" { changed = true }
        else if comp == \"..\" {
            let has_abs_prefix = match prefix {
                Some(DiskPrefix) => false,
                Some(_) => true,
                None => false
            };
            if (is_abs || has_abs_prefix) && comps.is_empty() { changed = true }
            else if comps.len() == n_up { comps.push(\"..\"); n_up += 1 }
            else { comps.pop_opt(); changed = true }
        } else { comps.push(comp) }
    }
    if !changed && !prefix_is_verbatim(prefix) {
        changed = s.find(is_sep2).is_some();
    }
    if changed {
        if comps.is_empty() && !is_abs && prefix.is_none() {
            if s == \".\" {
                return (is_abs, None);
            }
            comps.push(\".\");
        }
        (is_abs, Some(comps))
    } else {
        (is_abs, None)
    }
}
// FIXME (#8169): Pull this into parent module once visibility works
#[inline(always)]
fn contains_nul(v: &[u8]) -> bool {
    v.iter().any(|&x| x == 0)
}
fn prefix_is_verbatim(p: Option<PathPrefix>) -> bool {
    match p {
        Some(VerbatimPrefix(_)) | Some(VerbatimUNCPrefix(_,_)) | Some(VerbatimDiskPrefix) => true,
        Some(DeviceNSPrefix(_)) => true, // not really sure, but I think so
        _ => false
    }
}
fn prefix_len(p: Option<PathPrefix>) -> uint {
    match p {
        None => 0,
        Some(VerbatimPrefix(x)) => 4 + x,
        Some(VerbatimUNCPrefix(x,y)) => 8 + x + 1 + y,
        Some(VerbatimDiskPrefix) => 6,
        Some(UNCPrefix(x,y)) => 2 + x + 1 + y,
        Some(DeviceNSPrefix(x)) => 4 + x,
        Some(DiskPrefix) => 2
    }
}
fn prefix_is_sep(p: Option<PathPrefix>, c: u8) -> bool {
    c.is_ascii() &&
        if !prefix_is_verbatim(p) { is_sep2(c as char) } else { is_sep(c as char) }
}

#[cfg(test)]
mod tests {
    use super::*;
    use option::{Some,None};
    use iter::Iterator;
    use vec::Vector;

    macro_rules! t(
        (s: $path:expr, $exp:expr) => (
            {
                let path = $path;
                assert_eq!(path.as_str(), Some($exp));
            }
        );
        (v: $path:expr, $exp:expr) => (
            {
                let path = $path;
                assert_eq!(path.as_vec(), $exp);
            }
        )
    )

    macro_rules! b(
        ($($arg:expr),+) => (
            bytes!($($arg),+)
        )
    )

    #[test]
    fn test_parse_prefix() {
        macro_rules! t(
            ($path:expr, $exp:expr) => (
                {
                    let path = $path;
                    let exp = $exp;
                    let res = parse_prefix(path);
                    assert!(res == exp,
                            \"parse_prefix(\\\"%s\\\"): expected %?, found %?\", path, exp, res);
                }
            )
        )

        t!(\"\\\\SERVER\\share\\foo\", Some(UNCPrefix(6,5)));
        t!(\"\\\\\", None);
        t!(\"\\\\SERVER\", None);
        t!(\"\\\\SERVER\\\", None);
        t!(\"\\\\SERVER\\\\\", None);
        t!(\"\\\\SERVER\\\\foo\", None);
        t!(\"\\\\SERVER\\share\", Some(UNCPrefix(6,5)));
        t!(\"\\\\SERVER/share/foo\", Some(UNCPrefix(6,5)));
        t!(\"\\\\SERVER\\share/foo\", Some(UNCPrefix(6,5)));
        t!(\"//SERVER/share/foo\", None);
        t!(\"\\a\\b\\c\", None);
        t!(\"\\\\?\\a\\b\\c\", Some(VerbatimPrefix(1)));
        t!(\"\\\\?\\a/b/c\", Some(VerbatimPrefix(5)));
        t!(\"//?/a/b/c\", None);
        t!(\"\\\\.\\a\\b\", Some(DeviceNSPrefix(1)));
        t!(\"\\\\.\\a/b\", Some(DeviceNSPrefix(3)));
        t!(\"//./a/b\", None);
        t!(\"\\\\?\\UNC\\server\\share\\foo\", Some(VerbatimUNCPrefix(6,5)));
        t!(\"\\\\?\\UNC\\\\share\\foo\", Some(VerbatimUNCPrefix(0,5)));
        t!(\"\\\\?\\UNC\\\", Some(VerbatimUNCPrefix(0,0)));
        t!(\"\\\\?\\UNC\\server/share/foo\", Some(VerbatimUNCPrefix(16,0)));
        t!(\"\\\\?\\UNC\\server\", Some(VerbatimUNCPrefix(6,0)));
        t!(\"\\\\?\\UNC\\server\\\", Some(VerbatimUNCPrefix(6,0)));
        t!(\"\\\\?\\UNC/server/share\", Some(VerbatimPrefix(16)));
        t!(\"\\\\?\\UNC\", Some(VerbatimPrefix(3)));
        t!(\"\\\\?\\C:\\a\\b.txt\", Some(VerbatimDiskPrefix));
        t!(\"\\\\?\\z:\\\", Some(VerbatimDiskPrefix));
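As an aside for readers of these tests: the prefix grammar that `parse_prefix` recognizes can be sketched as a standalone classifier in today's Rust. The `Prefix` enum and `classify` function below are hypothetical names for illustration only, not part of this module, and they deliberately collapse the length bookkeeping that the real parser does:

```rust
// Hypothetical sketch (not part of this module): classify a Windows path's
// prefix the same way `parse_prefix` distinguishes its cases.
#[derive(Debug, PartialEq)]
enum Prefix {
    VerbatimUnc,  // \\?\UNC\server\share
    VerbatimDisk, // \\?\C:\
    Verbatim,     // \\?\ followed by anything else
    DeviceNs,     // \\.\device
    Unc,          // \\server\share
    Disk,         // C:
}

fn classify(path: &str) -> Option<Prefix> {
    if let Some(rest) = path.strip_prefix(r"\\?\") {
        if rest.starts_with(r"UNC\") {
            Some(Prefix::VerbatimUnc)
        } else if rest.len() >= 2
            && rest.as_bytes()[0].is_ascii_alphabetic()
            && rest.as_bytes()[1] == b':'
            && rest.as_bytes().get(2) == Some(&b'\\')
        {
            Some(Prefix::VerbatimDisk)
        } else {
            Some(Prefix::Verbatim)
        }
    } else if path.starts_with(r"\\.\") {
        Some(Prefix::DeviceNs)
    } else if path.starts_with(r"\\") {
        Some(Prefix::Unc)
    } else if path.len() >= 2
        && path.as_bytes()[0].is_ascii_alphabetic()
        && path.as_bytes()[1] == b':'
    {
        Some(Prefix::Disk)
    } else {
        None
    }
}

fn main() {
    assert_eq!(classify(r"\\?\UNC\server\share"), Some(Prefix::VerbatimUnc));
    assert_eq!(classify(r"\\?\C:\a.txt"), Some(Prefix::VerbatimDisk));
    assert_eq!(classify(r"\\.\COM1"), Some(Prefix::DeviceNs));
    assert_eq!(classify(r"\\server\share"), Some(Prefix::Unc));
    assert_eq!(classify("C:foo"), Some(Prefix::Disk));
    assert_eq!(classify("a/b"), None);
    println!("ok");
}
```

Note the branch order matters: `\\?\` and `\\.\` must be tested before the plain `\\` UNC case, mirroring the nesting in `parse_prefix` above.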
t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?C:a.txt\", Some(VerbatimPrefix(7))); t!(\"?C:ab.txt\", Some(VerbatimPrefix(3))); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"C:foo\", Some(DiskPrefix)); t!(\"z:/foo\", Some(DiskPrefix)); t!(\"d:\", Some(DiskPrefix)); t!(\"ab:\", None); t!(\"\u00fc:foo\", None); t!(\"3:foo\", None); t!(\" :foo\", None); t!(\"::foo\", None); t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?z:\", Some(VerbatimDiskPrefix)); t!(\"?ab:\", Some(VerbatimPrefix(3))); t!(\"?C:a\", Some(VerbatimDiskPrefix)); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"?C:a/b\", Some(VerbatimDiskPrefix)); } #[test] fn test_paths() { t!(v: Path::new([]), b!(\".\")); t!(v: Path::new(b!(\"\")), b!(\"\")); t!(v: Path::new(b!(\"abc\")), b!(\"abc\")); t!(s: Path::from_str(\"\"), \".\"); t!(s: Path::from_str(\"\"), \"\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hithere.txt\"), \"hithere.txt\"); t!(s: Path::from_str(\"/\"), \"\"); t!(s: Path::from_str(\"hi/\"), \"hi\"); t!(s: Path::from_str(\"/lib\"), \"lib\"); t!(s: Path::from_str(\"/lib/\"), \"lib\"); t!(s: Path::from_str(\"hi/there\"), \"hithere\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hi..there\"), \"there\"); t!(s: Path::from_str(\"hi/../there\"), \"there\"); t!(s: Path::from_str(\"..hithere\"), \"..hithere\"); t!(s: Path::from_str(\"..hithere\"), \"hithere\"); t!(s: Path::from_str(\"/../hi/there\"), \"hithere\"); t!(s: Path::from_str(\"foo..\"), \".\"); t!(s: Path::from_str(\"foo..\"), \"\"); t!(s: Path::from_str(\"foo....\"), \"\"); t!(s: Path::from_str(\"foo....bar\"), \"bar\"); t!(s: Path::from_str(\".hi.there.\"), \"hithere\"); t!(s: Path::from_str(\".hi.there...\"), \"hi\"); t!(s: Path::from_str(\"foo....\"), \"..\"); t!(s: Path::from_str(\"foo......\"), \"....\"); t!(s: 
Path::from_str(\"foo....bar\"), \"..bar\"); assert_eq!(Path::new(b!(\"foobar\")).into_vec(), b!(\"foobar\").to_owned()); assert_eq!(Path::new(b!(\"foo....bar\")).into_vec(), b!(\"bar\").to_owned()); assert_eq!(Path::from_str(\"foobar\").into_str(), Some(~\"foobar\")); assert_eq!(Path::from_str(\"foo....bar\").into_str(), Some(~\"bar\")); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab/\"), \"ab\"); t!(s: Path::from_str(\"b\"), \"b\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"abc\"), \"abc\"); t!(s: Path::from_str(\"servershare/path\"), \"serversharepath\"); t!(s: Path::from_str(\"server/share/path\"), \"serversharepath\"); t!(s: Path::from_str(\"C:ab.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"C:a/b.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"z:ab.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"z:/a/b.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"ab:/a/b.txt\"), \"ab:ab.txt\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"q:\"), \"Q:\"); t!(s: Path::from_str(\"C:/\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"?z:ab.txt\"), \"?z:ab.txt\"); t!(s: Path::from_str(\"?C:/a/b.txt\"), \"?C:/a/b.txt\"); t!(s: Path::from_str(\"?C:a/b.txt\"), \"?C:a/b.txt\"); t!(s: Path::from_str(\"?testab.txt\"), \"?testab.txt\"); t!(s: Path::from_str(\"?foobar\"), \"?foobar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); t!(s: Path::from_str(\".\"), \".\"); t!(s: 
Path::from_str(\"?UNCserversharefoo\"), \"?UNCserversharefoo\"); t!(s: Path::from_str(\"?UNCserver/share\"), \"?UNCserver/share\"); t!(s: Path::from_str(\"?UNCserver\"), \"?UNCserver\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); // I'm not sure whether .foo/bar should normalize to .foobar // as information is sparse and this isn't really googleable. // I'm going to err on the side of not normalizing it, as this skips the filesystem t!(s: Path::from_str(\".foo/bar\"), \".foo/bar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); } #[test] fn test_null_byte() { use path2::null_byte::cond; let mut handled = false; let mut p = do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"foobar\", 0)); (b!(\"bar\").to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"bar\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.set_filename(b!(\"f\", 0, \"o\")) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"null\", 0, \"byte\")); (b!(\"nullbyte\").to_owned()) }).inside { p.set_dirname(b!(\"null\", 0, \"byte\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.push(b!(\"f\", 0, \"o\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoofoo\")); } #[test] fn test_null_byte_fail() { use path2::null_byte::cond; use task; macro_rules! 
t( ($name:expr => $code:block) => ( { let mut t = task::task(); t.supervised(); t.name($name); let res = do t.try $code; assert!(res.is_err()); } ) ) t!(~\"new() wnul\" => { do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; }) t!(~\"set_filename wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_filename(b!(\"foo\", 0)) }; }) t!(~\"set_dirname wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_dirname(b!(\"foo\", 0)) }; }) t!(~\"push wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.push(b!(\"foo\", 0)) }; }) } #[test] #[should_fail] fn test_not_utf8_fail() { Path::new(b!(\"hello\", 0x80, \".txt\")); } #[test] fn test_components() { macro_rules! t( (s: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::from_str($path); assert_eq!(path.$op(), Some($exp)); } ); (s: $path:expr, $op:ident, $exp:expr, opt) => ( { let path = Path::from_str($path); let left = path.$op(); assert_eq!(left, $exp); } ); (v: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::new($path); assert_eq!(path.$op(), $exp); } ) ) t!(v: b!(\"abc\"), filename, b!(\"c\")); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \".\", filename_str, \"\"); t!(s: \"\", filename_str, \"\"); t!(s: \"..\", filename_str, \"\"); t!(s: \"....\", filename_str, \"\"); t!(s: \"c:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"serversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"servershare\", filename_str, \"\"); t!(s: \"server\", filename_str, \"server\"); t!(s: \"?barfoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?bar\", filename_str, \"\"); t!(s: \"?\", filename_str, \"\"); t!(s: 
\"?UNCserversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?UNCserver\", filename_str, \"\"); t!(s: \"?UNC\", filename_str, \"\"); t!(s: \"?C:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?foo/bar\", filename_str, \"\"); t!(s: \"?C:/foo\", filename_str, \"\"); t!(s: \".foobar\", filename_str, \"bar\"); t!(s: \".foo\", filename_str, \"\"); t!(s: \".foo/bar\", filename_str, \"\"); t!(s: \".foobar/baz\", filename_str, \"bar/baz\"); t!(s: \".\", filename_str, \"\"); t!(s: \"?ab\", filename_str, \"b\"); t!(v: b!(\"abc\"), dirname, b!(\"ab\")); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"a\", dirname_str, \".\"); t!(s: \"a\", dirname_str, \"\"); t!(s: \".\", dirname_str, \".\"); t!(s: \"\", dirname_str, \"\"); t!(s: \"..\", dirname_str, \"..\"); t!(s: \"....\", dirname_str, \"....\"); t!(s: \"c:foo.txt\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:foo.txt\", dirname_str, \"C:\"); t!(s: \"serversharefoo.txt\", dirname_str, \"servershare\"); t!(s: \"servershare\", dirname_str, \"servershare\"); t!(s: \"server\", dirname_str, \"\"); t!(s: \"?barfoo.txt\", dirname_str, \"?bar\"); t!(s: \"?bar\", dirname_str, \"?bar\"); t!(s: \"?\", dirname_str, \"?\"); t!(s: \"?UNCserversharefoo.txt\", dirname_str, \"?UNCservershare\"); t!(s: \"?UNCserver\", dirname_str, \"?UNCserver\"); t!(s: \"?UNC\", dirname_str, \"?UNC\"); t!(s: \"?C:foo.txt\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:/foo/bar\", dirname_str, \"?C:/foo/bar\"); t!(s: \"?foo/bar\", dirname_str, \"?foo/bar\"); t!(s: \".foobar\", dirname_str, \".foo\"); t!(s: \".foo\", dirname_str, \".foo\"); t!(s: \"?ab\", dirname_str, \"?a\"); t!(v: b!(\"hithere.txt\"), filestem, b!(\"there\")); t!(s: \"hithere.txt\", filestem_str, \"there\"); t!(s: \"hithere\", filestem_str, \"there\"); t!(s: 
\"there.txt\", filestem_str, \"there\"); t!(s: \"there\", filestem_str, \"there\"); t!(s: \".\", filestem_str, \"\"); t!(s: \"\", filestem_str, \"\"); t!(s: \"foo.bar\", filestem_str, \".bar\"); t!(s: \".bar\", filestem_str, \".bar\"); t!(s: \"..bar\", filestem_str, \".\"); t!(s: \"hithere..txt\", filestem_str, \"there.\"); t!(s: \"..\", filestem_str, \"\"); t!(s: \"....\", filestem_str, \"\"); // filestem is based on filename, so we don't need the full set of prefix tests t!(v: b!(\"hithere.txt\"), extension, Some(b!(\"txt\"))); t!(v: b!(\"hithere\"), extension, None); t!(s: \"hithere.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"hithere\", extension_str, None, opt); t!(s: \"there.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"there\", extension_str, None, opt); t!(s: \".\", extension_str, None, opt); t!(s: \"\", extension_str, None, opt); t!(s: \"foo.bar\", extension_str, None, opt); t!(s: \".bar\", extension_str, None, opt); t!(s: \"..bar\", extension_str, Some(\"bar\"), opt); t!(s: \"hithere..txt\", extension_str, Some(\"txt\"), opt); t!(s: \"..\", extension_str, None, opt); t!(s: \"....\", extension_str, None, opt); // extension is based on filename, so we don't need the full set of prefix tests } #[test] fn test_push() { macro_rules! t( (s: $path:expr, $join:expr) => ( { let path = ($path); let join = ($join); let mut p1 = Path::from_str(path); let p2 = p1.clone(); p1.push_str(join); assert_eq!(p1, p2.join_str(join)); } ) ) t!(s: \"abc\", \"..\"); t!(s: \"abc\", \"d\"); t!(s: \"ab\", \"cd\"); t!(s: \"ab\", \"cd\"); // this is just a sanity-check test. 
push_str and join_str share an implementation, // so there's no need for the full set of prefix tests // we do want to check one odd case though to ensure the prefix is re-parsed let mut p = Path::from_str(\"?C:\"); assert_eq!(p.prefix(), Some(VerbatimPrefix(2))); p.push_str(\"foo\"); assert_eq!(p.prefix(), Some(VerbatimDiskPrefix)); assert_eq!(p.as_str(), Some(\"?C:foo\")); // and another with verbatim non-normalized paths let mut p = Path::from_str(\"?C:a\"); p.push_str(\"foo\"); assert_eq!(p.as_str(), Some(\"?C:afoo\")); } #[test] fn test_push_path() { macro_rules! t( (s: $path:expr, $push:expr, $exp:expr) => ( { let mut p = Path::from_str($path); let push = Path::from_str($push); p.push_path(&push); assert_eq!(p.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \"ab\", \".\", \"ab\"); t!(s: \"ab\", \"..c\", \"ac\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"ab\", \"......c\", \"..c\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"C:a\", \"C:b.txt\", \"C:b.txt\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:ab\", \"......c\", \"C:..c\"); t!(s: \"C:ab\", \"......c\", \"C:c\"); t!(s: \"serversharefoo\", \"bar\", \"serversharefoobar\"); t!(s: \"serversharefoo\", \"....bar\", \"serversharebar\"); t!(s: \"serversharefoo\", \"C:baz\", \"C:baz\"); t!(s: \"?C:ab\", \"C:cd\", \"?C:abcd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?foobar\", \"baz\", \"?foobarbaz\"); t!(s: \"?C:ab\", \"......c\", \"?C:ab......c\"); t!(s: \"?foobar\", \"....c\", \"?foobar....c\"); t!(s: \"?\", \"foo\", \"?foo\"); t!(s: \"?UNCserversharefoo\", \"bar\", \"?UNCserversharefoobar\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCserver\", \"foo\", \"?UNCserverfoo\"); t!(s: \"C:a\", \"?UNCservershare\", \"?UNCservershare\"); 
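For comparison, today's `std::path::PathBuf::push` keeps the same replace-on-absolute rule that the `push_str_unchecked` docs describe earlier in this file: a relative component is appended, while an absolute path replaces the receiver outright. A small illustration, using Unix-style paths so it runs on any host:

```rust
use std::path::PathBuf;

// Illustration only (modern std::path, not the module above).
fn main() {
    let mut p = PathBuf::from("/a/b");
    p.push("c"); // relative: appended
    assert_eq!(p, PathBuf::from("/a/b/c"));
    p.push("/etc"); // absolute: replaces the receiver outright
    assert_eq!(p, PathBuf::from("/etc"));
    println!("ok");
}
```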
t!(s: \".foobar\", \"baz\", \".foobarbaz\"); t!(s: \".foobar\", \"C:a\", \"C:a\"); // again, not sure about the following, but I'm assuming . should be verbatim t!(s: \".foo\", \"..bar\", \".foo..bar\"); t!(s: \"?C:\", \"foo\", \"?C:foo\"); // this is a weird one } #[test] fn test_pop() { macro_rules! t( (s: $path:expr, $left:expr, $right:expr) => ( { let pstr = $path; let mut p = Path::from_str(pstr); let file = p.pop_opt_str(); let left = $left; assert!(p.as_str() == Some(left), \"`%s`.pop() failed; expected remainder `%s`, found `%s`\", pstr, left, p.as_str().unwrap()); let right = $right; let res = file.map(|s| s.as_slice()); assert!(res == right, \"`%s`.pop() failed; expected `%?`, found `%?`\", pstr, right, res); } ); (v: [$($path:expr),+], [$($left:expr),+], Some($($right:expr),+)) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file.map(|v| v.as_slice()), Some(b!($($right),+))); } ); (v: [$($path:expr),+], [$($left:expr),+], None) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file, None); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \".\", \".\", None); t!(s: \"a\", \"\", Some(\"a\")); t!(s: \"\", \"\", None); t!(v: [\"abc\"], [\"ab\"], Some(\"c\")); t!(v: [\"a\"], [\".\"], Some(\"a\")); t!(v: [\".\"], [\".\"], None); t!(v: [\"a\"], [\"\"], Some(\"a\")); t!(v: [\"\"], [\"\"], None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"servershareab\", \"serversharea\", Some(\"b\")); t!(s: \"serversharea\", \"servershare\", Some(\"a\")); t!(s: \"servershare\", \"servershare\", None); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?a\", Some(\"b\")); t!(s: \"?a\", \"?a\", None); t!(s: \"?C:ab\", \"?C:a\", 
Some(\"b\")); t!(s: \"?C:a\", \"?C:\", Some(\"a\")); t!(s: \"?C:\", \"?C:\", None); t!(s: \"?UNCservershareab\", \"?UNCserversharea\", Some(\"b\")); t!(s: \"?UNCserversharea\", \"?UNCservershare\", Some(\"a\")); t!(s: \"?UNCservershare\", \"?UNCservershare\", None); t!(s: \".abc\", \".ab\", Some(\"c\")); t!(s: \".ab\", \".a\", Some(\"b\")); t!(s: \".a\", \".a\", None); t!(s: \"?ab\", \"?a\", Some(\"b\")); } #[test] fn test_join() { t!(s: Path::from_str(\"abc\").join_str(\"..\"), \"ab\"); t!(s: Path::from_str(\"abc\").join_str(\"d\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"cd\"); t!(s: Path::from_str(\".\").join_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"\").join_str(\"ab\"), \"ab\"); t!(v: Path::new(b!(\"abc\")).join(b!(\"..\")), b!(\"ab\")); t!(v: Path::new(b!(\"abc\")).join(b!(\"d\")), b!(\"abcd\")); // full join testing is covered under test_push_path, so no need for // the full set of prefix tests } #[test] fn test_join_path() { macro_rules! t( (s: $path:expr, $join:expr, $exp:expr) => ( { let path = Path::from_str($path); let join = Path::from_str($join); let res = path.join_path(&join); assert_eq!(res.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"..\", \"ab\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \".\", \"ab\", \"ab\"); t!(s: \"\", \"ab\", \"ab\"); // join_path is implemented using push_path, so there's no need for // the full set of prefix tests } #[test] fn test_with_helpers() { macro_rules! 
t( (s: $path:expr, $op:ident, $arg:expr, $res:expr) => ( { let pstr = $path; let path = Path::from_str(pstr); let arg = $arg; let res = path.$op(arg); let exp = $res; assert!(res.as_str() == Some(exp), \"`%s`.%s(\"%s\"): Expected `%s`, found `%s`\", pstr, stringify!($op), arg, exp, res.as_str().unwrap()); } ) ) t!(s: \"abc\", with_dirname_str, \"d\", \"dc\"); t!(s: \"abc\", with_dirname_str, \"de\", \"dec\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"/\", \"c\"); t!(s: \"abc\", with_dirname_str, \".\", \"c\"); t!(s: \"abc\", with_dirname_str, \"..\", \"..c\"); t!(s: \"\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"\", with_dirname_str, \"\", \".\"); t!(s: \"foo\", with_dirname_str, \"bar\", \"barfoo\"); t!(s: \"..\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"....\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"..\", with_dirname_str, \"\", \".\"); t!(s: \"....\", with_dirname_str, \"\", \".\"); t!(s: \".\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"foo\", with_dirname_str, \"..\", \"..foo\"); t!(s: \"foo\", with_dirname_str, \"....\", \"....foo\"); t!(s: \"C:ab\", with_dirname_str, \"foo\", \"foob\"); t!(s: \"foo\", with_dirname_str, \"C:ab\", \"C:abfoo\"); t!(s: \"C:ab\", with_dirname_str, \"servershare\", \"servershareb\"); t!(s: \"a\", with_dirname_str, \"servershare\", \"serversharea\"); t!(s: \"ab\", with_dirname_str, \"?\", \"?b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:/\", \"C:b\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \".\", with_dirname_str, \"C:\", \"C:\"); t!(s: \".\", with_dirname_str, \"C:/\", \"C:\"); t!(s: \"?C:foo\", with_dirname_str, \"C:\", \"C:foo\"); t!(s: \"?C:\", with_dirname_str, \"bar\", \"bar\"); t!(s: \"foobar\", with_dirname_str, \"?C:baz\", \"?C:bazbar\"); t!(s: \"?foo\", 
with_dirname_str, \"C:bar\", \"C:bar\"); t!(s: \"?afoo\", with_dirname_str, \"C:bar\", \"C:barfoo\"); t!(s: \"?afoo/bar\", with_dirname_str, \"C:baz\", \"C:bazfoobar\"); t!(s: \"?UNCserversharebaz\", with_dirname_str, \"a\", \"abaz\"); t!(s: \"foobar\", with_dirname_str, \"?UNCserversharebaz\", \"?UNCserversharebazbar\"); t!(s: \".foo\", with_dirname_str, \"bar\", \"bar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz/\", \"bazbar\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \".\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"foo\", with_filename_str, \"bar\", \"bar\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"\", \"ab\"); t!(s: \"abc\", with_filename_str, \".\", \"ab\"); t!(s: \"abc\", with_filename_str, \"..\", \"a\"); t!(s: \"a\", with_filename_str, \"\", \"\"); t!(s: \"foo\", with_filename_str, \"\", \".\"); t!(s: \"abc\", with_filename_str, \"de\", \"abde\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"..\", with_filename_str, \"foo\", \"..foo\"); t!(s: \"....\", with_filename_str, \"foo\", \"....foo\"); t!(s: \"..\", with_filename_str, \"\", \"..\"); t!(s: \"....\", with_filename_str, \"\", \"....\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foobar\", 
with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foobar\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:..\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:..\"); t!(s: \"serversharefoo\", with_filename_str, \"bar\", \"serversharebar\"); t!(s: \"servershare\", with_filename_str, \"foo\", \"serversharefoo\"); t!(s: \"serversharefoo\", with_filename_str, \"\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"\", \"servershare\"); t!(s: \"serversharefoo\", with_filename_str, \"..\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"..\", \"servershare\"); t!(s: \"?C:foobar\", with_filename_str, \"baz\", \"?C:foobaz\"); t!(s: \"?C:foo\", with_filename_str, \"bar\", \"?C:bar\"); t!(s: \"?C:\", with_filename_str, \"foo\", \"?C:foo\"); t!(s: \"?C:foo\", with_filename_str, \"..\", \"?C:..\"); t!(s: \"?foobar\", with_filename_str, \"baz\", \"?foobaz\"); t!(s: \"?foo\", with_filename_str, \"bar\", \"?foobar\"); t!(s: \"?\", with_filename_str, \"foo\", \"?foo\"); t!(s: \"?foobar\", with_filename_str, \"..\", \"?foo..\"); t!(s: \".foobar\", with_filename_str, \"baz\", \".foobaz\"); t!(s: \".foo\", with_filename_str, \"bar\", \".foobar\"); t!(s: \".foobar\", with_filename_str, \"..\", \".foo..\"); t!(s: \"hithere.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \".\", \"hi..txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"..\", \"hi...txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"foobar\", \"hifoobar.txt\"); t!(s: \"hithere.foo.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere\", with_filestem_str, \"here\", \"hihere\"); t!(s: \"hithere\", with_filestem_str, \"\", \"hi\"); t!(s: \"hi\", with_filestem_str, \"\", \".\"); t!(s: 
\"hi\", with_filestem_str, \"\", \"\"); t!(s: \"hithere\", with_filestem_str, \"..\", \".\"); t!(s: \"hithere\", with_filestem_str, \".\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \"foo\", \"hifoo.\"); t!(s: \"hithere.\", with_filestem_str, \"\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \".\", \".\"); t!(s: \"hithere.\", with_filestem_str, \"..\", \"hi...\"); t!(s: \"\", with_filestem_str, \"foo\", \"foo\"); t!(s: \".\", with_filestem_str, \"foo\", \"foo\"); t!(s: \"hithere..\", with_filestem_str, \"here\", \"hihere.\"); t!(s: \"hithere..\", with_filestem_str, \"\", \"hi\"); // filestem setter calls filename setter internally, no need for extended tests t!(s: \"hithere.txt\", with_extension_str, \"exe\", \"hithere.exe\"); t!(s: \"hithere.txt\", with_extension_str, \"\", \"hithere\"); t!(s: \"hithere.txt\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere.txt\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hithere\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere.\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hi.foo\", with_extension_str, \"txt\", \"hi.foo.txt\"); t!(s: \"hithere.txt\", with_extension_str, \".foo\", \"hithere..foo\"); t!(s: \"\", with_extension_str, \"txt\", \"\"); t!(s: \"\", with_extension_str, \".\", \"\"); t!(s: \"\", with_extension_str, \"..\", \"\"); t!(s: \".\", with_extension_str, \"txt\", \".\"); // extension setter calls filename setter internally, no need for extended tests } #[test] fn test_setters() { macro_rules! 
t( (s: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::from_str(path); p1.$set(arg); let p2 = Path::from_str(path); assert_eq!(p1, p2.$with(arg)); } ); (v: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::new(path); p1.$set(arg); let p2 = Path::new(path); assert_eq!(p1, p2.$with(arg)); } ) ) t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"d\")); t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"de\")); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"d\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"de\"); t!(s: \"\", set_dirname_str, with_dirname_str, \"foo\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"bar\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"\"); t!(s: \"....\", set_dirname_str, with_dirname_str, \"x\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"....\"); t!(v: b!(\"abc\"), set_filename, with_filename, b!(\"d\")); t!(v: b!(\"\"), set_filename, with_filename, b!(\"foo\")); t!(s: \"abc\", set_filename_str, with_filename_str, \"d\"); t!(s: \"\", set_filename_str, with_filename_str, \"foo\"); t!(s: \".\", set_filename_str, with_filename_str, \"foo\"); t!(s: \"ab\", set_filename_str, with_filename_str, \"\"); t!(s: \"a\", set_filename_str, with_filename_str, \"\"); t!(v: b!(\"hithere.txt\"), set_filestem, with_filestem, b!(\"here\")); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"\"); t!(v: b!(\"hithere.txt\"), set_extension, with_extension, b!(\"exe\")); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"exe\"); t!(s: \"hithere.\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere\", 
set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"\"); t!(s: \"hithere\", set_extension_str, with_extension_str, \"\"); t!(s: \".\", set_extension_str, with_extension_str, \"txt\"); // with_ helpers use the setter internally, so the tests for the with_ helpers // will suffice. No need for the full set of prefix tests. } #[test] fn test_getters() { macro_rules! t( (s: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename_str(), $filename); assert_eq!(path.dirname_str(), $dirname); assert_eq!(path.filestem_str(), $filestem); assert_eq!(path.extension_str(), $ext); } ); (v: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename(), $filename); assert_eq!(path.dirname(), $dirname); assert_eq!(path.filestem(), $filestem); assert_eq!(path.extension(), $ext); } ) ) t!(v: Path::new(b!(\"abc\")), b!(\"c\"), b!(\"ab\"), b!(\"c\"), None); t!(s: Path::from_str(\"abc\"), Some(\"c\"), Some(\"ab\"), Some(\"c\"), None); t!(s: Path::from_str(\".\"), Some(\"\"), Some(\".\"), Some(\"\"), None); t!(s: Path::from_str(\"\"), Some(\"\"), Some(\"\"), Some(\"\"), None); t!(s: Path::from_str(\"..\"), Some(\"\"), Some(\"..\"), Some(\"\"), None); t!(s: Path::from_str(\"....\"), Some(\"\"), Some(\"....\"), Some(\"\"), None); t!(s: Path::from_str(\"hithere.txt\"), Some(\"there.txt\"), Some(\"hi\"), Some(\"there\"), Some(\"txt\")); t!(s: Path::from_str(\"hithere\"), Some(\"there\"), Some(\"hi\"), Some(\"there\"), None); t!(s: Path::from_str(\"hithere.\"), Some(\"there.\"), Some(\"hi\"), Some(\"there\"), Some(\"\")); t!(s: Path::from_str(\"hi.there\"), Some(\".there\"), Some(\"hi\"), Some(\".there\"), None); t!(s: Path::from_str(\"hi..there\"), Some(\"..there\"), Some(\"hi\"), Some(\".\"), Some(\"there\")); // these are already tested in test_components, so no need for extended tests } #[test] fn 
test_dir_file_path() { t!(s: Path::from_str(\"hithere\").dir_path(), \"hi\"); t!(s: Path::from_str(\"hi\").dir_path(), \".\"); t!(s: Path::from_str(\"hi\").dir_path(), \"\"); t!(s: Path::from_str(\"\").dir_path(), \"\"); t!(s: Path::from_str(\"..\").dir_path(), \"..\"); t!(s: Path::from_str(\"....\").dir_path(), \"....\"); macro_rules! t( ($path:expr, $exp:expr) => ( { let path = $path; let left = path.and_then_ref(|p| p.as_str()); assert_eq!(left, $exp); } ); ) t!(Path::from_str(\"hithere\").file_path(), Some(\"there\")); t!(Path::from_str(\"hi\").file_path(), Some(\"hi\")); t!(Path::from_str(\".\").file_path(), None); t!(Path::from_str(\"\").file_path(), None); t!(Path::from_str(\"..\").file_path(), None); t!(Path::from_str(\"....\").file_path(), None); // dir_path and file_path are just dirname and filename interpreted as paths. // No need for extended tests } #[test] fn test_is_absolute() { macro_rules! t( ($path:expr, $abs:expr, $vol:expr, $cwd:expr) => ( { let path = Path::from_str($path); let (abs, vol, cwd) = ($abs, $vol, $cwd); let b = path.is_absolute(); assert!(b == abs, \"Path '%s'.is_absolute(): expected %?, found %?\", path.as_str().unwrap(), abs, b); let b = path.is_vol_relative(); assert!(b == vol, \"Path '%s'.is_vol_relative(): expected %?, found %?\", path.as_str().unwrap(), vol, b); let b = path.is_cwd_relative(); assert!(b == cwd, \"Path '%s'.is_cwd_relative(): expected %?, found %?\", path.as_str().unwrap(), cwd, b); } ) ) t!(\"abc\", false, false, false); t!(\"abc\", false, true, false); t!(\"a\", false, false, false); t!(\"a\", false, true, false); t!(\".\", false, false, false); t!(\"\", false, true, false); t!(\"..\", false, false, false); t!(\"....\", false, false, false); t!(\"C:ab.txt\", false, false, true); t!(\"C:ab.txt\", true, false, false); t!(\"servershareab.txt\", true, false, false); t!(\"?abc.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); // NB: not equivalent to 
C:ab.txt t!(\"?UNCservershareab.txt\", true, false, false); t!(\".ab\", true, false, false); } #[test] fn test_is_ancestor_of() { macro_rules! t( (s: $path:expr, $dest:expr, $exp:expr) => ( { let path = Path::from_str($path); let dest = Path::from_str($dest); let exp = $exp; let res = path.is_ancestor_of(&dest); assert!(res == exp, \"`%s`.is_ancestor_of(`%s`): Expected %?, found %?\", path.as_str().unwrap(), dest.as_str().unwrap(), exp, res); } ) ) t!(s: \"abc\", \"abcd\", true); t!(s: \"abc\", \"abc\", true); t!(s: \"abc\", \"ab\", false); t!(s: \"abc\", \"abc\", true); t!(s: \"ab\", \"abc\", true); t!(s: \"abcd\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"abc\", \"abd\", false); t!(s: \"..abc\", \"abc\", false); t!(s: \"abc\", \"..abc\", false); t!(s: \"abc\", \"abcd\", false); t!(s: \"abcd\", \"abc\", false); t!(s: \"..ab\", \"..abc\", true); t!(s: \".\", \"ab\", true); t!(s: \".\", \".\", true); t!(s: \"\", \"\", true); t!(s: \"\", \"ab\", true); t!(s: \"..\", \"ab\", true); t!(s: \"....\", \"ab\", true); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:\", \"C:\", true); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"serversharefoo\", \"serversharefoobar\", true); t!(s: \"servershare\", \"serversharefoo\", true); t!(s: \"serversharefoo\", \"servershare\", false); t!(s: \"C:foo\", \"serversharefoo\", false); t!(s: \"serversharefoo\", \"C:foo\", false); t!(s: \"?foobar\", \"?foobarbaz\", true); t!(s: \"?foobarbaz\", \"?foobar\", false); t!(s: \"?foobar\", \"foobarbaz\", false); t!(s: \"foobar\", 
\"?foobarbaz\", false); t!(s: \"?C:foobar\", \"?C:foobarbaz\", true); t!(s: \"?C:foobarbaz\", \"?C:foobar\", false); t!(s: \"?C:\", \"?C:foo\", true); t!(s: \"?C:\", \"?C:\", false); // this is a weird one t!(s: \"?C:\", \"?C:\", false); t!(s: \"?C:a\", \"?c:ab\", true); t!(s: \"?c:a\", \"?C:ab\", true); t!(s: \"?C:a\", \"?D:ab\", false); t!(s: \"?foo\", \"?foobar\", false); t!(s: \"?ab\", \"?abc\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?UNCabc\", \"?UNCabcd\", true); t!(s: \"?UNCabcd\", \"?UNCabc\", false); t!(s: \"?UNCab\", \"?UNCabc\", true); t!(s: \".foobar\", \".foobarbaz\", true); t!(s: \".foobarbaz\", \".foobar\", false); t!(s: \".foo\", \".foobar\", true); t!(s: \".foo\", \".foobar\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"ab\", false); t!(s: \"Z:ab\", \"?z:ab\", true); t!(s: \"C:ab\", \"?D:ab\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"abc\", \"?UNCabc\", true); t!(s: \"?UNCabc\", \"abc\", true); } #[test] fn test_path_relative_from() { macro_rules! 
t( (s: $path:expr, $other:expr, $exp:expr) => ( { let path = Path::from_str($path); let other = Path::from_str($other); let res = path.path_relative_from(&other); let exp = $exp; assert!(res.and_then_ref(|x| x.as_str()) == exp, \"`%s`.path_relative_from(`%s`): Expected %?, got %?\", path.as_str().unwrap(), other.as_str().unwrap(), exp, res.and_then_ref(|x| x.as_str())); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abd\", Some(\"..c\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"abc\", \"abc\", None); t!(s: \"abc\", \"abc\", Some(\"abc\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"hithere.txt\", \"hithere\", Some(\"..there.txt\")); t!(s: \".\", \"a\", Some(\"..\")); t!(s: \".\", \"ab\", Some(\"....\")); t!(s: \".\", \".\", Some(\".\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \"ab\", \".\", Some(\"ab\")); t!(s: \"..\", \".\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"\", \"\", Some(\".\")); t!(s: \"\", \".\", Some(\"\")); t!(s: \"....a\", \"b\", Some(\"......a\")); t!(s: \"a\", \"....b\", None); t!(s: \"....a\", \"....b\", Some(\"..a\")); t!(s: \"....a\", \"....ab\", Some(\"..\")); t!(s: \"....ab\", \"....a\", Some(\"b\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\" ,\"C:ab\", Some(\"....\")); t!(s: \"C:ab\", \"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"D:cd\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:..c\", None); t!(s: \"C:..a\", \"C:bc\", Some(\"......a\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\", \"C:ab\", Some(\"....\")); t!(s: \"C:ab\", 
\"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"C:ab\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"ab\", \"abc\", Some(\"..\")); t!(s: \"abce\", \"abcd\", Some(\"..e\")); t!(s: \"acd\", \"abd\", Some(\"acd\")); t!(s: \"bcd\", \"acd\", Some(\"bcd\")); t!(s: \"abc\", \"de\", Some(\"abc\")); t!(s: \"de\", \"abc\", None); t!(s: \"de\", \"abc\", None); t!(s: \"C:abc\", \"abc\", Some(\"C:abc\")); t!(s: \"C:c\", \"abc\", Some(\"C:c\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"?abc\", Some(\"..\")); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?cd\", Some(\"?ab\")); t!(s: \"?a\", \"?b\", Some(\"?a\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"?C:b\", Some(\"..a\")); t!(s: \"?C:a\", \"?D:a\", Some(\"?C:a\")); t!(s: \"?C:ab\", \"?c:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:a\", \"C:ab\", Some(\"..\")); t!(s: \"C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"D:a\", Some(\"?C:a\")); t!(s: \"?c:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:ab\", Some(\"?C:ab\")); t!(s: \"?C:a.b\", \"C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"C:a\", Some(\"?C:a..b\")); t!(s: \"C:ab\", \"?C:ab\", None); t!(s: \"?C:a.b\", \"?C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"?C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"?C:a\", Some(\"?C:a..b\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:.b\", \"?C:.\", Some(\"b\")); t!(s: \"C:b\", \"?C:.\", Some(\"..b\")); t!(s: \"?a.bc\", \"?a.b\", Some(\"c\")); t!(s: \"?abc\", \"?a.d\", Some(\"....bc\")); t!(s: \"?a..b\", \"?a..\", Some(\"b\")); t!(s: 
\"?ab..\", \"?ab\", Some(\"?ab..\")); t!(s: \"?abc\", \"?a..b\", Some(\"....bc\")); t!(s: \"?UNCabc\", \"?UNCab\", Some(\"c\")); t!(s: \"?UNCab\", \"?UNCabc\", Some(\"..\")); t!(s: \"?UNCabc\", \"?UNCacd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"?UNCacd\", Some(\"?UNCbcd\")); t!(s: \"?UNCabc\", \"?abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc\", \"?C:abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc/d\", \"?UNCab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab.\", \"?UNCab\", Some(\"?UNCab.\")); t!(s: \"?UNCab..\", \"?UNCab\", Some(\"?UNCab..\")); t!(s: \"?UNCabc\", \"ab\", Some(\"c\")); t!(s: \"?UNCab\", \"abc\", Some(\"..\")); t!(s: \"?UNCabc\", \"acd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"acd\", Some(\"?UNCbcd\")); t!(s: \"?UNCab.\", \"ab\", Some(\"?UNCab.\")); t!(s: \"?UNCabc/d\", \"ab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab..\", \"ab\", Some(\"?UNCab..\")); t!(s: \"abc\", \"?UNCab\", Some(\"c\")); t!(s: \"abc\", \"?UNCacd\", Some(\"abc\")); } #[test] fn test_component_iter() { macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = Path::from_str($path); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ); (v: [$($arg:expr),+], $exp:expr) => ( { let path = Path::new(b!($($arg),+)); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ) ) t!(v: [\"abc\"], [\"a\", \"b\", \"c\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"abd\", [\"a\", \"b\", \"d\"]); t!(s: \"abcd\", [\"a\", \"b\", \"cd\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"a\", [\"a\"]); t!(s: \"a\", [\"a\"]); t!(s: \"\", []); t!(s: \".\", [\".\"]); t!(s: \"..\", [\"..\"]); t!(s: \"....\", [\"..\", \"..\"]); t!(s: \"....foo\", [\"..\", \"..\", \"foo\"]); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"serversharefoobar\", [\"foo\", \"bar\"]); 
t!(s: \"serversharefoo\", [\"foo\"]); t!(s: \"servershare\", []); t!(s: \"?foobarbaz\", [\"bar\", \"baz\"]); t!(s: \"?foobar\", [\"bar\"]); t!(s: \"?foo\", []); t!(s: \"?\", []); t!(s: \"?ab\", [\"b\"]); t!(s: \"?ab\", [\"b\"]); t!(s: \"?foobarbaz\", [\"bar\", \"\", \"baz\"]); t!(s: \"?C:foobar\", [\"foo\", \"bar\"]); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?C:\", []); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?UNCserversharefoobar\", [\"foo\", \"bar\"]); t!(s: \"?UNCserversharefoo\", [\"foo\"]); t!(s: \"?UNCservershare\", []); t!(s: \".foobarbaz\", [\"bar\", \"baz\"]); t!(s: \".foobar\", [\"bar\"]); t!(s: \".foo\", []); } } ", "commid": "rust_pr_9655.0"}], "negative_passages": []} {"query_id": "q-en-rust-c0a929f18bcfc09b4140d3b23a5400af201e9bb77c0ec58f078f31dbf3455980", "query": "and all interfaces directly with the file-system should probably use rather than trying to coerce to handle these cases. (Similar issue to .)\nnominating feature-complete\nAccepted for backwards-compatible\nMy last referenced commit is a month old, but I'm still working on this issue (currently finishing up the support for Windows paths).\nThe issues here of dealing with filesystems that are not utf8 seem related to , at least tangentially.", "positive_passages": [{"docid": "doc-en-rust-ddea2bffcc042993a01b929d5e2daea658bb84856dab22b7abb7b540d565adf0", "text": "pub mod c_str; pub mod os; pub mod path; pub mod path2; pub mod rand; pub mod run; pub mod sys;", "commid": "rust_pr_9655.0"}], "negative_passages": []} {"query_id": "q-en-rust-ecfe0ab2e7e6fae152d681b301111b2e7564cafcea4ffa7308cbcc2c46fd7941", "query": "When changing a pattern binding from something like to something like it's easy to end up with which yields an unhelpful syntax error message (). I don't believe it's obvious why and are valid, but is not. The same holds for bindings, of course. 
It would be nice if rustc's parser accepted the invalid syntax and provided one of its helpful \"did you mean to write ?\" error messages.\nCurrent output:", "positive_passages": [{"docid": "doc-en-rust-acd0836ec6130667cba23d990c0eb99bfdd34d1aa97e733a99b2b251200d1b8e", "text": "// check that a comma comes after every field if !ate_comma { let err = ExpectedCommaAfterPatternField { span: self.token.span } let mut err = ExpectedCommaAfterPatternField { span: self.token.span } .into_diagnostic(&self.sess.span_diagnostic); if let Some(mut delayed) = delayed_err { delayed.emit(); } self.recover_misplaced_pattern_modifiers(&fields, &mut err); return Err(err); } ate_comma = false;", "commid": "rust_pr_117289"}], "negative_passages": []} {"query_id": "q-en-rust-ecfe0ab2e7e6fae152d681b301111b2e7564cafcea4ffa7308cbcc2c46fd7941", "query": "When changing a pattern binding from something like to something like it's easy to end up with which yields an unhelpful syntax error message (). I don't believe it's obvious why and are valid, but is not. The same holds for bindings, of course. It would be nice if rustc's parser accepted the invalid syntax and provided one of its helpful \"did you mean to write ?\" error messages.\nCurrent output:", "positive_passages": [{"docid": "doc-en-rust-2c25bf1c6e12a6c860426ad77b8209ce4ca8c1210d9348fa2055204b3ce805f7", "text": "Ok((fields, etc)) } /// If the user writes `S { ref field: name }` instead of `S { field: ref name }`, we suggest /// the correct code. fn recover_misplaced_pattern_modifiers( &self, fields: &ThinVec, err: &mut DiagnosticBuilder<'a, ErrorGuaranteed>, ) { if let Some(last) = fields.iter().last() && last.is_shorthand && let PatKind::Ident(binding, ident, None) = last.pat.kind && binding != BindingAnnotation::NONE && self.token == token::Colon // We found `ref mut? ident:`, try to parse a `name,` or `name }`. 
&& let Some(name_span) = self.look_ahead(1, |t| t.is_ident().then(|| t.span)) && self.look_ahead(2, |t| { t == &token::Comma || t == &token::CloseDelim(Delimiter::Brace) }) { let span = last.pat.span.with_hi(ident.span.lo()); // We have `S { ref field: name }` instead of `S { field: ref name }` err.multipart_suggestion( \"the pattern modifiers belong after the `:`\", vec![ (span, String::new()), (name_span.shrink_to_lo(), binding.prefix_str().to_string()), ], Applicability::MachineApplicable, ); } } /// Recover on `...` or `_` as if it were `..` to avoid further errors. /// See issue #46718. fn recover_bad_dot_dot(&self) {", "commid": "rust_pr_117289"}], "negative_passages": []} {"query_id": "q-en-rust-ecfe0ab2e7e6fae152d681b301111b2e7564cafcea4ffa7308cbcc2c46fd7941", "query": "When changing a pattern binding from something like to something like it's easy to end up with which yields an unhelpful syntax error message (). I don't believe it's obvious why and are valid, but is not. The same holds for bindings, of course. It would be nice if rustc's parser accepted the invalid syntax and provided one of its helpful \"did you mean to write ?\" error messages.\nCurrent output:", "positive_passages": [{"docid": "doc-en-rust-38ea225a6c1ca5015761bb043ddf88af60326b5e38824e9af096b9135aba61cb", "text": " // run-rustfix struct S { field_name: (), } fn main() { match (S {field_name: ()}) { S {field_name: ref _foo} => {} //~ ERROR expected `,` } match (S {field_name: ()}) { S {field_name: mut _foo} => {} //~ ERROR expected `,` } match (S {field_name: ()}) { S {field_name: ref mut _foo} => {} //~ ERROR expected `,` } // Verify that we recover enough to run typeck. 
let _: usize = 3usize; //~ ERROR mismatched types } ", "commid": "rust_pr_117289"}], "negative_passages": []} {"query_id": "q-en-rust-ecfe0ab2e7e6fae152d681b301111b2e7564cafcea4ffa7308cbcc2c46fd7941", "query": "When changing a pattern binding from something like to something like it's easy to end up with which yields an unhelpful syntax error message (). I don't believe it's obvious why and are valid, but is not. The same holds for bindings, of course. It would be nice if rustc's parser accepted the invalid syntax and provided one of its helpful \"did you mean to write ?\" error messages.\nCurrent output:", "positive_passages": [{"docid": "doc-en-rust-3bbe28e9d2dd3db6772dde234c493f1bd22358d2c6f124c70ac8e7125b8fd740", "text": " // run-rustfix struct S { field_name: (), } fn main() { match (S {field_name: ()}) { S {ref field_name: _foo} => {} //~ ERROR expected `,` } match (S {field_name: ()}) { S {mut field_name: _foo} => {} //~ ERROR expected `,` } match (S {field_name: ()}) { S {ref mut field_name: _foo} => {} //~ ERROR expected `,` } // Verify that we recover enough to run typeck. let _: usize = 3u8; //~ ERROR mismatched types } ", "commid": "rust_pr_117289"}], "negative_passages": []} {"query_id": "q-en-rust-ecfe0ab2e7e6fae152d681b301111b2e7564cafcea4ffa7308cbcc2c46fd7941", "query": "When changing a pattern binding from something like to something like it's easy to end up with which yields an unhelpful syntax error message (). I don't believe it's obvious why and are valid, but is not. The same holds for bindings, of course. 
It would be nice if rustc's parser accepted the invalid syntax and provided one of its helpful \"did you mean to write ?\" error messages.\nCurrent output:", "positive_passages": [{"docid": "doc-en-rust-b127c872129e80cdc8811755da4c1b6d2cd1d3510ec0764648f46e19322f7908", "text": " error: expected `,` --> $DIR/incorrect-placement-of-pattern-modifiers.rs:8:26 | LL | S {ref field_name: _foo} => {} | - ^ | | | while parsing the fields for this pattern | help: the pattern modifiers belong after the `:` | LL - S {ref field_name: _foo} => {} LL + S {field_name: ref _foo} => {} | error: expected `,` --> $DIR/incorrect-placement-of-pattern-modifiers.rs:11:26 | LL | S {mut field_name: _foo} => {} | - ^ | | | while parsing the fields for this pattern | help: the pattern modifiers belong after the `:` | LL - S {mut field_name: _foo} => {} LL + S {field_name: mut _foo} => {} | error: expected `,` --> $DIR/incorrect-placement-of-pattern-modifiers.rs:14:30 | LL | S {ref mut field_name: _foo} => {} | - ^ | | | while parsing the fields for this pattern | help: the pattern modifiers belong after the `:` | LL - S {ref mut field_name: _foo} => {} LL + S {field_name: ref mut _foo} => {} | error[E0308]: mismatched types --> $DIR/incorrect-placement-of-pattern-modifiers.rs:17:20 | LL | let _: usize = 3u8; | ----- ^^^ expected `usize`, found `u8` | | | expected due to this | help: change the type of the numeric literal from `u8` to `usize` | LL | let _: usize = 3usize; | ~~~~~ error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0308`. 
", "commid": "rust_pr_117289"}], "negative_passages": []} {"query_id": "q-en-rust-9f00dea09a8b2789f02ec7ce2b41af5bc7371d0a975d0df24ca3b8695fba4516", "query": " $DIR/issue-72442.rs:12:36 | LL | let mut f = File::open(path.to_str())?; | ^^^^^^^^^^^^^ the trait `std::convert::AsRef` is not implemented for `std::option::Option<&str>` | ::: $SRC_DIR/libstd/fs.rs:LL:COL | LL | pub fn open>(path: P) -> io::Result { | ----------- required by this bound in `std::fs::File::open` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_72450"}], "negative_passages": []} {"query_id": "q-en-rust-6c4d4f3b496f5ae06ffc89ae4053ef8049e709be468f2006bc9bfacc001716c8", "query": "In I noticed this surprising spelling suggestion: To me and don't seem like they would be similar enough to meet the threshold for showing such a suggestion. Can we calibrate this better for short idents? For comparison, even doesn't assume you mean . rustc 1.45.0-nightly ( 2020-05-23) Mentioning who worked on suggestions most recently in .\nI debugged it a bit. is selected as the candidate because the edit distance is just 2 to . The code to fix is likely in or around . I tried some quick tricks but got ICEs and failing tests and gave up. One thing I didn't try that could perhaps work is to consider edit distance between characters of different alphabets/logogram sets as infinite, rather than 1, which is the case right now.\nThank you for investigating! Independent of what we do with different logogram sets (your suggestion sounds plausible), an edit distance of 2 for a string of length 2 should not meet the threshold for showing a suggestion, even within a single logogram set.\nI think edit distance of 1 for a string of length 1 is reasonable (a lot of existing UI tests relies on it), but maybe an edit distance of 2 for a string of length 2 is not reasonable indeed. 
As you point out, with regular chars, the edit distance is still 2 but is not suggested. So maybe there is a simpler fix to be made for than looking at what alphabets/logogram sets characters belong to.", "positive_passages": [{"docid": "doc-en-rust-bdf948d1625515fc81c71d3316ef7a97dfcc775d7ad44283f0670bf7eef2c3ad", "text": "return Some(*c); } let mut dist = dist.unwrap_or_else(|| cmp::max(lookup.len(), 3) / 3); // `fn edit_distance()` use `chars()` to calculate edit distance, so we must // also use `chars()` (and not `str::len()`) to calculate length here. let lookup_len = lookup.chars().count(); let mut dist = dist.unwrap_or_else(|| cmp::max(lookup_len, 3) / 3); let mut best = None; // store the candidates with the same distance, only for `use_substring_score` current. let mut next_candidates = vec![];", "commid": "rust_pr_118381"}], "negative_passages": []} {"query_id": "q-en-rust-6c4d4f3b496f5ae06ffc89ae4053ef8049e709be468f2006bc9bfacc001716c8", "query": "In I noticed this surprising spelling suggestion: To me and don't seem like they would be similar enough to meet the threshold for showing such a suggestion. Can we calibrate this better for short idents? For comparison, even doesn't assume you mean . rustc 1.45.0-nightly ( 2020-05-23) Mentioning who worked on suggestions most recently in .\nI debugged it a bit. is selected as the candidate because the edit distance is just 2 to . The code to fix is likely in or around . I tried some quick tricks but got ICEs and failing tests and gave up. One thing I didn't try that could perhaps work is to consider edit distance between characters of different alphabets/logogram sets as infinite, rather than 1, which is the case right now.\nThank you for investigating! 
Independent of what we do with different logogram sets (your suggestion sounds plausible), an edit distance of 2 for a string of length 2 should not meet the threshold for showing a suggestion, even within a single logogram set.\nI think edit distance of 1 for a string of length 1 is reasonable (a lot of existing UI tests relies on it), but maybe an edit distance of 2 for a string of length 2 is not reasonable indeed. As you point out, with regular chars, the edit distance is still 2 but is not suggested. So maybe there is a simpler fix to be made for than looking at what alphabets/logogram sets characters belong to.", "positive_passages": [{"docid": "doc-en-rust-2abb820698ea4907c22db32c025f3f67631b7a2300c0c0f28a2933978cdd0572", "text": " fn main() { // There shall be no suggestions here. In particular not `Ok`. let _ = \u8bfb\u6587; //~ ERROR cannot find value `\u8bfb\u6587` in this scope } ", "commid": "rust_pr_118381"}], "negative_passages": []} {"query_id": "q-en-rust-6c4d4f3b496f5ae06ffc89ae4053ef8049e709be468f2006bc9bfacc001716c8", "query": "In I noticed this surprising spelling suggestion: To me and don't seem like they would be similar enough to meet the threshold for showing such a suggestion. Can we calibrate this better for short idents? For comparison, even doesn't assume you mean . rustc 1.45.0-nightly ( 2020-05-23) Mentioning who worked on suggestions most recently in .\nI debugged it a bit. is selected as the candidate because the edit distance is just 2 to . The code to fix is likely in or around . I tried some quick tricks but got ICEs and failing tests and gave up. One thing I didn't try that could perhaps work is to consider edit distance between characters of different alphabets/logogram sets as infinite, rather than 1, which is the case right now.\nThank you for investigating! 
Independent of what we do with different logogram sets (your suggestion sounds plausible), an edit distance of 2 for a string of length 2 should not meet the threshold for showing a suggestion, even within a single logogram set.\nI think edit distance of 1 for a string of length 1 is reasonable (a lot of existing UI tests relies on it), but maybe an edit distance of 2 for a string of length 2 is not reasonable indeed. As you point out, with regular chars, the edit distance is still 2 but is not suggested. So maybe there is a simpler fix to be made for than looking at what alphabets/logogram sets characters belong to.", "positive_passages": [{"docid": "doc-en-rust-befcf4ac885f1d734830dc2b3a94cbb9ef12d9cfefbdb2a81658b13c58e3eb80", "text": " error[E0425]: cannot find value `\u8bfb\u6587` in this scope --> $DIR/non_ascii_ident.rs:3:13 | LL | let _ = \u8bfb\u6587; | ^^^^ not found in this scope error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_118381"}], "negative_passages": []} {"query_id": "q-en-rust-ba5448033eb46c4889ba63737c0f332da0059ac03e0dbc0745842cbfd26c43c9", "query": "The feature gate for the issue is . When constructing a value using the word \"default\" is repeated twice. See for additional details, including alternative solutions. $DIR/issue-2356.rs:31:5 | LL | default(); | ^^^^^^^ | help: you might have meant to call the associated function | LL | Self::default(); | ~~~~~~~~~~~~~ help: consider importing this function | LL + use std::default::default; | error[E0425]: cannot find value `whiskers` in this scope --> $DIR/issue-2356.rs:39:5 |", "commid": "rust_pr_113469"}], "negative_passages": []} {"query_id": "q-en-rust-ba5448033eb46c4889ba63737c0f332da0059ac03e0dbc0745842cbfd26c43c9", "query": "The feature gate for the issue is . When constructing a value using the word \"default\" is repeated twice. See for additional details, including alternative solutions. 
$DIR/issue-2356.rs:31:5 | LL | default(); | ^^^^^^^ help: you might have meant to call the associated function: `Self::default` error[E0425]: cannot find function `shave` in this scope --> $DIR/issue-2356.rs:41:5 |", "commid": "rust_pr_113469"}], "negative_passages": []} {"query_id": "q-en-rust-b8486f262afdbb68cfc63c0ab3b471ce52ebbe592d6c5d0b7e1cf3b30e23185f", "query": " $DIR/issue-73020.rs:2:6 | LL | use {self}; | ^^^^ can only appear in an import list with a non-empty prefix error: aborting due to previous error For more information about this error, try `rustc --explain E0431`. ", "commid": "rust_pr_73046"}], "negative_passages": []} {"query_id": "q-en-rust-b8486f262afdbb68cfc63c0ab3b471ce52ebbe592d6c5d0b7e1cf3b30e23185f", "query": " $DIR/issue-73112.rs:9:5 | LL | / struct SomeStruct { LL | | LL | | page_table: PageTable, LL | | } | |_____^ | note: `PageTable` has a `#[repr(align)]` attribute --> $DIR/auxiliary/issue-73112.rs:8:1 | LL | / pub struct PageTable { LL | | entries: [PageTableEntry; 512], LL | | } | |_^ error: aborting due to previous error For more information about this error, try `rustc --explain E0588`. ", "commid": "rust_pr_74336"}], "negative_passages": []} {"query_id": "q-en-rust-e079ed766caa84050f55795f7e9ff5e09694489c78026da06c6810480cb5e876", "query": "We fail to parse this (admittedly questionable) code as of beta: This broke one repository (not crate) in the beta crater run: https://crater-\ncc I personally suspect that we may want to ask T-lang if we're good accepting this as expected breakage. Alternatively I guess we can adjust the shebang detection to not require a non-empty shebang.\nAssigning .\nFor reference, here's the code responsible for parsing shebang in Linux kernel - (Fully whitespace (including empty) line isn't considered a shebang.)\nI am inclined to close as won't fix; let's cc as the author of that repository as well. 
I'm going to go ahead and nominate this so that we can get a :+1: (or not) at the T-compiler meeting confirming won't fix status.\nI made a fix in , which also has a benefit of simplifying the rules slightly.\nThanks guys, was more of a spelling mistake/leftover from previous version than unclear explanation of how to use shebang with rust as your previous explanation on how to use it worked fine in something else I was playing around with. But thank you very much for letting me know. Sent with Secure Email. \u2010\u2010\u2010\u2010\u2010\u2010\u2010 Original Message \u2010\u2010\u2010\u2010\u2010\u2010\u2010", "positive_passages": [{"docid": "doc-en-rust-ccca5b0632fcfeb54b32d1f72f4a5a25ade2ed8cfedd75add87b6a27c6be42b4", "text": "/// but shebang isn't a part of rust syntax. pub fn strip_shebang(input: &str) -> Option { // Shebang must start with `#!` literally, without any preceding whitespace. if input.starts_with(\"#!\") { let input_tail = &input[2..]; // Shebang must have something non-whitespace after `#!` on the first line. let first_line_tail = input_tail.lines().next()?; if first_line_tail.contains(|c| !is_whitespace(c)) { // Ok, this is a shebang but if the next non-whitespace token is `[` or maybe // a doc comment (due to `TokenKind::(Line,Block)Comment` ambiguity at lexer level), // then it may be valid Rust code, so consider it Rust code. let next_non_whitespace_token = tokenize(input_tail).map(|tok| tok.kind).find(|tok| !matches!(tok, TokenKind::Whitespace | TokenKind::LineComment | TokenKind::BlockComment { .. }) ); if next_non_whitespace_token != Some(TokenKind::OpenBracket) { // No other choice than to consider this a shebang. return Some(2 + first_line_tail.len()); } // For simplicity we consider any line starting with `#!` a shebang, // regardless of restrictions put on shebangs by specific platforms. 
if let Some(input_tail) = input.strip_prefix(\"#!\") { // Ok, this is a shebang but if the next non-whitespace token is `[` or maybe // a doc comment (due to `TokenKind::(Line,Block)Comment` ambiguity at lexer level), // then it may be valid Rust code, so consider it Rust code. let next_non_whitespace_token = tokenize(input_tail).map(|tok| tok.kind).find(|tok| !matches!(tok, TokenKind::Whitespace | TokenKind::LineComment | TokenKind::BlockComment { .. }) ); if next_non_whitespace_token != Some(TokenKind::OpenBracket) { // No other choice than to consider this a shebang. return Some(2 + input_tail.lines().next().unwrap_or_default().len()); } } None", "commid": "rust_pr_73596"}], "negative_passages": []} {"query_id": "q-en-rust-e079ed766caa84050f55795f7e9ff5e09694489c78026da06c6810480cb5e876", "query": "We fail to parse this (admittedly questionable) code as of beta: This broke one repository (not crate) in the beta crater run: https://crater-\ncc I personally suspect that we may want to ask T-lang if we're good accepting this as expected breakage. Alternatively I guess we can adjust the shebang detection to not require a non-empty shebang.\nAssigning .\nFor reference, here's the code responsible for parsing shebang in Linux kernel - (Fully whitespace (including empty) line isn't considered a shebang.)\nI am inclined to close as won't fix; let's cc as the author of that repository as well. I'm going to go ahead and nominate this so that we can get a :+1: (or not) at the T-compiler meeting confirming won't fix status.\nI made a fix in , which also has a benefit of simplifying the rules slightly.\nThanks guys, was more of a spelling mistake/leftover from previous version than unclear explanation of how to use shebang with rust as your previous explanation on how to use it worked fine in something else I was playing around with. But thank you very much for letting me know. Sent with Secure Email. 
\u2010\u2010\u2010\u2010\u2010\u2010\u2010 Original Message \u2010\u2010\u2010\u2010\u2010\u2010\u2010", "positive_passages": [{"docid": "doc-en-rust-8fcf590ea2ef2e960018095d720eff62bf5eef4930e5aa65dfdddd5e40a3ef27", "text": " #! // check-pass fn main() {} ", "commid": "rust_pr_73596"}], "negative_passages": []} {"query_id": "q-en-rust-e079ed766caa84050f55795f7e9ff5e09694489c78026da06c6810480cb5e876", "query": "We fail to parse this (admittedly questionable) code as of beta: This broke one repository (not crate) in the beta crater run: https://crater-\ncc I personally suspect that we may want to ask T-lang if we're good accepting this as expected breakage. Alternatively I guess we can adjust the shebang detection to not require a non-empty shebang.\nAssigning .\nFor reference, here's the code responsible for parsing shebang in Linux kernel - (Fully whitespace (including empty) line isn't considered a shebang.)\nI am inclined to close as won't fix; let's cc as the author of that repository as well. I'm going to go ahead and nominate this so that we can get a :+1: (or not) at the T-compiler meeting confirming won't fix status.\nI made a fix in , which also has a benefit of simplifying the rules slightly.\nThanks guys, was more of a spelling mistake/leftover from previous version than unclear explanation of how to use shebang with rust as your previous explanation on how to use it worked fine in something else I was playing around with. But thank you very much for letting me know. Sent with Secure Email. \u2010\u2010\u2010\u2010\u2010\u2010\u2010 Original Message \u2010\u2010\u2010\u2010\u2010\u2010\u2010", "positive_passages": [{"docid": "doc-en-rust-3d986348461e3495d87c921eacfb99fdadb110a43d73c5d3bd552e3da6ed538b", "text": " #! 
// check-pass // ignore-tidy-end-whitespace fn main() {} ", "commid": "rust_pr_73596"}], "negative_passages": []} {"query_id": "q-en-rust-e079ed766caa84050f55795f7e9ff5e09694489c78026da06c6810480cb5e876", "query": "We fail to parse this (admittedly questionable) code as of beta: This broke one repository (not crate) in the beta crater run: https://crater-\ncc I personally suspect that we may want to ask T-lang if we're good accepting this as expected breakage. Alternatively I guess we can adjust the shebang detection to not require a non-empty shebang.\nAssigning .\nFor reference, here's the code responsible for parsing shebang in Linux kernel - (Fully whitespace (including empty) line isn't considered a shebang.)\nI am inclined to close as won't fix; let's cc as the author of that repository as well. I'm going to go ahead and nominate this so that we can get a :+1: (or not) at the T-compiler meeting confirming won't fix status.\nI made a fix in , which also has a benefit of simplifying the rules slightly.\nThanks guys, was more of a spelling mistake/leftover from previous version than unclear explanation of how to use shebang with rust as your previous explanation on how to use it worked fine in something else I was playing around with. But thank you very much for letting me know. Sent with Secure Email. \u2010\u2010\u2010\u2010\u2010\u2010\u2010 Original Message \u2010\u2010\u2010\u2010\u2010\u2010\u2010", "positive_passages": [{"docid": "doc-en-rust-c4da4e066acd48bf4c504ab7d03e9b9ac438a5026643ab3ae219b78f43a50481", "text": " #!/usr/bin/env rustx // run-pass pub fn main() { println!(\"Hello World\"); } ", "commid": "rust_pr_73596"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. 
To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. Ok(result.normalized_ty) let res = result.normalized_ty; // `tcx.normalize_projection_ty` may normalize to a type that still has // unevaluated consts, so keep normalizing here if that's the case. if res != ty && res.has_type_flags(ty::TypeFlags::HAS_CT_PROJECTION) { Ok(res.try_super_fold_with(self)?) } else { Ok(res) } } ty::Projection(data) => {", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. 
In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. Ok(crate::traits::project::PlaceholderReplacer::replace_placeholders( let res = crate::traits::project::PlaceholderReplacer::replace_placeholders( infcx, mapped_regions, mapped_types, mapped_consts, &self.universes, result.normalized_ty, )) ); // `tcx.normalize_projection_ty` may normalize to a type that still has // unevaluated consts, so keep normalizing here if that's the case. if res != ty && res.has_type_flags(ty::TypeFlags::HAS_CT_PROJECTION) { Ok(res.try_super_fold_with(self)?) } else { Ok(res) } } _ => ty.try_super_fold_with(self), })()?; self.cache.insert(ty, res); Ok(res) }", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. 
I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. // build-pass #![allow(incomplete_features)] #![feature(generic_const_exprs)] trait TraitOne { const MY_NUM: usize; type MyErr: std::fmt::Debug; fn do_one_stuff(arr: [u8; Self::MY_NUM]) -> Result<(), Self::MyErr>; } trait TraitTwo { fn do_two_stuff(); } impl TraitTwo for O where [(); Self::MY_NUM]:, { fn do_two_stuff() { O::do_one_stuff([5; Self::MY_NUM]).unwrap() } } struct Blargotron; #[derive(Debug)] struct ErrTy([(); N]); impl TraitOne for Blargotron { const MY_NUM: usize = 3; type MyErr = ErrTy<{ Self::MY_NUM }>; fn do_one_stuff(_arr: [u8; Self::MY_NUM]) -> Result<(), Self::MyErr> { Ok(()) } } fn main() { Blargotron::do_two_stuff(); } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. 
I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. // build-pass #![allow(incomplete_features)] #![feature(generic_const_exprs)] use std::convert::AsMut; use std::default::Default; trait Foo: Sized { type Baz: Default + AsMut<[u8]>; fn bar() { Self::Baz::default().as_mut(); } } impl Foo for () { type Baz = [u8; 1 * 1]; //type Baz = [u8; 1]; } fn main() { <() as Foo>::bar(); } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. 
// build-pass #![allow(incomplete_features)] #![feature(generic_const_exprs)] trait Collate { type Pass; type Fail; fn collate(self) -> (Self::Pass, Self::Fail); } impl Collate for () { type Pass = (); type Fail = (); fn collate(self) -> ((), ()) { ((), ()) } } trait CollateStep { type Pass; type Fail; fn collate_step(x: X, prev: Prev) -> (Self::Pass, Self::Fail); } impl CollateStep for () { type Pass = (X, P); type Fail = F; fn collate_step(x: X, (p, f): (P, F)) -> ((X, P), F) { ((x, p), f) } } struct CollateOpImpl; trait CollateOpStep { type NextOp; type Apply; } impl CollateOpStep for CollateOpImpl where CollateOpImpl<{ MASK >> 1 }>: Sized, { type NextOp = CollateOpImpl<{ MASK >> 1 }>; type Apply = (); } impl Collate for (H, T) where T: Collate, Op::Apply: CollateStep, { type Pass = >::Pass; type Fail = >::Fail; fn collate(self) -> (Self::Pass, Self::Fail) { >::collate_step(self.0, self.1.collate()) } } fn collate(x: X) -> (X::Pass, X::Fail) where X: Collate>, { x.collate() } fn main() { dbg!(collate::<_, 5>((\"Hello\", (42, ('!', ()))))); } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). 
The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. // build-pass #![allow(incomplete_features)] #![feature(generic_const_exprs)] pub trait Foo { fn foo(&self); } pub struct FooImpl; impl Foo for FooImpl { fn foo(&self) {} } pub trait Bar: 'static { type Foo: Foo; fn get() -> &'static Self::Foo; } struct BarImpl; impl Bar for BarImpl { type Foo = FooImpl< { { 4 } }, >; fn get() -> &'static Self::Foo { &FooImpl } } pub fn boom() { B::get().foo(); } fn main() { boom::(); } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. 
// build-pass #![feature(generic_const_exprs)] #![allow(incomplete_features)] trait Foo { type Output; fn foo() -> Self::Output; } impl Foo for [u8; 3] { type Output = [u8; 1 + 2]; fn foo() -> [u8; 3] { [1u8; 3] } } fn bug() where [u8; N]: Foo, <[u8; N] as Foo>::Output: AsRef<[u8]>, { <[u8; N]>::foo().as_ref(); } fn main() { bug::<3>(); } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-c80d879446f13ef43f9e7e2dfd20a4a3e549a6ee2f8a3244fcb8f78c81a37cee", "query": "Const generics and release mode and Hc128Rng::seedfromu64 and rustc 1.45.0-nightly ( 2020-05-29) or later cause cc linking error. To avoid the error, do any of: use rustc 1.45.0-nightly ( 2020-05-14) or earlier remove the remove the line compile in debug mode I'm guessing that Hc128Rng::seedfrom_u64 is not the only thing that will trigger this, but I have not been able to find any other examples.\nI reduced it. works, but does not work. In both release and debug, it yields approximate the same error as before on: But compiles successfully on:\nThis fails with , but not .\nThis appears to be working now. I'll add a test.\nand I were unable to reproduce an error with the examples here, but they still fail on extended CI (though not with the original error here). The PR is We'll have to revisit this one.\nNote that on latest nightly, I am still encountering this issue. (Any time I use the crate and with , I trigger this.) Error:\nSorry to keep bumping. I continue to encounter this issue when I attempt to use with on latest nightly. 
// build-pass #![allow(incomplete_features)] #![feature(generic_const_exprs)] use std::marker::PhantomData; fn main() { let x = FooImpl::> { phantom: PhantomData }; let _ = x.foo::>(); } trait Foo where T: Bar, { fn foo(&self) where T: Operation, >::Output: Bar; } struct FooImpl where T: Bar, { phantom: PhantomData, } impl Foo for FooImpl where T: Bar, { fn foo(&self) where T: Operation, >::Output: Bar, { <>::Output as Bar>::error_occurs_here(); } } trait Bar { fn error_occurs_here(); } struct BarImpl; impl Bar for BarImpl { fn error_occurs_here() {} } trait Operation { type Output; } //// Part-A: This causes error. impl Operation> for BarImpl where BarImpl<{ N + M }>: Sized, { type Output = BarImpl<{ N + M }>; } //// Part-B: This doesn't cause error. // impl Operation> for BarImpl { // type Output = BarImpl; // } //// Part-C: This also doesn't cause error. // impl Operation> for BarImpl { // type Output = BarImpl<{ M }>; // } ", "commid": "rust_pr_100315"}], "negative_passages": []} {"query_id": "q-en-rust-b66c442189565ad2f1442f52f0c8860a07d772c3d28dd6f79a73b898179c461c", "query": "compiles fine. Constructing a literal doesn't work, listing two s gives , giving only one gives , with an empty list of missing fields.", "positive_passages": [{"docid": "doc-en-rust-8075a9ff8d6ea750ebbdaf8ca45e2734434ecece456d18e35f819199c32907a5", "text": "generics: &Generics, fields: &[@struct_field], visitor: ResolveVisitor) { let mut ident_map = HashMap::new::(); for fields.iter().advance |&field| { match field.node.kind { named_field(ident, _) => { match ident_map.find(&ident) { Some(&prev_field) => { let ident_str = self.session.str_of(ident); self.session.span_err(field.span, fmt!(\"field `%s` is already declared\", ident_str)); self.session.span_note(prev_field.span, \"Previously declared here\"); }, None => { ident_map.insert(ident, field); } } } _ => () } } // If applicable, create a rib for the type parameters. 
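The three reproductions above all rely on the unstable `generic_const_exprs` feature. For contrast, here is a minimal sketch of what stable const generics already allow — a const parameter used directly, with no arithmetic such as `N + M` inside a const argument (the helper name `first_n` is hypothetical, not from the reports above):

```rust
// Stable const generics: `N` is used directly as an array length.
// Putting an expression like `N + M` in a const argument is exactly the
// step that still requires `#![feature(generic_const_exprs)]` and that
// the reports above exercise.
fn first_n<const N: usize>(bytes: &[u8]) -> [u8; N] {
    let mut out = [0u8; N];
    // Panics if `bytes` is shorter than N; fine for this sketch.
    out.copy_from_slice(&bytes[..N]);
    out
}

fn main() {
    let data = [1u8, 2, 3, 4, 5];
    let head: [u8; 3] = first_n::<3>(&data);
    assert_eq!(head, [1, 2, 3]);
}
```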
do self.with_type_parameter_rib(HasTypeParameters (generics, id, 0,", "commid": "rust_pr_7443"}], "negative_passages": []} {"query_id": "q-en-rust-b66c442189565ad2f1442f52f0c8860a07d772c3d28dd6f79a73b898179c461c", "query": "compiles fine. Constructing a literal doesn't work, listing two s gives , giving only one gives , with an empty list of missing fields.", "positive_passages": [{"docid": "doc-en-rust-0e70cc38cdbb01e65caa870450080966ea02365d8339b56ac8bae4e916c939b1", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. struct BuildData { foo: int, foo: int, //~ ERROR field `foo` is already declared } fn main() { } ", "commid": "rust_pr_7443"}], "negative_passages": []} {"query_id": "q-en-rust-264ef9d3ce500336be536b44f17b588d41ab68baa15410cd8536185e9aa2325a", "query": "... as documented. The problem: this line from the documentation: The weak count as reported by is 0 when the strong count is 0, even if there are still outstanding weak references: The combination of these two means that getting a pointer from is not enough to guarantee that it's safe to use in . Instead, you have to know that (if it originated from a strong reference) you still have an outstanding strong reference to the allocation, because otherwise the (exposed) weak count is 0, and thus calling is \"documented UB\". To properly document when this is intuitively allowed (i.e. 
it's allowed because I got the pointer from and have called more than , so there are still \"unowned\" raw weak references to claim) (which lines up with what the implementation allows), we need to expose (probably docs only) the fact that there is still being a weak reference count behind the hood being tracked until all of the s have been dropped. This will probably also require guaranteeing that doesn't drop its weak reference and become dangling \"early\" (e.g. note that failed, decrement the (real) weak count (potentially deallocating the place), and become a dangling weak via internal mutability) and that on a \"zombie\" increments the (real) weak count and creates a new \"zombie\" , not a dangling .\n(I don't know if C-bug is the correct label for a \"documentation bug,\" but it was the closest template to correct.)\nI guess that means the internal weak count rather than the user-visible .\nThat's obviously the intent and what the implementation shows is the case, but when considering documented properties of an API, you have to only consider the documented API. A correct resolution to the issue would be to adjust the documentation to clarify that the weak count is referring to the hidden internal weak count that's not exposed (except conceptually for the safety of /). cc who wrote these methods and docs\nYes, it's meant as the actual number of weak pointers pointing there (including the ones \u201efrozen\u201c to raw pointers), not what some method returns (if it's currently in the form of the raw pointer, you really can't even call the method, so I'm not sure if that part can even apply). I agree it could probably be documented better. 
I'd try to fix the docs, but considering I've already updated them like 3 times during the review and they still haven't ended up entirely correct, I'll probably ask if you want to give it a go first, as obviously the docs aren't my strong ability :-).\nThe main thing this requires is actually specifying how the \"internal\" weak count works, and guarnateeing that we don't do \"clever\" things with converting \"zombie \"s into dangling s. I really don't know the best way to do this or even where to do this, since it basically only impacts how and when is allowed to be called.\n:thinking: Maybe just getting rid of the \u201eweak count\u201c term altogether and try to describe it in a way that the pointer preserves the ownership of the , or that it's an alternative form of and it can be turned back if it still has the ownership. Or maybe just point out that the \u201eweak count\u201c is meant in its natural meaning, not referring to the method.\nOk, looking into this, how about wording like this? On the intoraw: And on the : Do you think this would be better?\nYeah, I think that wording captures the semantics properly.\nThanks. I'll get around to submitting the changes some time soonish.", "positive_passages": [{"docid": "doc-en-rust-017cbc72e8a9356e0f8c22f86564474660659c07b614acdb595bc939a9d952b7", "text": "/// This can be used to safely get a strong reference (by calling [`upgrade`] /// later) or to deallocate the weak count by dropping the `Weak`. /// /// It takes ownership of one weak count (with the exception of pointers created by [`new`], /// as these don't have any corresponding weak count). /// It takes ownership of one weak reference (with the exception of pointers created by [`new`], /// as these don't own anything; the method still works on them). /// /// # Safety /// /// The pointer must have originated from the [`into_raw`] and must still own its potential /// weak reference count. 
/// The pointer must have originated from the [`into_raw`] and must still own its potential /// weak reference. /// /// It is allowed for the strong count to be 0 at the time of calling this, but the weak count /// must be non-zero or the pointer must have originated from a dangling `Weak` (one created /// by [`new`]). /// It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this /// takes ownership of one weak reference currently represented as a raw pointer (the weak /// count is not modified by this operation) and therefore it must be paired with a previous /// call to [`into_raw`]. /// /// # Examples ///", "commid": "rust_pr_74782"}], "negative_passages": []} {"query_id": "q-en-rust-264ef9d3ce500336be536b44f17b588d41ab68baa15410cd8536185e9aa2325a", "query": "... as documented. The problem: this line from the documentation: The weak count as reported by is 0 when the strong count is 0, even if there are still outstanding weak references: The combination of these two means that getting a pointer from is not enough to guarantee that it's safe to use in . Instead, you have to know that (if it originated from a strong reference) you still have an outstanding strong reference to the allocation, because otherwise the (exposed) weak count is 0, and thus calling is \"documented UB\". To properly document when this is intuitively allowed (i.e. it's allowed because I got the pointer from and have called more than , so there are still \"unowned\" raw weak references to claim) (which lines up with what the implementation allows), we need to expose (probably docs only) the fact that there is still being a weak reference count behind the hood being tracked until all of the s have been dropped. This will probably also require guaranteeing that doesn't drop its weak reference and become dangling \"early\" (e.g. 
note that failed, decrement the (real) weak count (potentially deallocating the place), and become a dangling weak via internal mutability) and that on a \"zombie\" increments the (real) weak count and creates a new \"zombie\" , not a dangling .\n(I don't know if C-bug is the correct label for a \"documentation bug,\" but it was the closest template to correct.)\nI guess that means the internal weak count rather than the user-visible .\nThat's obviously the intent and what the implementation shows is the case, but when considering documented properties of an API, you have to only consider the documented API. A correct resolution to the issue would be to adjust the documentation to clarify that the weak count is referring to the hidden internal weak count that's not exposed (except conceptually for the safety of /). cc who wrote these methods and docs\nYes, it's meant as the actual number of weak pointers pointing there (including the ones \u201efrozen\u201c to raw pointers), not what some method returns (if it's currently in the form of the raw pointer, you really can't even call the method, so I'm not sure if that part can even apply). I agree it could probably be documented better. I'd try to fix the docs, but considering I've already updated them like 3 times during the review and they still haven't ended up entirely correct, I'll probably ask if you want to give it a go first, as obviously the docs aren't my strong ability :-).\nThe main thing this requires is actually specifying how the \"internal\" weak count works, and guarnateeing that we don't do \"clever\" things with converting \"zombie \"s into dangling s. 
I really don't know the best way to do this or even where to do this, since it basically only impacts how and when is allowed to be called.\n:thinking: Maybe just getting rid of the \u201eweak count\u201c term altogether and try to describe it in a way that the pointer preserves the ownership of the , or that it's an alternative form of and it can be turned back if it still has the ownership. Or maybe just point out that the \u201eweak count\u201c is meant in its natural meaning, not referring to the method.\nOk, looking into this, how about wording like this? On the intoraw: And on the : Do you think this would be better?\nYeah, I think that wording captures the semantics properly.\nThanks. I'll get around to submitting the changes some time soonish.", "positive_passages": [{"docid": "doc-en-rust-1e314e75766469e5419ea1ac40514b6ae43e55aa1a823aa82c5e80100b6fc24f", "text": "/// Consumes the `Weak` and turns it into a raw pointer. /// /// This converts the weak pointer into a raw pointer, preserving the original weak count. It /// can be turned back into the `Weak` with [`from_raw`]. /// This converts the weak pointer into a raw pointer, while still preserving the ownership of /// one weak reference (the weak count is not modified by this operation). It can be turned /// back into the `Weak` with [`from_raw`]. /// /// The same restrictions of accessing the target of the pointer as with /// [`as_ptr`] apply.", "commid": "rust_pr_74782"}], "negative_passages": []} {"query_id": "q-en-rust-264ef9d3ce500336be536b44f17b588d41ab68baa15410cd8536185e9aa2325a", "query": "... as documented. The problem: this line from the documentation: The weak count as reported by is 0 when the strong count is 0, even if there are still outstanding weak references: The combination of these two means that getting a pointer from is not enough to guarantee that it's safe to use in . 
Instead, you have to know that (if it originated from a strong reference) you still have an outstanding strong reference to the allocation, because otherwise the (exposed) weak count is 0, and thus calling is \"documented UB\". To properly document when this is intuitively allowed (i.e. it's allowed because I got the pointer from and have called more than , so there are still \"unowned\" raw weak references to claim) (which lines up with what the implementation allows), we need to expose (probably docs only) the fact that there is still being a weak reference count behind the hood being tracked until all of the s have been dropped. This will probably also require guaranteeing that doesn't drop its weak reference and become dangling \"early\" (e.g. note that failed, decrement the (real) weak count (potentially deallocating the place), and become a dangling weak via internal mutability) and that on a \"zombie\" increments the (real) weak count and creates a new \"zombie\" , not a dangling .\n(I don't know if C-bug is the correct label for a \"documentation bug,\" but it was the closest template to correct.)\nI guess that means the internal weak count rather than the user-visible .\nThat's obviously the intent and what the implementation shows is the case, but when considering documented properties of an API, you have to only consider the documented API. A correct resolution to the issue would be to adjust the documentation to clarify that the weak count is referring to the hidden internal weak count that's not exposed (except conceptually for the safety of /). cc who wrote these methods and docs\nYes, it's meant as the actual number of weak pointers pointing there (including the ones \u201efrozen\u201c to raw pointers), not what some method returns (if it's currently in the form of the raw pointer, you really can't even call the method, so I'm not sure if that part can even apply). I agree it could probably be documented better. 
I'd try to fix the docs, but considering I've already updated them like 3 times during the review and they still haven't ended up entirely correct, I'll probably ask if you want to give it a go first, as obviously the docs aren't my strong ability :-).\nThe main thing this requires is actually specifying how the \"internal\" weak count works, and guarnateeing that we don't do \"clever\" things with converting \"zombie \"s into dangling s. I really don't know the best way to do this or even where to do this, since it basically only impacts how and when is allowed to be called.\n:thinking: Maybe just getting rid of the \u201eweak count\u201c term altogether and try to describe it in a way that the pointer preserves the ownership of the , or that it's an alternative form of and it can be turned back if it still has the ownership. Or maybe just point out that the \u201eweak count\u201c is meant in its natural meaning, not referring to the method.\nOk, looking into this, how about wording like this? On the intoraw: And on the : Do you think this would be better?\nYeah, I think that wording captures the semantics properly.\nThanks. I'll get around to submitting the changes some time soonish.", "positive_passages": [{"docid": "doc-en-rust-1762b8109281a6c23ec760b7f0b11eec39286e768dd828282a82855d2c487ff7", "text": "result } /// Converts a raw pointer previously created by [`into_raw`] back into /// `Weak`. /// Converts a raw pointer previously created by [`into_raw`] back into `Weak`. /// /// This can be used to safely get a strong reference (by calling [`upgrade`] /// later) or to deallocate the weak count by dropping the `Weak`. /// /// It takes ownership of one weak count (with the exception of pointers created by [`new`], /// as these don't have any corresponding weak count). /// It takes ownership of one weak reference (with the exception of pointers created by [`new`], /// as these don't own anything; the method still works on them). 
/// /// # Safety /// /// The pointer must have originated from the [`into_raw`] and must still own its potential /// weak reference count. /// /// It is allowed for the strong count to be 0 at the time of calling this, but the weak count /// must be non-zero or the pointer must have originated from a dangling `Weak` (one created /// by [`new`]). /// weak reference. /// /// It is allowed for the strong count to be 0 at the time of calling this. Nevertheless, this /// takes ownership of one weak reference currently represented as a raw pointer (the weak /// count is not modified by this operation) and therefore it must be paired with a previous /// call to [`into_raw`]. /// # Examples /// /// ```", "commid": "rust_pr_74782"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. 
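The documentation changes above for `Weak::into_raw`/`Weak::from_raw` can be exercised with a short stable-Rust sketch (shown for `std::sync`; the same API exists on `std::rc::Weak`):

```rust
use std::sync::{Arc, Weak};

fn main() {
    let strong = Arc::new(42u32);
    let weak: Weak<u32> = Arc::downgrade(&strong);

    // `into_raw` consumes the `Weak`, but the weak reference it owned is
    // still accounted for: the raw pointer carries that ownership and the
    // weak count is not modified by the conversion.
    let raw: *const u32 = Weak::into_raw(weak);
    assert_eq!(Arc::weak_count(&strong), 1);

    // `from_raw` takes that ownership back. Safety: `raw` came from
    // `into_raw` and its weak reference has not been reclaimed yet.
    let weak = unsafe { Weak::from_raw(raw) };
    assert_eq!(*weak.upgrade().unwrap(), 42);

    // After the last strong reference is dropped, the reclaimed `Weak`
    // is still safe to hold and drop; `upgrade` simply returns `None`.
    drop(strong);
    assert!(weak.upgrade().is_none());
}
```

This mirrors the clarified wording in the diff: the conversion must be paired, and it is the ownership of one weak reference — not the value some count accessor happens to report — that makes the `from_raw` call sound.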
cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-af365a76a959741d2cbf49876c339f786776c5660635edb87f0696ad28278c29", "text": "Greater | // `{ 42 } > 3` GreaterEqual | // `{ 42 } >= 3` AssignOp(_) | // `{ 42 } +=` LAnd | // `{ 42 } &&foo` As | // `{ 42 } as usize` // Equal | // `{ 42 } == { 42 }` Accepting these here would regress incorrect // NotEqual | // `{ 42 } != { 42 } struct literals parser recovery.", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-1f2dd0ac1d1369b753b38aa3205253bbafd8eaa5b97d768a2f510ba2e8343638", "text": "// want to keep their span info to improve diagnostics in these cases in a later stage. 
(true, Some(AssocOp::Multiply)) | // `{ 42 } *foo = bar;` or `{ 42 } * 3` (true, Some(AssocOp::Subtract)) | // `{ 42 } -5` (true, Some(AssocOp::LAnd)) | // `{ 42 } &&x` (#61475) (true, Some(AssocOp::Add)) // `{ 42 } + 42 // If the next token is a keyword, then the tokens above *are* unambiguously incorrect: // `if x { a } else { b } && if y { c } else { d }` if !self.look_ahead(1, |t| t.is_reserved_ident()) => { if !self.look_ahead(1, |t| t.is_used_keyword()) => { // These cases are ambiguous and can't be identified in the parser alone. let sp = self.sess.source_map().start_point(self.token.span); self.sess.ambiguous_block_expr_parse.borrow_mut().insert(sp, lhs.span); false } (true, Some(AssocOp::LAnd)) => { // `{ 42 } &&x` (#61475) or `{ 42 } && if x { 1 } else { 0 }`. Separated from the // above due to #74233. // These cases are ambiguous and can't be identified in the parser alone. let sp = self.sess.source_map().start_point(self.token.span); self.sess.ambiguous_block_expr_parse.borrow_mut().insert(sp, lhs.span);", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. 
cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-2e1493202f57516ddd69378154c3fefd8a3cf225266b9e96d5011e3b3856034d", "text": "} self.suggest_boxing_when_appropriate(err, expr, expected, expr_ty); self.suggest_missing_await(err, expr, expected, expr_ty); self.suggest_missing_parentheses(err, expr); self.note_need_for_fn_pointer(err, expected, expr_ty); }", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. 
cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-603612eba64d009f4f98ac411d1bcc8914219489af54323a098d304eff0eca6a", "text": "} } fn suggest_missing_parentheses(&self, err: &mut DiagnosticBuilder<'_>, expr: &hir::Expr<'_>) { let sp = self.tcx.sess.source_map().start_point(expr.span); if let Some(sp) = self.tcx.sess.parse_sess.ambiguous_block_expr_parse.borrow().get(&sp) { // `{ 42 } &&x` (#61475) or `{ 42 } && if x { 1 } else { 0 }` self.tcx.sess.parse_sess.expr_parentheses_needed(err, *sp, None); } } fn note_need_for_fn_pointer( &self, err: &mut DiagnosticBuilder<'_>,", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-5f387b3bd5b9d3f2dbe8a190f9409da8ade059a46ea072ebdf55a57e7bc28714", "text": " // This is not autofixable because we give extra suggestions to end the first expression with `;`. 
fn foo(a: Option, b: Option) -> bool { if let Some(x) = a { true } else { false } //~^ ERROR mismatched types //~| ERROR mismatched types && //~ ERROR mismatched types if let Some(y) = a { true } else { false } } fn main() {} ", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-b157ee8815497b02894a2d35fd255ddd708c51eea8a8944291389b7620207366", "text": " error[E0308]: mismatched types --> $DIR/expr-as-stmt-2.rs:3:26 | LL | if let Some(x) = a { true } else { false } | ---------------------^^^^------------------ help: consider using a semicolon here | | | | | expected `()`, found `bool` | expected this to be `()` error[E0308]: mismatched types --> $DIR/expr-as-stmt-2.rs:3:40 | LL | if let Some(x) = a { true } else { false } | -----------------------------------^^^^^--- help: consider using a semicolon here | | | | | expected `()`, found `bool` | expected this to be `()` error[E0308]: mismatched types --> $DIR/expr-as-stmt-2.rs:6:5 | LL | fn foo(a: Option, b: Option) -> bool { | ---- expected `bool` because of return type LL | if let Some(x) = a { true } else { false } | ------------------------------------------ help: parentheses are required to parse this as an expression: `(if let Some(x) = a { true } else { false })` ... 
LL | / && LL | | if let Some(y) = a { true } else { false } | |______________________________________________^ expected `bool`, found `&&bool` error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-db3143fa5c5055171a55dd05b30d117cd1ef07d5c14c85a9e309b3b16f920c17", "text": "//~^ ERROR mismatched types } fn qux(a: Option, b: Option) -> bool { (if let Some(x) = a { true } else { false }) && //~ ERROR expected expression if let Some(y) = a { true } else { false } } fn moo(x: u32) -> bool { (match x { _ => 1,", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. 
cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-eb06223c19a091c6638be64b96798f49b2ff151aea08607671a2e04c74e6cd82", "text": "//~^ ERROR mismatched types } fn qux(a: Option, b: Option) -> bool { if let Some(x) = a { true } else { false } && //~ ERROR expected expression if let Some(y) = a { true } else { false } } fn moo(x: u32) -> bool { match x { _ => 1,", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. 
cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-e4465ea77789644270ceb4a81ffb558ac3098447849a30c494859a7d40388104", "text": "| | | help: parentheses are required to parse this as an expression: `({ 42 })` error: expected expression, found `&&` --> $DIR/expr-as-stmt.rs:30:5 | LL | if let Some(x) = a { true } else { false } | ------------------------------------------ help: parentheses are required to parse this as an expression: `(if let Some(x) = a { true } else { false })` LL | && | ^^ expected expression error: expected expression, found `>` --> $DIR/expr-as-stmt.rs:37:7 --> $DIR/expr-as-stmt.rs:31:7 | LL | } > 0 | ^ expected expression", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-9906d28e2eeabca060102c31a4b43844fd44c5502ab9774334496d9a16e252b5", "query": "This example : (The example above was reduced from one involving , found by This regression was likely introduced by , due to the definition of : That function lists all of the operators that can't start an expression, but only continue it. () is incorrectly included, as is parsed the same as at the start of an expression. cc\nI've left some comments on (a partial fix for a regression that seems similar to this, but more specific), e.g.: As per , should've never returned for , preventing the regression.\nAssigning as and removing .", "positive_passages": [{"docid": "doc-en-rust-ae0fb8697081b39a31facc86eedc17da574b3e9977769526288608c8c9165669", "text": "| | | help: parentheses are required to parse this as an expression: `({ 3 })` error: aborting due to 10 previous errors error: aborting due to 9 previous errors Some errors have detailed explanations: E0308, E0614. 
For more information about an error, try `rustc --explain E0308`.", "commid": "rust_pr_74650"}], "negative_passages": []} {"query_id": "q-en-rust-7db59ba9baa6b6be2d9a5043997e9ef9bc2216a3efce2b815f5daadf8fdd8d6a", "query": " $DIR/issue-74539.rs:9:13 | LL | x | ^ help: a local variable with a similar name exists: `e` error: `x @` is not allowed in a tuple struct --> $DIR/issue-74539.rs:8:14 | LL | E::A(x @ ..) => { | ^^^^^^ this is only allowed in slice patterns | = help: remove this and bind each tuple field independently help: if you don't need to use the contents of x, discard the tuple's remaining fields | LL | E::A(..) => { | ^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0425`. ", "commid": "rust_pr_74557"}], "negative_passages": []} {"query_id": "q-en-rust-115204c5a24ef20deb77be96da6b61735310e819ebe3fae6b28b98de0206617d", "query": "I tried to compile this code: I expected the code to compile Instead, it failed to compile with this lifetime error: If I change the function in these ways it compiles. This compiles successfully on Stable Rust 1.45.0 back to 1.26.0 (when lifetimes were stabilized) This fails to compile on these versions(haven't tried previous ones in the same channels): 1.46.0-beta.2 (2020-07-23 ) 1.47.0-nightly (2020-07-28 ) $DIR/issue-75062-fieldless-tuple-struct.rs:9:10 | LL | foo::Bar(); | ^^^ private tuple struct | note: the tuple struct `Bar` is defined here --> $DIR/issue-75062-fieldless-tuple-struct.rs:5:5 | LL | struct Bar(); | ^^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0603`. 
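The parser records above revolve around block-like expressions (`{ … }`, `if`, `match`) appearing to the left of `&&`: the grammar parses them as expression statements, so the diagnostics suggest wrapping in parentheses. A runnable sketch of the accepted form (the function name `either` is hypothetical, not taken from the test files):

```rust
// Without the parentheses, `if let … { true } else { false } && …` is
// parsed as an expression statement followed by a stray `&&`, which is
// the parse-recovery case discussed above.
fn either(a: Option<u32>, b: Option<u32>) -> bool {
    (if let Some(_) = a { true } else { false })
        && (if let Some(_) = b { true } else { false })
}

fn main() {
    assert!(either(Some(1), Some(2)));
    assert!(!either(Some(1), None));
    assert!(!either(None, None));
}
```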
", "commid": "rust_pr_75188"}], "negative_passages": []} {"query_id": "q-en-rust-11da96b4f13f750e722b6c8a459bb19b056581a0eb2b7c7132d2356e0a1774cc", "query": " $DIR/exhaustiveness-unreachable-pattern.rs:89:15 --> $DIR/exhaustiveness-unreachable-pattern.rs:88:19 | LL | | false) => {} | ^^^^^ error: unreachable pattern --> $DIR/exhaustiveness-unreachable-pattern.rs:96:15 | LL | | true) => {} | ^^^^ error: unreachable pattern --> $DIR/exhaustiveness-unreachable-pattern.rs:95:15 --> $DIR/exhaustiveness-unreachable-pattern.rs:102:15 | LL | | true, | ^^^^ error: aborting due to 18 previous errors error: aborting due to 19 previous errors ", "commid": "rust_pr_78167"}], "negative_passages": []} {"query_id": "q-en-rust-58717b1c795ffe9531395a8eeedb9fa2db03f873e97c81c47b49f946f1cdbe73", "query": "Seaching for in the docs only surfaces . The solution seems to link from to so people can read more. However I'm not sure how to create a link between these since they live in separate crates. What's the right way to go about this? edit: if someone wants to pick this up, please do. This is probably a \"good first issue\" once we figure out how to link between the two. ! -- this is how I eventually learned about ; ideally it'd be easier to find. $DIR/liveness-asm.rs:13:32 | LL | asm!(\"/*{0}*/\", inout(reg) src); | ^^^ | note: the lint level is defined here --> $DIR/liveness-asm.rs:8:9 | LL | #![warn(unused_assignments)] | ^^^^^^^^^^^^^^^^^^ = help: maybe it is overwritten before being read? warning: value assigned to `src` is never read --> $DIR/liveness-asm.rs:23:39 | LL | asm!(\"/*{0}*/\", inout(reg) src => src); | ^^^ | = help: maybe it is overwritten before being read? warning: 2 warnings emitted ", "commid": "rust_pr_77976"}], "negative_passages": []} {"query_id": "q-en-rust-3802921d806f712ef6c2e7866ffb6d7b37b3a2ea3f26640d08ff1cb00b82b15b", "query": "Hello, this is your friendly neighborhood mergebot. 
After merging PR rust-lang/rust, I observed that the tool rustfmt no longer builds. A follow-up PR to the repository is needed to fix the fallout. cc do you think you would have time to do the follow-up work? If so, that would be great!\nHi can you please share the command to replicate this bug? Thanks\nwith set in\nwon't be able to get to this myself in the next few days. To fix this we probably have to compile it with , find the place where we call and put a before that\nHi the error occurred where we are just erasing regions by simply doing this (tcx.erase_regions(&trait_ref)) and provided assertion with (tcx.normalize_erasing_regions(param_env, trait_ref)). Is this scenario correct? Just want to confirm as you already mentioned to put a tcx.normalize_erasing_regions. Thanks\nyes, so we do not want to use in as that can be a bit costly but instead normalize eagerly before then. To detect cases where we are still using unnormalized values, add a debug assert there. If we look at the backtrace we can find out where the is coming from and normalize it there\nokay, thanks for the update.\nHi as you mentioned can you please elaborate? As I can see the Instance::resolve in but don't get your point to put tcx.normalize_erasing_regions. Thanks\nCan you post the resulting backtrace using ?\nhere it is:\nSo I think that the easiest fix is to add before this call. I am however surprised that isn't actually normalized here. 
cc do we want to instead normalize this in mir building?", "positive_passages": [{"docid": "doc-en-rust-9a4eb37f3cb452a45c9b56f91f1398513eb143c7b2ec75c7ed77a310f395d988", "text": "let func_ty = func.ty(body, tcx); if let ty::FnDef(callee, substs) = *func_ty.kind() { let (callee, call_substs) = if let Ok(Some(instance)) = Instance::resolve(tcx, param_env, callee, substs) { (instance.def_id(), instance.substs) } else { (callee, substs) }; let normalized_substs = tcx.normalize_erasing_regions(param_env, substs); let (callee, call_substs) = if let Ok(Some(instance)) = Instance::resolve(tcx, param_env, callee, normalized_substs) { (instance.def_id(), instance.substs) } else { (callee, normalized_substs) }; // FIXME(#57965): Make this work across function boundaries", "commid": "rust_pr_78156"}], "negative_passages": []} {"query_id": "q-en-rust-193c20fd7306282ed6fdd541a0a32f634e58a136dc76c1cc0800c7848999e583", "query": "dilbert-feed is ICEing on beta with error: internal compiler error: could not fully normalize https://crater-\nFull error log:\nI'll bisect this one\nDoesn't happen on the latest commit by on .\nI can still reproduce with :\nIt may just be an instance of me not doing something right, I used:\ngives me searched toolchains through regression in So this would have regressed in , cc\nThe PR did introduce some additional uses of inside specialized implementations. itself it is a stable function so I am not aware of any restrictions on its use. Perhaps this is some interaction with specialization?\nBoth sites where it is used are of this pattern: They're just there as hints to to let llvm DCE the dropinplace earlier. So if needed removing them should be safe.\nAssigning as and removing .\nAssigning myself to revert on beta (and potentially nightly, depending on if we can get a fix soon).\nDo we know if removing fixes the problem? 
We also could use an MVCE, right?\nMCVE would be good, yes.\nI hit (what seems like) this ICE with in a pretty small experiment, would it be useful to try reducing that?\nYes, it would be -- even if it's a separate issue we can just open that :) MCVE is always useful.\nping cleanup This issue needs an MVCE\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nHere's my nightly MCVE: Interestingly, it goes away when putting all of the code into one crate. Drop for InPlaceDrop { #[inline] fn drop(&mut self) { if mem::needs_drop::() { unsafe { ptr::drop_in_place(slice::from_raw_parts_mut(self.inner, self.len())); } unsafe { ptr::drop_in_place(slice::from_raw_parts_mut(self.inner, self.len())); } } }", "commid": "rust_pr_78854"}], "negative_passages": []} {"query_id": "q-en-rust-193c20fd7306282ed6fdd541a0a32f634e58a136dc76c1cc0800c7848999e583", "query": "dilbert-feed is ICEing on beta with error: internal compiler error: could not fully normalize https://crater-\nFull error log:\nI'll bisect this one\nDoesn't happen on the latest commit by on .\nI can still reproduce with :\nIt may just be an instance of me not doing something right, I used:\ngives me searched toolchains through regression in So this would have regressed in , cc\nThe PR did introduce some additional uses of inside specialized implementations. itself it is a stable function so I am not aware of any restrictions on its use. Perhaps this is some interaction with specialization?\nBoth sites where it is used are of this pattern: They're just there as hints to to let llvm DCE the dropinplace earlier. 
So if needed removing them should be safe.\nAssigning as and removing .\nAssigning myself to revert on beta (and potentially nightly, depending on if we can get a fix soon).\nDo we know if removing fixes the problem? We also could use an MVCE, right?\nMCVE would be good, yes.\nI hit (what seems like) this ICE with in a pretty small experiment, would it be useful to try reducing that?\nYes, it would be -- even if it's a separate issue we can just open that :) MCVE is always useful.\nping cleanup This issue needs an MVCE\nHey Cleanup Crew ICE-breakers! This bug has been identified as a good \"Cleanup ICE-breaking candidate\". In case it's useful, here are some [instructions] for tackling these sorts of bugs. Maybe take a look? Thanks! <3 [instructions]: https://rustc-dev- cc\nHere's my nightly MCVE: Interestingly, it goes away when putting all of the code into one crate. if mem::needs_drop::() { unsafe { ptr::drop_in_place(self.as_mut_slice()); } unsafe { ptr::drop_in_place(self.as_mut_slice()); } self.ptr = self.end; }", "commid": "rust_pr_78854"}], "negative_passages": []} {"query_id": "q-en-rust-b51e50ca449e6df61354ff4f9034ce931cdfa2d63e17a96c0ab3812045854f30", "query": "When panicking, every line of stack traces contain instead of a code location. E.g. I'm on rustc 1.47.0 and FreeBSD 12.1: Possible duplicate of I've tried mounting /proc but it didn't fix the problem.\nCan you test on old Rust stable version to see if this is a regression ? Edit: This is an expected regression\nIt worked on 1.46.0:\nDo you know if there is a build farm for freebsd like gcc build farm ? Because we could fo find from which PR caused the regression. Otherwise, since freebsd is a tier-2 target, not everyone could investigate it. modify labels: T-compiler\nError: Label P-medium can only be set by Rust team members Please let know if you're having trouble with this bot.\nAssigning as discussed as part of the and removing .) 
and removing .\nFor what it's worth, this problem occurs on iOS (x86_64-apple-ios) builds too: 1.47.0: 1.46.0:\nLikely caused by . (switching from libbacktrace to gimli) It seems that is not implemented for FreeBSD: I think this means that it can't find the debuginfo for any libraries and the current executable.\nFor iOS it may or may not work to extend the condition at to include iOS too.\nYou're right! I submitted a PR to fix it for iOS. Thanks a lot!\nYou might want to file another PR to update backtrace-rs submodule in this repo.\nThanks!\nIt looks like , so I think we could just extend to cover FreeBSD\nHi, could you ( or ) use rustup- toolchain-install-master to install and try this build: for freebsd?\nLooks good!\nbacktrace-rs issue: backtrace-rs PR for issue:\nWhat's left to do here? Would a PR that updates the submodule for library/backtrace be sufficient?", "positive_passages": [{"docid": "doc-en-rust-0d93a7641b33864d85ddb44735f88c6e3d47a3c3d6b18a8d3845d76c843baabc", "text": " Subproject commit af078ecc0b069ec594982f92d4c6c58af99efbb5 Subproject commit 710fc18ddcb6c7677b3c96359abb35da37f2a488 ", "commid": "rust_pr_83506"}], "negative_passages": []} {"query_id": "q-en-rust-64425a40f91c9452624f4b4e1343fbb293fe2a7c8a045f326b6f50b811e24bac", "query": "The code that truncates long lines doesn't take (hard) tabs into account, so the lines printed are off. Example code: Gives the following error: If you use four spaces instead of each tab it works as expected: cc and cc $DIR/tabs-trimming.rs:9:16 | LL | ... v @ 1 | 2 | 3 => panic!(\"You gave me too little money {}\", v), // Long text here: TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTT... | - ^ ^ pattern doesn't bind `v` | | | | | pattern doesn't bind `v` | variable not in all patterns error: aborting due to previous error For more information about this error, try `rustc --explain E0408`. 
", "commid": "rust_pr_79757"}], "negative_passages": []} {"query_id": "q-en-rust-e00804d5aa10cec2fc1d584798a5e86795f6be82fbf4786b7443e995f879e2c6", "query": "For details see , opening an issue since PR was closed:\nWhile this one particular test case was fixed, the transformation still suffers from issues hinted at earlier comment: assumes that terminator always switches on a local that was assigned last in the block. The assumption is false. moves a read of a discriminant from the second block into first without ensuring that it holds the same value still. some executions it introduces a read that would not have happen otherwise and could have an undefined behaviour. some executions it skips execution of statements from one the blocks without checking if they are side-effect free. cc\n, EarlyOtherwiseBranch only applies to basic blocks of which the last statement is a discriminant (and the terminator is a switch): And, We could require the discriminant read in the second basic block to be the only statement within the block. Not sure by how much would the change affect optimization hit rate, but at least the example in would still be optimized. What makes me really concerned is this one: When would the first discriminant be safe but not the second? The example you constructed is a legit one: the second read assumes the value of the first read and does a downcast; that is the case my patch catches. The only other possible invalid case I could think of, is that when the second discriminant call's argument is an invalid pointer. Probably something like: But today the MIR that FE emits dodges the problem by accident: Because these two switch instructions have different blocks. So, my guess is, even though EarlyOtherwiseBranch is unsound, today the compiler FE is still unable to produce MIR that could trigger that.\nIf one considers shape of MIR immediately after it has been built, that is probably true. 
One thing to keep in mind: this transformation runs on MIR that has been transformed already. In fact, the whole optimization pipeline might have run an arbitrary number of times as a result of inlining. Transformations shouldn't rely on any invariants that aren't preserved by all transformations. The might be unrelated to a local used in the switch, since this is not checked anywhere. Last time out of curiosity I added an assertion checking that, enabled the transform by default and bootstrapped rustc, and the assertion did fail.\nAh! I misunderstood your point, but now I get it. You are right. I think we could resolve your , , and by adding the following restrictions: discriminant read in the second block must be the only statement in there; switch must use the value of the discriminant read. 
// } // ``` let discr_place = discr_info.place_of_adt_discr_read; let this_discr_place = this_bb_discr_info.place_of_adt_discr_read; if discr_place.local == this_discr_place.local && this_discr_place.projection.starts_with(discr_place.projection) { trace!(\"NO: one target is the projection of another\"); return None; } // if we reach this point, the optimization applies, and we should be able to optimize this case // store the info that is needed to apply the optimization", "commid": "rust_pr_79882"}], "negative_passages": []} {"query_id": "q-en-rust-e00804d5aa10cec2fc1d584798a5e86795f6be82fbf4786b7443e995f879e2c6", "query": "For details see , opening an issue since PR was closed:\nWhile this one particular test case was fixed, the transformation still suffers from issues hinted at earlier comment: assumes that terminator always switches on a local that was assigned last in the block. The assumption is false. moves a read of a discriminant from the second block into first without ensuring that it holds the same value still. some executions it introduces a read that would not have happen otherwise and could have an undefined behaviour. some executions it skips execution of statements from one the blocks without checking if they are side-effect free. cc\n, EarlyOtherwiseBranch only applies to basic blocks of which the last statement is a discriminant (and the terminator is a switch): And, We could require the discriminant read in the second basic block to be the only statement within the block. Not sure by how much would the change affect optimization hit rate, but at least the example in would still be optimized. What makes me really concerned is this one: When would the first discriminant be safe but not the second? The example you constructed is a legit one: the second read assumes the value of the first read and does a downcast; that is the case my patch catches. 
The only other possible invalid case I could think of is when the second discriminant call's argument is an invalid pointer. Probably something like: But today the MIR that the FE emits dodges the problem by accident: Because these two switch instructions have different blocks. So, my guess is, even though EarlyOtherwiseBranch is unsound, today the compiler FE is still unable to produce MIR that could trigger that.\nIf one considers the shape of MIR immediately after it has been built, that is probably true. 
But I found that the same effect of EarlyOtherwiseBranch could be achieved through a much easier change; we could provide a hint for LLVM optimizer to compare the discriminant values before entering the switch, e.g.: With this hint, (unfortunately, isn't enough) produces: IMO this is much more elegant than EarlyOtherwiseBranch; it should also be pretty easy to implement, so I will give it a try.", "positive_passages": [{"docid": "doc-en-rust-4e74fc3423f299eb3a78345a3d8b6f8b915d8d4f1d67dbcd4c26ad80355ca2cd", "text": " // run-pass // compile-flags: -Z mir-opt-level=2 -C opt-level=0 // example from #78496 pub enum E<'a> { Empty, Some(&'a E<'a>), } fn f(e: &E) -> u32 { if let E::Some(E::Some(_)) = e { 1 } else { 2 } } fn main() { assert_eq!(f(&E::Empty), 2); } ", "commid": "rust_pr_79882"}], "negative_passages": []} {"query_id": "q-en-rust-c33a474aa69ac3327d136c1f1482478bc8477b593a385417dcd9564d2a3200ff", "query": " $DIR/yield-outside-generator-issue-78653.rs:4:5 | LL | yield || for i in 0 { } | ^^^^^^^^^^^^^^^^^^^^^^^ error[E0277]: `{integer}` is not an iterator --> $DIR/yield-outside-generator-issue-78653.rs:4:23 | LL | yield || for i in 0 { } | ^ `{integer}` is not an iterator | = help: the trait `Iterator` is not implemented for `{integer}` = note: if you want to iterate between `start` until a value `end`, use the exclusive range syntax `start..end` or the inclusive range syntax `start..=end` = note: required because of the requirements on the impl of `IntoIterator` for `{integer}` = note: required by `into_iter` error: aborting due to 2 previous errors Some errors have detailed explanations: E0277, E0627. For more information about an error, try `rustc --explain E0277`. ", "commid": "rust_pr_82245"}], "negative_passages": []} {"query_id": "q-en-rust-2b21fb3964c1f84d80eeb0aea0fc4164723399941eee440e7231c8f569993c50", "query": "When forgetting to include parenthesis around a tuple pattern in a for loop, the compiler gives a suggestion to use an or pattern. 
I would expect this, a. because it's not what most would expect if you use a comma to separate two values in a pattern, and b. because or patterns in for loops are still an unstable feature. I tried this code: As expected, the compiler gives an , error, because I forgot the parenthesis around . However, the compiler also suggests that I add a \"vertical bar to match on multiple alternatives\". While it does provide the correct answer (adding parenthesis), it also provides the vertical bar option, which is an unstable feature in this . While the RFC does not explicitly single out for loops as a situation where this is necessary, I believe it does apply here, because in (credit /u/ehuss for the example, which is a situation where the or pattern syntax is being used correctly, I get a error, as expected, and including resolves the error. I tested this on stable, beta, and nightly in the rust playground. The output is identical for all 3 of them. $DIR/issue-48492-tuple-destructure-missing-parens.rs:67:10 | LL | for x, _barr_body in women.iter().map(|woman| woman.allosomes.clone()) { | ^ | help: try adding parentheses to match on a tuple... | LL | for (x, _barr_body) in women.iter().map(|woman| woman.allosomes.clone()) { | ^^^^^^^^^^^^^^^ help: ...or a vertical bar to match on multiple alternatives | LL | for x | _barr_body in women.iter().map(|woman| woman.allosomes.clone()) { | ^^^^^^^^^^^^^^ | -^----------- help: try adding parentheses to match on a tuple: `(x, _barr_body)` error: unexpected `,` in pattern --> $DIR/issue-48492-tuple-destructure-missing-parens.rs:75:10 | LL | for x, y @ Allosome::Y(_) in men.iter().map(|man| man.allosomes.clone()) { | ^ | help: try adding parentheses to match on a tuple... 
| LL | for (x, y @ Allosome::Y(_)) in men.iter().map(|man| man.allosomes.clone()) { | ^^^^^^^^^^^^^^^^^^^^^^^ help: ...or a vertical bar to match on multiple alternatives | LL | for x | y @ Allosome::Y(_) in men.iter().map(|man| man.allosomes.clone()) { | ^^^^^^^^^^^^^^^^^^^^^^ | -^------------------- help: try adding parentheses to match on a tuple: `(x, y @ Allosome::Y(_))` error: unexpected `,` in pattern --> $DIR/issue-48492-tuple-destructure-missing-parens.rs:84:14 | LL | let women, men: (Vec, Vec) = genomes.iter().cloned() | ^ | help: try adding parentheses to match on a tuple... | LL | let (women, men): (Vec, Vec) = genomes.iter().cloned() | ^^^^^^^^^^^^ help: ...or a vertical bar to match on multiple alternatives | LL | let women | men: (Vec, Vec) = genomes.iter().cloned() | ^^^^^^^^^^^ | -----^---- help: try adding parentheses to match on a tuple: `(women, men)` error: aborting due to 6 previous errors", "commid": "rust_pr_79364"}], "negative_passages": []} {"query_id": "q-en-rust-1a430504c5ba70000cbfc172ef57c7b3ea9b752163320b7dd942ca86605edc3d", "query": " $DIR/issue-79690.rs:29:1 | LL | const G: Fat = unsafe { Transmute { t: FOO }.u }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered read of part of a pointer at .1..size.foo | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior. error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. 
", "commid": "rust_pr_80900"}], "negative_passages": []} {"query_id": "q-en-rust-1a430504c5ba70000cbfc172ef57c7b3ea9b752163320b7dd942ca86605edc3d", "query": " $DIR/issue-79690.rs:29:1 | LL | const G: Fat = unsafe { Transmute { t: FOO }.u }; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered read of part of a pointer at .1..size.foo | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type validation failed: encountered (potentially part of) a pointer at .1..size.foo, but expected plain (non-pointer) bytes | = note: The rules on what exactly is undefined behavior aren't clear, so this check might be overzealous. Please open an issue on the rustc repository if you believe it should not be considered undefined behavior.", "commid": "rust_pr_82061"}], "negative_passages": []} {"query_id": "q-en-rust-0608fe214b08fd7ed733a9ae968f8b1fba24ed731d80a9e9642aa1f8851daf4c", "query": "is not actually allfalse - will return . It would be nice to rename it to at some point (and maybe add while we're at it). cc Originally posted by in $DIR/syntax-error-recovery.rs:7:26 | LL | $token $($inner)? = $value, | ^^^^^^ expected one of `(`, `,`, `=`, `{`, or `}` ... LL | values!(STRING(1) as (String) => cfg(test),); | -------------------------------------------- in this macro invocation | = note: this error originates in the macro `values` (in Nightly builds, run with -Z macro-backtrace for more info) error: macro expansion ignores token `(String)` and any following --> $DIR/syntax-error-recovery.rs:7:26 | LL | $token $($inner)? = $value, | ^^^^^^ ... 
LL | values!(STRING(1) as (String) => cfg(test),); | -------------------------------------------- caused by the macro expansion here | = note: the usage of `values!` is likely invalid in item context error: expected one of `!` or `::`, found `` --> $DIR/syntax-error-recovery.rs:15:9 | LL | values!(STRING(1) as (String) => cfg(test),); | ^^^^^^ expected one of `!` or `::` error: aborting due to 3 previous errors ", "commid": "rust_pr_100250"}], "negative_passages": []} {"query_id": "q-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "query": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "positive_passages": [{"docid": "doc-en-rust-25270aca1f3681ea1605bf0c99c3598420823e4f3f32a1de83ae767833d24b21", "text": "Some(Node::Item(it)) => item_scope_tag(&it), Some(Node::TraitItem(it)) => trait_item_scope_tag(&it), Some(Node::ImplItem(it)) => impl_item_scope_tag(&it), Some(Node::ForeignItem(it)) => foreign_item_scope_tag(&it), _ => unreachable!(), }; let (prefix, span) = match *region {", "commid": "rust_pr_80548"}], "negative_passages": []} {"query_id": "q-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "query": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. 
Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "positive_passages": [{"docid": "doc-en-rust-9d0d990273742042940e8f23b90abf665ab60b6ba1ba128606540d4e1bbc0770", "text": "} } fn foreign_item_scope_tag(item: &hir::ForeignItem<'_>) -> &'static str { match item.kind { hir::ForeignItemKind::Fn(..) => \"method body\", hir::ForeignItemKind::Static(..) | hir::ForeignItemKind::Type => \"associated item\", } } fn explain_span(tcx: TyCtxt<'tcx>, heading: &str, span: Span) -> (String, Option) { let lo = tcx.sess.source_map().lookup_char_pos(span.lo()); (format!(\"the {} at {}:{}\", heading, lo.line, lo.col.to_usize() + 1), Some(span))", "commid": "rust_pr_80548"}], "negative_passages": []} {"query_id": "q-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "query": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "positive_passages": [{"docid": "doc-en-rust-6b375d7e2dc4fcae96d7559dda8d9fd47be427fdf85016198ea0367f85de3e20", "text": "let fcx = FnCtxt::new(&inh, param_env, id); if !inh.tcx.features().trivial_bounds { // As predicates are cached rather than obligations, this // needsto be called first so that they are checked with an // needs to be called first so that they are checked with an // empty `param_env`. 
check_false_global_bounds(&fcx, span, id); }", "commid": "rust_pr_80548"}], "negative_passages": []} {"query_id": "q-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "query": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "positive_passages": [{"docid": "doc-en-rust-077b0797a955bd8741c7f84bcc61f044b4c6c2a2e13b9b9eebb07a3a24284ac8", "text": " // Regression test for #80468. #![crate_type = \"lib\"] pub trait Trait {} #[repr(transparent)] pub struct Wrapper(T); #[repr(transparent)] pub struct Ref<'a>(&'a u8); impl Trait for Ref {} //~ ERROR: implicit elided lifetime not allowed here extern \"C\" { pub fn repro(_: Wrapper); //~ ERROR: mismatched types } ", "commid": "rust_pr_80548"}], "negative_passages": []} {"query_id": "q-en-rust-7c709c2f5ae2ada94580a48e8d8ea5a7ce339728a9d7168b9b77fe08cb0eac1d", "query": "Repro: As of current nightly: This is the unreachable line: The same crash reproduces back to 1.47.0. 
Older stable compilers do not crash:\nBisect: searched nightlies: from nightly-2020-07-10 to nightly-2020-08-27 regressed nightly: nightly-2020-07-22 searched commits: from to regressed commit: () Mentioning because appears the most relevant.\nAssigning .", "positive_passages": [{"docid": "doc-en-rust-e7ac36f4474f8c909549840bc2c12bdee67bc0ad3625456d3dec69799d0119bb", "text": " error[E0726]: implicit elided lifetime not allowed here --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:13:16 | LL | impl Trait for Ref {} | ^^^- help: indicate the anonymous lifetime: `<'_>` error[E0308]: mismatched types --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:16:21 | LL | pub fn repro(_: Wrapper); | ^^^^^^^^^^^^ lifetime mismatch | = note: expected trait `Trait` found trait `Trait` note: the anonymous lifetime #1 defined on the method body at 16:5... --> $DIR/wf-in-foreign-fn-decls-issue-80468.rs:16:5 | LL | pub fn repro(_: Wrapper); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = note: ...does not necessarily outlive the static lifetime error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_80548"}], "negative_passages": []} {"query_id": "q-en-rust-65e2f53ee81f8a664977437d82a2bf7f20b4da7e87ff115287568c31bbede52f", "query": "When you write a mod with a capital name of a keyword, we suggest changing it to the lowercase version of the keyword, which is, well, a keyword and not going to work. We should suggest converting the identifier to snake case and making it a raw identifier, or maybe suggest switching to an altogether separate name? 
$DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:12:9 | LL | let Mod: usize = 0; | ^^^ help: if this is intentional, prefix it with an underscore: `_Mod` | note: the lint level is defined here --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:1:9 | LL | #![warn(unused)] | ^^^^^^ = note: `#[warn(unused_variables)]` implied by `#[warn(unused)]` warning: unused variable: `Super` --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:16:9 | LL | let Super: usize = 0; | ^^^^^ help: if this is intentional, prefix it with an underscore: `_Super` error: module `Impl` should have a snake case name --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:5:5 | LL | mod Impl {} | ^^^^ | note: the lint level is defined here --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:3:9 | LL | #![deny(non_snake_case)] | ^^^^^^^^^^^^^^ help: rename the identifier or convert it to a snake case raw identifier | LL | mod r#impl {} | ^^^^^^ error: function `While` should have a snake case name --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:8:4 | LL | fn While() {} | ^^^^^ | help: rename the identifier or convert it to a snake case raw identifier | LL | fn r#while() {} | ^^^^^^^ error: variable `Mod` should have a snake case name --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:12:9 | LL | let Mod: usize = 0; | ^^^ | help: rename the identifier or convert it to a snake case raw identifier | LL | let r#mod: usize = 0; | ^^^^^ error: variable `Super` should have a snake case name --> $DIR/lint-non-snake-case-identifiers-suggestion-reserved.rs:16:9 | LL | let Super: usize = 0; | ^^^^^ help: rename the identifier | = note: `super` cannot be used as a raw identifier error: aborting due to 4 previous errors; 2 warnings emitted ", "commid": "rust_pr_80592"}], "negative_passages": []} {"query_id": "q-en-rust-65731bb5d2f3fe99c4c9687e1bcda2374b2d6ffb629845f60d98a5fab10f516a", "query": " $DIR/issue-75883.rs:6:21 | LL 
| pub fn run() -> Result<_> { | ^^^^^^ - supplied 1 type argument | | | expected 2 type arguments | note: enum defined here, with 2 type parameters: `T`, `E` --> $SRC_DIR/core/src/result.rs:LL:COL | LL | pub enum Result { | ^^^^^^ - - help: add missing type argument | LL | pub fn run() -> Result<_, E> { | ^^^ error[E0107]: this enum takes 2 type arguments but only 1 type argument was supplied --> $DIR/issue-75883.rs:15:35 | LL | pub fn interact(&mut self) -> Result<_> { | ^^^^^^ - supplied 1 type argument | | | expected 2 type arguments | note: enum defined here, with 2 type parameters: `T`, `E` --> $SRC_DIR/core/src/result.rs:LL:COL | LL | pub enum Result { | ^^^^^^ - - help: add missing type argument | LL | pub fn interact(&mut self) -> Result<_, E> { | ^^^ error[E0121]: the type placeholder `_` is not allowed within types on item signatures --> $DIR/issue-75883.rs:15:42 | LL | pub fn interact(&mut self) -> Result<_> { | ^ not allowed in type signatures error[E0121]: the type placeholder `_` is not allowed within types on item signatures --> $DIR/issue-75883.rs:6:28 | LL | pub fn run() -> Result<_> { | ^ not allowed in type signatures error: aborting due to 4 previous errors Some errors have detailed explanations: E0107, E0121. For more information about an error, try `rustc --explain E0107`. ", "commid": "rust_pr_84646"}], "negative_passages": []} {"query_id": "q-en-rust-65731bb5d2f3fe99c4c9687e1bcda2374b2d6ffb629845f60d98a5fab10f516a", "query": " $DIR/issue-80779.rs:10:28 | LL | pub fn g(_: T<'static>) -> _ {} | ^ | | | not allowed in type signatures | help: replace with the correct return type: `()` error[E0121]: the type placeholder `_` is not allowed within types on item signatures --> $DIR/issue-80779.rs:5:29 | LL | pub fn f<'a>(val: T<'a>) -> _ { | ^ | | | not allowed in type signatures | help: replace with the correct return type: `()` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0121`. 
", "commid": "rust_pr_84646"}], "negative_passages": []} {"query_id": "q-en-rust-8b0125de1cc299f933fe5b7f70dd3f5512334973a1d755f2064b11b62d253f83", "query": "As it is currently written, E0207 talks entirely about type parameters. Now that const parameters also need to constrain implementations (under the same rules described for types), they also need to be mentioned in the error explanation. Since other items don't require const parameters to be used at all, we might also want to specifically call out this difference here. Furthermore, E0207 triggers if you use an unconstrained lifetime in an associated type, and this is completely unspecified by the error as well. $DIR/bound-suggestions.rs:44:46 | LL | const SIZE: usize = core::mem::size_of::(); | ^^^^ doesn't have a size known at compile-time | ::: $SRC_DIR/core/src/mem/mod.rs:LL:COL | LL | pub const fn size_of() -> usize { | - required by this bound in `std::mem::size_of` | help: consider further restricting `Self` | LL | trait Foo: Sized { | ^^^^^^^ error[E0277]: the size for values of type `Self` cannot be known at compilation time --> $DIR/bound-suggestions.rs:49:46 | LL | const SIZE: usize = core::mem::size_of::(); | ^^^^ doesn't have a size known at compile-time | ::: $SRC_DIR/core/src/mem/mod.rs:LL:COL | LL | pub const fn size_of() -> usize { | - required by this bound in `std::mem::size_of` | help: consider further restricting `Self` | LL | trait Bar: std::fmt::Display + Sized { | ^^^^^^^ error[E0277]: the size for values of type `Self` cannot be known at compilation time --> $DIR/bound-suggestions.rs:54:46 | LL | const SIZE: usize = core::mem::size_of::(); | ^^^^ doesn't have a size known at compile-time | ::: $SRC_DIR/core/src/mem/mod.rs:LL:COL | LL | pub const fn size_of() -> usize { | - required by this bound in `std::mem::size_of` | help: consider further restricting `Self` | LL | trait Baz: Sized where Self: std::fmt::Display { | ^^^^^^^ error[E0277]: the size for values of type `Self` cannot be 
known at compilation time --> $DIR/bound-suggestions.rs:59:46 | LL | const SIZE: usize = core::mem::size_of::(); | ^^^^ doesn't have a size known at compile-time | ::: $SRC_DIR/core/src/mem/mod.rs:LL:COL | LL | pub const fn size_of() -> usize { | - required by this bound in `std::mem::size_of` | help: consider further restricting `Self` | LL | trait Qux: Sized where Self: std::fmt::Display { | ^^^^^^^ error[E0277]: the size for values of type `Self` cannot be known at compilation time --> $DIR/bound-suggestions.rs:64:46 | LL | const SIZE: usize = core::mem::size_of::(); | ^^^^ doesn't have a size known at compile-time | ::: $SRC_DIR/core/src/mem/mod.rs:LL:COL | LL | pub const fn size_of() -> usize { | - required by this bound in `std::mem::size_of` | help: consider further restricting `Self` | LL | trait Bat: std::fmt::Display + Sized { | ^^^^^^^ error: aborting due to 11 previous errors For more information about this error, try `rustc --explain E0277`.", "commid": "rust_pr_81195"}], "negative_passages": []} {"query_id": "q-en-rust-e2ace22260b74516fbc755ffdb010cc809dd15bd5335cb5346fad0307d8fe994", "query": "Rustdoc on [nightly documents][1] methods have signature of as while it only has to document it as . ! On [rustdoc stable][2], things are normal: ! 
: Version 1.49.0 ( 2020-12-29) [1]: [2]:\nI'd like to investigate this :)\nsearched nightlies: from nightly-2020-12-29 to nightly-2021-01-22 regressed nightly: nightly-2021-01-09 searched commits: from to regressed commit: This regressed in , so this is probably caused by\nThis also affects Where will generate:\nSo a few observations: which displays mutability in patterns, causing to be displayed the mechanism for detecting and eliding works by , but as returns instead of , the comparison fails and displays the type.\nI see two path to solving this bug: an option for to not display the mutability in patterns revert cc what do you think?\nI would prefer to go for 1, there's no reason to have two completely different functions for pretty-printing.\nAssigning and removing as in the prioritization working group.\n(now a beta regression)", "positive_passages": [{"docid": "doc-en-rust-7d0824eebcaaecc09fce7694c5e543656c144b2aa2d7352d81bc5a7e8e0380c6", "text": ".iter() .enumerate() .map(|(i, ty)| Argument { name: Symbol::intern(&rustc_hir_pretty::param_to_string(&body.params[i])), name: name_from_pat(&body.params[i].pat), type_: ty.clean(cx), }) .collect(),", "commid": "rust_pr_81831"}], "negative_passages": []} {"query_id": "q-en-rust-e2ace22260b74516fbc755ffdb010cc809dd15bd5335cb5346fad0307d8fe994", "query": "Rustdoc on [nightly documents][1] methods have signature of as while it only has to document it as . ! On [rustdoc stable][2], things are normal: ! 
: Version 1.49.0 ( 2020-12-29) [1]: [2]:\nI'd like to investigate this :)\nsearched nightlies: from nightly-2020-12-29 to nightly-2021-01-22 regressed nightly: nightly-2021-01-09 searched commits: from to regressed commit: This regressed in , so this is probably caused by\nThis also affects Where will generate:\nSo a few observations: which displays mutability in patterns, causing to be displayed the mechanism for detecting and eliding works by , but as returns instead of , the comparison fails and displays the type.\nI see two path to solving this bug: an option for to not display the mutability in patterns revert cc what do you think?\nI would prefer to go for 1, there's no reason to have two completely different functions for pretty-printing.\nAssigning and removing as in the prioritization working group.\n(now a beta regression)", "positive_passages": [{"docid": "doc-en-rust-1807955f33e289ce51071900b664294b03805e2eb4bddd9d13beca5f7d5f6f69", "text": "Path { global: path.global, res: path.res, segments } } crate fn qpath_to_string(p: &hir::QPath<'_>) -> String { let segments = match *p { hir::QPath::Resolved(_, ref path) => &path.segments, hir::QPath::TypeRelative(_, ref segment) => return segment.ident.to_string(), hir::QPath::LangItem(lang_item, ..) => return lang_item.name().to_string(), }; let mut s = String::new(); for (i, seg) in segments.iter().enumerate() { if i > 0 { s.push_str(\"::\"); } if seg.ident.name != kw::PathRoot { s.push_str(&seg.ident.as_str()); } } s } crate fn build_deref_target_impls(cx: &DocContext<'_>, items: &[Item], ret: &mut Vec) { let tcx = cx.tcx;", "commid": "rust_pr_81831"}], "negative_passages": []} {"query_id": "q-en-rust-e2ace22260b74516fbc755ffdb010cc809dd15bd5335cb5346fad0307d8fe994", "query": "Rustdoc on [nightly documents][1] methods have signature of as while it only has to document it as . ! On [rustdoc stable][2], things are normal: ! 
: Version 1.49.0 ( 2020-12-29) [1]: [2]:\nI'd like to investigate this :)\nsearched nightlies: from nightly-2020-12-29 to nightly-2021-01-22 regressed nightly: nightly-2021-01-09 searched commits: from to regressed commit: This regressed in , so this is probably caused by\nThis also affects Where will generate:\nSo a few observations: which displays mutability in patterns, causing to be displayed the mechanism for detecting and eliding works by , but as returns instead of , the comparison fails and displays the type.\nI see two path to solving this bug: an option for to not display the mutability in patterns revert cc what do you think?\nI would prefer to go for 1, there's no reason to have two completely different functions for pretty-printing.\nAssigning and removing as in the prioritization working group.\n(now a beta regression)", "positive_passages": [{"docid": "doc-en-rust-1678d3b7f120953fd4135246d2c4cc0b6ba980e6ceca37e8948c576dda0ce7cd", "text": "} } crate fn name_from_pat(p: &hir::Pat<'_>) -> Symbol { use rustc_hir::*; debug!(\"trying to get a name from pattern: {:?}\", p); Symbol::intern(&match p.kind { PatKind::Wild => return kw::Underscore, PatKind::Binding(_, _, ident, _) => return ident.name, PatKind::TupleStruct(ref p, ..) | PatKind::Path(ref p) => qpath_to_string(p), PatKind::Struct(ref name, ref fields, etc) => format!( \"{} {{ {}{} }}\", qpath_to_string(name), fields .iter() .map(|fp| format!(\"{}: {}\", fp.ident, name_from_pat(&fp.pat))) .collect::>() .join(\", \"), if etc { \", ..\" } else { \"\" } ), PatKind::Or(ref pats) => pats .iter() .map(|p| name_from_pat(&**p).to_string()) .collect::>() .join(\" | \"), PatKind::Tuple(ref elts, _) => format!( \"({})\", elts.iter() .map(|p| name_from_pat(&**p).to_string()) .collect::>() .join(\", \") ), PatKind::Box(ref p) => return name_from_pat(&**p), PatKind::Ref(ref p, _) => return name_from_pat(&**p), PatKind::Lit(..) 
=> { warn!( \"tried to get argument name from PatKind::Lit, which is silly in function arguments\" ); return Symbol::intern(\"()\"); } PatKind::Range(..) => return kw::Underscore, PatKind::Slice(ref begin, ref mid, ref end) => { let begin = begin.iter().map(|p| name_from_pat(&**p).to_string()); let mid = mid.as_ref().map(|p| format!(\"..{}\", name_from_pat(&**p))).into_iter(); let end = end.iter().map(|p| name_from_pat(&**p).to_string()); format!(\"[{}]\", begin.chain(mid).chain(end).collect::>().join(\", \")) } }) } crate fn print_const(cx: &DocContext<'_>, n: &'tcx ty::Const<'_>) -> String { match n.val { ty::ConstKind::Unevaluated(def, _, promoted) => {", "commid": "rust_pr_81831"}], "negative_passages": []} {"query_id": "q-en-rust-e2ace22260b74516fbc755ffdb010cc809dd15bd5335cb5346fad0307d8fe994", "query": "Rustdoc on [nightly documents][1] methods have signature of as while it only has to document it as . ! On [rustdoc stable][2], things are normal: ! : Version 1.49.0 ( 2020-12-29) [1]: [2]:\nI'd like to investigate this :)\nsearched nightlies: from nightly-2020-12-29 to nightly-2021-01-22 regressed nightly: nightly-2021-01-09 searched commits: from to regressed commit: This regressed in , so this is probably caused by\nThis also affects Where will generate:\nSo a few observations: which displays mutability in patterns, causing to be displayed the mechanism for detecting and eliding works by , but as returns instead of , the comparison fails and displays the type.\nI see two path to solving this bug: an option for to not display the mutability in patterns revert cc what do you think?\nI would prefer to go for 1, there's no reason to have two completely different functions for pretty-printing.\nAssigning and removing as in the prioritization working group.\n(now a beta regression)", "positive_passages": [{"docid": "doc-en-rust-922ea34f9e71c5e08bf2a47a307c03a2d8436c75b3c239348e8ecf173251848c", "text": " // Rustdoc shouldn't display `mut` in function arguments, 
which are // implementation details. Regression test for #81289. #![crate_name = \"foo\"] pub struct Foo; // @count foo/struct.Foo.html '//*[@class=\"impl-items\"]//*[@class=\"method\"]' 2 // @!has - '//*[@class=\"impl-items\"]//*[@class=\"method\"]' 'mut' impl Foo { pub fn foo(mut self) {} pub fn bar(mut bar: ()) {} } // @count foo/fn.baz.html '//*[@class=\"rust fn\"]' 1 // @!has - '//*[@class=\"rust fn\"]' 'mut' pub fn baz(mut foo: Foo) {} ", "commid": "rust_pr_81831"}], "negative_passages": []} {"query_id": "q-en-rust-e2ace22260b74516fbc755ffdb010cc809dd15bd5335cb5346fad0307d8fe994", "query": "Rustdoc on [nightly documents][1] methods have signature of as while it only has to document it as . ! On [rustdoc stable][2], things are normal: ! : Version 1.49.0 ( 2020-12-29) [1]: [2]:\nI'd like to investigate this :)\nsearched nightlies: from nightly-2020-12-29 to nightly-2021-01-22 regressed nightly: nightly-2021-01-09 searched commits: from to regressed commit: This regressed in , so this is probably caused by\nThis also affects Where will generate:\nSo a few observations: which displays mutability in patterns, causing to be displayed the mechanism for detecting and eliding works by , but as returns instead of , the comparison fails and displays the type.\nI see two path to solving this bug: an option for to not display the mutability in patterns revert cc what do you think?\nI would prefer to go for 1, there's no reason to have two completely different functions for pretty-printing.\nAssigning and removing as in the prioritization working group.\n(now a beta regression)", "positive_passages": [{"docid": "doc-en-rust-dd2b18505fa76b7eff4dcf0a5fae5fcaa88ff821eb49da5938dc2ac2094f15ce", "text": "#![crate_name = \"foo\"] // @has foo/fn.f.html // @has - '//*[@class=\"rust fn\"]' 'pub fn f(0u8 ...255: u8)' // @has - '//*[@class=\"rust fn\"]' 'pub fn f(_: u8)' pub fn f(0u8...255: u8) {}", "commid": "rust_pr_81831"}], "negative_passages": []} {"query_id": 
"q-en-rust-859413bda82fb182952ae7105a8ece5d9e4938237cf91d60fc2b4a4bbde65e14", "query": "Since a few weeks we are getting an unrelated failure in a ui test where the order of two diagnostics changes. It is not entirely random, since it only changes if there are compiler changes, but we should still find and fix it use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::stable_map::StableMap; use rustc_span::symbol::{sym, Symbol}; use std::lazy::SyncLazy;", "commid": "rust_pr_81393"}], "negative_passages": []} {"query_id": "q-en-rust-859413bda82fb182952ae7105a8ece5d9e4938237cf91d60fc2b4a4bbde65e14", "query": "Since a few weeks we are getting an unrelated failure in a ui test where the order of two diagnostics changes. It is not entirely random, since it only changes if there are compiler changes, but we should still find and fix it ( pub static WEAK_ITEMS_REFS: SyncLazy> = SyncLazy::new(|| { let mut map = FxHashMap::default(); pub static WEAK_ITEMS_REFS: SyncLazy> = SyncLazy::new(|| { let mut map = StableMap::default(); $(map.insert(sym::$name, LangItem::$item);)* map });", "commid": "rust_pr_81393"}], "negative_passages": []} {"query_id": "q-en-rust-859413bda82fb182952ae7105a8ece5d9e4938237cf91d60fc2b4a4bbde65e14", "query": "Since a few weeks we are getting an unrelated failure in a ui test where the order of two diagnostics changes. 
It is not entirely random, since it only changes if there are compiler changes, but we should still find and fix it for (name, &item) in WEAK_ITEMS_REFS.iter() { for (name, item) in WEAK_ITEMS_REFS.clone().into_sorted_vector().into_iter() { if missing.contains(&item) && required(tcx, item) && items.require(item).is_err() { if item == LangItem::PanicImpl { tcx.sess.err(\"`#[panic_handler]` function required, but not found\");", "commid": "rust_pr_81393"}], "negative_passages": []} {"query_id": "q-en-rust-f84a7e3a050f4b50aa7670e065448f005a07288f6beea9c2edc9a51e8b1c1a0f", "query": "The following comment causes a panic in jsondocck currently: Expected: Pretty error like 'Invalid split pattern on line X' Current: Giant stack trace, only obvious if you know what that line is doing. Opening this for tracking and so I don't forget to fix it $DIR/drop-elaboration-after-borrowck-error.rs:7:5 | LL | a[0] = String::new(); | ^^^^ | | | statics cannot evaluate destructors | value is dropped here error[E0493]: destructors cannot be evaluated at compile-time --> $DIR/drop-elaboration-after-borrowck-error.rs:5:9 | LL | let a: [String; 1]; | ^ statics cannot evaluate destructors ... LL | }; | - value is dropped here error[E0381]: use of possibly-uninitialized variable: `a` --> $DIR/drop-elaboration-after-borrowck-error.rs:7:5 | LL | a[0] = String::new(); | ^^^^ use of possibly-uninitialized `a` error[E0493]: destructors cannot be evaluated at compile-time --> $DIR/drop-elaboration-after-borrowck-error.rs:18:9 | LL | self.0[0] = other; | ^^^^^^^^^ | | | constant functions cannot evaluate destructors | value is dropped here error[E0493]: destructors cannot be evaluated at compile-time --> $DIR/drop-elaboration-after-borrowck-error.rs:16:13 | LL | let _this = self; | ^^^^^ constant functions cannot evaluate destructors ... 
LL | } | - value is dropped here error[E0382]: use of moved value: `self.0` --> $DIR/drop-elaboration-after-borrowck-error.rs:18:9 | LL | pub const fn f(mut self, other: T) -> Self { | -------- move occurs because `self` has type `B`, which does not implement the `Copy` trait LL | let _this = self; | ---- value moved here LL | LL | self.0[0] = other; | ^^^^^^^^^ value used here after move error: aborting due to 6 previous errors Some errors have detailed explanations: E0381, E0382, E0493. For more information about an error, try `rustc --explain E0381`. ", "commid": "rust_pr_92207"}], "negative_passages": []} {"query_id": "q-en-rust-9dc03a4c62f1938a774e09cd3cb0ec5a928b66469c31ded0c01a9c5dab9ce510", "query": " $DIR/issue-81712-cyclic-traits.rs:14:10 | LL | type DType: D; | ^^^^^ expected 1 type argument | note: associated type defined here, with 1 type parameter: `T` --> $DIR/issue-81712-cyclic-traits.rs:14:10 | LL | type DType: D; | ^^^^^ - help: use angle brackets to add missing type argument | LL | type DType: D; | ^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0107`. 
", "commid": "rust_pr_82752"}], "negative_passages": []} {"query_id": "q-en-rust-123f16fea4d7b4846cee83d67b7d2469c0c7662521995b9841cb26e5f66df367", "query": " $DIR/issue-81827.rs:11:27 | LL | fn r()->i{0|{#[cfg(r(0{]0 | - - ^ | | | | | unclosed delimiter | unclosed delimiter error: this file contains an unclosed delimiter --> $DIR/issue-81827.rs:11:27 | LL | fn r()->i{0|{#[cfg(r(0{]0 | - - ^ | | | | | unclosed delimiter | unclosed delimiter error: mismatched closing delimiter: `]` --> $DIR/issue-81827.rs:11:23 | LL | fn r()->i{0|{#[cfg(r(0{]0 | - ^^ mismatched closing delimiter | | | | | unclosed delimiter | closing delimiter possibly meant for this error: expected one of `)` or `,`, found `{` --> $DIR/issue-81827.rs:11:23 | LL | fn r()->i{0|{#[cfg(r(0{]0 | ^ expected one of `)` or `,` error: aborting due to 4 previous errors ", "commid": "rust_pr_97220"}], "negative_passages": []} {"query_id": "q-en-rust-8844966a2258871275a3ba4e33da9bf8584479437b8a26c95dc7d3b556b0b951", "query": " $DIR/issue-81839.rs:11:14 | LL | / match num { LL | | 1 => { LL | | cx.answer_str(\"hi\"); | | -------------------- | | | | | | | help: consider removing this semicolon | | this is found to be of type `()` LL | | } LL | | _ => cx.answer_str(\"hi\"), | | ^^^^^^^^^^^^^^^^^^^ expected `()`, found opaque type LL | | } | |_____- `match` arms have incompatible types | ::: $DIR/auxiliary/issue-81839.rs:6:49 | LL | pub async fn answer_str(&self, _s: &str) -> Test { | ---- the `Output` of this `async fn`'s found opaque type | = note: expected type `()` found opaque type `impl Future` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. 
", "commid": "rust_pr_81991"}], "negative_passages": []} {"query_id": "q-en-rust-8844966a2258871275a3ba4e33da9bf8584479437b8a26c95dc7d3b556b0b951", "query": " $DIR/match-prev-arm-needing-semi.rs:39:18", "commid": "rust_pr_81991"}], "negative_passages": []} {"query_id": "q-en-rust-1a56168e9b7d71d8c5475937160e078cdfcd893d15b2024cf0fd63b20ee6dc7e", "query": "We should suggest the use of when writing :\nIs there any precedent for the compiler suggesting an external crate?\nYes, for async traits the compiler suggests to use :\nAs pointed out, we do for . We normally wouldn't recommend one crate over another (\"don't play favorites\"), but for small, single use, specialized crates that fill a common language \"hole\" that we might be working towards \"plugging\", I think it is a much better experience to immediately unblock newcomers than send them to a user forum to ask how to get around the limitation. The current error gives some idea of how to fix this, but not nearly enough for an even moderately knowledgeable Rust user to fix the problem without some searching. That being said, thanks for double checking and mildly pushing back on a potentially controversial and \"uncommon\" request. 
That is important for the health of the project.", "positive_passages": [{"docid": "doc-en-rust-0c69e660af50102d233fd2d5a8aba1f99ef3893440f57fbaa80a69b481851a66", "text": "struct_span_err!(tcx.sess, span, E0733, \"recursion in an `async fn` requires boxing\") .span_label(span, \"recursive `async fn`\") .note(\"a recursive `async fn` must be rewritten to return a boxed `dyn Future`\") .note( \"consider using the `async_recursion` crate: https://crates.io/crates/async_recursion\", ) .emit(); }", "commid": "rust_pr_81926"}], "negative_passages": []} {"query_id": "q-en-rust-1a56168e9b7d71d8c5475937160e078cdfcd893d15b2024cf0fd63b20ee6dc7e", "query": "We should suggest the use of when writing :\nIs there any precedent for the compiler suggesting an external crate?\nYes, for async traits the compiler suggests to use :\nAs pointed out, we do for . We normally wouldn't recommend one crate over another (\"don't play favorites\"), but for small, single use, specialized crates that fill a common language \"hole\" that we might be working towards \"plugging\", I think it is a much better experience to immediately unblock newcomers than send them to a user forum to ask how to get around the limitation. The current error gives some idea of how to fix this, but not nearly enough for an even moderately knowledgeable Rust user to fix the problem without some searching. That being said, thanks for double checking and mildly pushing back on a potentially controversial and \"uncommon\" request. 
That is important for the health of the project.", "positive_passages": [{"docid": "doc-en-rust-0d2f8f700715bd06f59d2e498ef126cfecc20a1b4342be00fcaaee349667125f", "text": "| ^ recursive `async fn` | = note: a recursive `async fn` must be rewritten to return a boxed `dyn Future` = note: consider using the `async_recursion` crate: https://crates.io/crates/async_recursion error[E0733]: recursion in an `async fn` requires boxing --> $DIR/mutually-recursive-async-impl-trait-type.rs:9:18", "commid": "rust_pr_81926"}], "negative_passages": []} {"query_id": "q-en-rust-1a56168e9b7d71d8c5475937160e078cdfcd893d15b2024cf0fd63b20ee6dc7e", "query": "We should suggest the use of when writing :\nIs there any precedent for the compiler suggesting an external crate?\nYes, for async traits the compiler suggests to use :\nAs pointed out, we do for . We normally wouldn't recommend one crate over another (\"don't play favorites\"), but for small, single use, specialized crates that fill a common language \"hole\" that we might be working towards \"plugging\", I think it is a much better experience to immediately unblock newcomers than send them to a user forum to ask how to get around the limitation. The current error gives some idea of how to fix this, but not nearly enough for an even moderately knowledgeable Rust user to fix the problem without some searching. That being said, thanks for double checking and mildly pushing back on a potentially controversial and \"uncommon\" request. 
That is important for the health of the project.", "positive_passages": [{"docid": "doc-en-rust-9e63504d6c47f4bd211b71383a220005bfe1c00b3e64d06f0c9f1ddbf69457c9", "text": "| ^ recursive `async fn` | = note: a recursive `async fn` must be rewritten to return a boxed `dyn Future` = note: consider using the `async_recursion` crate: https://crates.io/crates/async_recursion error: aborting due to 2 previous errors", "commid": "rust_pr_81926"}], "negative_passages": []} {"query_id": "q-en-rust-1a56168e9b7d71d8c5475937160e078cdfcd893d15b2024cf0fd63b20ee6dc7e", "query": "We should suggest the use of when writing :\nIs there any precedent for the compiler suggesting an external crate?\nYes, for async traits the compiler suggests to use :\nAs pointed out, we do for . We normally wouldn't recommend one crate over another (\"don't play favorites\"), but for small, single use, specialized crates that fill a common language \"hole\" that we might be working towards \"plugging\", I think it is a much better experience to immediately unblock newcomers than send them to a user forum to ask how to get around the limitation. The current error gives some idea of how to fix this, but not nearly enough for an even moderately knowledgeable Rust user to fix the problem without some searching. That being said, thanks for double checking and mildly pushing back on a potentially controversial and \"uncommon\" request. 
That is important for the health of the project.", "positive_passages": [{"docid": "doc-en-rust-2fcceb5447f6b6b469eda22d0b83594b28d96cab9c645fd2d5fdae7c1cabd2ad", "text": "| ^^ recursive `async fn` | = note: a recursive `async fn` must be rewritten to return a boxed `dyn Future` = note: consider using the `async_recursion` crate: https://crates.io/crates/async_recursion error: aborting due to previous error", "commid": "rust_pr_81926"}], "negative_passages": []} {"query_id": "q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-adff4a978196666cc4e8dfdc46f9eec1c6f577f5a5920158a6f88d7f7c435c3e", "text": "} impl Attribute { #[inline] pub fn has_name(&self, name: Symbol) -> bool { match self.kind { AttrKind::Normal(ref item, _) => item.path == name,", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": "q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. 
I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-5e81c75718f40a733f3cde27c32c51a7588108a36c9e79ddd8b4d94d83e365d3", "text": "use rustc_middle::ty::query::Providers; use rustc_middle::ty::TyCtxt; use rustc_ast::{Attribute, LitKind, NestedMetaItem}; use rustc_ast::{Attribute, Lit, LitKind, NestedMetaItem}; use rustc_errors::{pluralize, struct_span_err}; use rustc_hir as hir; use rustc_hir::def_id::LocalDefId;", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": "q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-2c9b7e731b7fd070882cc9cee8f6a4f8ca96a89f479edcdb5a4e7f1ffe6229b6", "text": "self.check_export_name(hir_id, &attr, span, target) } else if self.tcx.sess.check_name(attr, sym::rustc_args_required_const) { self.check_rustc_args_required_const(&attr, span, target, item) } else if self.tcx.sess.check_name(attr, sym::rustc_layout_scalar_valid_range_start) { self.check_rustc_layout_scalar_valid_range(&attr, span, target) } else if self.tcx.sess.check_name(attr, sym::rustc_layout_scalar_valid_range_end) { self.check_rustc_layout_scalar_valid_range(&attr, span, target) } else if self.tcx.sess.check_name(attr, sym::allow_internal_unstable) { self.check_allow_internal_unstable(hir_id, &attr, span, target, &attrs) } else if self.tcx.sess.check_name(attr, sym::rustc_allow_const_fn_unstable) {", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": 
"q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-7d269aec264b68c4821978f5192430f464692b54eeb297e5545d8913ddb07c38", "text": "} } fn check_rustc_layout_scalar_valid_range( &self, attr: &Attribute, span: &Span, target: Target, ) -> bool { if target != Target::Struct { self.tcx .sess .struct_span_err(attr.span, \"attribute should be applied to a struct\") .span_label(*span, \"not a struct\") .emit(); return false; } let list = match attr.meta_item_list() { None => return false, Some(it) => it, }; if matches!(&list[..], &[NestedMetaItem::Literal(Lit { kind: LitKind::Int(..), .. })]) { true } else { self.tcx .sess .struct_span_err(attr.span, \"expected exactly one integer literal argument\") .emit(); false } } /// Checks if `#[rustc_legacy_const_generics]` is applied to a function and has a valid argument. fn check_rustc_legacy_const_generics( &self,", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": "q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. 
I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-4d3a9b065ab2c1e56c71e060826c8a544b66708350201e41bc62fdbb7aec16e6", "text": " #![feature(rustc_attrs)] #[rustc_layout_scalar_valid_range_start(u32::MAX)] //~ ERROR pub struct A(u32); #[rustc_layout_scalar_valid_range_end(1, 2)] //~ ERROR pub struct B(u8); #[rustc_layout_scalar_valid_range_end(a = \"a\")] //~ ERROR pub struct C(i32); #[rustc_layout_scalar_valid_range_end(1)] //~ ERROR enum E { X = 1, Y = 14, } fn main() { let _ = A(0); let _ = B(0); let _ = C(0); let _ = E::X; } ", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": "q-en-rust-b0d3dd31a61fb113bd616c6788d76c9dd86c7d72d5555fc68ece4a6addccaeed", "query": "Gives:\ncc ICE-ing instead of user error on misuse seems somewhat okay given that it's an internal attribute, IMO.\nagreed. We don't protect against mis-use of rustc internal attributes beyond making sure that you can't trivially create buggy code. 
I don't think we should put any effort into making this nicer to use and instead create the const-generic based replacement.", "positive_passages": [{"docid": "doc-en-rust-49d2057271117b8937cac199ef2a497c0532ec8c4ffcc5613805330fa1db1e00", "text": " error: expected exactly one integer literal argument --> $DIR/invalid_rustc_layout_scalar_valid_range.rs:3:1 | LL | #[rustc_layout_scalar_valid_range_start(u32::MAX)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: expected exactly one integer literal argument --> $DIR/invalid_rustc_layout_scalar_valid_range.rs:6:1 | LL | #[rustc_layout_scalar_valid_range_end(1, 2)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: expected exactly one integer literal argument --> $DIR/invalid_rustc_layout_scalar_valid_range.rs:9:1 | LL | #[rustc_layout_scalar_valid_range_end(a = \"a\")] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: attribute should be applied to a struct --> $DIR/invalid_rustc_layout_scalar_valid_range.rs:12:1 | LL | #[rustc_layout_scalar_valid_range_end(1)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ LL | / enum E { LL | | X = 1, LL | | Y = 14, LL | | } | |_- not a struct error: aborting due to 4 previous errors ", "commid": "rust_pr_83054"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. that demonstrates how this can be exploited to get two mutable references to the same data and cause an use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. 
While the downside would be that external iteration (i.e. loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make more complex to upheld the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-aeb18219a96ac9199eb7c49527670516788cf7d286a7bbdd340ce719ca5618db", "text": "pub struct Zip { a: A, b: B, // index and len are only used by the specialized version of zip // index, len and a_len are only used by the specialized version of zip index: usize, len: usize, a_len: usize, } impl Zip { pub(in crate::iter) fn new(a: A, b: B) -> Zip {", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. 
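The double-call hazard described here is observable without any unsafe code: a side-effecting `map` adapter zipped with a slice can have its closure re-run for an element that was already consumed from the back. A self-contained version of the regression test that was added with the fix pins the intended contract:

```rust
use std::cell::Cell;

fn main() {
    let mut v1 = [()];
    let v2 = [()];
    let called = Cell::new(0);
    let mut zip = v1
        .iter_mut()
        .map(|r| {
            // side effect that the specialized Zip must run at most once
            // per element
            called.set(called.get() + 1);
            r
        })
        .zip(&v2);

    zip.next_back();
    assert_eq!(called.get(), 1);
    // Before the fix, the side-effect bookkeeping in next() replayed the
    // closure for the element already yielded by next_back().
    zip.next();
    assert_eq!(called.get(), 1);
    println!("ok");
}
```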
that demonstrates how this can be exploited to get two mutable references to the same data and cause a use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving. Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make it more complex to uphold the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. 
Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-a6899b34f310834c2ab7c29a421ce77e0e53837f059afdf8a41aea6ce2de669d", "text": "b, index: 0, // unused len: 0, // unused a_len: 0, // unused } }", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. that demonstrates how this can be exploited to get two mutable references to the same data and cause a use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving. Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make it more complex to uphold the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. 
The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-f2652fa7950fef8e30724a38c7a73ef01b52f9f5639dd7b6da0fc9376e6580e5", "text": "B: TrustedRandomAccess + Iterator, { fn new(a: A, b: B) -> Self { let len = cmp::min(a.size(), b.size()); Zip { a, b, index: 0, len } let a_len = a.size(); let len = cmp::min(a_len, b.size()); Zip { a, b, index: 0, len, a_len } } #[inline]", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. that demonstrates how this can be exploited to get two mutable references to the same data and cause an use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. 
loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make more complex to upheld the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-b1b095e85bf1b3308a31f36d6f7ced4c1a54ecc7462fa110dc0cb6652f293963", "text": "unsafe { Some((self.a.__iterator_get_unchecked(i), self.b.__iterator_get_unchecked(i))) } } else if A::MAY_HAVE_SIDE_EFFECT && self.index < self.a.size() { } else if A::MAY_HAVE_SIDE_EFFECT && self.index < self.a_len { let i = self.index; self.index += 1; self.len += 1;", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. 
that demonstrates how this can be exploited to get two mutable references to the same data and cause a use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving. Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make it more complex to uphold the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. 
Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-c80a418dca4d4cb616eddde43bfd21083a294e87f6d146ba8931f6ede4d1a319", "text": "for _ in 0..sz_a - self.len { self.a.next_back(); } self.a_len = self.len; } let sz_b = self.b.size(); if B::MAY_HAVE_SIDE_EFFECT && sz_b > self.len {", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. that demonstrates how this can be exploited to get two mutable references to the same data and cause a use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving. Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make it more complex to uphold the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. 
The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-2cc08db4569cda8563bfae0123e3188452cc62dd2d54b4ab30ced48bc50242db", "text": "} if self.index < self.len { self.len -= 1; self.a_len -= 1; let i = self.len; // SAFETY: `i` is smaller than the previous value of `self.len`, // which is also smaller than or equal to `self.a.len()` and `self.b.len()`", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-fb1aec0e37714a941c74698838fe47ea30ed31a306d97e6024ef1d702d6b0f65", "query": "Here is called for potential side effects until , ignoring however that it could have already been called in with those indexes. that demonstrates how this can be exploited to get two mutable references to the same data and cause an use-after-free bug.\nAssigning as part of the .\nThe gift that keeps on giving Maybe it is time to get rid of as it exists today and replace it with a ? One that imposes the additional requirement on the caller that it must bring the iterator back into a safe state after it is done iterating. The upside would be eliminating a lot of code in and enabling the same optimization for and similar sources. While the downside would be that external iteration (i.e. 
loops and manual calls to ) would cease to benefit from optimizations and only internal iteration methods (including ) would continue to do so.\nI don't know, I feel like that would just make more complex to upheld the invariants. IMO a good improvement over the current would be separating forward and backward iteration, this way it becomes easier to keep track of the state and I think would also make it possible to implement them for\nI don't see how separating the forward and backward state would help with a drop implementation. The issue is that the source () currently does not have access to that state, instead the consumer drives the iteration by direct access through and never informs the source about it.\nIt would help because it allows the source iterator to keep track of its state by updating it when the method is called. Anyway, looks like someone already and it resulted in worse optimizations, but maybe LLVM got better in the meantime, or maybe that implementation could have been better. 
Anyway I don't think this is the right place to discuss this, a topic on zulip would probably be better.\nhttps://rust-", "positive_passages": [{"docid": "doc-en-rust-ee201511a951614f9242d8816dc76a015d84d12f7d9ea8c4aa768a1f269b84a8", "text": "panic!(); } } #[test] fn test_issue_82291() { use std::cell::Cell; let mut v1 = [()]; let v2 = [()]; let called = Cell::new(0); let mut zip = v1 .iter_mut() .map(|r| { called.set(called.get() + 1); r }) .zip(&v2); zip.next_back(); assert_eq!(called.get(), 1); zip.next(); assert_eq!(called.get(), 1); } ", "commid": "rust_pr_82292"}], "negative_passages": []} {"query_id": "q-en-rust-bc991a8c63d81686bebf07f793f5c69e366692742b1f03c258d8aba8722870bd", "query": " $DIR/issue-82361.rs:10:9 | LL | / if true { LL | | a | | - expected because of this LL | | } else { LL | | b | | ^ | | | | | expected `usize`, found `&usize` | | help: consider dereferencing the borrow: `*b` LL | | }; | |_____- `if` and `else` have incompatible types error[E0308]: `if` and `else` have incompatible types --> $DIR/issue-82361.rs:16:9 | LL | / if true { LL | | 1 | | - expected because of this LL | | } else { LL | | &1 | | -^ | | | | | expected integer, found `&{integer}` | | help: consider removing the `&` LL | | }; | |_____- `if` and `else` have incompatible types error[E0308]: `if` and `else` have incompatible types --> $DIR/issue-82361.rs:22:9 | LL | / if true { LL | | 1 | | - expected because of this LL | | } else { LL | | &mut 1 | | -----^ | | | | | expected integer, found `&mut {integer}` | | help: consider removing the `&mut` LL | | }; | |_____- `if` and `else` have incompatible types error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0308`. 
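Each suggestion in the E0308 output above corresponds to a one-token fix that makes both arms of the `if`/`else` agree. A minimal example (variable names are illustrative) applying the suggested `*b` dereference:

```rust
fn main() {
    let a: usize = 1;
    let b: &usize = &2;
    // Both arms of an if/else expression must have the same type.
    // Dereferencing the borrow, as the diagnostic suggests, makes the
    // else arm a usize as well.
    let x = if true { a } else { *b };
    assert_eq!(x, 1);
    println!("ok");
}
```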
", "commid": "rust_pr_82364"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-1132e6637b2e6cf7be0e65b4c825fc1b92a403b71e1307d0177cd7c711b17718", "text": "name = \"compiletest\" version = \"0.0.0\" dependencies = [ \"colored\", \"diff\", \"getopts\", \"glob\",", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? 
On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-03c177b1a7110de5ad2b4503d4053072ab7d56487ef2088e520e2539c5066ceb", "text": "\"serde_json\", \"tracing\", \"tracing-subscriber\", \"unified-diff\", \"walkdir\", \"winapi 0.3.9\", ]", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. 
I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-332fb501e5440732b71aade605dae8409fef12788a31f0e1a1bc2d1c025b6093", "text": "checksum = \"39ec24b3121d976906ece63c9daad25b85969647682eee313cb5779fdd69e14e\" [[package]] name = \"unified-diff\" version = \"0.2.1\" source = \"registry+https://github.com/rust-lang/crates.io-index\" checksum = \"496a3d395ed0c30f411ceace4a91f7d93b148fb5a9b383d5d4cff7850f048d5f\" dependencies = [ \"diff\", ] [[package]] name = \"unstable-book-gen\" version = \"0.1.0\" dependencies = [", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. 
I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-922bef6fb379f916ea22d134177215852e9b0d2e6193354f32c948f791c12184", "text": "edition = \"2018\" [dependencies] colored = \"2\" diff = \"0.1.10\" unified-diff = \"0.2.1\" getopts = \"0.2\" tracing = \"0.1\" tracing-subscriber = { version = \"0.2.13\", default-features = false, features = [\"fmt\", \"env-filter\", \"smallvec\", \"parking_lot\", \"ansi\"] }", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. 
I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-7cb8db1c14dfcea25a73b796dbd3af7f1bd9fedfcfea4e69d1beb30aa9c49bf0", "text": "use crate::json; use crate::util::get_pointer_width; use crate::util::{logv, PathBufExt}; use crate::ColorConfig; use regex::{Captures, Regex}; use rustfix::{apply_suggestions, get_suggestions_from_json, Filter};", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. 
I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-54a6a9d38ed0c7e52573676a7a800229097eaa5c867d8ac9f9beaf3df338390d", "text": "} }) }; let mut diff = Command::new(\"diff\"); // diff recursively, showing context, and excluding .css files diff.args(&[\"-u\", \"-r\", \"-x\", \"*.css\"]).args(&[&compare_dir, out_dir]); let output = if let Some(pager) = pager { let diff_pid = diff.stdout(Stdio::piped()).spawn().expect(\"failed to run `diff`\"); let diff_filename = format!(\"build/tmp/rustdoc-compare-{}.diff\", std::process::id()); { let mut diff_output = File::create(&diff_filename).unwrap(); for entry in walkdir::WalkDir::new(out_dir) { let entry = entry.expect(\"failed to read file\"); let extension = entry.path().extension().and_then(|p| p.to_str()); if entry.file_type().is_file() && (extension == Some(\"html\".into()) || extension == Some(\"js\".into())) { let expected_path = compare_dir.join(entry.path().strip_prefix(&out_dir).unwrap()); let expected = if let Ok(s) = std::fs::read(&expected_path) { s } else { continue }; let actual_path = entry.path(); let actual = std::fs::read(&actual_path).unwrap(); diff_output .write_all(&unified_diff::diff( &expected, &expected_path.to_string_lossy(), &actual, &actual_path.to_string_lossy(), 3, )) .unwrap(); } } } match self.config.color { ColorConfig::AlwaysColor => colored::control::set_override(true), ColorConfig::NeverColor => colored::control::set_override(false), _ => {} } if let Some(pager) = pager { let pager = pager.trim(); if self.config.verbose { eprintln!(\"using pager {}\", pager);", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-8c5e04bc625510dd6d05b7e83b4d3c63473933a9f4d3b6fc461b31b4998e5896", "query": "When running on Linux and a failure happens, a really nice diff of the output vs rustdoc-nightly is produced which very clearly shows the issue. This does not work on Windows for multiple reasons. 
One the call to fails: Second, the output from rustdoc-nightly seems to exactly match that from the local compiler making it not possible to do manual diffs.\nHow do you diff directories recursively on Windows? On Linux I would be very surprised for not to be installed. Oh I just thought of something - do you have a rustup overide that defaults to your local toolchain? Right now compiletest assumes the default is nightly, I've been meaning to change it.\nNo, I only set overrides on specific projects (and rust-lang/rust is not one of them). My default is the stable compiler.\nGit for Windows provides : Are you building Rust using Git Bash prompt?\nI don't think it'd be a good idea to require testing rustdoc in the Git for Windows bash shell. I'd like for this to work in and PowerShell if possible.", "positive_passages": [{"docid": "doc-en-rust-204c4a568b430bae03d063805e040e33b5d95bd8b6ef0fac7946d0e2b9186daf", "text": "let output = Command::new(pager) // disable paging; we want this to be non-interactive .env(\"PAGER\", \"\") .stdin(diff_pid.stdout.unwrap()) .stdin(File::open(&diff_filename).unwrap()) // Capture output and print it explicitly so it will in turn be // captured by libtest. .output() .unwrap(); assert!(output.status.success()); output println!(\"{}\", String::from_utf8_lossy(&output.stdout)); eprintln!(\"{}\", String::from_utf8_lossy(&output.stderr)); } else { eprintln!(\"warning: no pager configured, falling back to `diff --color`\"); use colored::Colorize; eprintln!(\"warning: no pager configured, falling back to unified diff\"); eprintln!( \"help: try configuring a git pager (e.g. 
`delta`) with `git config --global core.pager delta`\" ); let output = diff.arg(\"--color\").output().unwrap(); assert!(output.status.success() || output.status.code() == Some(1)); output let mut out = io::stdout(); let mut diff = BufReader::new(File::open(&diff_filename).unwrap()); let mut line = Vec::new(); loop { line.truncate(0); match diff.read_until(b'n', &mut line) { Ok(0) => break, Ok(_) => {} Err(e) => eprintln!(\"ERROR: {:?}\", e), } match String::from_utf8(line.clone()) { Ok(line) => { if line.starts_with(\"+\") { write!(&mut out, \"{}\", line.green()).unwrap(); } else if line.starts_with(\"-\") { write!(&mut out, \"{}\", line.red()).unwrap(); } else if line.starts_with(\"@\") { write!(&mut out, \"{}\", line.blue()).unwrap(); } else { out.write_all(line.as_bytes()).unwrap(); } } Err(_) => { write!(&mut out, \"{}\", String::from_utf8_lossy(&line).reversed()).unwrap(); } } } }; println!(\"{}\", String::from_utf8_lossy(&output.stdout)); eprintln!(\"{}\", String::from_utf8_lossy(&output.stderr)); } fn run_rustdoc_json_test(&self) {", "commid": "rust_pr_82469"}], "negative_passages": []} {"query_id": "q-en-rust-23b2fceb945d70f8fd6a221cd3ac8220feb0e4e31cc486d2c75b4d4e2c4f3e19", "query": " $DIR/issue-82438-mut-without-upvar.rs:27:27 | LL | let c = |a, b, c, d| {}; | - help: consider changing this to be mutable: `mut c` LL | LL | A.f(participant_name, &mut c); | ^^^^^^ cannot borrow as mutable error: aborting due to previous error For more information about this error, try `rustc --explain E0596`. ", "commid": "rust_pr_82442"}], "negative_passages": []} {"query_id": "q-en-rust-8d2e01ecfa24a1234f37aa5f3db7f0a22d07e4c3f8e43556a4dc78518e89f0de", "query": "Hello, While trying to figure out how to keep a JoinHandle in a thread-local variable, I tried compiling this incorrect code, and rust crashed. It will not panic if I use a regular static var instead of in a block. 
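The fallback path in the compiletest patch above colors a unified diff by inspecting only each line's first character. A standalone sketch of that classification step (the function name is illustrative, not compiletest's):

```rust
/// Classify a unified-diff line the way the fallback printer in the
/// patch does: '+' additions, '-' removals, '@' hunk headers, and
/// everything else as plain context.
fn classify(line: &str) -> &'static str {
    if line.starts_with('+') {
        "green"
    } else if line.starts_with('-') {
        "red"
    } else if line.starts_with('@') {
        "blue"
    } else {
        "plain"
    }
}

fn main() {
    assert_eq!(classify("+new line"), "green");
    assert_eq!(classify("-old line"), "red");
    assert_eq!(classify("@@ -1,3 +1,3 @@"), "blue");
    assert_eq!(classify(" unchanged"), "plain");
    println!("ok");
}
```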
$DIR/typeck_type_placeholder_item.rs:52:52", "commid": "rust_pr_82494"}], "negative_passages": []} {"query_id": "q-en-rust-8d2e01ecfa24a1234f37aa5f3db7f0a22d07e4c3f8e43556a4dc78518e89f0de", "query": "Hello, While trying to figure out how to keep a JoinHandle in a thread-local variable, I tried compiling this incorrect code, and rust crashed. It will not panic if I use a regular static var instead of in a block. $DIR/typeck_type_placeholder_item.rs:216:31 | LL | fn value() -> Option<&'static _> { | ----------------^- | | | | | not allowed in type signatures | help: replace with the correct return type: `Option<&'static u8>` error[E0121]: the type placeholder `_` is not allowed within types on item signatures --> $DIR/typeck_type_placeholder_item.rs:221:10 | LL | const _: Option<_> = map(value); | ^^^^^^^^^ | | | not allowed in type signatures | help: replace `_` with the correct type: `Option` error[E0121]: the type placeholder `_` is not allowed within types on item signatures --> $DIR/typeck_type_placeholder_item.rs:140:31 | LL | fn method_test1(&self, x: _);", "commid": "rust_pr_82494"}], "negative_passages": []} {"query_id": "q-en-rust-8d2e01ecfa24a1234f37aa5f3db7f0a22d07e4c3f8e43556a4dc78518e89f0de", "query": "Hello, While trying to figure out how to keep a JoinHandle in a thread-local variable, I tried compiling this incorrect code, and rust crashed. It will not panic if I use a regular static var instead of in a block. 
$DIR/unsized-function-parameter.rs:5:9 | LL | fn foo1(bar: str) {} | ^^^ doesn't have a size known at compile-time | = help: the trait `Sized` is not implemented for `str` = help: unsized fn params are gated as an unstable feature help: function arguments must have a statically known size, borrowed types always have a known size | LL | fn foo1(bar: &str) {} | ^ error[E0277]: the size for values of type `str` cannot be known at compilation time --> $DIR/unsized-function-parameter.rs:11:9 | LL | fn foo2(_bar: str) {} | ^^^^ doesn't have a size known at compile-time | = help: the trait `Sized` is not implemented for `str` = help: unsized fn params are gated as an unstable feature help: function arguments must have a statically known size, borrowed types always have a known size | LL | fn foo2(_bar: &str) {} | ^ error[E0277]: the size for values of type `str` cannot be known at compilation time --> $DIR/unsized-function-parameter.rs:17:9 | LL | fn foo3(_: str) {} | ^ doesn't have a size known at compile-time | = help: the trait `Sized` is not implemented for `str` = help: unsized fn params are gated as an unstable feature help: function arguments must have a statically known size, borrowed types always have a known size | LL | fn foo3(_: &str) {} | ^ error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_84728"}], "negative_passages": []} {"query_id": "q-en-rust-60c6a979d7c60b6b6bf9c33e0fd0fb5c3e656449103dc1dad7356e92d3966507", "query": ": $DIR/conflicting-repr-hints.rs:70:1 | LL | pub struct S(u16); | ^^^^^^^^^^^^^^^^^^ error[E0587]: type has conflicting packed and align representation hints --> $DIR/conflicting-repr-hints.rs:73:1 | LL | / pub union U { LL | | u: u16 LL | | } | |_^ error: aborting due to 12 previous errors Some errors have detailed explanations: E0566, E0587, E0634. 
For more information about an error, try `rustc --explain E0566`.", "commid": "rust_pr_83319"}], "negative_passages": []} {"query_id": "q-en-rust-68f99e8ab05fe87cadfa332057c1c197122a518fa7ecc913ab41b97fc23aab7b", "query": " $DIR/ub-incorrect-vtable.rs:10:14 | LL | / const INVALID_VTABLE_ALIGNMENT: &dyn Trait = LL | | unsafe { std::mem::transmute((&92u8, &[0usize, 1usize, 1000usize])) }; | |______________^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^__- | | | invalid vtable: alignment `1000` is not a power of 2 | = note: `#[deny(const_err)]` on by default = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #71800 error: any use of this value will cause an error --> $DIR/ub-incorrect-vtable.rs:16:14 | LL | / const INVALID_VTABLE_SIZE: &dyn Trait = LL | | unsafe { std::mem::transmute((&92u8, &[1usize, usize::MAX, 1usize])) }; | |______________^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^__- | | | invalid vtable: size is bigger than largest supported object | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #71800 error: aborting due to 2 previous errors ", "commid": "rust_pr_86174"}], "negative_passages": []} {"query_id": "q-en-rust-d94c0459ebd05e0816398d21266d1d14e8a6c274d439837a615275b2ac5b7f0d", "query": "Go to: It was but the rustdoc generates like: ! The problem is that the previous variant's version is placed at the same height. I think it's ideal to place the version of the item itself at the same height, like functions.\nI am new here but it looks like something I could do. Can you point me in the right direction?\nSorry but I'm not familiar with rustdoc, could you help them?\nI don't have time, sorry. may have suggestions.\nI can help if needed. 
But first: how would you want it to look like?\nAs mentioned first, the version at the same height should be the variant's IMO.\nAh ok sorry, so it's a layout bug. I thought you wanted to improve the display (which would be welcome too!). The problematic line is . It should be set before the docblock (in the variant header if possible).\nI will gladly take a look at it, so I can familiarize with the codebase.\nSure, go ahead! Do test locally: Like that you can then go to the page and check if the bug is fixed. ;)", "positive_passages": [{"docid": "doc-en-rust-25a8c66a11ab0c58ac13483b716892c8bb2f5f849723578429127a29a0a12558", "text": "} w.write_str(\")\"); } w.write_str(\"
\"); w.write_str(\"\"); render_stability_since(w, variant, it, cx.tcx()); w.write_str(\"\"); document(w, cx, variant, Some(it)); document_non_exhaustive(w, variant);", "commid": "rust_pr_86370"}], "negative_passages": []} {"query_id": "q-en-rust-d94c0459ebd05e0816398d21266d1d14e8a6c274d439837a615275b2ac5b7f0d", "query": "Go to: It was but the rustdoc generates like: ! The problem is that the previous variant's version is placed at the same height. I think it's ideal to place the version of the item itself at the same height, like functions.\nI am new here but it looks like something I could do. Can you point me in the right direction?\nSorry but I'm not familiar with rustdoc, could you help them?\nI don't have time, sorry. may have suggestions.\nI can help if needed. But first: how would you want it to look like?\nAs mentioned first, the version at the same height should be the variant's IMO.\nAh ok sorry, so it's a layout bug. I thought you wanted to improve the display (which would be welcome too!). The problematic line is . It should be set before the docblock (in the variant header if possible).\nI will gladly take a look at it, so I can familiarize with the codebase.\nSure, go ahead! Do test locally: Like that you can then go to the page and check if the bug is fixed. ;)", "positive_passages": [{"docid": "doc-en-rust-b15191dadc7bd96b16cc67e6018db217c9f81cba12dee22c93ca5eb3904dba95", "text": "w.write_str(\"\"); toggle_close(w); } render_stability_since(w, variant, it, cx.tcx()); } } let def_id = it.def_id.expect_real();", "commid": "rust_pr_86370"}], "negative_passages": []} {"query_id": "q-en-rust-d94c0459ebd05e0816398d21266d1d14e8a6c274d439837a615275b2ac5b7f0d", "query": "Go to: It was but the rustdoc generates like: ! The problem is that the previous variant's version is placed at the same height. I think it's ideal to place the version of the item itself at the same height, like functions.\nI am new here but it looks like something I could do. 
Can you point me in the right direction?\nSorry but I'm not familiar with rustdoc, could you help them?\nI don't have time, sorry. may have suggestions.\nI can help if needed. But first: how would you want it to look like?\nAs mentioned first, the version at the same height should be the variant's IMO.\nAh ok sorry, so it's a layout bug. I thought you wanted to improve the display (which would be welcome too!). The problematic line is . It should be set before the docblock (in the variant header if possible).\nI will gladly take a look at it, so I can familiarize with the codebase.\nSure, go ahead! Do test locally: Like that you can then go to the page and check if the bug is fixed. ;)", "positive_passages": [{"docid": "doc-en-rust-5bcf692d09aaddba6f0d366c404ed7afb36c39c411aaaecfa4b718e41aac90a4", "text": "background: transparent; } .small-section-header { display: flex; justify-content: space-between; position: relative; } .small-section-header:hover > .anchor { display: initial; }", "commid": "rust_pr_86370"}], "negative_passages": []} {"query_id": "q-en-rust-823c804d5693696ea4117f87a195b08e5e18eede29b221c6645d1be0f0e7999e", "query": "Yet another soundness bug in Zip's TRA specialization. Line 300 is not called when line 298 panics. This leaves outdated, which results in calling with an invalid index in line 242. Here is a that demonstrates creating two mutable references to the same memory location without unsafe code. $DIR/issue-82956.rs:25:24 | LL | let mut iter = IntoIter::new(self); | ^^^^^^^^ not found in this scope | help: consider importing one of these items | LL | use std::array::IntoIter; | LL | use std::collections::binary_heap::IntoIter; | LL | use std::collections::btree_map::IntoIter; | LL | use std::collections::btree_set::IntoIter; | and 8 other candidates error: aborting due to previous error For more information about this error, try `rustc --explain E0433`. 
", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": #![allow(incomplete_features)] #![feature(generic_const_exprs)] trait Bar {} trait Foo<'a> { const N: usize; type Baz: Bar<{ Self::N }>; //~^ ERROR: unconstrained generic constant } fn main() {} ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": error: unconstrained generic constant --> $DIR/issue-84659.rs:8:15 | LL | type Baz: Bar<{ Self::N }>; | ^^^^^^^^^^^^^^^^ | = help: try adding a `where` bound using this expression: `where [(); { Self::N }]:` error: aborting due to previous error ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": #![feature(generic_const_exprs)] #![allow(incomplete_features)] pub trait X { const Y: usize; } fn z(t: T) where T: X, [(); T::Y]: , { } fn unit_literals() { z(\" \"); //~^ ERROR: the trait bound `&str: X` is not satisfied } fn main() {} ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": error[E0277]: the trait bound `&str: X` is not satisfied --> $DIR/issue-86530.rs:16:7 | LL | z(\" \"); | ^^^ the trait `X` is not implemented for `&str` | note: required by a bound in `z` --> $DIR/issue-86530.rs:10:8 | LL | fn z(t: T) | - required by a bound in this LL | where LL | T: X, | ^ required by this bound in `z` error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. 
", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": // run-pass #![feature(adt_const_params, generic_const_exprs)] #![allow(incomplete_features)] pub trait Foo { const ASSOC_C: usize; fn foo() where [(); Self::ASSOC_C]:; } struct Bar; impl Foo for Bar { const ASSOC_C: usize = 3; fn foo() where [u8; Self::ASSOC_C]: { let _: [u8; Self::ASSOC_C] = loop {}; } } fn main() {} ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": // run-pass #![feature(adt_const_params, generic_const_exprs)] #![allow(incomplete_features, unused_variables)] struct F; impl X for F<{ S }> { const W: usize = 3; fn d(r: &[u8; Self::W]) -> F<{ S }> { let x: [u8; Self::W] = [0; Self::W]; F } } pub trait X { const W: usize; fn d(r: &[u8; Self::W]) -> Self; } fn main() {} ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": trait Trait { const Assoc: usize; } impl Trait for () { const Assoc: usize = 1; } pub const fn foo() where (): Trait { let bar = [(); <()>::Assoc]; //~^ error: constant expression depends on a generic parameter } trait Trait2 { const Assoc2: usize; } impl Trait2 for () { const Assoc2: usize = N - 1; } pub const fn foo2() where (): Trait2 { let bar2 = [(); <()>::Assoc2]; //~^ error: constant expression depends on a generic parameter } fn main() { foo::<0>(); foo2::<0>(); } ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-1869db9f166de9741245801484507a49fd214a39594faa9b1bce392c6c888cca", "query": ": error: constant expression depends on a generic parameter --> $DIR/sneaky-array-repeat-expr.rs:11:20 | LL | let bar = [(); <()>::Assoc]; | ^^^^^^^^^^^ | = note: this may fail depending on what value the parameter takes error: constant 
expression depends on a generic parameter --> $DIR/sneaky-array-repeat-expr.rs:25:21 | LL | let bar2 = [(); <()>::Assoc2]; | ^^^^^^^^^^^^ | = note: this may fail depending on what value the parameter takes error: aborting due to 2 previous errors ", "commid": "rust_pr_88602"}], "negative_passages": []} {"query_id": "q-en-rust-09bb67fce6b347ba30fcb629338c464042351d96b12e2bafe7c13f6369eaaf58", "query": "Given the following code: The current output is: Ideally the output should look like: Right now, cargo and rustc operate basically independently of each other. The summary (\"aborting ...\" and \"could not compile ...\") is repeated twice, and both have different, incompatible ways to get more info about what went wrong. There's no reason to repeat these twice; we could include all the same information in half the space if we can get cargo and rustc to cooperate. I suggest the way this be implemented is by keeping rustc's output the same when run standalone, but omitting \"aborting due to ...\" and \"for more information ...\" when run with . Then cargo can aggregate the info it used to print into its own errors by using the JSON output. cc (meta note: I thought of this while working on , which has fully 12 lines of \"metadata\" after the 5 line error. Most builds are not that bad in comparison, but I do think it shows that it needs support from all the tools in the stack to keep the verbosity down.)\nI'm not sure whether making the decision on the error format scheme is the right way to go. I can't think of any reason not to do it, unless the implementation is messy, so", "positive_passages": [{"docid": "doc-en-rust-8317a1d4eb7e15aa2d44bb2d481f93c5f0cc893eeea347c80466c0d87929b7ed", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. //! Cross-platform file path handling (re-write) use container::Container; use c_str::CString; use clone::Clone; use iter::Iterator; use option::{Option, None, Some}; use str; use str::StrSlice; use vec; use vec::{CopyableVector, OwnedCopyableVector, OwnedVector}; use vec::{ImmutableEqVector, ImmutableVector}; pub mod posix; pub mod windows; /// Typedef for POSIX file paths. /// See `posix::Path` for more info. pub type PosixPath = posix::Path; /// Typedef for Windows file paths. /// See `windows::Path` for more info. pub type WindowsPath = windows::Path; /// Typedef for the platform-native path type #[cfg(unix)] pub type Path = PosixPath; /// Typedef for the platform-native path type #[cfg(windows)] pub type Path = WindowsPath; /// Typedef for the POSIX path component iterator. /// See `posix::ComponentIter` for more info. pub type PosixComponentIter<'self> = posix::ComponentIter<'self>; // /// Typedef for the Windows path component iterator. // /// See `windows::ComponentIter` for more info. // pub type WindowsComponentIter<'self> = windows::ComponentIter<'self>; /// Typedef for the platform-native component iterator #[cfg(unix)] pub type ComponentIter<'self> = PosixComponentIter<'self>; // /// Typedef for the platform-native component iterator //#[cfg(windows)] //pub type ComponentIter<'self> = WindowsComponentIter<'self>; // Condition that is raised when a NUL is found in a byte vector given to a Path function condition! { // this should be a &[u8] but there's a lifetime issue null_byte: ~[u8] -> ~[u8]; } /// A trait that represents the generic operations available on paths pub trait GenericPath: Clone + GenericPathUnsafe { /// Creates a new Path from a byte vector. /// The resulting Path will always be normalized. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. /// /// See individual Path impls for additional restrictions. 
#[inline] fn from_vec(path: &[u8]) -> Self { if contains_nul(path) { let path = self::null_byte::cond.raise(path.to_owned()); assert!(!contains_nul(path)); unsafe { GenericPathUnsafe::from_vec_unchecked(path) } } else { unsafe { GenericPathUnsafe::from_vec_unchecked(path) } } } /// Creates a new Path from a string. /// The resulting Path will always be normalized. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. #[inline] fn from_str(path: &str) -> Self { let v = path.as_bytes(); if contains_nul(v) { GenericPath::from_vec(path.as_bytes()) // let from_vec handle the condition } else { unsafe { GenericPathUnsafe::from_str_unchecked(path) } } } /// Creates a new Path from a CString. /// The resulting Path will always be normalized. /// /// See individual Path impls for potential restrictions. #[inline] fn from_c_str(path: CString) -> Self { // CStrings can't contain NULs unsafe { GenericPathUnsafe::from_vec_unchecked(path.as_bytes()) } } /// Returns the path as a string, if possible. /// If the path is not representable in utf-8, this returns None. #[inline] fn as_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.as_vec()) } /// Returns the path as a byte vector fn as_vec<'a>(&'a self) -> &'a [u8]; /// Returns the directory component of `self`, as a byte vector (with no trailing separator). /// If `self` has no directory component, returns ['.']. fn dirname<'a>(&'a self) -> &'a [u8]; /// Returns the directory component of `self`, as a string, if possible. /// See `dirname` for details. #[inline] fn dirname_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.dirname()) } /// Returns the file component of `self`, as a byte vector. /// If `self` represents the root of the file hierarchy, returns the empty vector. /// If `self` is \".\", returns the empty vector. fn filename<'a>(&'a self) -> &'a [u8]; /// Returns the file component of `self`, as a string, if possible. /// See `filename` for details. 
#[inline] fn filename_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.filename()) } /// Returns the stem of the filename of `self`, as a byte vector. /// The stem is the portion of the filename just before the last '.'. /// If there is no '.', the entire filename is returned. fn filestem<'a>(&'a self) -> &'a [u8] { let name = self.filename(); let dot = '.' as u8; match name.rposition_elem(&dot) { None | Some(0) => name, Some(1) if name == bytes!(\"..\") => name, Some(pos) => name.slice_to(pos) } } /// Returns the stem of the filename of `self`, as a string, if possible. /// See `filestem` for details. #[inline] fn filestem_str<'a>(&'a self) -> Option<&'a str> { str::from_utf8_slice_opt(self.filestem()) } /// Returns the extension of the filename of `self`, as an optional byte vector. /// The extension is the portion of the filename just after the last '.'. /// If there is no extension, None is returned. /// If the filename ends in '.', the empty vector is returned. fn extension<'a>(&'a self) -> Option<&'a [u8]> { let name = self.filename(); let dot = '.' as u8; match name.rposition_elem(&dot) { None | Some(0) => None, Some(1) if name == bytes!(\"..\") => None, Some(pos) => Some(name.slice_from(pos+1)) } } /// Returns the extension of the filename of `self`, as a string, if possible. /// See `extension` for details. #[inline] fn extension_str<'a>(&'a self) -> Option<&'a str> { self.extension().and_then(|v| str::from_utf8_slice_opt(v)) } /// Replaces the directory portion of the path with the given byte vector. /// If `self` represents the root of the filesystem hierarchy, the last path component /// of the given byte vector becomes the filename. /// /// # Failure /// /// Raises the `null_byte` condition if the dirname contains a NUL. 
#[inline] fn set_dirname(&mut self, dirname: &[u8]) { if contains_nul(dirname) { let dirname = self::null_byte::cond.raise(dirname.to_owned()); assert!(!contains_nul(dirname)); unsafe { self.set_dirname_unchecked(dirname) } } else { unsafe { self.set_dirname_unchecked(dirname) } } } /// Replaces the directory portion of the path with the given string. /// See `set_dirname` for details. #[inline] fn set_dirname_str(&mut self, dirname: &str) { if contains_nul(dirname.as_bytes()) { self.set_dirname(dirname.as_bytes()) // triggers null_byte condition } else { unsafe { self.set_dirname_str_unchecked(dirname) } } } /// Replaces the filename portion of the path with the given byte vector. /// If the replacement name is [], this is equivalent to popping the path. /// /// # Failure /// /// Raises the `null_byte` condition if the filename contains a NUL. #[inline] fn set_filename(&mut self, filename: &[u8]) { if contains_nul(filename) { let filename = self::null_byte::cond.raise(filename.to_owned()); assert!(!contains_nul(filename)); unsafe { self.set_filename_unchecked(filename) } } else { unsafe { self.set_filename_unchecked(filename) } } } /// Replaces the filename portion of the path with the given string. /// See `set_filename` for details. #[inline] fn set_filename_str(&mut self, filename: &str) { if contains_nul(filename.as_bytes()) { self.set_filename(filename.as_bytes()) // triggers null_byte condition } else { unsafe { self.set_filename_str_unchecked(filename) } } } /// Replaces the filestem with the given byte vector. /// If there is no extension in `self` (or `self` has no filename), this is equivalent /// to `set_filename`. Otherwise, if the given byte vector is [], the extension (including /// the preceding '.') becomes the new filename. /// /// # Failure /// /// Raises the `null_byte` condition if the filestem contains a NUL. 
fn set_filestem(&mut self, filestem: &[u8]) { // borrowck is being a pain here let val = { let name = self.filename(); if !name.is_empty() { let dot = '.' as u8; match name.rposition_elem(&dot) { None | Some(0) => None, Some(idx) => { let mut v; if contains_nul(filestem) { let filestem = self::null_byte::cond.raise(filestem.to_owned()); assert!(!contains_nul(filestem)); v = vec::with_capacity(filestem.len() + name.len() - idx); v.push_all(filestem); } else { v = vec::with_capacity(filestem.len() + name.len() - idx); v.push_all(filestem); } v.push_all(name.slice_from(idx)); Some(v) } } } else { None } }; match val { None => self.set_filename(filestem), Some(v) => unsafe { self.set_filename_unchecked(v) } } } /// Replaces the filestem with the given string. /// See `set_filestem` for details. #[inline] fn set_filestem_str(&mut self, filestem: &str) { self.set_filestem(filestem.as_bytes()) } /// Replaces the extension with the given byte vector. /// If there is no extension in `self`, this adds one. /// If the given byte vector is [], this removes the extension. /// If `self` has no filename, this is a no-op. /// /// # Failure /// /// Raises the `null_byte` condition if the extension contains a NUL. fn set_extension(&mut self, extension: &[u8]) { // borrowck causes problems here too let val = { let name = self.filename(); if !name.is_empty() { let dot = '.' 
as u8; match name.rposition_elem(&dot) { None | Some(0) => { if extension.is_empty() { None } else { let mut v; if contains_nul(extension) { let extension = self::null_byte::cond.raise(extension.to_owned()); assert!(!contains_nul(extension)); v = vec::with_capacity(name.len() + extension.len() + 1); v.push_all(name); v.push(dot); v.push_all(extension); } else { v = vec::with_capacity(name.len() + extension.len() + 1); v.push_all(name); v.push(dot); v.push_all(extension); } Some(v) } } Some(idx) => { if extension.is_empty() { Some(name.slice_to(idx).to_owned()) } else { let mut v; if contains_nul(extension) { let extension = self::null_byte::cond.raise(extension.to_owned()); assert!(!contains_nul(extension)); v = vec::with_capacity(idx + extension.len() + 1); v.push_all(name.slice_to(idx+1)); v.push_all(extension); } else { v = vec::with_capacity(idx + extension.len() + 1); v.push_all(name.slice_to(idx+1)); v.push_all(extension); } Some(v) } } } } else { None } }; match val { None => (), Some(v) => unsafe { self.set_filename_unchecked(v) } } } /// Replaces the extension with the given string. /// See `set_extension` for details. #[inline] fn set_extension_str(&mut self, extension: &str) { self.set_extension(extension.as_bytes()) } /// Returns a new Path constructed by replacing the dirname with the given byte vector. /// See `set_dirname` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the dirname contains a NUL. #[inline] fn with_dirname(&self, dirname: &[u8]) -> Self { let mut p = self.clone(); p.set_dirname(dirname); p } /// Returns a new Path constructed by replacing the dirname with the given string. /// See `set_dirname` for details. #[inline] fn with_dirname_str(&self, dirname: &str) -> Self { let mut p = self.clone(); p.set_dirname_str(dirname); p } /// Returns a new Path constructed by replacing the filename with the given byte vector. /// See `set_filename` for details. 
/// /// # Failure /// /// Raises the `null_byte` condition if the filename contains a NUL. #[inline] fn with_filename(&self, filename: &[u8]) -> Self { let mut p = self.clone(); p.set_filename(filename); p } /// Returns a new Path constructed by replacing the filename with the given string. /// See `set_filename` for details. #[inline] fn with_filename_str(&self, filename: &str) -> Self { let mut p = self.clone(); p.set_filename_str(filename); p } /// Returns a new Path constructed by setting the filestem to the given byte vector. /// See `set_filestem` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the filestem contains a NUL. #[inline] fn with_filestem(&self, filestem: &[u8]) -> Self { let mut p = self.clone(); p.set_filestem(filestem); p } /// Returns a new Path constructed by setting the filestem to the given string. /// See `set_filestem` for details. #[inline] fn with_filestem_str(&self, filestem: &str) -> Self { let mut p = self.clone(); p.set_filestem_str(filestem); p } /// Returns a new Path constructed by setting the extension to the given byte vector. /// See `set_extension` for details. /// /// # Failure /// /// Raises the `null_byte` condition if the extension contains a NUL. #[inline] fn with_extension(&self, extension: &[u8]) -> Self { let mut p = self.clone(); p.set_extension(extension); p } /// Returns a new Path constructed by setting the extension to the given string. /// See `set_extension` for details. #[inline] fn with_extension_str(&self, extension: &str) -> Self { let mut p = self.clone(); p.set_extension_str(extension); p } /// Returns the directory component of `self`, as a Path. /// If `self` represents the root of the filesystem hierarchy, returns `self`. fn dir_path(&self) -> Self { // self.dirname() returns a NUL-free vector unsafe { GenericPathUnsafe::from_vec_unchecked(self.dirname()) } } /// Returns the file component of `self`, as a relative Path. 
/// If `self` represents the root of the filesystem hierarchy, returns None. fn file_path(&self) -> Option { // self.filename() returns a NUL-free vector match self.filename() { [] => None, v => Some(unsafe { GenericPathUnsafe::from_vec_unchecked(v) }) } } /// Pushes a path (as a byte vector) onto `self`. /// If the argument represents an absolute path, it replaces `self`. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. #[inline] fn push(&mut self, path: &[u8]) { if contains_nul(path) { let path = self::null_byte::cond.raise(path.to_owned()); assert!(!contains_nul(path)); unsafe { self.push_unchecked(path) } } else { unsafe { self.push_unchecked(path) } } } /// Pushes a path (as a string) onto `self. /// See `push` for details. #[inline] fn push_str(&mut self, path: &str) { if contains_nul(path.as_bytes()) { self.push(path.as_bytes()) // triggers null_byte condition } else { unsafe { self.push_str_unchecked(path) } } } /// Pushes a Path onto `self`. /// If the argument represents an absolute path, it replaces `self`. #[inline] fn push_path(&mut self, path: &Self) { self.push(path.as_vec()) } /// Pops the last path component off of `self` and returns it. /// If `self` represents the root of the file hierarchy, None is returned. fn pop_opt(&mut self) -> Option<~[u8]>; /// Pops the last path component off of `self` and returns it as a string, if possible. /// `self` will still be modified even if None is returned. /// See `pop_opt` for details. #[inline] fn pop_opt_str(&mut self) -> Option<~str> { self.pop_opt().and_then(|v| str::from_utf8_owned_opt(v)) } /// Returns a new Path constructed by joining `self` with the given path (as a byte vector). /// If the given path is absolute, the new Path will represent just that. /// /// # Failure /// /// Raises the `null_byte` condition if the path contains a NUL. 
#[inline] fn join(&self, path: &[u8]) -> Self { let mut p = self.clone(); p.push(path); p } /// Returns a new Path constructed by joining `self` with the given path (as a string). /// See `join` for details. #[inline] fn join_str(&self, path: &str) -> Self { let mut p = self.clone(); p.push_str(path); p } /// Returns a new Path constructed by joining `self` with the given path. /// If the given path is absolute, the new Path will represent just that. #[inline] fn join_path(&self, path: &Self) -> Self { let mut p = self.clone(); p.push_path(path); p } /// Returns whether `self` represents an absolute path. /// An absolute path is defined as one that, when joined to another path, will /// yield back the same absolute path. fn is_absolute(&self) -> bool; /// Returns whether `self` is equal to, or is an ancestor of, the given path. /// If both paths are relative, they are compared as though they are relative /// to the same parent path. fn is_ancestor_of(&self, other: &Self) -> bool; /// Returns the Path that, were it joined to `base`, would yield `self`. /// If no such path exists, None is returned. /// If `self` is absolute and `base` is relative, or on Windows if both /// paths refer to separate drives, an absolute path is returned. fn path_relative_from(&self, base: &Self) -> Option; } /// A trait that represents the unsafe operations on GenericPaths pub trait GenericPathUnsafe { /// Creates a new Path from a byte vector without checking for null bytes. /// The resulting Path will always be normalized. unsafe fn from_vec_unchecked(path: &[u8]) -> Self; /// Creates a new Path from a str without checking for null bytes. /// The resulting Path will always be normalized. #[inline] unsafe fn from_str_unchecked(path: &str) -> Self { GenericPathUnsafe::from_vec_unchecked(path.as_bytes()) } /// Replaces the directory portion of the path with the given byte vector without /// checking for null bytes. /// See `set_dirname` for details. 
unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]); /// Replaces the directory portion of the path with the given str without /// checking for null bytes. /// See `set_dirname_str` for details. #[inline] unsafe fn set_dirname_str_unchecked(&mut self, dirname: &str) { self.set_dirname_unchecked(dirname.as_bytes()) } /// Replaces the filename portion of the path with the given byte vector without /// checking for null bytes. /// See `set_filename` for details. unsafe fn set_filename_unchecked(&mut self, filename: &[u8]); /// Replaces the filename portion of the path with the given str without /// checking for null bytes. /// See `set_filename_str` for details. #[inline] unsafe fn set_filename_str_unchecked(&mut self, filename: &str) { self.set_filename_unchecked(filename.as_bytes()) } /// Pushes a byte vector onto `self` without checking for null bytes. /// See `push` for details. unsafe fn push_unchecked(&mut self, path: &[u8]); /// Pushes a str onto `self` without checking for null bytes. /// See `push_str` for details. #[inline] unsafe fn push_str_unchecked(&mut self, path: &str) { self.push_unchecked(path.as_bytes()) } } #[inline(always)] fn contains_nul(v: &[u8]) -> bool { v.iter().any(|&x| x == 0) } ", "commid": "rust_pr_9655"}], "negative_passages": []} {"query_id": "q-en-rust-09bb67fce6b347ba30fcb629338c464042351d96b12e2bafe7c13f6369eaaf58", "query": "Given the following code: The current output is: Ideally the output should look like: Right now, cargo and rustc operate basically independently of each other. The summary (\"aborting ...\" and \"could not compile ...\") is repeated twice, and both have different, incompatible ways to get more info about what went wrong. There's no reason to repeat these twice; we could include all the same information in half the space if we can get cargo and rustc to cooperate. 
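The `GenericPath` doc comments above pin down semantics that survive, with renames, in today's `std::path` API: the stem is the portion of the filename before the *last* `.`, a filename that is only `..` (or begins with a lone `.`) has no extension, and pushing an absolute path replaces the receiver. The sketch below checks those behaviors against the modern `Path`/`PathBuf` types — it is an illustration of the documented semantics, not the `GenericPath` trait itself.

```rust
use std::path::{Path, PathBuf};

fn main() {
    let p = Path::new("/tmp/foo.tar.gz");
    // The stem stops at the *last* '.', as the filestem doc comment describes.
    assert_eq!(p.file_stem().unwrap(), "foo.tar");
    assert_eq!(p.extension().unwrap(), "gz");

    // A leading '.' alone does not begin an extension.
    assert_eq!(Path::new(".hidden").extension(), None);

    // Joining (pushing) an absolute path replaces the receiver,
    // matching the `push` documentation above.
    assert_eq!(Path::new("/a/b").join("/etc"), PathBuf::from("/etc"));

    println!("ok");
}
```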
I suggest the way this be implemented is by keeping rustc's output the same when run standalone, but omitting \"aborting due to ...\" and \"for more information ...\" when run with . Then cargo can aggregate the info it used to print into its own errors by using the JSON output. cc (meta note: I thought of this while working on , which has fully 12 lines of \"metadata\" after the 5 line error. Most builds are not that bad in comparison, but I do think it shows that it needs support from all the tools in the stack to keep the verbosity down.)\nI'm not sure whether making the decision on the error format scheme is the right way to go. I can't think of any reason not to do it, unless the implementation is messy, so", "positive_passages": [{"docid": "doc-en-rust-9764f9e3b31d11297b5310bbca4454373f440ec511a197d05720ba8c8b3b2cac", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! 
POSIX file path handling use container::Container; use c_str::{CString, ToCStr}; use clone::Clone; use cmp::Eq; use from_str::FromStr; use iter::{AdditiveIterator, Extendable, Iterator}; use option::{Option, None, Some}; use str; use str::Str; use util; use vec; use vec::CopyableVector; use vec::{Vector, VectorVector}; use super::{GenericPath, GenericPathUnsafe}; /// Iterator that yields successive components of a Path pub type ComponentIter<'self> = vec::SplitIterator<'self, u8>; /// Represents a POSIX file path #[deriving(Clone, DeepClone)] pub struct Path { priv repr: ~[u8], // assumed to never be empty or contain NULs priv sepidx: Option // index of the final separator in repr } /// The standard path separator character pub static sep: u8 = '/' as u8; /// Returns whether the given byte is a path separator #[inline] pub fn is_sep(u: &u8) -> bool { *u == sep } impl Eq for Path { #[inline] fn eq(&self, other: &Path) -> bool { self.repr == other.repr } } impl FromStr for Path { fn from_str(s: &str) -> Option { let v = s.as_bytes(); if contains_nul(v) { None } else { Some(unsafe { GenericPathUnsafe::from_vec_unchecked(v) }) } } } impl ToCStr for Path { #[inline] fn to_c_str(&self) -> CString { // The Path impl guarantees no internal NUL unsafe { self.as_vec().to_c_str_unchecked() } } #[inline] unsafe fn to_c_str_unchecked(&self) -> CString { self.as_vec().to_c_str_unchecked() } } impl GenericPathUnsafe for Path { unsafe fn from_vec_unchecked(path: &[u8]) -> Path { let path = Path::normalize(path); assert!(!path.is_empty()); let idx = path.rposition_elem(&sep); Path{ repr: path, sepidx: idx } } unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]) { match self.sepidx { None if bytes!(\".\") == self.repr || bytes!(\"..\") == self.repr => { self.repr = Path::normalize(dirname); } None => { let mut v = vec::with_capacity(dirname.len() + self.repr.len() + 1); v.push_all(dirname); v.push(sep); v.push_all(self.repr); self.repr = Path::normalize(v); } Some(0) if 
self.repr.len() == 1 && self.repr[0] == sep => { self.repr = Path::normalize(dirname); } Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => { self.repr = Path::normalize(dirname); } Some(idx) if dirname.is_empty() => { let v = Path::normalize(self.repr.slice_from(idx+1)); self.repr = v; } Some(idx) => { let mut v = vec::with_capacity(dirname.len() + self.repr.len() - idx); v.push_all(dirname); v.push_all(self.repr.slice_from(idx)); self.repr = Path::normalize(v); } } self.sepidx = self.repr.rposition_elem(&sep); } unsafe fn set_filename_unchecked(&mut self, filename: &[u8]) { match self.sepidx { None if bytes!(\"..\") == self.repr => { let mut v = vec::with_capacity(3 + filename.len()); v.push_all(dot_dot_static); v.push(sep); v.push_all(filename); self.repr = Path::normalize(v); } None => { self.repr = Path::normalize(filename); } Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => { let mut v = vec::with_capacity(self.repr.len() + 1 + filename.len()); v.push_all(self.repr); v.push(sep); v.push_all(filename); self.repr = Path::normalize(v); } Some(idx) => { let mut v = vec::with_capacity(idx + 1 + filename.len()); v.push_all(self.repr.slice_to(idx+1)); v.push_all(filename); self.repr = Path::normalize(v); } } self.sepidx = self.repr.rposition_elem(&sep); } unsafe fn push_unchecked(&mut self, path: &[u8]) { if !path.is_empty() { if path[0] == sep { self.repr = Path::normalize(path); } else { let mut v = vec::with_capacity(self.repr.len() + path.len() + 1); v.push_all(self.repr); v.push(sep); v.push_all(path); self.repr = Path::normalize(v); } self.sepidx = self.repr.rposition_elem(&sep); } } } impl GenericPath for Path { #[inline] fn as_vec<'a>(&'a self) -> &'a [u8] { self.repr.as_slice() } fn dirname<'a>(&'a self) -> &'a [u8] { match self.sepidx { None if bytes!(\"..\") == self.repr => self.repr.as_slice(), None => dot_static, Some(0) => self.repr.slice_to(1), Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => self.repr.as_slice(), 
Some(idx) => self.repr.slice_to(idx) } } fn filename<'a>(&'a self) -> &'a [u8] { match self.sepidx { None if bytes!(\".\") == self.repr || bytes!(\"..\") == self.repr => &[], None => self.repr.as_slice(), Some(idx) if self.repr.slice_from(idx+1) == bytes!(\"..\") => &[], Some(idx) => self.repr.slice_from(idx+1) } } fn pop_opt(&mut self) -> Option<~[u8]> { match self.sepidx { None if bytes!(\".\") == self.repr => None, None => { let mut v = ~['.' as u8]; util::swap(&mut v, &mut self.repr); self.sepidx = None; Some(v) } Some(0) if bytes!(\"/\") == self.repr => None, Some(idx) => { let v = self.repr.slice_from(idx+1).to_owned(); if idx == 0 { self.repr.truncate(idx+1); } else { self.repr.truncate(idx); } self.sepidx = self.repr.rposition_elem(&sep); Some(v) } } } #[inline] fn is_absolute(&self) -> bool { self.repr[0] == sep } fn is_ancestor_of(&self, other: &Path) -> bool { if self.is_absolute() != other.is_absolute() { false } else { let mut ita = self.component_iter(); let mut itb = other.component_iter(); if bytes!(\".\") == self.repr { return itb.next() != Some(bytes!(\"..\")); } loop { match (ita.next(), itb.next()) { (None, _) => break, (Some(a), Some(b)) if a == b => { loop }, (Some(a), _) if a == bytes!(\"..\") => { // if ita contains only .. 
components, it's an ancestor return ita.all(|x| x == bytes!(\"..\")); } _ => return false } } true } } fn path_relative_from(&self, base: &Path) -> Option { if self.is_absolute() != base.is_absolute() { if self.is_absolute() { Some(self.clone()) } else { None } } else { let mut ita = self.component_iter(); let mut itb = base.component_iter(); let mut comps = ~[]; loop { match (ita.next(), itb.next()) { (None, None) => break, (Some(a), None) => { comps.push(a); comps.extend(&mut ita); break; } (None, _) => comps.push(dot_dot_static), (Some(a), Some(b)) if comps.is_empty() && a == b => (), (Some(a), Some(b)) if b == bytes!(\".\") => comps.push(a), (Some(_), Some(b)) if b == bytes!(\"..\") => return None, (Some(a), Some(_)) => { comps.push(dot_dot_static); for _ in itb { comps.push(dot_dot_static); } comps.push(a); comps.extend(&mut ita); break; } } } Some(Path::new(comps.connect_vec(&sep))) } } } impl Path { /// Returns a new Path from a byte vector /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. #[inline] pub fn new(v: &[u8]) -> Path { GenericPath::from_vec(v) } /// Returns a new Path from a string /// /// # Failure /// /// Raises the `null_byte` condition if the str contains a NUL. #[inline] pub fn from_str(s: &str) -> Path { GenericPath::from_str(s) } /// Converts the Path into an owned byte vector pub fn into_vec(self) -> ~[u8] { self.repr } /// Converts the Path into an owned string, if possible pub fn into_str(self) -> Option<~str> { str::from_utf8_owned_opt(self.repr) } /// Returns a normalized byte vector representation of a path, by removing all empty /// components, and unnecessary . and .. components. 
pub fn normalize+CopyableVector>(v: V) -> ~[u8] { // borrowck is being very picky let val = { let is_abs = !v.as_slice().is_empty() && v.as_slice()[0] == sep; let v_ = if is_abs { v.as_slice().slice_from(1) } else { v.as_slice() }; let comps = normalize_helper(v_, is_abs); match comps { None => None, Some(comps) => { if is_abs && comps.is_empty() { Some(~[sep]) } else { let n = if is_abs { comps.len() } else { comps.len() - 1} + comps.iter().map(|v| v.len()).sum(); let mut v = vec::with_capacity(n); let mut it = comps.move_iter(); if !is_abs { match it.next() { None => (), Some(comp) => v.push_all(comp) } } for comp in it { v.push(sep); v.push_all(comp); } Some(v) } } } }; match val { None => v.into_owned(), Some(val) => val } } /// Returns an iterator that yields each component of the path in turn. /// Does not distinguish between absolute and relative paths, e.g. /// /a/b/c and a/b/c yield the same set of components. /// A path of \"/\" yields no components. A path of \".\" yields one component. 
pub fn component_iter<'a>(&'a self) -> ComponentIter<'a> { let v = if self.repr[0] == sep { self.repr.slice_from(1) } else { self.repr.as_slice() }; let mut ret = v.split_iter(is_sep); if v.is_empty() { // consume the empty \"\" component ret.next(); } ret } } // None result means the byte vector didn't need normalizing fn normalize_helper<'a>(v: &'a [u8], is_abs: bool) -> Option<~[&'a [u8]]> { if is_abs && v.as_slice().is_empty() { return None; } let mut comps: ~[&'a [u8]] = ~[]; let mut n_up = 0u; let mut changed = false; for comp in v.split_iter(is_sep) { if comp.is_empty() { changed = true } else if comp == bytes!(\".\") { changed = true } else if comp == bytes!(\"..\") { if is_abs && comps.is_empty() { changed = true } else if comps.len() == n_up { comps.push(dot_dot_static); n_up += 1 } else { comps.pop_opt(); changed = true } } else { comps.push(comp) } } if changed { if comps.is_empty() && !is_abs { if v == bytes!(\".\") { return None; } comps.push(dot_static); } Some(comps) } else { None } } // FIXME (#8169): Pull this into parent module once visibility works #[inline(always)] fn contains_nul(v: &[u8]) -> bool { v.iter().any(|&x| x == 0) } static dot_static: &'static [u8] = &'static ['.' as u8]; static dot_dot_static: &'static [u8] = &'static ['.' as u8, '.' as u8]; #[cfg(test)] mod tests { use super::*; use option::{Some, None}; use iter::Iterator; use str; use vec::Vector; macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_str(), Some($exp)); } ); (v: $path:expr, $exp:expr) => ( { let path = $path; assert_eq!(path.as_vec(), $exp); } ) ) macro_rules! 
b( ($($arg:expr),+) => ( bytes!($($arg),+) ) ) #[test] fn test_paths() { t!(v: Path::new([]), b!(\".\")); t!(v: Path::new(b!(\"/\")), b!(\"/\")); t!(v: Path::new(b!(\"a/b/c\")), b!(\"a/b/c\")); t!(v: Path::new(b!(\"a/b/c\", 0xff)), b!(\"a/b/c\", 0xff)); t!(v: Path::new(b!(0xff, \"/../foo\", 0x80)), b!(\"foo\", 0x80)); let p = Path::new(b!(\"a/b/c\", 0xff)); assert_eq!(p.as_str(), None); t!(s: Path::from_str(\"\"), \".\"); t!(s: Path::from_str(\"/\"), \"/\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"hi/\"), \"hi\"); t!(s: Path::from_str(\"/lib\"), \"/lib\"); t!(s: Path::from_str(\"/lib/\"), \"/lib\"); t!(s: Path::from_str(\"hi/there\"), \"hi/there\"); t!(s: Path::from_str(\"hi/there.txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/there/\"), \"hi/there\"); t!(s: Path::from_str(\"hi/../there\"), \"there\"); t!(s: Path::from_str(\"../hi/there\"), \"../hi/there\"); t!(s: Path::from_str(\"/../hi/there\"), \"/hi/there\"); t!(s: Path::from_str(\"foo/..\"), \".\"); t!(s: Path::from_str(\"/foo/..\"), \"/\"); t!(s: Path::from_str(\"/foo/../..\"), \"/\"); t!(s: Path::from_str(\"/foo/../../bar\"), \"/bar\"); t!(s: Path::from_str(\"/./hi/./there/.\"), \"/hi/there\"); t!(s: Path::from_str(\"/./hi/./there/./..\"), \"/hi\"); t!(s: Path::from_str(\"foo/../..\"), \"..\"); t!(s: Path::from_str(\"foo/../../..\"), \"../..\"); t!(s: Path::from_str(\"foo/../../bar\"), \"../bar\"); assert_eq!(Path::new(b!(\"foo/bar\")).into_vec(), b!(\"foo/bar\").to_owned()); assert_eq!(Path::new(b!(\"/foo/../../bar\")).into_vec(), b!(\"/bar\").to_owned()); assert_eq!(Path::from_str(\"foo/bar\").into_str(), Some(~\"foo/bar\")); assert_eq!(Path::from_str(\"/foo/../../bar\").into_str(), Some(~\"/bar\")); let p = Path::new(b!(\"foo/bar\", 0x80)); assert_eq!(p.as_str(), None); assert_eq!(Path::new(b!(\"foo\", 0xff, \"/bar\")).into_str(), None); } #[test] fn test_null_byte() { use path2::null_byte::cond; let mut handled = false; let mut p = do cond.trap(|v| { handled = true; 
assert_eq!(v.as_slice(), b!(\"foo/bar\", 0)); (b!(\"/bar\").to_owned()) }).inside { Path::new(b!(\"foo/bar\", 0)) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"/bar\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.set_filename(b!(\"f\", 0, \"o\")) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"/foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"null/\", 0, \"/byte\")); (b!(\"null/byte\").to_owned()) }).inside { p.set_dirname(b!(\"null/\", 0, \"/byte\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"null/byte/foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.push(b!(\"f\", 0, \"o\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"null/byte/foo/foo\")); } #[test] fn test_null_byte_fail() { use path2::null_byte::cond; use task; macro_rules! t( ($name:expr => $code:block) => ( { let mut t = task::task(); t.supervised(); t.name($name); let res = do t.try $code; assert!(res.is_err()); } ) ) t!(~\"new() w/nul\" => { do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { Path::new(b!(\"foo/bar\", 0)) }; }) t!(~\"set_filename w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_filename(b!(\"foo\", 0)) }; }) t!(~\"set_dirname w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_dirname(b!(\"foo\", 0)) }; }) t!(~\"push w/nul\" => { let mut p = Path::new(b!(\"foo/bar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.push(b!(\"foo\", 0)) }; }) } #[test] fn test_components() { macro_rules! 
t( (s: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::from_str($path); assert_eq!(path.$op(), ($exp).as_bytes()); } ); (s: $path:expr, $op:ident, $exp:expr, opt) => ( { let path = Path::from_str($path); let left = path.$op().map(|&x| str::from_utf8_slice(x)); assert_eq!(left, $exp); } ); (v: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::new($path); assert_eq!(path.$op(), $exp); } ) ) t!(v: b!(\"a/b/c\"), filename, b!(\"c\")); t!(v: b!(\"a/b/c\", 0xff), filename, b!(\"c\", 0xff)); t!(v: b!(\"a/b\", 0xff, \"/c\"), filename, b!(\"c\")); t!(s: \"a/b/c\", filename, \"c\"); t!(s: \"/a/b/c\", filename, \"c\"); t!(s: \"a\", filename, \"a\"); t!(s: \"/a\", filename, \"a\"); t!(s: \".\", filename, \"\"); t!(s: \"/\", filename, \"\"); t!(s: \"..\", filename, \"\"); t!(s: \"../..\", filename, \"\"); t!(v: b!(\"a/b/c\"), dirname, b!(\"a/b\")); t!(v: b!(\"a/b/c\", 0xff), dirname, b!(\"a/b\")); t!(v: b!(\"a/b\", 0xff, \"/c\"), dirname, b!(\"a/b\", 0xff)); t!(s: \"a/b/c\", dirname, \"a/b\"); t!(s: \"/a/b/c\", dirname, \"/a/b\"); t!(s: \"a\", dirname, \".\"); t!(s: \"/a\", dirname, \"/\"); t!(s: \".\", dirname, \".\"); t!(s: \"/\", dirname, \"/\"); t!(s: \"..\", dirname, \"..\"); t!(s: \"../..\", dirname, \"../..\"); t!(v: b!(\"hi/there.txt\"), filestem, b!(\"there\")); t!(v: b!(\"hi/there\", 0x80, \".txt\"), filestem, b!(\"there\", 0x80)); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), filestem, b!(\"there\")); t!(s: \"hi/there.txt\", filestem, \"there\"); t!(s: \"hi/there\", filestem, \"there\"); t!(s: \"there.txt\", filestem, \"there\"); t!(s: \"there\", filestem, \"there\"); t!(s: \".\", filestem, \"\"); t!(s: \"/\", filestem, \"\"); t!(s: \"foo/.bar\", filestem, \".bar\"); t!(s: \".bar\", filestem, \".bar\"); t!(s: \"..bar\", filestem, \".\"); t!(s: \"hi/there..txt\", filestem, \"there.\"); t!(s: \"..\", filestem, \"\"); t!(s: \"../..\", filestem, \"\"); t!(v: b!(\"hi/there.txt\"), extension, Some(b!(\"txt\"))); t!(v: b!(\"hi/there\", 0x80, \".txt\"), 
extension, Some(b!(\"txt\"))); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), extension, Some(b!(\"t\", 0x80, \"xt\"))); t!(v: b!(\"hi/there\"), extension, None); t!(v: b!(\"hi/there\", 0x80), extension, None); t!(s: \"hi/there.txt\", extension, Some(\"txt\"), opt); t!(s: \"hi/there\", extension, None, opt); t!(s: \"there.txt\", extension, Some(\"txt\"), opt); t!(s: \"there\", extension, None, opt); t!(s: \".\", extension, None, opt); t!(s: \"/\", extension, None, opt); t!(s: \"foo/.bar\", extension, None, opt); t!(s: \".bar\", extension, None, opt); t!(s: \"..bar\", extension, Some(\"bar\"), opt); t!(s: \"hi/there..txt\", extension, Some(\"txt\"), opt); t!(s: \"..\", extension, None, opt); t!(s: \"../..\", extension, None, opt); } #[test] fn test_push() { macro_rules! t( (s: $path:expr, $join:expr) => ( { let path = ($path); let join = ($join); let mut p1 = Path::from_str(path); let p2 = p1.clone(); p1.push_str(join); assert_eq!(p1, p2.join_str(join)); } ) ) t!(s: \"a/b/c\", \"..\"); t!(s: \"/a/b/c\", \"d\"); t!(s: \"a/b\", \"c/d\"); t!(s: \"a/b\", \"/c/d\"); } #[test] fn test_push_path() { macro_rules! t( (s: $path:expr, $push:expr, $exp:expr) => ( { let mut p = Path::from_str($path); let push = Path::from_str($push); p.push_path(&push); assert_eq!(p.as_str(), Some($exp)); } ) ) t!(s: \"a/b/c\", \"d\", \"a/b/c/d\"); t!(s: \"/a/b/c\", \"d\", \"/a/b/c/d\"); t!(s: \"a/b\", \"c/d\", \"a/b/c/d\"); t!(s: \"a/b\", \"/c/d\", \"/c/d\"); t!(s: \"a/b\", \".\", \"a/b\"); t!(s: \"a/b\", \"../c\", \"a/c\"); } #[test] fn test_pop() { macro_rules! 
t( (s: $path:expr, $left:expr, $right:expr) => ( { let mut p = Path::from_str($path); let file = p.pop_opt_str(); assert_eq!(p.as_str(), Some($left)); assert_eq!(file.map(|s| s.as_slice()), $right); } ); (v: [$($path:expr),+], [$($left:expr),+], Some($($right:expr),+)) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file.map(|v| v.as_slice()), Some(b!($($right),+))); } ); (v: [$($path:expr),+], [$($left:expr),+], None) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file, None); } ) ) t!(v: [\"a/b/c\"], [\"a/b\"], Some(\"c\")); t!(v: [\"a\"], [\".\"], Some(\"a\")); t!(v: [\".\"], [\".\"], None); t!(v: [\"/a\"], [\"/\"], Some(\"a\")); t!(v: [\"/\"], [\"/\"], None); t!(v: [\"a/b/c\", 0x80], [\"a/b\"], Some(\"c\", 0x80)); t!(v: [\"a/b\", 0x80, \"/c\"], [\"a/b\", 0x80], Some(\"c\")); t!(v: [0xff], [\".\"], Some(0xff)); t!(v: [\"/\", 0xff], [\"/\"], Some(0xff)); t!(s: \"a/b/c\", \"a/b\", Some(\"c\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \".\", \".\", None); t!(s: \"/a\", \"/\", Some(\"a\")); t!(s: \"/\", \"/\", None); assert_eq!(Path::new(b!(\"foo/bar\", 0x80)).pop_opt_str(), None); assert_eq!(Path::new(b!(\"foo\", 0x80, \"/bar\")).pop_opt_str(), Some(~\"bar\")); } #[test] fn test_join() { t!(v: Path::new(b!(\"a/b/c\")).join(b!(\"..\")), b!(\"a/b\")); t!(v: Path::new(b!(\"/a/b/c\")).join(b!(\"d\")), b!(\"/a/b/c/d\")); t!(v: Path::new(b!(\"a/\", 0x80, \"/c\")).join(b!(0xff)), b!(\"a/\", 0x80, \"/c/\", 0xff)); t!(s: Path::from_str(\"a/b/c\").join_str(\"..\"), \"a/b\"); t!(s: Path::from_str(\"/a/b/c\").join_str(\"d\"), \"/a/b/c/d\"); t!(s: Path::from_str(\"a/b\").join_str(\"c/d\"), \"a/b/c/d\"); t!(s: Path::from_str(\"a/b\").join_str(\"/c/d\"), \"/c/d\"); t!(s: Path::from_str(\".\").join_str(\"a/b\"), \"a/b\"); t!(s: Path::from_str(\"/\").join_str(\"a/b\"), \"/a/b\"); } #[test] fn test_join_path() { macro_rules! 
t( (s: $path:expr, $join:expr, $exp:expr) => ( { let path = Path::from_str($path); let join = Path::from_str($join); let res = path.join_path(&join); assert_eq!(res.as_str(), Some($exp)); } ) ) t!(s: \"a/b/c\", \"..\", \"a/b\"); t!(s: \"/a/b/c\", \"d\", \"/a/b/c/d\"); t!(s: \"a/b\", \"c/d\", \"a/b/c/d\"); t!(s: \"a/b\", \"/c/d\", \"/c/d\"); t!(s: \".\", \"a/b\", \"a/b\"); t!(s: \"/\", \"a/b\", \"/a/b\"); } #[test] fn test_with_helpers() { t!(v: Path::new(b!(\"a/b/c\")).with_dirname(b!(\"d\")), b!(\"d/c\")); t!(v: Path::new(b!(\"a/b/c\")).with_dirname(b!(\"d/e\")), b!(\"d/e/c\")); t!(v: Path::new(b!(\"a/\", 0x80, \"b/c\")).with_dirname(b!(0xff)), b!(0xff, \"/c\")); t!(v: Path::new(b!(\"a/b/\", 0x80)).with_dirname(b!(\"/\", 0xcd)), b!(\"/\", 0xcd, \"/\", 0x80)); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"d\"), \"d/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"d/e\"), \"d/e/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"\"), \"c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"/\"), \"/c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\".\"), \"c\"); t!(s: Path::from_str(\"a/b/c\").with_dirname_str(\"..\"), \"../c\"); t!(s: Path::from_str(\"/\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"/\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"/foo\").with_dirname_str(\"bar\"), \"bar/foo\"); t!(s: Path::from_str(\"..\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"../..\").with_dirname_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"..\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"../..\").with_dirname_str(\"\"), \".\"); t!(s: Path::from_str(\"foo\").with_dirname_str(\"..\"), \"../foo\"); t!(s: Path::from_str(\"foo\").with_dirname_str(\"../..\"), \"../../foo\"); t!(v: Path::new(b!(\"a/b/c\")).with_filename(b!(\"d\")), b!(\"a/b/d\")); t!(v: Path::new(b!(\"a/b/c\", 0xff)).with_filename(b!(0x80)), b!(\"a/b/\", 0x80)); t!(v: Path::new(b!(\"/\", 0xff, \"/foo\")).with_filename(b!(0xcd)), 
b!(\"/\", 0xff, \"/\", 0xcd)); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"d\"), \"a/b/d\"); t!(s: Path::from_str(\".\").with_filename_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"/a/b/c\").with_filename_str(\"d\"), \"/a/b/d\"); t!(s: Path::from_str(\"/\").with_filename_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\"foo\").with_filename_str(\"bar\"), \"bar\"); t!(s: Path::from_str(\"/\").with_filename_str(\"foo/\"), \"/foo\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"foo/\"), \"/foo\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"\"), \"a/b\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\".\"), \"a/b\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"..\"), \"a\"); t!(s: Path::from_str(\"/a\").with_filename_str(\"\"), \"/\"); t!(s: Path::from_str(\"foo\").with_filename_str(\"\"), \".\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"d/e\"), \"a/b/d/e\"); t!(s: Path::from_str(\"a/b/c\").with_filename_str(\"/d\"), \"a/b/d\"); t!(s: Path::from_str(\"..\").with_filename_str(\"foo\"), \"../foo\"); t!(s: Path::from_str(\"../..\").with_filename_str(\"foo\"), \"../../foo\"); t!(s: Path::from_str(\"..\").with_filename_str(\"\"), \"..\"); t!(s: Path::from_str(\"../..\").with_filename_str(\"\"), \"../..\"); t!(v: Path::new(b!(\"hi/there\", 0x80, \".txt\")).with_filestem(b!(0xff)), b!(\"hi/\", 0xff, \".txt\")); t!(v: Path::new(b!(\"hi/there.txt\", 0x80)).with_filestem(b!(0xff)), b!(\"hi/\", 0xff, \".txt\", 0x80)); t!(v: Path::new(b!(\"hi/there\", 0xff)).with_filestem(b!(0x80)), b!(\"hi/\", 0x80)); t!(v: Path::new(b!(\"hi\", 0x80, \"/there\")).with_filestem([]), b!(\"hi\", 0x80)); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"here\"), \"hi/here.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"\"), \"hi/.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\".\"), \"hi/..txt\"); t!(s: 
Path::from_str(\"hi/there.txt\").with_filestem_str(\"..\"), \"hi/...txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"/\"), \"hi/.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_filestem_str(\"foo/bar\"), \"hi/foo/bar.txt\"); t!(s: Path::from_str(\"hi/there.foo.txt\").with_filestem_str(\"here\"), \"hi/here.txt\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"here\"), \"hi/here\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"\"), \"hi\"); t!(s: Path::from_str(\"hi\").with_filestem_str(\"\"), \".\"); t!(s: Path::from_str(\"/hi\").with_filestem_str(\"\"), \"/\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\"..\"), \".\"); t!(s: Path::from_str(\"hi/there\").with_filestem_str(\".\"), \"hi\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"foo\"), \"hi/foo.\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"\"), \"hi\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\".\"), \".\"); t!(s: Path::from_str(\"hi/there.\").with_filestem_str(\"..\"), \"hi/...\"); t!(s: Path::from_str(\"/\").with_filestem_str(\"foo\"), \"/foo\"); t!(s: Path::from_str(\".\").with_filestem_str(\"foo\"), \"foo\"); t!(s: Path::from_str(\"hi/there..\").with_filestem_str(\"here\"), \"hi/here.\"); t!(s: Path::from_str(\"hi/there..\").with_filestem_str(\"\"), \"hi\"); t!(v: Path::new(b!(\"hi/there\", 0x80, \".txt\")).with_extension(b!(\"exe\")), b!(\"hi/there\", 0x80, \".exe\")); t!(v: Path::new(b!(\"hi/there.txt\", 0x80)).with_extension(b!(0xff)), b!(\"hi/there.\", 0xff)); t!(v: Path::new(b!(\"hi/there\", 0x80)).with_extension(b!(0xff)), b!(\"hi/there\", 0x80, \".\", 0xff)); t!(v: Path::new(b!(\"hi/there.\", 0xff)).with_extension([]), b!(\"hi/there\")); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\"exe\"), \"hi/there.exe\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\"\"), \"hi/there\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\".\"), \"hi/there..\"); t!(s: 
Path::from_str(\"hi/there.txt\").with_extension_str(\"..\"), \"hi/there...\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\"txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\".\"), \"hi/there..\"); t!(s: Path::from_str(\"hi/there\").with_extension_str(\"..\"), \"hi/there...\"); t!(s: Path::from_str(\"hi/there.\").with_extension_str(\"txt\"), \"hi/there.txt\"); t!(s: Path::from_str(\"hi/.foo\").with_extension_str(\"txt\"), \"hi/.foo.txt\"); t!(s: Path::from_str(\"hi/there.txt\").with_extension_str(\".foo\"), \"hi/there..foo\"); t!(s: Path::from_str(\"/\").with_extension_str(\"txt\"), \"/\"); t!(s: Path::from_str(\"/\").with_extension_str(\".\"), \"/\"); t!(s: Path::from_str(\"/\").with_extension_str(\"..\"), \"/\"); t!(s: Path::from_str(\".\").with_extension_str(\"txt\"), \".\"); } #[test] fn test_setters() { macro_rules! t( (s: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::from_str(path); p1.$set(arg); let p2 = Path::from_str(path); assert_eq!(p1, p2.$with(arg)); } ); (v: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::new(path); p1.$set(arg); let p2 = Path::new(path); assert_eq!(p1, p2.$with(arg)); } ) ) t!(v: b!(\"a/b/c\"), set_dirname, with_dirname, b!(\"d\")); t!(v: b!(\"a/b/c\"), set_dirname, with_dirname, b!(\"d/e\")); t!(v: b!(\"a/\", 0x80, \"/c\"), set_dirname, with_dirname, b!(0xff)); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"d\"); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"d/e\"); t!(s: \"/\", set_dirname_str, with_dirname_str, \"foo\"); t!(s: \"/foo\", set_dirname_str, with_dirname_str, \"bar\"); t!(s: \"a/b/c\", set_dirname_str, with_dirname_str, \"\"); t!(s: \"../..\", set_dirname_str, with_dirname_str, \"x\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"../..\"); t!(v: b!(\"a/b/c\"), set_filename, with_filename, b!(\"d\")); t!(v: b!(\"/\"), set_filename, 
with_filename, b!(\"foo\")); t!(v: b!(0x80), set_filename, with_filename, b!(0xff)); t!(s: \"a/b/c\", set_filename_str, with_filename_str, \"d\"); t!(s: \"/\", set_filename_str, with_filename_str, \"foo\"); t!(s: \".\", set_filename_str, with_filename_str, \"foo\"); t!(s: \"a/b\", set_filename_str, with_filename_str, \"\"); t!(s: \"a\", set_filename_str, with_filename_str, \"\"); t!(v: b!(\"hi/there.txt\"), set_filestem, with_filestem, b!(\"here\")); t!(v: b!(\"hi/there\", 0x80, \".txt\"), set_filestem, with_filestem, b!(\"here\", 0xff)); t!(s: \"hi/there.txt\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there.\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hi/there.txt\", set_filestem_str, with_filestem_str, \"\"); t!(s: \"hi/there\", set_filestem_str, with_filestem_str, \"\"); t!(v: b!(\"hi/there.txt\"), set_extension, with_extension, b!(\"exe\")); t!(v: b!(\"hi/there.t\", 0x80, \"xt\"), set_extension, with_extension, b!(\"exe\", 0xff)); t!(s: \"hi/there.txt\", set_extension_str, with_extension_str, \"exe\"); t!(s: \"hi/there.\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hi/there\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hi/there.txt\", set_extension_str, with_extension_str, \"\"); t!(s: \"hi/there\", set_extension_str, with_extension_str, \"\"); t!(s: \".\", set_extension_str, with_extension_str, \"txt\"); } #[test] fn test_getters() { macro_rules! 
t( (s: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename_str(), $filename); assert_eq!(path.dirname_str(), $dirname); assert_eq!(path.filestem_str(), $filestem); assert_eq!(path.extension_str(), $ext); } ); (v: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename(), $filename); assert_eq!(path.dirname(), $dirname); assert_eq!(path.filestem(), $filestem); assert_eq!(path.extension(), $ext); } ) ) t!(v: Path::new(b!(\"a/b/c\")), b!(\"c\"), b!(\"a/b\"), b!(\"c\"), None); t!(v: Path::new(b!(\"a/b/\", 0xff)), b!(0xff), b!(\"a/b\"), b!(0xff), None); t!(v: Path::new(b!(\"hi/there.\", 0xff)), b!(\"there.\", 0xff), b!(\"hi\"), b!(\"there\"), Some(b!(0xff))); t!(s: Path::from_str(\"a/b/c\"), Some(\"c\"), Some(\"a/b\"), Some(\"c\"), None); t!(s: Path::from_str(\".\"), Some(\"\"), Some(\".\"), Some(\"\"), None); t!(s: Path::from_str(\"/\"), Some(\"\"), Some(\"/\"), Some(\"\"), None); t!(s: Path::from_str(\"..\"), Some(\"\"), Some(\"..\"), Some(\"\"), None); t!(s: Path::from_str(\"../..\"), Some(\"\"), Some(\"../..\"), Some(\"\"), None); t!(s: Path::from_str(\"hi/there.txt\"), Some(\"there.txt\"), Some(\"hi\"), Some(\"there\"), Some(\"txt\")); t!(s: Path::from_str(\"hi/there\"), Some(\"there\"), Some(\"hi\"), Some(\"there\"), None); t!(s: Path::from_str(\"hi/there.\"), Some(\"there.\"), Some(\"hi\"), Some(\"there\"), Some(\"\")); t!(s: Path::from_str(\"hi/.there\"), Some(\".there\"), Some(\"hi\"), Some(\".there\"), None); t!(s: Path::from_str(\"hi/..there\"), Some(\"..there\"), Some(\"hi\"), Some(\".\"), Some(\"there\")); t!(s: Path::new(b!(\"a/b/\", 0xff)), None, Some(\"a/b\"), None, None); t!(s: Path::new(b!(\"a/b/\", 0xff, \".txt\")), None, Some(\"a/b\"), None, Some(\"txt\")); t!(s: Path::new(b!(\"a/b/c.\", 0x80)), None, Some(\"a/b\"), Some(\"c\"), None); t!(s: Path::new(b!(0xff, \"/b\")), Some(\"b\"), None, Some(\"b\"), None); } #[test] fn 
test_dir_file_path() { t!(v: Path::new(b!(\"hi/there\", 0x80)).dir_path(), b!(\"hi\")); t!(v: Path::new(b!(\"hi\", 0xff, \"/there\")).dir_path(), b!(\"hi\", 0xff)); t!(s: Path::from_str(\"hi/there\").dir_path(), \"hi\"); t!(s: Path::from_str(\"hi\").dir_path(), \".\"); t!(s: Path::from_str(\"/hi\").dir_path(), \"/\"); t!(s: Path::from_str(\"/\").dir_path(), \"/\"); t!(s: Path::from_str(\"..\").dir_path(), \"..\"); t!(s: Path::from_str(\"../..\").dir_path(), \"../..\"); macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = $path; let left = path.and_then_ref(|p| p.as_str()); assert_eq!(left, $exp); } ); (v: $path:expr, $exp:expr) => ( { let path = $path; let left = path.map(|p| p.as_vec()); assert_eq!(left, $exp); } ) ) t!(v: Path::new(b!(\"hi/there\", 0x80)).file_path(), Some(b!(\"there\", 0x80))); t!(v: Path::new(b!(\"hi\", 0xff, \"/there\")).file_path(), Some(b!(\"there\"))); t!(s: Path::from_str(\"hi/there\").file_path(), Some(\"there\")); t!(s: Path::from_str(\"hi\").file_path(), Some(\"hi\")); t!(s: Path::from_str(\".\").file_path(), None); t!(s: Path::from_str(\"/\").file_path(), None); t!(s: Path::from_str(\"..\").file_path(), None); t!(s: Path::from_str(\"../..\").file_path(), None); } #[test] fn test_is_absolute() { assert_eq!(Path::from_str(\"a/b/c\").is_absolute(), false); assert_eq!(Path::from_str(\"/a/b/c\").is_absolute(), true); assert_eq!(Path::from_str(\"a\").is_absolute(), false); assert_eq!(Path::from_str(\"/a\").is_absolute(), true); assert_eq!(Path::from_str(\".\").is_absolute(), false); assert_eq!(Path::from_str(\"/\").is_absolute(), true); assert_eq!(Path::from_str(\"..\").is_absolute(), false); assert_eq!(Path::from_str(\"../..\").is_absolute(), false); } #[test] fn test_is_ancestor_of() { macro_rules! 
t( (s: $path:expr, $dest:expr, $exp:expr) => ( { let path = Path::from_str($path); let dest = Path::from_str($dest); assert_eq!(path.is_ancestor_of(&dest), $exp); } ) ) t!(s: \"a/b/c\", \"a/b/c/d\", true); t!(s: \"a/b/c\", \"a/b/c\", true); t!(s: \"a/b/c\", \"a/b\", false); t!(s: \"/a/b/c\", \"/a/b/c\", true); t!(s: \"/a/b\", \"/a/b/c\", true); t!(s: \"/a/b/c/d\", \"/a/b/c\", false); t!(s: \"/a/b\", \"a/b/c\", false); t!(s: \"a/b\", \"/a/b/c\", false); t!(s: \"a/b/c\", \"a/b/d\", false); t!(s: \"../a/b/c\", \"a/b/c\", false); t!(s: \"a/b/c\", \"../a/b/c\", false); t!(s: \"a/b/c\", \"a/b/cd\", false); t!(s: \"a/b/cd\", \"a/b/c\", false); t!(s: \"../a/b\", \"../a/b/c\", true); t!(s: \".\", \"a/b\", true); t!(s: \".\", \".\", true); t!(s: \"/\", \"/\", true); t!(s: \"/\", \"/a/b\", true); t!(s: \"..\", \"a/b\", true); t!(s: \"../..\", \"a/b\", true); } #[test] fn test_path_relative_from() { macro_rules! t( (s: $path:expr, $other:expr, $exp:expr) => ( { let path = Path::from_str($path); let other = Path::from_str($other); let res = path.path_relative_from(&other); assert_eq!(res.and_then_ref(|x| x.as_str()), $exp); } ) ) t!(s: \"a/b/c\", \"a/b\", Some(\"c\")); t!(s: \"a/b/c\", \"a/b/d\", Some(\"../c\")); t!(s: \"a/b/c\", \"a/b/c/d\", Some(\"..\")); t!(s: \"a/b/c\", \"a/b/c\", Some(\".\")); t!(s: \"a/b/c\", \"a/b/c/d/e\", Some(\"../..\")); t!(s: \"a/b/c\", \"a/d/e\", Some(\"../../b/c\")); t!(s: \"a/b/c\", \"d/e/f\", Some(\"../../../a/b/c\")); t!(s: \"a/b/c\", \"/a/b/c\", None); t!(s: \"/a/b/c\", \"a/b/c\", Some(\"/a/b/c\")); t!(s: \"/a/b/c\", \"/a/b/c/d\", Some(\"..\")); t!(s: \"/a/b/c\", \"/a/b\", Some(\"c\")); t!(s: \"/a/b/c\", \"/a/b/c/d/e\", Some(\"../..\")); t!(s: \"/a/b/c\", \"/a/d/e\", Some(\"../../b/c\")); t!(s: \"/a/b/c\", \"/d/e/f\", Some(\"../../../a/b/c\")); t!(s: \"hi/there.txt\", \"hi/there\", Some(\"../there.txt\")); t!(s: \".\", \"a\", Some(\"..\")); t!(s: \".\", \"a/b\", Some(\"../..\")); t!(s: \".\", \".\", Some(\".\")); t!(s: \"a\", \".\", 
Some(\"a\")); t!(s: \"a/b\", \".\", Some(\"a/b\")); t!(s: \"..\", \".\", Some(\"..\")); t!(s: \"a/b/c\", \"a/b/c\", Some(\".\")); t!(s: \"/a/b/c\", \"/a/b/c\", Some(\".\")); t!(s: \"/\", \"/\", Some(\".\")); t!(s: \"/\", \".\", Some(\"/\")); t!(s: \"../../a\", \"b\", Some(\"../../../a\")); t!(s: \"a\", \"../../b\", None); t!(s: \"../../a\", \"../../b\", Some(\"../a\")); t!(s: \"../../a\", \"../../a/b\", Some(\"..\")); t!(s: \"../../a/b\", \"../../a\", Some(\"b\")); } #[test] fn test_component_iter() { macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = Path::from_str($path); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; let exps = exp.iter().map(|x| x.as_bytes()).to_owned_vec(); assert_eq!(comps, exps); } ); (v: [$($arg:expr),+], [$([$($exp:expr),*]),*]) => ( { let path = Path::new(b!($($arg),+)); let comps = path.component_iter().to_owned_vec(); let exp: &[&[u8]] = [$(b!($($exp),*)),*]; assert_eq!(comps.as_slice(), exp); } ) ) t!(v: [\"a/b/c\"], [[\"a\"], [\"b\"], [\"c\"]]); t!(v: [\"/\", 0xff, \"/a/\", 0x80], [[0xff], [\"a\"], [0x80]]); t!(v: [\"../../foo\", 0xcd, \"bar\"], [[\"..\"], [\"..\"], [\"foo\", 0xcd, \"bar\"]]); t!(s: \"a/b/c\", [\"a\", \"b\", \"c\"]); t!(s: \"a/b/d\", [\"a\", \"b\", \"d\"]); t!(s: \"a/b/cd\", [\"a\", \"b\", \"cd\"]); t!(s: \"/a/b/c\", [\"a\", \"b\", \"c\"]); t!(s: \"a\", [\"a\"]); t!(s: \"/a\", [\"a\"]); t!(s: \"/\", []); t!(s: \".\", [\".\"]); t!(s: \"..\", [\"..\"]); t!(s: \"../..\", [\"..\", \"..\"]); t!(s: \"../../foo\", [\"..\", \"..\", \"foo\"]); } } ", "commid": "rust_pr_9655"}], "negative_passages": []} {"query_id": "q-en-rust-09bb67fce6b347ba30fcb629338c464042351d96b12e2bafe7c13f6369eaaf58", "query": "Given the following code: The current output is: Ideally the output should look like: Right now, cargo and rustc operate basically independently of each other. 
The summary (\"aborting ...\" and \"could not compile ...\") is repeated twice, and both have different, incompatible ways to get more info about what went wrong. There's no reason to repeat these twice; we could include all the same information in half the space if we can get cargo and rustc to cooperate. I suggest the way this be implemented is by keeping rustc's output the same when run standalone, but omitting \"aborting due to ...\" and \"for more information ...\" when run with . Then cargo can aggregate the info it used to print into its own errors by using the JSON output. cc (meta note: I thought of this while working on , which has fully 12 lines of \"metadata\" after the 5 line error. Most builds are not that bad in comparison, but I do think it shows that it needs support from all the tools in the stack to keep the verbosity down.)\nI'm not sure whether making the decision on the error format scheme is the right way to go. I can't think of any reason not to do it, unless the implementation is messy, so", "positive_passages": [{"docid": "doc-en-rust-5c403273909425ec01c762dcda1727b548953d9bdc8d4d969c4ee3927be2404c", "text": " // Copyright 2013 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. //! 
Windows file path handling use ascii::AsciiCast; use c_str::{CString, ToCStr}; use cast; use cmp::Eq; use from_str::FromStr; use iter::{AdditiveIterator, Extendable, Iterator}; use option::{Option, Some, None}; use str; use str::{OwnedStr, Str, StrVector}; use util; use vec::Vector; use super::{GenericPath, GenericPathUnsafe}; /// Iterator that yields successive components of a Path pub type ComponentIter<'self> = str::CharSplitIterator<'self, char>; /// Represents a Windows path // Notes for Windows path impl: // The MAX_PATH is 260, but 253 is the practical limit due to some API bugs // See http://msdn.microsoft.com/en-us/library/windows/desktop/aa365247.aspx for good information // about windows paths. // That same page puts a bunch of restrictions on allowed characters in a path. // `foo.txt` means \"relative to current drive\", but will not be considered to be absolute here // as `\u2203P | P.join(\"foo.txt\") != \"foo.txt\"`. // `C:` is interesting, that means \"the current directory on drive C\". // Long absolute paths need to have ? prefix (or, for UNC, ?UNC). I think that can be // ignored for now, though, and only added in a hypothetical .to_pwstr() function. // However, if a path is parsed that has ?, this needs to be preserved as it disables the // processing of \".\" and \"..\" components and / as a separator. // Experimentally, ?foo is not the same thing as foo. // Also, foo is not valid either (certainly not equivalent to foo). // Similarly, C:Users is not equivalent to C:Users, although C:Usersfoo is equivalent // to C:Usersfoo. In fact the command prompt treats C:foobar as UNC path. But it might be // best to just ignore that and normalize it to C:foobar. // // Based on all this, I think the right approach is to do the following: // * Require valid utf-8 paths. Windows API may use WCHARs, but we don't, and utf-8 is convertible // to UTF-16 anyway (though does Windows use UTF-16 or UCS-2? Not sure). // * Parse the prefixes ?UNC, ?, and . explicitly. 
// * If ?UNC, treat following two path components as servershare. Don't error for missing // servershare. // * If ?, parse disk from following component, if present. Don't error for missing disk. // * If ., treat rest of path as just regular components. I don't know how . and .. are handled // here, they probably aren't, but I'm not going to worry about that. // * Else if starts with , treat following two components as servershare. Don't error for missing // servershare. // * Otherwise, attempt to parse drive from start of path. // // The only error condition imposed here is valid utf-8. All other invalid paths are simply // preserved by the data structure; let the Windows API error out on them. #[deriving(Clone, DeepClone)] pub struct Path { priv repr: ~str, // assumed to never be empty priv prefix: Option, priv sepidx: Option // index of the final separator in the non-prefix portion of repr } impl Eq for Path { #[inline] fn eq(&self, other: &Path) -> bool { self.repr == other.repr } } impl FromStr for Path { fn from_str(s: &str) -> Option { if contains_nul(s.as_bytes()) { None } else { Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) }) } } } impl ToCStr for Path { #[inline] fn to_c_str(&self) -> CString { // The Path impl guarantees no embedded NULs unsafe { self.as_vec().to_c_str_unchecked() } } #[inline] unsafe fn to_c_str_unchecked(&self) -> CString { self.as_vec().to_c_str_unchecked() } } impl GenericPathUnsafe for Path { /// See `GenericPathUnsafe::from_vec_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. 
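The note above that requiring valid UTF-8 keeps paths losslessly convertible to the UTF-16 the Windows API expects can be sketched in modern Rust. This is an illustrative helper, not part of this module; the name `to_wide` is an assumption.

```rust
// Illustrative sketch: convert a UTF-8 path to the NUL-terminated
// WCHAR (UTF-16) form expected by the Windows *W API functions.
fn to_wide(path: &str) -> Vec<u16> {
    // encode_utf16() is lossless for any valid UTF-8 input;
    // append a trailing 0 because the APIs take NUL-terminated strings.
    path.encode_utf16().chain(std::iter::once(0)).collect()
}

fn main() {
    let wide = to_wide("C:\\café");
    // The buffer is NUL-terminated.
    assert_eq!(wide.last().copied(), Some(0));
    // Every character here is in the BMP, so one UTF-16 unit per char plus NUL.
    assert_eq!(wide.len(), "C:\\café".chars().count() + 1);
    println!("wide form has {} code units", wide.len());
}
```

This is why the module can get away with storing `~str` internally: conversion to the OS representation is a pure, infallible encoding step.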
#[inline] unsafe fn from_vec_unchecked(path: &[u8]) -> Path { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition GenericPathUnsafe::from_str_unchecked(path) } else { GenericPathUnsafe::from_str_unchecked(cast::transmute(path)) } } #[inline] unsafe fn from_str_unchecked(path: &str) -> Path { let (prefix, path) = Path::normalize_(path); assert!(!path.is_empty()); let mut ret = Path{ repr: path, prefix: prefix, sepidx: None }; ret.update_sepidx(); ret } /// See `GenericPathUnsafe::set_dirname_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. #[inline] unsafe fn set_dirname_unchecked(&mut self, dirname: &[u8]) { if !str::is_utf8(dirname) { let dirname = str::from_utf8(dirname); // triggers not_utf8 condition self.set_dirname_str_unchecked(dirname); } else { self.set_dirname_str_unchecked(cast::transmute(dirname)) } } unsafe fn set_dirname_str_unchecked(&mut self, dirname: &str) { match self.sepidx_or_prefix_len() { None if \".\" == self.repr || \"..\" == self.repr => { self.update_normalized(dirname); } None => { let mut s = str::with_capacity(dirname.len() + self.repr.len() + 1); s.push_str(dirname); s.push_char(sep); s.push_str(self.repr); self.update_normalized(s); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { self.update_normalized(dirname); } Some((_,idxa,end)) if dirname.is_empty() => { let (prefix, path) = Path::normalize_(self.repr.slice(idxa,end)); self.repr = path; self.prefix = prefix; self.update_sepidx(); } Some((idxb,idxa,end)) => { let idx = if dirname.ends_with(\"\") { idxa } else { let prefix = parse_prefix(dirname); if prefix == Some(DiskPrefix) && prefix_len(prefix) == dirname.len() { idxa } else { idxb } }; let mut s = str::with_capacity(dirname.len() + end - idx); s.push_str(dirname); s.push_str(self.repr.slice(idx,end)); self.update_normalized(s); } } } /// See `GenericPathUnsafe::set_filename_unchecekd`. 
/// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. #[inline] unsafe fn set_filename_unchecked(&mut self, filename: &[u8]) { if !str::is_utf8(filename) { let filename = str::from_utf8(filename); // triggers not_utf8 condition self.set_filename_str_unchecked(filename) } else { self.set_filename_str_unchecked(cast::transmute(filename)) } } unsafe fn set_filename_str_unchecked(&mut self, filename: &str) { match self.sepidx_or_prefix_len() { None if \"..\" == self.repr => { let mut s = str::with_capacity(3 + filename.len()); s.push_str(\"..\"); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } None => { self.update_normalized(filename); } Some((_,idxa,end)) if self.repr.slice(idxa,end) == \"..\" => { let mut s = str::with_capacity(end + 1 + filename.len()); s.push_str(self.repr.slice_to(end)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } Some((idxb,idxa,_)) if self.prefix == Some(DiskPrefix) && idxa == self.prefix_len() => { let mut s = str::with_capacity(idxb + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_str(filename); self.update_normalized(s); } Some((idxb,_,_)) => { let mut s = str::with_capacity(idxb + 1 + filename.len()); s.push_str(self.repr.slice_to(idxb)); s.push_char(sep); s.push_str(filename); self.update_normalized(s); } } } /// See `GenericPathUnsafe::push_unchecked`. /// /// # Failure /// /// Raises the `str::not_utf8` condition if not valid UTF-8. unsafe fn push_unchecked(&mut self, path: &[u8]) { if !str::is_utf8(path) { let path = str::from_utf8(path); // triggers not_utf8 condition self.push_str_unchecked(path); } else { self.push_str_unchecked(cast::transmute(path)); } } /// See `GenericPathUnsafe::push_str_unchecked`. /// /// Concatenating two Windows Paths is rather complicated. /// For the most part, it will behave as expected, except in the case of /// pushing a volume-relative path, e.g. `C:foo.txt`. 
Because we have no /// concept of per-volume cwds like Windows does, we can't behave exactly /// like Windows will. Instead, if the receiver is an absolute path on /// the same volume as the new path, it will be treated as the cwd that /// the new path is relative to. Otherwise, the new path will be treated /// as if it were absolute and will replace the receiver outright. unsafe fn push_str_unchecked(&mut self, path: &str) { fn is_vol_abs(path: &str, prefix: Option) -> bool { // assume prefix is Some(DiskPrefix) let rest = path.slice_from(prefix_len(prefix)); !rest.is_empty() && rest[0].is_ascii() && is_sep2(rest[0] as char) } fn shares_volume(me: &Path, path: &str) -> bool { // path is assumed to have a prefix of Some(DiskPrefix) match me.prefix { Some(DiskPrefix) => me.repr[0] == path[0].to_ascii().to_upper().to_byte(), Some(VerbatimDiskPrefix) => me.repr[4] == path[0].to_ascii().to_upper().to_byte(), _ => false } } fn is_sep_(prefix: Option, u: u8) -> bool { u.is_ascii() && if prefix_is_verbatim(prefix) { is_sep(u as char) } else { is_sep2(u as char) } } fn replace_path(me: &mut Path, path: &str, prefix: Option) { let newpath = Path::normalize__(path, prefix); me.repr = match newpath { Some(p) => p, None => path.to_owned() }; me.prefix = prefix; me.update_sepidx(); } fn append_path(me: &mut Path, path: &str) { // appends a path that has no prefix // if me is verbatim, we need to pre-normalize the new path let path_ = if me.is_verbatim() { Path::normalize__(path, None) } else { None }; let pathlen = path_.map_default(path.len(), |p| p.len()); let mut s = str::with_capacity(me.repr.len() + 1 + pathlen); s.push_str(me.repr); let plen = me.prefix_len(); if !(me.repr.len() > plen && me.repr[me.repr.len()-1] == sep as u8) { s.push_char(sep); } match path_ { None => s.push_str(path), Some(p) => s.push_str(p) }; me.update_normalized(s) } if !path.is_empty() { let prefix = parse_prefix(path); match prefix { Some(DiskPrefix) if !is_vol_abs(path, prefix) && 
shares_volume(self, path) => { // cwd-relative path, self is on the same volume append_path(self, path.slice_from(prefix_len(prefix))); } Some(_) => { // absolute path, or cwd-relative and self is not same volume replace_path(self, path, prefix); } None if !path.is_empty() && is_sep_(self.prefix, path[0]) => { // volume-relative path if self.prefix().is_some() { // truncate self down to the prefix, then append let n = self.prefix_len(); self.repr.truncate(n); append_path(self, path); } else { // we have no prefix, so nothing to be relative to replace_path(self, path, prefix); } } None => { // relative path append_path(self, path); } } } } } impl GenericPath for Path { /// See `GenericPath::as_str` for info. /// Always returns a `Some` value. #[inline] fn as_str<'a>(&'a self) -> Option<&'a str> { Some(self.repr.as_slice()) } #[inline] fn as_vec<'a>(&'a self) -> &'a [u8] { self.repr.as_bytes() } #[inline] fn dirname<'a>(&'a self) -> &'a [u8] { self.dirname_str().unwrap().as_bytes() } /// See `GenericPath::dirname_str` for info. /// Always returns a `Some` value. fn dirname_str<'a>(&'a self) -> Option<&'a str> { Some(match self.sepidx_or_prefix_len() { None if \"..\" == self.repr => self.repr.as_slice(), None => \".\", Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => { self.repr.as_slice() } Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\" => { self.repr.as_slice() } Some((0,idxa,_)) => self.repr.slice_to(idxa), Some((idxb,idxa,_)) => { match self.prefix { Some(DiskPrefix) | Some(VerbatimDiskPrefix) if idxb == self.prefix_len() => { self.repr.slice_to(idxa) } _ => self.repr.slice_to(idxb) } } }) } #[inline] fn filename<'a>(&'a self) -> &'a [u8] { self.filename_str().unwrap().as_bytes() } /// See `GenericPath::filename_str` for info. /// Always returns a `Some` value. 
fn filename_str<'a>(&'a self) -> Option<&'a str> { Some(match self.sepidx_or_prefix_len() { None if \".\" == self.repr || \"..\" == self.repr => \"\", None => self.repr.as_slice(), Some((_,idxa,end)) if self.repr.slice(idxa, end) == \"..\" => \"\", Some((_,idxa,end)) => self.repr.slice(idxa, end) }) } /// See `GenericPath::filestem_str` for info. /// Always returns a `Some` value. #[inline] fn filestem_str<'a>(&'a self) -> Option<&'a str> { // filestem() returns a byte vector that's guaranteed valid UTF-8 Some(unsafe { cast::transmute(self.filestem()) }) } #[inline] fn extension_str<'a>(&'a self) -> Option<&'a str> { // extension() returns a byte vector that's guaranteed valid UTF-8 self.extension().map_move(|v| unsafe { cast::transmute(v) }) } fn dir_path(&self) -> Path { unsafe { GenericPathUnsafe::from_str_unchecked(self.dirname_str().unwrap()) } } fn file_path(&self) -> Option { match self.filename_str() { None | Some(\"\") => None, Some(s) => Some(unsafe { GenericPathUnsafe::from_str_unchecked(s) }) } } #[inline] fn push_path(&mut self, path: &Path) { self.push_str(path.as_str().unwrap()) } #[inline] fn pop_opt(&mut self) -> Option<~[u8]> { self.pop_opt_str().map_move(|s| s.into_bytes()) } fn pop_opt_str(&mut self) -> Option<~str> { match self.sepidx_or_prefix_len() { None if \".\" == self.repr => None, None => { let mut s = ~\".\"; util::swap(&mut s, &mut self.repr); self.sepidx = None; Some(s) } Some((idxb,idxa,end)) if idxb == idxa && idxb == end => None, Some((idxb,_,end)) if self.repr.slice(idxb, end) == \"\" => None, Some((idxb,idxa,end)) => { let s = self.repr.slice(idxa, end).to_owned(); let trunc = match self.prefix { Some(DiskPrefix) | Some(VerbatimDiskPrefix) | None => { let plen = self.prefix_len(); if idxb == plen { idxa } else { idxb } } _ => idxb }; self.repr.truncate(trunc); self.update_sepidx(); Some(s) } } } /// See `GenericPath::is_absolute` for info. 
/// /// A Windows Path is considered absolute only if it has a non-volume prefix, /// or if it has a volume prefix and the path starts with ''. /// A path of `foo` is not considered absolute because it's actually /// relative to the \"current volume\". A separate method `Path::is_vol_relative` /// is provided to indicate this case. Similarly a path of `C:foo` is not /// considered absolute because it's relative to the cwd on volume C:. A /// separate method `Path::is_cwd_relative` is provided to indicate this case. #[inline] fn is_absolute(&self) -> bool { match self.prefix { Some(DiskPrefix) => { let rest = self.repr.slice_from(self.prefix_len()); rest.len() > 0 && rest[0] == sep as u8 } Some(_) => true, None => false } } fn is_ancestor_of(&self, other: &Path) -> bool { if !self.equiv_prefix(other) { false } else if self.is_absolute() != other.is_absolute() || self.is_vol_relative() != other.is_vol_relative() { false } else { let mut ita = self.component_iter(); let mut itb = other.component_iter(); if \".\" == self.repr { return itb.next() != Some(\"..\"); } loop { match (ita.next(), itb.next()) { (None, _) => break, (Some(a), Some(b)) if a == b => { loop }, (Some(a), _) if a == \"..\" => { // if ita contains only .. 
components, it's an ancestor return ita.all(|x| x == \"..\"); } _ => return false } } true } } fn path_relative_from(&self, base: &Path) -> Option { fn comp_requires_verbatim(s: &str) -> bool { s == \".\" || s == \"..\" || s.contains_char(sep2) } if !self.equiv_prefix(base) { // prefixes differ if self.is_absolute() { Some(self.clone()) } else if self.prefix == Some(DiskPrefix) && base.prefix == Some(DiskPrefix) { // both drives, drive letters must differ or they'd be equiv Some(self.clone()) } else { None } } else if self.is_absolute() != base.is_absolute() { if self.is_absolute() { Some(self.clone()) } else { None } } else if self.is_vol_relative() != base.is_vol_relative() { if self.is_vol_relative() { Some(self.clone()) } else { None } } else { let mut ita = self.component_iter(); let mut itb = base.component_iter(); let mut comps = ~[]; let a_verb = self.is_verbatim(); let b_verb = base.is_verbatim(); loop { match (ita.next(), itb.next()) { (None, None) => break, (Some(a), None) if a_verb && comp_requires_verbatim(a) => { return Some(self.clone()) } (Some(a), None) => { comps.push(a); if !a_verb { comps.extend(&mut ita); break; } } (None, _) => comps.push(\"..\"), (Some(a), Some(b)) if comps.is_empty() && a == b => (), (Some(a), Some(b)) if !b_verb && b == \".\" => { if a_verb && comp_requires_verbatim(a) { return Some(self.clone()) } else { comps.push(a) } } (Some(_), Some(b)) if !b_verb && b == \"..\" => return None, (Some(a), Some(_)) if a_verb && comp_requires_verbatim(a) => { return Some(self.clone()) } (Some(a), Some(_)) => { comps.push(\"..\"); for _ in itb { comps.push(\"..\"); } comps.push(a); if !a_verb { comps.extend(&mut ita); break; } } } } Some(Path::from_str(comps.connect(\"\"))) } } } impl Path { /// Returns a new Path from a byte vector /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. /// Raises the `str::not_utf8` condition if invalid UTF-8. 
#[inline] pub fn new(v: &[u8]) -> Path { GenericPath::from_vec(v) } /// Returns a new Path from a string /// /// # Failure /// /// Raises the `null_byte` condition if the vector contains a NUL. #[inline] pub fn from_str(s: &str) -> Path { GenericPath::from_str(s) } /// Converts the Path into an owned byte vector pub fn into_vec(self) -> ~[u8] { self.repr.into_bytes() } /// Converts the Path into an owned string /// Returns an Option for compatibility with posix::Path, but the /// return value will always be Some. pub fn into_str(self) -> Option<~str> { Some(self.repr) } /// Returns a normalized string representation of a path, by removing all empty /// components, and unnecessary . and .. components. pub fn normalize(s: S) -> ~str { let (_, path) = Path::normalize_(s); path } /// Returns an iterator that yields each component of the path in turn. /// Does not yield the path prefix (including server/share components in UNC paths). /// Does not distinguish between volume-relative and relative paths, e.g. /// abc and abc. /// Does not distinguish between absolute and cwd-relative paths, e.g. /// C:foo and C:foo. pub fn component_iter<'a>(&'a self) -> ComponentIter<'a> { let s = match self.prefix { Some(_) => { let plen = self.prefix_len(); if self.repr.len() > plen && self.repr[plen] == sep as u8 { self.repr.slice_from(plen+1) } else { self.repr.slice_from(plen) } } None if self.repr[0] == sep as u8 => self.repr.slice_from(1), None => self.repr.as_slice() }; let ret = s.split_terminator_iter(sep); ret } /// Returns whether the path is considered \"volume-relative\", which means a path /// that looks like \"foo\". Paths of this form are relative to the current volume, /// but absolute within that volume. #[inline] pub fn is_vol_relative(&self) -> bool { self.prefix.is_none() && self.repr[0] == sep as u8 } /// Returns whether the path is considered \"cwd-relative\", which means a path /// with a volume prefix that is not absolute. This look like \"C:foo.txt\". 
Paths /// of this form are relative to the cwd on the given volume. #[inline] pub fn is_cwd_relative(&self) -> bool { self.prefix == Some(DiskPrefix) && !self.is_absolute() } /// Returns the PathPrefix for this Path #[inline] pub fn prefix(&self) -> Option { self.prefix } /// Returns whether the prefix is a verbatim prefix, i.e. ? #[inline] pub fn is_verbatim(&self) -> bool { prefix_is_verbatim(self.prefix) } fn equiv_prefix(&self, other: &Path) -> bool { match (self.prefix, other.prefix) { (Some(DiskPrefix), Some(VerbatimDiskPrefix)) => { self.is_absolute() && self.repr[0].to_ascii().eq_ignore_case(other.repr[4].to_ascii()) } (Some(VerbatimDiskPrefix), Some(DiskPrefix)) => { other.is_absolute() && self.repr[4].to_ascii().eq_ignore_case(other.repr[0].to_ascii()) } (Some(VerbatimDiskPrefix), Some(VerbatimDiskPrefix)) => { self.repr[4].to_ascii().eq_ignore_case(other.repr[4].to_ascii()) } (Some(UNCPrefix(_,_)), Some(VerbatimUNCPrefix(_,_))) => { self.repr.slice(2, self.prefix_len()) == other.repr.slice(8, other.prefix_len()) } (Some(VerbatimUNCPrefix(_,_)), Some(UNCPrefix(_,_))) => { self.repr.slice(8, self.prefix_len()) == other.repr.slice(2, other.prefix_len()) } (None, None) => true, (a, b) if a == b => { self.repr.slice_to(self.prefix_len()) == other.repr.slice_to(other.prefix_len()) } _ => false } } fn normalize_(s: S) -> (Option, ~str) { // make borrowck happy let (prefix, val) = { let prefix = parse_prefix(s.as_slice()); let path = Path::normalize__(s.as_slice(), prefix); (prefix, path) }; (prefix, match val { None => s.into_owned(), Some(val) => val }) } fn normalize__(s: &str, prefix: Option) -> Option<~str> { if prefix_is_verbatim(prefix) { // don't do any normalization match prefix { Some(VerbatimUNCPrefix(x, 0)) if s.len() == 8 + x => { // the server component has no trailing '' let mut s = s.into_owned(); s.push_char(sep); Some(s) } _ => None } } else { let (is_abs, comps) = normalize_helper(s, prefix); let mut comps = comps; match 
(comps.is_some(),prefix) { (false, Some(DiskPrefix)) => { if s[0] >= 'a' as u8 && s[0] <= 'z' as u8 { comps = Some(~[]); } } (false, Some(VerbatimDiskPrefix)) => { if s[4] >= 'a' as u8 && s[0] <= 'z' as u8 { comps = Some(~[]); } } _ => () } match comps { None => None, Some(comps) => { if prefix.is_some() && comps.is_empty() { match prefix.unwrap() { DiskPrefix => { let len = prefix_len(prefix) + is_abs as uint; let mut s = s.slice_to(len).to_owned(); s[0] = s[0].to_ascii().to_upper().to_byte(); if is_abs { s[2] = sep as u8; // normalize C:/ to C: } Some(s) } VerbatimDiskPrefix => { let len = prefix_len(prefix) + is_abs as uint; let mut s = s.slice_to(len).to_owned(); s[4] = s[4].to_ascii().to_upper().to_byte(); Some(s) } _ => { let plen = prefix_len(prefix); if s.len() > plen { Some(s.slice_to(plen).to_owned()) } else { None } } } } else if is_abs && comps.is_empty() { Some(str::from_char(sep)) } else { let prefix_ = s.slice_to(prefix_len(prefix)); let n = prefix_.len() + if is_abs { comps.len() } else { comps.len() - 1} + comps.iter().map(|v| v.len()).sum(); let mut s = str::with_capacity(n); match prefix { Some(DiskPrefix) => { s.push_char(prefix_[0].to_ascii().to_upper().to_char()); s.push_char(':'); } Some(VerbatimDiskPrefix) => { s.push_str(prefix_.slice_to(4)); s.push_char(prefix_[4].to_ascii().to_upper().to_char()); s.push_str(prefix_.slice_from(5)); } Some(UNCPrefix(a,b)) => { s.push_str(\"\"); s.push_str(prefix_.slice(2, a+2)); s.push_char(sep); s.push_str(prefix_.slice(3+a, 3+a+b)); } Some(_) => s.push_str(prefix_), None => () } let mut it = comps.move_iter(); if !is_abs { match it.next() { None => (), Some(comp) => s.push_str(comp) } } for comp in it { s.push_char(sep); s.push_str(comp); } Some(s) } } } } } fn update_sepidx(&mut self) { let s = if self.has_nonsemantic_trailing_slash() { self.repr.slice_to(self.repr.len()-1) } else { self.repr.as_slice() }; let idx = s.rfind(if !prefix_is_verbatim(self.prefix) { is_sep2 } else { is_sep }); let prefixlen = 
self.prefix_len(); self.sepidx = idx.and_then(|x| if x < prefixlen { None } else { Some(x) }); } fn prefix_len(&self) -> uint { prefix_len(self.prefix) } // Returns a tuple (before, after, end) where before is the index of the separator // and after is the index just after the separator. // end is the length of the string, normally, or the index of the final character if it is // a non-semantic trailing separator in a verbatim string. // If the prefix is considered the separator, before and after are the same. fn sepidx_or_prefix_len(&self) -> Option<(uint,uint,uint)> { match self.sepidx { None => match self.prefix_len() { 0 => None, x => Some((x,x,self.repr.len())) }, Some(x) => { if self.has_nonsemantic_trailing_slash() { Some((x,x+1,self.repr.len()-1)) } else { Some((x,x+1,self.repr.len())) } } } } fn has_nonsemantic_trailing_slash(&self) -> bool { self.is_verbatim() && self.repr.len() > self.prefix_len()+1 && self.repr[self.repr.len()-1] == sep as u8 } fn update_normalized(&mut self, s: S) { let (prefix, path) = Path::normalize_(s); self.repr = path; self.prefix = prefix; self.update_sepidx(); } } /// The standard path separator character pub static sep: char = ''; /// The alternative path separator character pub static sep2: char = '/'; /// Returns whether the given byte is a path separator. /// Only allows the primary separator ''; use is_sep2 to allow '/'. #[inline] pub fn is_sep(c: char) -> bool { c == sep } /// Returns whether the given byte is a path separator. /// Allows both the primary separator '' and the alternative separator '/'. 
#[inline]
pub fn is_sep2(c: char) -> bool {
    c == sep || c == sep2
}

/// Prefix types for Path
#[deriving(Eq, Clone, DeepClone)]
pub enum PathPrefix {
    /// Prefix `\\?\`, uint is the length of the following component
    VerbatimPrefix(uint),
    /// Prefix `\\?\UNC\`, uints are the lengths of the UNC components
    VerbatimUNCPrefix(uint, uint),
    /// Prefix `\\?\C:\` (for any alphabetic character)
    VerbatimDiskPrefix,
    /// Prefix `\\.\`, uint is the length of the following component
    DeviceNSPrefix(uint),
    /// UNC prefix `\\server\share`, uints are the lengths of the server/share
    UNCPrefix(uint, uint),
    /// Prefix `C:` for any alphabetic character
    DiskPrefix
}

/// Internal function; only public for tests. Don't use.
// FIXME (#8169): Make private once visibility is fixed
pub fn parse_prefix<'a>(mut path: &'a str) -> Option<PathPrefix> {
    if path.starts_with("\\\\") {
        // \\
        path = path.slice_from(2);
        if path.starts_with("?\\") {
            // \\?\
            path = path.slice_from(2);
            if path.starts_with("UNC\\") {
                // \\?\UNC\server\share
                path = path.slice_from(4);
                let (idx_a, idx_b) = match parse_two_comps(path, is_sep) {
                    Some(x) => x,
                    None => (path.len(), 0)
                };
                return Some(VerbatimUNCPrefix(idx_a, idx_b));
            } else {
                // \\?\path
                let idx = path.find('\\');
                if idx == Some(2) && path[1] == ':' as u8 {
                    let c = path[0];
                    if c.is_ascii() && ::char::is_alphabetic(c as char) {
                        // \\?\C:\ path
                        return Some(VerbatimDiskPrefix);
                    }
                }
                let idx = idx.unwrap_or(path.len());
                return Some(VerbatimPrefix(idx));
            }
        } else if path.starts_with(".\\") {
            // \\.\path
            path = path.slice_from(2);
            let idx = path.find('\\').unwrap_or(path.len());
            return Some(DeviceNSPrefix(idx));
        }
        match parse_two_comps(path, is_sep2) {
            Some((idx_a, idx_b)) if idx_a > 0 && idx_b > 0 => {
                // \\server\share
                return Some(UNCPrefix(idx_a, idx_b));
            }
            _ => ()
        }
    } else if path.len() > 1 && path[1] == ':' as u8 {
        // C:
        let c = path[0];
        if c.is_ascii() && ::char::is_alphabetic(c as char) {
            return Some(DiskPrefix);
        }
    }
    return None;

    fn parse_two_comps<'a>(mut path: &'a str, f: &fn(char)->bool)
                       -> Option<(uint, uint)> {
        let idx_a = match path.find(|x| f(x)) {
            None => return None,
            Some(x) => x
        };
        path = path.slice_from(idx_a+1);
        let idx_b = path.find(f).unwrap_or(path.len());
        Some((idx_a, idx_b))
    }
}

// None result means the string didn't need normalizing
fn normalize_helper<'a>(s: &'a str, prefix: Option<PathPrefix>)
                    -> (bool, Option<~[&'a str]>) {
    let f = if !prefix_is_verbatim(prefix) { is_sep2 } else { is_sep };
    let is_abs = s.len() > prefix_len(prefix) && f(s.char_at(prefix_len(prefix)));
    let s_ = s.slice_from(prefix_len(prefix));
    let s_ = if is_abs { s_.slice_from(1) } else { s_ };

    if is_abs && s_.is_empty() {
        return (is_abs, match prefix {
            Some(DiskPrefix) | None => (if is_sep(s.char_at(prefix_len(prefix))) { None }
                                        else { Some(~[]) }),
            Some(_) => Some(~[]), // need to trim the trailing separator
        });
    }

    let mut comps: ~[&'a str] = ~[];
    let mut n_up = 0u;
    let mut changed = false;
    for comp in s_.split_iter(f) {
        if comp.is_empty() { changed = true }
        else if comp == "." { changed = true }
        else if comp == ".." {
            let has_abs_prefix = match prefix {
                Some(DiskPrefix) => false,
                Some(_) => true,
                None => false
            };
            if (is_abs || has_abs_prefix) && comps.is_empty() { changed = true }
            else if comps.len() == n_up { comps.push(".."); n_up += 1 }
            else { comps.pop_opt(); changed = true }
        } else { comps.push(comp) }
    }

    if !changed && !prefix_is_verbatim(prefix) {
        changed = s.find(is_sep2).is_some();
    }

    if changed {
        if comps.is_empty() && !is_abs && prefix.is_none() {
            if s == "." {
                return (is_abs, None);
            }
            comps.push(".");
        }
        (is_abs, Some(comps))
    } else {
        (is_abs, None)
    }
}

// FIXME (#8169): Pull this into parent module once visibility works
#[inline(always)]
fn contains_nul(v: &[u8]) -> bool {
    v.iter().any(|&x| x == 0)
}

fn prefix_is_verbatim(p: Option<PathPrefix>) -> bool {
    match p {
        Some(VerbatimPrefix(_)) | Some(VerbatimUNCPrefix(_,_)) | Some(VerbatimDiskPrefix) => true,
        Some(DeviceNSPrefix(_)) => true, // not really sure, but I think so
        _ => false
    }
}

fn prefix_len(p: Option<PathPrefix>) -> uint {
    match p {
        None => 0,
        Some(VerbatimPrefix(x)) => 4 + x,
        Some(VerbatimUNCPrefix(x,y)) => 8 + x + 1 + y,
        Some(VerbatimDiskPrefix) => 6,
        Some(UNCPrefix(x,y)) => 2 + x + 1 + y,
        Some(DeviceNSPrefix(x)) => 4 + x,
        Some(DiskPrefix) => 2
    }
}

fn prefix_is_sep(p: Option<PathPrefix>, c: u8) -> bool {
    c.is_ascii() && if !prefix_is_verbatim(p) { is_sep2(c as char) }
                    else { is_sep(c as char) }
}

#[cfg(test)]
mod tests {
    use super::*;
    use option::{Some,None};
    use iter::Iterator;
    use vec::Vector;

    macro_rules! t(
        (s: $path:expr, $exp:expr) => (
            {
                let path = $path;
                assert_eq!(path.as_str(), Some($exp));
            }
        );
        (v: $path:expr, $exp:expr) => (
            {
                let path = $path;
                assert_eq!(path.as_vec(), $exp);
            }
        )
    )

    macro_rules! b(
        ($($arg:expr),+) => (
            bytes!($($arg),+)
        )
    )

    #[test]
    fn test_parse_prefix() {
        macro_rules! t(
            ($path:expr, $exp:expr) => (
                {
                    let path = $path;
                    let exp = $exp;
                    let res = parse_prefix(path);
                    assert!(res == exp,
                            "parse_prefix(\"%s\"): expected %?, found %?", path, exp, res);
                }
            )
        )

        t!("\\\\SERVER\\share\\foo", Some(UNCPrefix(6,5)));
        t!("\\\\", None);
        t!("\\\\SERVER", None);
        t!("\\\\SERVER\\", None);
        t!("\\\\SERVER\\\\", None);
        t!("\\\\SERVER\\\\foo", None);
        t!("\\\\SERVER\\share", Some(UNCPrefix(6,5)));
        t!("\\\\SERVER/share/foo", Some(UNCPrefix(6,5)));
        t!("\\\\SERVER\\share/foo", Some(UNCPrefix(6,5)));
        t!("//SERVER/share/foo", None);
        t!("\\\\\\a\\b\\c", None);
        t!("\\\\?\\a\\b\\c", Some(VerbatimPrefix(1)));
        t!("\\\\?\\a/b/c", Some(VerbatimPrefix(5)));
        t!("//?/a/b/c", None);
        t!("\\\\.\\a\\b", Some(DeviceNSPrefix(1)));
        t!("\\\\.\\a/b", Some(DeviceNSPrefix(3)));
        t!("//./a/b", None);
        t!("\\\\?\\UNC\\server\\share\\foo", Some(VerbatimUNCPrefix(6,5)));
        t!("\\\\?\\UNC\\\\share\\foo", Some(VerbatimUNCPrefix(0,5)));
        t!("\\\\?\\UNC\\", Some(VerbatimUNCPrefix(0,0)));
        t!("\\\\?\\UNC\\server/share/foo", Some(VerbatimUNCPrefix(16,0)));
        t!("\\\\?\\UNC\\server", Some(VerbatimUNCPrefix(6,0)));
        t!("\\\\?\\UNC\\server\\", Some(VerbatimUNCPrefix(6,0)));
        t!("\\\\?\\UNC/server/share", Some(VerbatimPrefix(16)));
        t!("\\\\?\\UNC", Some(VerbatimPrefix(3)));
        t!("\\\\?\\C:\\a\\b.txt", Some(VerbatimDiskPrefix));
        t!("\\\\?\\z:\\", Some(VerbatimDiskPrefix));
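The `prefix_len` arithmetic above (`4 + x` for `\\?\`, `8 + x + 1 + y` for `\\?\UNC\`, and so on) can be sanity-checked against literal strings. A minimal modern-Rust sketch follows; the `Prefix` enum and `prefix_len` names here are illustrative stand-ins, not the original pre-1.0 code:

```rust
// Illustrative sketch of the prefix-length arithmetic; names are hypothetical.
#[derive(Debug, PartialEq)]
enum Prefix {
    Verbatim(usize),           // \\?\<component>
    VerbatimUNC(usize, usize), // \\?\UNC\<server>\<share>
    VerbatimDisk,              // \\?\C:
    DeviceNS(usize),           // \\.\<component>
    UNC(usize, usize),         // \\<server>\<share>
    Disk,                      // C:
}

fn prefix_len(p: Option<&Prefix>) -> usize {
    match p {
        None => 0,
        Some(Prefix::Verbatim(x)) => 4 + x,               // len(r"\\?\") == 4
        Some(Prefix::VerbatimUNC(x, y)) => 8 + x + 1 + y, // len(r"\\?\UNC\") == 8
        Some(Prefix::VerbatimDisk) => 6,                  // len(r"\\?\C:") == 6
        Some(Prefix::DeviceNS(x)) => 4 + x,               // len(r"\\.\") == 4
        Some(Prefix::UNC(x, y)) => 2 + x + 1 + y,         // len(r"\\") == 2
        Some(Prefix::Disk) => 2,                          // len("C:") == 2
    }
}

fn main() {
    assert_eq!(prefix_len(None), 0);
    assert_eq!(prefix_len(Some(&Prefix::Verbatim(1))), r"\\?\a".len());
    assert_eq!(prefix_len(Some(&Prefix::VerbatimUNC(6, 5))), r"\\?\UNC\server\share".len());
    assert_eq!(prefix_len(Some(&Prefix::VerbatimDisk)), r"\\?\C:".len());
    assert_eq!(prefix_len(Some(&Prefix::DeviceNS(1))), r"\\.\a".len());
    assert_eq!(prefix_len(Some(&Prefix::UNC(6, 5))), r"\\SERVER\share".len());
    assert_eq!(prefix_len(Some(&Prefix::Disk)), "C:".len());
    println!("ok");
}
```

This mirrors the test expectations above, e.g. `UNCPrefix(6,5)` for a path beginning `\\SERVER\share`.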
t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?C:a.txt\", Some(VerbatimPrefix(7))); t!(\"?C:ab.txt\", Some(VerbatimPrefix(3))); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"C:foo\", Some(DiskPrefix)); t!(\"z:/foo\", Some(DiskPrefix)); t!(\"d:\", Some(DiskPrefix)); t!(\"ab:\", None); t!(\"\u00fc:foo\", None); t!(\"3:foo\", None); t!(\" :foo\", None); t!(\"::foo\", None); t!(\"?C:\", Some(VerbatimPrefix(2))); t!(\"?z:\", Some(VerbatimDiskPrefix)); t!(\"?ab:\", Some(VerbatimPrefix(3))); t!(\"?C:a\", Some(VerbatimDiskPrefix)); t!(\"?C:/a\", Some(VerbatimPrefix(4))); t!(\"?C:a/b\", Some(VerbatimDiskPrefix)); } #[test] fn test_paths() { t!(v: Path::new([]), b!(\".\")); t!(v: Path::new(b!(\"\")), b!(\"\")); t!(v: Path::new(b!(\"abc\")), b!(\"abc\")); t!(s: Path::from_str(\"\"), \".\"); t!(s: Path::from_str(\"\"), \"\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"hi\"), \"hi\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"lib\"), \"lib\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hithere.txt\"), \"hithere.txt\"); t!(s: Path::from_str(\"/\"), \"\"); t!(s: Path::from_str(\"hi/\"), \"hi\"); t!(s: Path::from_str(\"/lib\"), \"lib\"); t!(s: Path::from_str(\"/lib/\"), \"lib\"); t!(s: Path::from_str(\"hi/there\"), \"hithere\"); t!(s: Path::from_str(\"hithere\"), \"hithere\"); t!(s: Path::from_str(\"hi..there\"), \"there\"); t!(s: Path::from_str(\"hi/../there\"), \"there\"); t!(s: Path::from_str(\"..hithere\"), \"..hithere\"); t!(s: Path::from_str(\"..hithere\"), \"hithere\"); t!(s: Path::from_str(\"/../hi/there\"), \"hithere\"); t!(s: Path::from_str(\"foo..\"), \".\"); t!(s: Path::from_str(\"foo..\"), \"\"); t!(s: Path::from_str(\"foo....\"), \"\"); t!(s: Path::from_str(\"foo....bar\"), \"bar\"); t!(s: Path::from_str(\".hi.there.\"), \"hithere\"); t!(s: Path::from_str(\".hi.there...\"), \"hi\"); t!(s: Path::from_str(\"foo....\"), \"..\"); t!(s: Path::from_str(\"foo......\"), \"....\"); t!(s: 
Path::from_str(\"foo....bar\"), \"..bar\"); assert_eq!(Path::new(b!(\"foobar\")).into_vec(), b!(\"foobar\").to_owned()); assert_eq!(Path::new(b!(\"foo....bar\")).into_vec(), b!(\"bar\").to_owned()); assert_eq!(Path::from_str(\"foobar\").into_str(), Some(~\"foobar\")); assert_eq!(Path::from_str(\"foo....bar\").into_str(), Some(~\"bar\")); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"a\"), \"a\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"ab/\"), \"ab\"); t!(s: Path::from_str(\"b\"), \"b\"); t!(s: Path::from_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"abc\"), \"abc\"); t!(s: Path::from_str(\"servershare/path\"), \"serversharepath\"); t!(s: Path::from_str(\"server/share/path\"), \"serversharepath\"); t!(s: Path::from_str(\"C:ab.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"C:a/b.txt\"), \"C:ab.txt\"); t!(s: Path::from_str(\"z:ab.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"z:/a/b.txt\"), \"Z:ab.txt\"); t!(s: Path::from_str(\"ab:/a/b.txt\"), \"ab:ab.txt\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"C:\"), \"C:\"); t!(s: Path::from_str(\"q:\"), \"Q:\"); t!(s: Path::from_str(\"C:/\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:foo..\"), \"C:\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"C:a\"), \"C:a\"); t!(s: Path::from_str(\"C:a/\"), \"C:a\"); t!(s: Path::from_str(\"C:ab\"), \"C:ab\"); t!(s: Path::from_str(\"C:ab/\"), \"C:ab\"); t!(s: Path::from_str(\"?z:ab.txt\"), \"?z:ab.txt\"); t!(s: Path::from_str(\"?C:/a/b.txt\"), \"?C:/a/b.txt\"); t!(s: Path::from_str(\"?C:a/b.txt\"), \"?C:a/b.txt\"); t!(s: Path::from_str(\"?testab.txt\"), \"?testab.txt\"); t!(s: Path::from_str(\"?foobar\"), \"?foobar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); t!(s: Path::from_str(\".\"), \".\"); t!(s: 
Path::from_str(\"?UNCserversharefoo\"), \"?UNCserversharefoo\"); t!(s: Path::from_str(\"?UNCserver/share\"), \"?UNCserver/share\"); t!(s: Path::from_str(\"?UNCserver\"), \"?UNCserver\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); t!(s: Path::from_str(\"?UNC\"), \"?UNC\"); // I'm not sure whether .foo/bar should normalize to .foobar // as information is sparse and this isn't really googleable. // I'm going to err on the side of not normalizing it, as this skips the filesystem t!(s: Path::from_str(\".foo/bar\"), \".foo/bar\"); t!(s: Path::from_str(\".foobar\"), \".foobar\"); } #[test] fn test_null_byte() { use path2::null_byte::cond; let mut handled = false; let mut p = do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"foobar\", 0)); (b!(\"bar\").to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"bar\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.set_filename(b!(\"f\", 0, \"o\")) }; assert!(handled); assert_eq!(p.as_vec(), b!(\"foo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"null\", 0, \"byte\")); (b!(\"nullbyte\").to_owned()) }).inside { p.set_dirname(b!(\"null\", 0, \"byte\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoo\")); handled = false; do cond.trap(|v| { handled = true; assert_eq!(v.as_slice(), b!(\"f\", 0, \"o\")); (b!(\"foo\").to_owned()) }).inside { p.push(b!(\"f\", 0, \"o\")); }; assert!(handled); assert_eq!(p.as_vec(), b!(\"nullbytefoofoo\")); } #[test] fn test_null_byte_fail() { use path2::null_byte::cond; use task; macro_rules! 
t( ($name:expr => $code:block) => ( { let mut t = task::task(); t.supervised(); t.name($name); let res = do t.try $code; assert!(res.is_err()); } ) ) t!(~\"new() wnul\" => { do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { Path::new(b!(\"foobar\", 0)) }; }) t!(~\"set_filename wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_filename(b!(\"foo\", 0)) }; }) t!(~\"set_dirname wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.set_dirname(b!(\"foo\", 0)) }; }) t!(~\"push wnul\" => { let mut p = Path::new(b!(\"foobar\")); do cond.trap(|_| { (b!(\"null\", 0).to_owned()) }).inside { p.push(b!(\"foo\", 0)) }; }) } #[test] #[should_fail] fn test_not_utf8_fail() { Path::new(b!(\"hello\", 0x80, \".txt\")); } #[test] fn test_components() { macro_rules! t( (s: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::from_str($path); assert_eq!(path.$op(), Some($exp)); } ); (s: $path:expr, $op:ident, $exp:expr, opt) => ( { let path = Path::from_str($path); let left = path.$op(); assert_eq!(left, $exp); } ); (v: $path:expr, $op:ident, $exp:expr) => ( { let path = Path::new($path); assert_eq!(path.$op(), $exp); } ) ) t!(v: b!(\"abc\"), filename, b!(\"c\")); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"abc\", filename_str, \"c\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \"a\", filename_str, \"a\"); t!(s: \".\", filename_str, \"\"); t!(s: \"\", filename_str, \"\"); t!(s: \"..\", filename_str, \"\"); t!(s: \"....\", filename_str, \"\"); t!(s: \"c:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"C:\", filename_str, \"\"); t!(s: \"serversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"servershare\", filename_str, \"\"); t!(s: \"server\", filename_str, \"server\"); t!(s: \"?barfoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?bar\", filename_str, \"\"); t!(s: \"?\", filename_str, \"\"); t!(s: 
\"?UNCserversharefoo.txt\", filename_str, \"foo.txt\"); t!(s: \"?UNCserver\", filename_str, \"\"); t!(s: \"?UNC\", filename_str, \"\"); t!(s: \"?C:foo.txt\", filename_str, \"foo.txt\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?C:\", filename_str, \"\"); t!(s: \"?foo/bar\", filename_str, \"\"); t!(s: \"?C:/foo\", filename_str, \"\"); t!(s: \".foobar\", filename_str, \"bar\"); t!(s: \".foo\", filename_str, \"\"); t!(s: \".foo/bar\", filename_str, \"\"); t!(s: \".foobar/baz\", filename_str, \"bar/baz\"); t!(s: \".\", filename_str, \"\"); t!(s: \"?ab\", filename_str, \"b\"); t!(v: b!(\"abc\"), dirname, b!(\"ab\")); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"abc\", dirname_str, \"ab\"); t!(s: \"a\", dirname_str, \".\"); t!(s: \"a\", dirname_str, \"\"); t!(s: \".\", dirname_str, \".\"); t!(s: \"\", dirname_str, \"\"); t!(s: \"..\", dirname_str, \"..\"); t!(s: \"....\", dirname_str, \"....\"); t!(s: \"c:foo.txt\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:\", dirname_str, \"C:\"); t!(s: \"C:foo.txt\", dirname_str, \"C:\"); t!(s: \"serversharefoo.txt\", dirname_str, \"servershare\"); t!(s: \"servershare\", dirname_str, \"servershare\"); t!(s: \"server\", dirname_str, \"\"); t!(s: \"?barfoo.txt\", dirname_str, \"?bar\"); t!(s: \"?bar\", dirname_str, \"?bar\"); t!(s: \"?\", dirname_str, \"?\"); t!(s: \"?UNCserversharefoo.txt\", dirname_str, \"?UNCservershare\"); t!(s: \"?UNCserver\", dirname_str, \"?UNCserver\"); t!(s: \"?UNC\", dirname_str, \"?UNC\"); t!(s: \"?C:foo.txt\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:\", dirname_str, \"?C:\"); t!(s: \"?C:/foo/bar\", dirname_str, \"?C:/foo/bar\"); t!(s: \"?foo/bar\", dirname_str, \"?foo/bar\"); t!(s: \".foobar\", dirname_str, \".foo\"); t!(s: \".foo\", dirname_str, \".foo\"); t!(s: \"?ab\", dirname_str, \"?a\"); t!(v: b!(\"hithere.txt\"), filestem, b!(\"there\")); t!(s: \"hithere.txt\", filestem_str, \"there\"); t!(s: \"hithere\", filestem_str, \"there\"); t!(s: 
\"there.txt\", filestem_str, \"there\"); t!(s: \"there\", filestem_str, \"there\"); t!(s: \".\", filestem_str, \"\"); t!(s: \"\", filestem_str, \"\"); t!(s: \"foo.bar\", filestem_str, \".bar\"); t!(s: \".bar\", filestem_str, \".bar\"); t!(s: \"..bar\", filestem_str, \".\"); t!(s: \"hithere..txt\", filestem_str, \"there.\"); t!(s: \"..\", filestem_str, \"\"); t!(s: \"....\", filestem_str, \"\"); // filestem is based on filename, so we don't need the full set of prefix tests t!(v: b!(\"hithere.txt\"), extension, Some(b!(\"txt\"))); t!(v: b!(\"hithere\"), extension, None); t!(s: \"hithere.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"hithere\", extension_str, None, opt); t!(s: \"there.txt\", extension_str, Some(\"txt\"), opt); t!(s: \"there\", extension_str, None, opt); t!(s: \".\", extension_str, None, opt); t!(s: \"\", extension_str, None, opt); t!(s: \"foo.bar\", extension_str, None, opt); t!(s: \".bar\", extension_str, None, opt); t!(s: \"..bar\", extension_str, Some(\"bar\"), opt); t!(s: \"hithere..txt\", extension_str, Some(\"txt\"), opt); t!(s: \"..\", extension_str, None, opt); t!(s: \"....\", extension_str, None, opt); // extension is based on filename, so we don't need the full set of prefix tests } #[test] fn test_push() { macro_rules! t( (s: $path:expr, $join:expr) => ( { let path = ($path); let join = ($join); let mut p1 = Path::from_str(path); let p2 = p1.clone(); p1.push_str(join); assert_eq!(p1, p2.join_str(join)); } ) ) t!(s: \"abc\", \"..\"); t!(s: \"abc\", \"d\"); t!(s: \"ab\", \"cd\"); t!(s: \"ab\", \"cd\"); // this is just a sanity-check test. 
push_str and join_str share an implementation, // so there's no need for the full set of prefix tests // we do want to check one odd case though to ensure the prefix is re-parsed let mut p = Path::from_str(\"?C:\"); assert_eq!(p.prefix(), Some(VerbatimPrefix(2))); p.push_str(\"foo\"); assert_eq!(p.prefix(), Some(VerbatimDiskPrefix)); assert_eq!(p.as_str(), Some(\"?C:foo\")); // and another with verbatim non-normalized paths let mut p = Path::from_str(\"?C:a\"); p.push_str(\"foo\"); assert_eq!(p.as_str(), Some(\"?C:afoo\")); } #[test] fn test_push_path() { macro_rules! t( (s: $path:expr, $push:expr, $exp:expr) => ( { let mut p = Path::from_str($path); let push = Path::from_str($push); p.push_path(&push); assert_eq!(p.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \"ab\", \".\", \"ab\"); t!(s: \"ab\", \"..c\", \"ac\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"ab\", \"......c\", \"..c\"); t!(s: \"ab\", \"C:a.txt\", \"C:a.txt\"); t!(s: \"C:a\", \"C:b.txt\", \"C:b.txt\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:abc\", \"C:d\", \"C:abcd\"); t!(s: \"C:ab\", \"......c\", \"C:..c\"); t!(s: \"C:ab\", \"......c\", \"C:c\"); t!(s: \"serversharefoo\", \"bar\", \"serversharefoobar\"); t!(s: \"serversharefoo\", \"....bar\", \"serversharebar\"); t!(s: \"serversharefoo\", \"C:baz\", \"C:baz\"); t!(s: \"?C:ab\", \"C:cd\", \"?C:abcd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?C:ab\", \"C:cd\", \"C:cd\"); t!(s: \"?foobar\", \"baz\", \"?foobarbaz\"); t!(s: \"?C:ab\", \"......c\", \"?C:ab......c\"); t!(s: \"?foobar\", \"....c\", \"?foobar....c\"); t!(s: \"?\", \"foo\", \"?foo\"); t!(s: \"?UNCserversharefoo\", \"bar\", \"?UNCserversharefoobar\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCservershare\", \"C:a\", \"C:a\"); t!(s: \"?UNCserver\", \"foo\", \"?UNCserverfoo\"); t!(s: \"C:a\", \"?UNCservershare\", \"?UNCservershare\"); 
t!(s: \".foobar\", \"baz\", \".foobarbaz\"); t!(s: \".foobar\", \"C:a\", \"C:a\"); // again, not sure about the following, but I'm assuming . should be verbatim t!(s: \".foo\", \"..bar\", \".foo..bar\"); t!(s: \"?C:\", \"foo\", \"?C:foo\"); // this is a weird one } #[test] fn test_pop() { macro_rules! t( (s: $path:expr, $left:expr, $right:expr) => ( { let pstr = $path; let mut p = Path::from_str(pstr); let file = p.pop_opt_str(); let left = $left; assert!(p.as_str() == Some(left), \"`%s`.pop() failed; expected remainder `%s`, found `%s`\", pstr, left, p.as_str().unwrap()); let right = $right; let res = file.map(|s| s.as_slice()); assert!(res == right, \"`%s`.pop() failed; expected `%?`, found `%?`\", pstr, right, res); } ); (v: [$($path:expr),+], [$($left:expr),+], Some($($right:expr),+)) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file.map(|v| v.as_slice()), Some(b!($($right),+))); } ); (v: [$($path:expr),+], [$($left:expr),+], None) => ( { let mut p = Path::new(b!($($path),+)); let file = p.pop_opt(); assert_eq!(p.as_vec(), b!($($left),+)); assert_eq!(file, None); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \".\", \".\", None); t!(s: \"a\", \"\", Some(\"a\")); t!(s: \"\", \"\", None); t!(v: [\"abc\"], [\"ab\"], Some(\"c\")); t!(v: [\"a\"], [\".\"], Some(\"a\")); t!(v: [\".\"], [\".\"], None); t!(v: [\"a\"], [\"\"], Some(\"a\")); t!(v: [\"\"], [\"\"], None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"C:ab\", \"C:a\", Some(\"b\")); t!(s: \"C:a\", \"C:\", Some(\"a\")); t!(s: \"C:\", \"C:\", None); t!(s: \"servershareab\", \"serversharea\", Some(\"b\")); t!(s: \"serversharea\", \"servershare\", Some(\"a\")); t!(s: \"servershare\", \"servershare\", None); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?a\", Some(\"b\")); t!(s: \"?a\", \"?a\", None); t!(s: \"?C:ab\", \"?C:a\", 
Some(\"b\")); t!(s: \"?C:a\", \"?C:\", Some(\"a\")); t!(s: \"?C:\", \"?C:\", None); t!(s: \"?UNCservershareab\", \"?UNCserversharea\", Some(\"b\")); t!(s: \"?UNCserversharea\", \"?UNCservershare\", Some(\"a\")); t!(s: \"?UNCservershare\", \"?UNCservershare\", None); t!(s: \".abc\", \".ab\", Some(\"c\")); t!(s: \".ab\", \".a\", Some(\"b\")); t!(s: \".a\", \".a\", None); t!(s: \"?ab\", \"?a\", Some(\"b\")); } #[test] fn test_join() { t!(s: Path::from_str(\"abc\").join_str(\"..\"), \"ab\"); t!(s: Path::from_str(\"abc\").join_str(\"d\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"abcd\"); t!(s: Path::from_str(\"ab\").join_str(\"cd\"), \"cd\"); t!(s: Path::from_str(\".\").join_str(\"ab\"), \"ab\"); t!(s: Path::from_str(\"\").join_str(\"ab\"), \"ab\"); t!(v: Path::new(b!(\"abc\")).join(b!(\"..\")), b!(\"ab\")); t!(v: Path::new(b!(\"abc\")).join(b!(\"d\")), b!(\"abcd\")); // full join testing is covered under test_push_path, so no need for // the full set of prefix tests } #[test] fn test_join_path() { macro_rules! t( (s: $path:expr, $join:expr, $exp:expr) => ( { let path = Path::from_str($path); let join = Path::from_str($join); let res = path.join_path(&join); assert_eq!(res.as_str(), Some($exp)); } ) ) t!(s: \"abc\", \"..\", \"ab\"); t!(s: \"abc\", \"d\", \"abcd\"); t!(s: \"ab\", \"cd\", \"abcd\"); t!(s: \"ab\", \"cd\", \"cd\"); t!(s: \".\", \"ab\", \"ab\"); t!(s: \"\", \"ab\", \"ab\"); // join_path is implemented using push_path, so there's no need for // the full set of prefix tests } #[test] fn test_with_helpers() { macro_rules! 
t( (s: $path:expr, $op:ident, $arg:expr, $res:expr) => ( { let pstr = $path; let path = Path::from_str(pstr); let arg = $arg; let res = path.$op(arg); let exp = $res; assert!(res.as_str() == Some(exp), \"`%s`.%s(\"%s\"): Expected `%s`, found `%s`\", pstr, stringify!($op), arg, exp, res.as_str().unwrap()); } ) ) t!(s: \"abc\", with_dirname_str, \"d\", \"dc\"); t!(s: \"abc\", with_dirname_str, \"de\", \"dec\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"\", \"c\"); t!(s: \"abc\", with_dirname_str, \"/\", \"c\"); t!(s: \"abc\", with_dirname_str, \".\", \"c\"); t!(s: \"abc\", with_dirname_str, \"..\", \"..c\"); t!(s: \"\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"\", with_dirname_str, \"\", \".\"); t!(s: \"foo\", with_dirname_str, \"bar\", \"barfoo\"); t!(s: \"..\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"....\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"..\", with_dirname_str, \"\", \".\"); t!(s: \"....\", with_dirname_str, \"\", \".\"); t!(s: \".\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"foo\", with_dirname_str, \"..\", \"..foo\"); t!(s: \"foo\", with_dirname_str, \"....\", \"....foo\"); t!(s: \"C:ab\", with_dirname_str, \"foo\", \"foob\"); t!(s: \"foo\", with_dirname_str, \"C:ab\", \"C:abfoo\"); t!(s: \"C:ab\", with_dirname_str, \"servershare\", \"servershareb\"); t!(s: \"a\", with_dirname_str, \"servershare\", \"serversharea\"); t!(s: \"ab\", with_dirname_str, \"?\", \"?b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:\", \"C:b\"); t!(s: \"ab\", with_dirname_str, \"C:/\", \"C:b\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \"C:\", with_dirname_str, \"foo\", \"foo\"); t!(s: \".\", with_dirname_str, \"C:\", \"C:\"); t!(s: \".\", with_dirname_str, \"C:/\", \"C:\"); t!(s: \"?C:foo\", with_dirname_str, \"C:\", \"C:foo\"); t!(s: \"?C:\", with_dirname_str, \"bar\", \"bar\"); t!(s: \"foobar\", with_dirname_str, \"?C:baz\", \"?C:bazbar\"); t!(s: \"?foo\", 
with_dirname_str, \"C:bar\", \"C:bar\"); t!(s: \"?afoo\", with_dirname_str, \"C:bar\", \"C:barfoo\"); t!(s: \"?afoo/bar\", with_dirname_str, \"C:baz\", \"C:bazfoobar\"); t!(s: \"?UNCserversharebaz\", with_dirname_str, \"a\", \"abaz\"); t!(s: \"foobar\", with_dirname_str, \"?UNCserversharebaz\", \"?UNCserversharebazbar\"); t!(s: \".foo\", with_dirname_str, \"bar\", \"bar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz\", \"bazbar\"); t!(s: \".foobar\", with_dirname_str, \"baz/\", \"bazbar\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \".\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"foo\", with_filename_str, \"bar\", \"bar\"); t!(s: \"\", with_filename_str, \"foo\", \"foo\"); t!(s: \"a\", with_filename_str, \"foo\", \"foo\"); t!(s: \"abc\", with_filename_str, \"\", \"ab\"); t!(s: \"abc\", with_filename_str, \".\", \"ab\"); t!(s: \"abc\", with_filename_str, \"..\", \"a\"); t!(s: \"a\", with_filename_str, \"\", \"\"); t!(s: \"foo\", with_filename_str, \"\", \".\"); t!(s: \"abc\", with_filename_str, \"de\", \"abde\"); t!(s: \"abc\", with_filename_str, \"d\", \"abd\"); t!(s: \"..\", with_filename_str, \"foo\", \"..foo\"); t!(s: \"....\", with_filename_str, \"foo\", \"....foo\"); t!(s: \"..\", with_filename_str, \"\", \"..\"); t!(s: \"....\", with_filename_str, \"\", \"....\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foobar\", with_filename_str, \"baz\", \"C:foobaz\"); t!(s: \"C:foo\", with_filename_str, \"bar\", \"C:bar\"); t!(s: \"C:\", with_filename_str, \"foo\", \"C:foo\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"\", \"C:\"); t!(s: \"C:foobar\", 
with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foobar\", with_filename_str, \"..\", \"C:\"); t!(s: \"C:foo\", with_filename_str, \"..\", \"C:..\"); t!(s: \"C:\", with_filename_str, \"..\", \"C:..\"); t!(s: \"serversharefoo\", with_filename_str, \"bar\", \"serversharebar\"); t!(s: \"servershare\", with_filename_str, \"foo\", \"serversharefoo\"); t!(s: \"serversharefoo\", with_filename_str, \"\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"\", \"servershare\"); t!(s: \"serversharefoo\", with_filename_str, \"..\", \"servershare\"); t!(s: \"servershare\", with_filename_str, \"..\", \"servershare\"); t!(s: \"?C:foobar\", with_filename_str, \"baz\", \"?C:foobaz\"); t!(s: \"?C:foo\", with_filename_str, \"bar\", \"?C:bar\"); t!(s: \"?C:\", with_filename_str, \"foo\", \"?C:foo\"); t!(s: \"?C:foo\", with_filename_str, \"..\", \"?C:..\"); t!(s: \"?foobar\", with_filename_str, \"baz\", \"?foobaz\"); t!(s: \"?foo\", with_filename_str, \"bar\", \"?foobar\"); t!(s: \"?\", with_filename_str, \"foo\", \"?foo\"); t!(s: \"?foobar\", with_filename_str, \"..\", \"?foo..\"); t!(s: \".foobar\", with_filename_str, \"baz\", \".foobaz\"); t!(s: \".foo\", with_filename_str, \"bar\", \".foobar\"); t!(s: \".foobar\", with_filename_str, \"..\", \".foo..\"); t!(s: \"hithere.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \".\", \"hi..txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"..\", \"hi...txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"\", \"hi.txt\"); t!(s: \"hithere.txt\", with_filestem_str, \"foobar\", \"hifoobar.txt\"); t!(s: \"hithere.foo.txt\", with_filestem_str, \"here\", \"hihere.txt\"); t!(s: \"hithere\", with_filestem_str, \"here\", \"hihere\"); t!(s: \"hithere\", with_filestem_str, \"\", \"hi\"); t!(s: \"hi\", with_filestem_str, \"\", \".\"); t!(s: 
\"hi\", with_filestem_str, \"\", \"\"); t!(s: \"hithere\", with_filestem_str, \"..\", \".\"); t!(s: \"hithere\", with_filestem_str, \".\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \"foo\", \"hifoo.\"); t!(s: \"hithere.\", with_filestem_str, \"\", \"hi\"); t!(s: \"hithere.\", with_filestem_str, \".\", \".\"); t!(s: \"hithere.\", with_filestem_str, \"..\", \"hi...\"); t!(s: \"\", with_filestem_str, \"foo\", \"foo\"); t!(s: \".\", with_filestem_str, \"foo\", \"foo\"); t!(s: \"hithere..\", with_filestem_str, \"here\", \"hihere.\"); t!(s: \"hithere..\", with_filestem_str, \"\", \"hi\"); // filestem setter calls filename setter internally, no need for extended tests t!(s: \"hithere.txt\", with_extension_str, \"exe\", \"hithere.exe\"); t!(s: \"hithere.txt\", with_extension_str, \"\", \"hithere\"); t!(s: \"hithere.txt\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere.txt\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hithere\", with_extension_str, \".\", \"hithere..\"); t!(s: \"hithere\", with_extension_str, \"..\", \"hithere...\"); t!(s: \"hithere.\", with_extension_str, \"txt\", \"hithere.txt\"); t!(s: \"hi.foo\", with_extension_str, \"txt\", \"hi.foo.txt\"); t!(s: \"hithere.txt\", with_extension_str, \".foo\", \"hithere..foo\"); t!(s: \"\", with_extension_str, \"txt\", \"\"); t!(s: \"\", with_extension_str, \".\", \"\"); t!(s: \"\", with_extension_str, \"..\", \"\"); t!(s: \".\", with_extension_str, \"txt\", \".\"); // extension setter calls filename setter internally, no need for extended tests } #[test] fn test_setters() { macro_rules! 
t( (s: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::from_str(path); p1.$set(arg); let p2 = Path::from_str(path); assert_eq!(p1, p2.$with(arg)); } ); (v: $path:expr, $set:ident, $with:ident, $arg:expr) => ( { let path = $path; let arg = $arg; let mut p1 = Path::new(path); p1.$set(arg); let p2 = Path::new(path); assert_eq!(p1, p2.$with(arg)); } ) ) t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"d\")); t!(v: b!(\"abc\"), set_dirname, with_dirname, b!(\"de\")); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"d\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"de\"); t!(s: \"\", set_dirname_str, with_dirname_str, \"foo\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"bar\"); t!(s: \"abc\", set_dirname_str, with_dirname_str, \"\"); t!(s: \"....\", set_dirname_str, with_dirname_str, \"x\"); t!(s: \"foo\", set_dirname_str, with_dirname_str, \"....\"); t!(v: b!(\"abc\"), set_filename, with_filename, b!(\"d\")); t!(v: b!(\"\"), set_filename, with_filename, b!(\"foo\")); t!(s: \"abc\", set_filename_str, with_filename_str, \"d\"); t!(s: \"\", set_filename_str, with_filename_str, \"foo\"); t!(s: \".\", set_filename_str, with_filename_str, \"foo\"); t!(s: \"ab\", set_filename_str, with_filename_str, \"\"); t!(s: \"a\", set_filename_str, with_filename_str, \"\"); t!(v: b!(\"hithere.txt\"), set_filestem, with_filestem, b!(\"here\")); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"here\"); t!(s: \"hithere.txt\", set_filestem_str, with_filestem_str, \"\"); t!(s: \"hithere\", set_filestem_str, with_filestem_str, \"\"); t!(v: b!(\"hithere.txt\"), set_extension, with_extension, b!(\"exe\")); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"exe\"); t!(s: \"hithere.\", set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere\", 
set_extension_str, with_extension_str, \"txt\"); t!(s: \"hithere.txt\", set_extension_str, with_extension_str, \"\"); t!(s: \"hithere\", set_extension_str, with_extension_str, \"\"); t!(s: \".\", set_extension_str, with_extension_str, \"txt\"); // with_ helpers use the setter internally, so the tests for the with_ helpers // will suffice. No need for the full set of prefix tests. } #[test] fn test_getters() { macro_rules! t( (s: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename_str(), $filename); assert_eq!(path.dirname_str(), $dirname); assert_eq!(path.filestem_str(), $filestem); assert_eq!(path.extension_str(), $ext); } ); (v: $path:expr, $filename:expr, $dirname:expr, $filestem:expr, $ext:expr) => ( { let path = $path; assert_eq!(path.filename(), $filename); assert_eq!(path.dirname(), $dirname); assert_eq!(path.filestem(), $filestem); assert_eq!(path.extension(), $ext); } ) ) t!(v: Path::new(b!(\"abc\")), b!(\"c\"), b!(\"ab\"), b!(\"c\"), None); t!(s: Path::from_str(\"abc\"), Some(\"c\"), Some(\"ab\"), Some(\"c\"), None); t!(s: Path::from_str(\".\"), Some(\"\"), Some(\".\"), Some(\"\"), None); t!(s: Path::from_str(\"\"), Some(\"\"), Some(\"\"), Some(\"\"), None); t!(s: Path::from_str(\"..\"), Some(\"\"), Some(\"..\"), Some(\"\"), None); t!(s: Path::from_str(\"....\"), Some(\"\"), Some(\"....\"), Some(\"\"), None); t!(s: Path::from_str(\"hithere.txt\"), Some(\"there.txt\"), Some(\"hi\"), Some(\"there\"), Some(\"txt\")); t!(s: Path::from_str(\"hithere\"), Some(\"there\"), Some(\"hi\"), Some(\"there\"), None); t!(s: Path::from_str(\"hithere.\"), Some(\"there.\"), Some(\"hi\"), Some(\"there\"), Some(\"\")); t!(s: Path::from_str(\"hi.there\"), Some(\".there\"), Some(\"hi\"), Some(\".there\"), None); t!(s: Path::from_str(\"hi..there\"), Some(\"..there\"), Some(\"hi\"), Some(\".\"), Some(\"there\")); // these are already tested in test_components, so no need for extended tests } #[test] fn 
test_dir_file_path() { t!(s: Path::from_str(\"hithere\").dir_path(), \"hi\"); t!(s: Path::from_str(\"hi\").dir_path(), \".\"); t!(s: Path::from_str(\"hi\").dir_path(), \"\"); t!(s: Path::from_str(\"\").dir_path(), \"\"); t!(s: Path::from_str(\"..\").dir_path(), \"..\"); t!(s: Path::from_str(\"....\").dir_path(), \"....\"); macro_rules! t( ($path:expr, $exp:expr) => ( { let path = $path; let left = path.and_then_ref(|p| p.as_str()); assert_eq!(left, $exp); } ); ) t!(Path::from_str(\"hithere\").file_path(), Some(\"there\")); t!(Path::from_str(\"hi\").file_path(), Some(\"hi\")); t!(Path::from_str(\".\").file_path(), None); t!(Path::from_str(\"\").file_path(), None); t!(Path::from_str(\"..\").file_path(), None); t!(Path::from_str(\"....\").file_path(), None); // dir_path and file_path are just dirname and filename interpreted as paths. // No need for extended tests } #[test] fn test_is_absolute() { macro_rules! t( ($path:expr, $abs:expr, $vol:expr, $cwd:expr) => ( { let path = Path::from_str($path); let (abs, vol, cwd) = ($abs, $vol, $cwd); let b = path.is_absolute(); assert!(b == abs, \"Path '%s'.is_absolute(): expected %?, found %?\", path.as_str().unwrap(), abs, b); let b = path.is_vol_relative(); assert!(b == vol, \"Path '%s'.is_vol_relative(): expected %?, found %?\", path.as_str().unwrap(), vol, b); let b = path.is_cwd_relative(); assert!(b == cwd, \"Path '%s'.is_cwd_relative(): expected %?, found %?\", path.as_str().unwrap(), cwd, b); } ) ) t!(\"abc\", false, false, false); t!(\"abc\", false, true, false); t!(\"a\", false, false, false); t!(\"a\", false, true, false); t!(\".\", false, false, false); t!(\"\", false, true, false); t!(\"..\", false, false, false); t!(\"....\", false, false, false); t!(\"C:ab.txt\", false, false, true); t!(\"C:ab.txt\", true, false, false); t!(\"servershareab.txt\", true, false, false); t!(\"?abc.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); t!(\"?C:ab.txt\", true, false, false); // NB: not equivalent to 
C:ab.txt t!(\"?UNCservershareab.txt\", true, false, false); t!(\".ab\", true, false, false); } #[test] fn test_is_ancestor_of() { macro_rules! t( (s: $path:expr, $dest:expr, $exp:expr) => ( { let path = Path::from_str($path); let dest = Path::from_str($dest); let exp = $exp; let res = path.is_ancestor_of(&dest); assert!(res == exp, \"`%s`.is_ancestor_of(`%s`): Expected %?, found %?\", path.as_str().unwrap(), dest.as_str().unwrap(), exp, res); } ) ) t!(s: \"abc\", \"abcd\", true); t!(s: \"abc\", \"abc\", true); t!(s: \"abc\", \"ab\", false); t!(s: \"abc\", \"abc\", true); t!(s: \"ab\", \"abc\", true); t!(s: \"abcd\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"ab\", \"abc\", false); t!(s: \"abc\", \"abd\", false); t!(s: \"..abc\", \"abc\", false); t!(s: \"abc\", \"..abc\", false); t!(s: \"abc\", \"abcd\", false); t!(s: \"abcd\", \"abc\", false); t!(s: \"..ab\", \"..abc\", true); t!(s: \".\", \"ab\", true); t!(s: \".\", \".\", true); t!(s: \"\", \"\", true); t!(s: \"\", \"ab\", true); t!(s: \"..\", \"ab\", true); t!(s: \"....\", \"ab\", true); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foobar\", \"foobar\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"C:foobar\", true); t!(s: \"C:\", \"C:\", true); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", false); t!(s: \"C:\", \"C:\", true); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foobar\", \"C:foo\", false); t!(s: \"C:foo\", \"foo\", false); t!(s: \"foo\", \"C:foo\", false); t!(s: \"serversharefoo\", \"serversharefoobar\", true); t!(s: \"servershare\", \"serversharefoo\", true); t!(s: \"serversharefoo\", \"servershare\", false); t!(s: \"C:foo\", \"serversharefoo\", false); t!(s: \"serversharefoo\", \"C:foo\", false); t!(s: \"?foobar\", \"?foobarbaz\", true); t!(s: \"?foobarbaz\", \"?foobar\", false); t!(s: \"?foobar\", \"foobarbaz\", false); t!(s: \"foobar\", 
\"?foobarbaz\", false); t!(s: \"?C:foobar\", \"?C:foobarbaz\", true); t!(s: \"?C:foobarbaz\", \"?C:foobar\", false); t!(s: \"?C:\", \"?C:foo\", true); t!(s: \"?C:\", \"?C:\", false); // this is a weird one t!(s: \"?C:\", \"?C:\", false); t!(s: \"?C:a\", \"?c:ab\", true); t!(s: \"?c:a\", \"?C:ab\", true); t!(s: \"?C:a\", \"?D:ab\", false); t!(s: \"?foo\", \"?foobar\", false); t!(s: \"?ab\", \"?abc\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?ab\", \"?ab\", true); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?abc\", \"?ab\", false); t!(s: \"?UNCabc\", \"?UNCabcd\", true); t!(s: \"?UNCabcd\", \"?UNCabc\", false); t!(s: \"?UNCab\", \"?UNCabc\", true); t!(s: \".foobar\", \".foobarbaz\", true); t!(s: \".foobarbaz\", \".foobar\", false); t!(s: \".foo\", \".foobar\", true); t!(s: \".foo\", \".foobar\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"ab\", false); t!(s: \"Z:ab\", \"?z:ab\", true); t!(s: \"C:ab\", \"?D:ab\", false); t!(s: \"ab\", \"?ab\", false); t!(s: \"?ab\", \"ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"?C:ab\", \"C:ab\", false); t!(s: \"C:ab\", \"?C:ab\", true); t!(s: \"?C:ab\", \"C:ab\", true); t!(s: \"abc\", \"?UNCabc\", true); t!(s: \"?UNCabc\", \"abc\", true); } #[test] fn test_path_relative_from() { macro_rules! 
t( (s: $path:expr, $other:expr, $exp:expr) => ( { let path = Path::from_str($path); let other = Path::from_str($other); let res = path.path_relative_from(&other); let exp = $exp; assert!(res.and_then_ref(|x| x.as_str()) == exp, \"`%s`.path_relative_from(`%s`): Expected %?, got %?\", path.as_str().unwrap(), other.as_str().unwrap(), exp, res.and_then_ref(|x| x.as_str())); } ) ) t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abd\", Some(\"..c\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"abc\", \"abc\", None); t!(s: \"abc\", \"abc\", Some(\"abc\")); t!(s: \"abc\", \"abcd\", Some(\"..\")); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"abc\", \"abcde\", Some(\"....\")); t!(s: \"abc\", \"ade\", Some(\"....bc\")); t!(s: \"abc\", \"def\", Some(\"......abc\")); t!(s: \"hithere.txt\", \"hithere\", Some(\"..there.txt\")); t!(s: \".\", \"a\", Some(\"..\")); t!(s: \".\", \"ab\", Some(\"....\")); t!(s: \".\", \".\", Some(\".\")); t!(s: \"a\", \".\", Some(\"a\")); t!(s: \"ab\", \".\", Some(\"ab\")); t!(s: \"..\", \".\", Some(\"..\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"abc\", \"abc\", Some(\".\")); t!(s: \"\", \"\", Some(\".\")); t!(s: \"\", \".\", Some(\"\")); t!(s: \"....a\", \"b\", Some(\"......a\")); t!(s: \"a\", \"....b\", None); t!(s: \"....a\", \"....b\", Some(\"..a\")); t!(s: \"....a\", \"....ab\", Some(\"..\")); t!(s: \"....ab\", \"....a\", Some(\"b\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\" ,\"C:ab\", Some(\"....\")); t!(s: \"C:ab\", \"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"D:cd\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:..c\", None); t!(s: \"C:..a\", \"C:bc\", Some(\"......a\")); t!(s: \"C:abc\", \"C:ab\", Some(\"c\")); t!(s: \"C:ab\", \"C:abc\", Some(\"..\")); t!(s: \"C:\", \"C:ab\", Some(\"....\")); t!(s: \"C:ab\", 
\"C:cd\", Some(\"....ab\")); t!(s: \"C:ab\", \"C:ab\", Some(\"C:ab\")); t!(s: \"C:ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"ab\", \"C:ab\", None); t!(s: \"abc\", \"ab\", Some(\"c\")); t!(s: \"ab\", \"abc\", Some(\"..\")); t!(s: \"abce\", \"abcd\", Some(\"..e\")); t!(s: \"acd\", \"abd\", Some(\"acd\")); t!(s: \"bcd\", \"acd\", Some(\"bcd\")); t!(s: \"abc\", \"de\", Some(\"abc\")); t!(s: \"de\", \"abc\", None); t!(s: \"de\", \"abc\", None); t!(s: \"C:abc\", \"abc\", Some(\"C:abc\")); t!(s: \"C:c\", \"abc\", Some(\"C:c\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"ab\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"b\", Some(\"?ab\")); t!(s: \"?ab\", \"?abc\", Some(\"..\")); t!(s: \"?abc\", \"?ab\", Some(\"c\")); t!(s: \"?ab\", \"?cd\", Some(\"?ab\")); t!(s: \"?a\", \"?b\", Some(\"?a\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"?C:b\", Some(\"..a\")); t!(s: \"?C:a\", \"?D:a\", Some(\"?C:a\")); t!(s: \"?C:ab\", \"?c:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:a\", \"C:ab\", Some(\"..\")); t!(s: \"C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"C:a\", \"?C:ab\", Some(\"..\")); t!(s: \"?C:a\", \"D:a\", Some(\"?C:a\")); t!(s: \"?c:ab\", \"C:a\", Some(\"b\")); t!(s: \"?C:ab\", \"C:ab\", Some(\"?C:ab\")); t!(s: \"?C:a.b\", \"C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"C:a\", Some(\"?C:a..b\")); t!(s: \"C:ab\", \"?C:ab\", None); t!(s: \"?C:a.b\", \"?C:a\", Some(\"?C:a.b\")); t!(s: \"?C:ab/c\", \"?C:a\", Some(\"?C:ab/c\")); t!(s: \"?C:a..b\", \"?C:a\", Some(\"?C:a..b\")); t!(s: \"?C:ab\", \"?C:a\", Some(\"b\")); t!(s: \"?C:.b\", \"?C:.\", Some(\"b\")); t!(s: \"C:b\", \"?C:.\", Some(\"..b\")); t!(s: \"?a.bc\", \"?a.b\", Some(\"c\")); t!(s: \"?abc\", \"?a.d\", Some(\"....bc\")); t!(s: \"?a..b\", \"?a..\", Some(\"b\")); t!(s: 
\"?ab..\", \"?ab\", Some(\"?ab..\")); t!(s: \"?abc\", \"?a..b\", Some(\"....bc\")); t!(s: \"?UNCabc\", \"?UNCab\", Some(\"c\")); t!(s: \"?UNCab\", \"?UNCabc\", Some(\"..\")); t!(s: \"?UNCabc\", \"?UNCacd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"?UNCacd\", Some(\"?UNCbcd\")); t!(s: \"?UNCabc\", \"?abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc\", \"?C:abc\", Some(\"?UNCabc\")); t!(s: \"?UNCabc/d\", \"?UNCab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab.\", \"?UNCab\", Some(\"?UNCab.\")); t!(s: \"?UNCab..\", \"?UNCab\", Some(\"?UNCab..\")); t!(s: \"?UNCabc\", \"ab\", Some(\"c\")); t!(s: \"?UNCab\", \"abc\", Some(\"..\")); t!(s: \"?UNCabc\", \"acd\", Some(\"?UNCabc\")); t!(s: \"?UNCbcd\", \"acd\", Some(\"?UNCbcd\")); t!(s: \"?UNCab.\", \"ab\", Some(\"?UNCab.\")); t!(s: \"?UNCabc/d\", \"ab\", Some(\"?UNCabc/d\")); t!(s: \"?UNCab..\", \"ab\", Some(\"?UNCab..\")); t!(s: \"abc\", \"?UNCab\", Some(\"c\")); t!(s: \"abc\", \"?UNCacd\", Some(\"abc\")); } #[test] fn test_component_iter() { macro_rules! t( (s: $path:expr, $exp:expr) => ( { let path = Path::from_str($path); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ); (v: [$($arg:expr),+], $exp:expr) => ( { let path = Path::new(b!($($arg),+)); let comps = path.component_iter().to_owned_vec(); let exp: &[&str] = $exp; assert_eq!(comps.as_slice(), exp); } ) ) t!(v: [\"abc\"], [\"a\", \"b\", \"c\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"abd\", [\"a\", \"b\", \"d\"]); t!(s: \"abcd\", [\"a\", \"b\", \"cd\"]); t!(s: \"abc\", [\"a\", \"b\", \"c\"]); t!(s: \"a\", [\"a\"]); t!(s: \"a\", [\"a\"]); t!(s: \"\", []); t!(s: \".\", [\".\"]); t!(s: \"..\", [\"..\"]); t!(s: \"....\", [\"..\", \"..\"]); t!(s: \"....foo\", [\"..\", \"..\", \"foo\"]); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"C:foobar\", [\"foo\", \"bar\"]); t!(s: \"C:foo\", [\"foo\"]); t!(s: \"C:\", []); t!(s: \"serversharefoobar\", [\"foo\", \"bar\"]); 
t!(s: \"serversharefoo\", [\"foo\"]); t!(s: \"servershare\", []); t!(s: \"?foobarbaz\", [\"bar\", \"baz\"]); t!(s: \"?foobar\", [\"bar\"]); t!(s: \"?foo\", []); t!(s: \"?\", []); t!(s: \"?ab\", [\"b\"]); t!(s: \"?ab\", [\"b\"]); t!(s: \"?foobarbaz\", [\"bar\", \"\", \"baz\"]); t!(s: \"?C:foobar\", [\"foo\", \"bar\"]); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?C:\", []); t!(s: \"?C:foo\", [\"foo\"]); t!(s: \"?UNCserversharefoobar\", [\"foo\", \"bar\"]); t!(s: \"?UNCserversharefoo\", [\"foo\"]); t!(s: \"?UNCservershare\", []); t!(s: \".foobarbaz\", [\"bar\", \"baz\"]); t!(s: \".foobar\", [\"bar\"]); t!(s: \".foo\", []); } } ", "commid": "rust_pr_9655"}], "negative_passages": []} {"query_id": "q-en-rust-09bb67fce6b347ba30fcb629338c464042351d96b12e2bafe7c13f6369eaaf58", "query": "Given the following code: The current output is: Ideally the output should look like: Right now, cargo and rustc operate basically independently of each other. The summary (\"aborting ...\" and \"could not compile ...\") is repeated twice, and both have different, incompatible ways to get more info about what went wrong. There's no reason to repeat these twice; we could include all the same information in half the space if we can get cargo and rustc to cooperate. I suggest the way this be implemented is by keeping rustc's output the same when run standalone, but omitting \"aborting due to ...\" and \"for more information ...\" when run with . Then cargo can aggregate the info it used to print into its own errors by using the JSON output. cc (meta note: I thought of this while working on , which has fully 12 lines of \"metadata\" after the 5 line error. Most builds are not that bad in comparison, but I do think it shows that it needs support from all the tools in the stack to keep the verbosity down.)\nI'm not sure whether making the decision on the error format scheme is the right way to go. 
I can't think of any reason not to do it, unless the implementation is messy, so", "positive_passages": [{"docid": "doc-en-rust-ddea2bffcc042993a01b929d5e2daea658bb84856dab22b7abb7b540d565adf0", "text": "pub mod c_str; pub mod os; pub mod path; pub mod path2; pub mod rand; pub mod run; pub mod sys;", "commid": "rust_pr_9655"}], "negative_passages": []} {"query_id": "q-en-rust-ca145d0b0a360d1cdc2027bda966f1f02bfd8d113f6a3aebe64727b24e17dada", "query": "Compare to This is almost certainly caused by $DIR/issue-87932.rs:13:8 | LL | pub struct A {} | ------------ function or associated item `deserialize` not found for this ... LL | A::deserialize(); | ^^^^^^^^^^^ function or associated item not found in `A` | = help: items from traits can only be used if the trait is in scope help: the following trait is implemented but not in scope; perhaps add a `use` for it: | LL | use ::deserialize::_a::Deserialize; | error: aborting due to previous error For more information about this error, try `rustc --explain E0599`. ", "commid": "rust_pr_89738"}], "negative_passages": []} {"query_id": "q-en-rust-dfb1853f080d636b0154a993ccc58567464750f8671998b9ed42ac8ff219ae7d", "query": "More information on\nTo be clear, I'm not actually sure that's a private link, I didn't look at it in detail. It might just be doc(hidden) or something.\nI didn't either. Just that most of weird things happening are in .", "positive_passages": [{"docid": "doc-en-rust-61476c907914973156625c7ea3d9adbb6ebc2b72eb825ef5b233a9185be448c7", "text": "// of a macro that is not vendored by Rust and included in the toolchain. // See https://github.com/rust-analyzer/rust-analyzer/issues/6038. // On certain platforms right now the \"main modules\" modules that are // documented don't compile (missing things in `libc` which is empty), // so just omit them with an empty module and add the \"unstable\" attribute. // Unix, linux, wasi and windows are handled a bit differently. 
#[cfg(all( doc, not(any( any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") )) ) ))] #[path = \".\"] mod doc { // When documenting std we want to show the `unix`, `windows`, `linux` and `wasi` // modules as these are the \"main modules\" that are used across platforms, // so these modules are enabled when `cfg(doc)` is set. // This should help show platform-specific functionality in a hopefully cross-platform // way in the documentation. pub mod unix; pub mod linux; pub mod wasi; pub mod windows; } #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod unix {} #[cfg(all( doc, any(", "commid": "rust_pr_88619"}], "negative_passages": []} {"query_id": "q-en-rust-dfb1853f080d636b0154a993ccc58567464750f8671998b9ed42ac8ff219ae7d", "query": "More information on\nTo be clear, I'm not actually sure that's a private link, I didn't look at it in detail. It might just be doc(hidden) or something.\nI didn't either. Just that most of weird things happening are in .", "positive_passages": [{"docid": "doc-en-rust-d756f1004d0524a0432aeefcc11c7704e03624a989a2f440007212f39b9d44fe", "text": "all(target_vendor = \"fortanix\", target_env = \"sgx\") ) ))] mod doc { // On certain platforms right now the \"main modules\" modules that are // documented don't compile (missing things in `libc` which is empty), // so just omit them with an empty module. #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod unix {} #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod linux {} #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod wasi {} #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod windows {} } #[cfg(doc)] #[stable(feature = \"os\", since = \"1.0.0\")] pub use doc::*; #[cfg(not(doc))] #[path = \".\"] mod imp { // If we're not documenting std then we only expose modules appropriate for the // current platform. 
#[cfg(all(target_vendor = \"fortanix\", target_env = \"sgx\"))] pub mod fortanix_sgx; #[cfg(target_os = \"hermit\")] #[path = \"hermit/mod.rs\"] pub mod unix; #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod linux {} #[cfg(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) ))] #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod wasi {} #[cfg(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) ))] #[unstable(issue = \"none\", feature = \"std_internals\")] pub mod windows {} #[cfg(target_os = \"android\")] pub mod android; #[cfg(target_os = \"dragonfly\")] pub mod dragonfly; #[cfg(target_os = \"emscripten\")] pub mod emscripten; #[cfg(target_os = \"espidf\")] pub mod espidf; #[cfg(target_os = \"freebsd\")] pub mod freebsd; #[cfg(target_os = \"fuchsia\")] pub mod fuchsia; #[cfg(target_os = \"haiku\")] pub mod haiku; #[cfg(target_os = \"illumos\")] pub mod illumos; #[cfg(target_os = \"ios\")] pub mod ios; #[cfg(target_os = \"l4re\")] pub mod linux; #[cfg(target_os = \"linux\")] pub mod linux; #[cfg(target_os = \"macos\")] pub mod macos; #[cfg(target_os = \"netbsd\")] pub mod netbsd; #[cfg(target_os = \"openbsd\")] pub mod openbsd; #[cfg(target_os = \"redox\")] pub mod redox; #[cfg(target_os = \"solaris\")] pub mod solaris; #[cfg(unix)] pub mod unix; // unix #[cfg(not(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(target_os = \"hermit\")] #[path = \"hermit/mod.rs\"] pub mod unix; #[cfg(not(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(all(not(target_os = \"hermit\"), any(unix, doc)))] pub mod unix; #[cfg(target_os = \"vxworks\")] pub mod vxworks; // linux #[cfg(not(all( doc, any( 
all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(any(target_os = \"linux\", target_os = \"l4re\", doc))] pub mod linux; #[cfg(target_os = \"wasi\")] pub mod wasi; // wasi #[cfg(not(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(any(target_os = \"wasi\", doc))] pub mod wasi; #[cfg(windows)] pub mod windows; } #[cfg(not(doc))] #[stable(feature = \"os\", since = \"1.0.0\")] pub use imp::*; // windows #[cfg(not(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(any(windows, doc))] pub mod windows; // Others. #[cfg(target_os = \"android\")] pub mod android; #[cfg(target_os = \"dragonfly\")] pub mod dragonfly; #[cfg(target_os = \"emscripten\")] pub mod emscripten; #[cfg(target_os = \"espidf\")] pub mod espidf; #[cfg(all(target_vendor = \"fortanix\", target_env = \"sgx\"))] pub mod fortanix_sgx; #[cfg(target_os = \"freebsd\")] pub mod freebsd; #[cfg(target_os = \"fuchsia\")] pub mod fuchsia; #[cfg(target_os = \"haiku\")] pub mod haiku; #[cfg(target_os = \"illumos\")] pub mod illumos; #[cfg(target_os = \"ios\")] pub mod ios; #[cfg(target_os = \"macos\")] pub mod macos; #[cfg(target_os = \"netbsd\")] pub mod netbsd; #[cfg(target_os = \"openbsd\")] pub mod openbsd; #[cfg(target_os = \"redox\")] pub mod redox; #[cfg(target_os = \"solaris\")] pub mod solaris; #[cfg(target_os = \"vxworks\")] pub mod vxworks; #[cfg(any(unix, target_os = \"wasi\", doc))] mod fd;", "commid": "rust_pr_88619"}], "negative_passages": []} {"query_id": "q-en-rust-dfb1853f080d636b0154a993ccc58567464750f8671998b9ed42ac8ff219ae7d", "query": "More information on\nTo be clear, I'm not actually sure that's a private link, I didn't look at it in detail. It might just be doc(hidden) or something.\nI didn't either. 
Just that most of weird things happening are in .", "positive_passages": [{"docid": "doc-en-rust-a3b406c05809be011aa4d817d0c46523c70a528a22772c3fc6310aade58afd63", "text": "pub const PATH_BUF_AS_PATH: [&str; 4] = [\"std\", \"path\", \"PathBuf\", \"as_path\"]; pub const PATH_TO_PATH_BUF: [&str; 4] = [\"std\", \"path\", \"Path\", \"to_path_buf\"]; pub const PERMISSIONS: [&str; 3] = [\"std\", \"fs\", \"Permissions\"]; pub const PERMISSIONS_FROM_MODE: [&str; 7] = [\"std\", \"os\", \"imp\", \"unix\", \"fs\", \"PermissionsExt\", \"from_mode\"]; pub const PERMISSIONS_FROM_MODE: [&str; 6] = [\"std\", \"os\", \"unix\", \"fs\", \"PermissionsExt\", \"from_mode\"]; pub const POLL: [&str; 4] = [\"core\", \"task\", \"poll\", \"Poll\"]; pub const POLL_PENDING: [&str; 5] = [\"core\", \"task\", \"poll\", \"Poll\", \"Pending\"]; pub const POLL_READY: [&str; 5] = [\"core\", \"task\", \"poll\", \"Poll\", \"Ready\"];", "commid": "rust_pr_88619"}], "negative_passages": []} {"query_id": "q-en-rust-f06be4eeb23d61914d7d1ce11114535af633e4b43bbe880b3c7b54ce6464292f", "query": "The following ICE was found migrating . cc which was very similar, but has been fixed. Also found in: - https://crater- $DIR/issue-88331.rs:11:20 | LL | pub struct Opcode(pub u8); | -------------------------- `Opcode` defined here ... LL | move |i| match msg_type { | ^^^^^^^^ patterns `Opcode(0_u8)` and `Opcode(2_u8..=u8::MAX)` not covered | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms = note: the matched value is of type `Opcode` error[E0004]: non-exhaustive patterns: `Opcode2(Opcode(0_u8))` and `Opcode2(Opcode(2_u8..=u8::MAX))` not covered --> $DIR/issue-88331.rs:27:20 | LL | pub struct Opcode2(Opcode); | --------------------------- `Opcode2` defined here ... 
LL | move |i| match msg_type { | ^^^^^^^^ patterns `Opcode2(Opcode(0_u8))` and `Opcode2(Opcode(2_u8..=u8::MAX))` not covered | = help: ensure that all possible cases are being handled, possibly by adding wildcards or more match arms = note: the matched value is of type `Opcode2` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0004`. ", "commid": "rust_pr_88390"}], "negative_passages": []} {"query_id": "q-en-rust-0be28fadf20dce32573d4bbb8027947ed8f22b9609da39a50daee7365edaacf6", "query": " $DIR/issue-89008.rs:38:43 | LL | fn line_stream<'a, Repr>(&'a self) -> Self::LineStreamFut<'a, Repr> { | ---- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ type mismatch resolving ` as Stream>::Item == Repr` | | | this type parameter | note: expected this to be `()` --> $DIR/issue-89008.rs:17:17 | LL | type Item = (); | ^^ = note: expected unit type `()` found type parameter `Repr` error: aborting due to previous error For more information about this error, try `rustc --explain E0271`. 
", "commid": "rust_pr_103335"}], "negative_passages": []} {"query_id": "q-en-rust-12b65699efe1682bdd41783774d4731ae3a731807bd72b7dd51b8b76a0a358cf", "query": " $DIR/const-match-pat-generic.rs:8:11 --> $DIR/const-match-pat-generic.rs:9:9 | LL | const { V } => {}, | ^^^^^ LL | const { V } => {}, | ^^^^^^^^^^^ error: aborting due to previous error error: constant pattern depends on a generic parameter --> $DIR/const-match-pat-generic.rs:21:9 | LL | const { f(V) } => {}, | ^^^^^^^^^^^^^^ error: constant pattern depends on a generic parameter --> $DIR/const-match-pat-generic.rs:21:9 | LL | const { f(V) } => {}, | ^^^^^^^^^^^^^^ error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0158`.", "commid": "rust_pr_91570"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . 
Please test on a current nightly, and when goes out it should have the backport via . It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: // Fat LTO also suffers from the invalid DWARF issue similar to Thin LTO. // Here we rewrite all `DICompileUnit` pointers if there is only one `DICompileUnit`. // This only works around the problem when codegen-units = 1. // Refer to the comments in the `optimize_thin_module` function for more details. let mut cu1 = ptr::null_mut(); let mut cu2 = ptr::null_mut(); unsafe { llvm::LLVMRustLTOGetDICompileUnit(llmod, &mut cu1, &mut cu2) }; if !cu2.is_null() { let _timer = cgcx.prof.generic_activity_with_arg(\"LLVM_fat_lto_patch_debuginfo\", &*module.name); unsafe { llvm::LLVMRustLTOPatchDICompileUnit(llmod, cu1) }; save_temp_bitcode(cgcx, &module, \"fat-lto-after-patch\"); } // Internalize everything below threshold to help strip out more modules and such. unsafe { let ptr = symbols_below_threshold.as_ptr();", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. 
modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: llvm::LLVMRustLTOGetDICompileUnit(llmod, &mut cu1, &mut cu2); llvm::LLVMRustThinLTOGetDICompileUnit(llmod, &mut cu1, &mut cu2); if !cu2.is_null() { let msg = \"multiple source DICompileUnits found\"; return Err(write::llvm_err(&diag_handler, msg));", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). 
That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: llvm::LLVMRustLTOPatchDICompileUnit(llmod, cu1); llvm::LLVMRustThinLTOPatchDICompileUnit(llmod, cu1); save_temp_bitcode(cgcx, &module, \"thin-lto-after-patch\"); }", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . 
It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: *const u8; pub fn LLVMRustLTOGetDICompileUnit(M: &Module, CU1: &mut *mut c_void, CU2: &mut *mut c_void); pub fn LLVMRustLTOPatchDICompileUnit(M: &Module, CU: *mut c_void); pub fn LLVMRustThinLTOGetDICompileUnit( M: &Module, CU1: &mut *mut c_void, CU2: &mut *mut c_void, ); pub fn LLVMRustThinLTOPatchDICompileUnit(M: &Module, CU: *mut c_void); pub fn LLVMRustLinkerNew(M: &Module) -> &mut Linker<'_>; pub fn LLVMRustLinkerAdd(", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . 
It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: LLVMRustLTOGetDICompileUnit(LLVMModuleRef Mod, LLVMRustThinLTOGetDICompileUnit(LLVMModuleRef Mod, DICompileUnit **A, DICompileUnit **B) { Module *M = unwrap(Mod);", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . 
It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: LLVMRustLTOPatchDICompileUnit(LLVMModuleRef Mod, DICompileUnit *Unit) { LLVMRustThinLTOPatchDICompileUnit(LLVMModuleRef Mod, DICompileUnit *Unit) { Module *M = unwrap(Mod); // If the original source module didn't have a `DICompileUnit` then try to", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . 
It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: // Caveat - gdb doesn't know about UTF-32 character encoding and will print a // rust char as only its numerical value. // min-lldb-version: 310 // min-gdb-version: 8.0 // no-prefer-dynamic // compile-flags:-g -C lto // gdb-command:run // gdbg-command:print 'basic_types_globals::B' // gdbr-command:print B // gdb-check:$1 = false // gdbg-command:print 'basic_types_globals::I' // gdbr-command:print I // gdb-check:$2 = -1 // gdbg-command:print 'basic_types_globals::C' // gdbr-command:print C // gdbg-check:$3 = 97 // gdbr-check:$3 = 97 // gdbg-command:print/d 'basic_types_globals::I8' // gdbr-command:print I8 // gdb-check:$4 = 68 // gdbg-command:print 'basic_types_globals::I16' // gdbr-command:print I16 // gdb-check:$5 = -16 // gdbg-command:print 'basic_types_globals::I32' // gdbr-command:print I32 // gdb-check:$6 = -32 // gdbg-command:print 'basic_types_globals::I64' // gdbr-command:print I64 // gdb-check:$7 = -64 // gdbg-command:print 'basic_types_globals::U' // gdbr-command:print U // gdb-check:$8 = 1 // gdbg-command:print/d 'basic_types_globals::U8' // gdbr-command:print U8 // gdb-check:$9 = 100 // gdbg-command:print 'basic_types_globals::U16' // gdbr-command:print U16 // gdb-check:$10 = 16 // gdbg-command:print 'basic_types_globals::U32' // gdbr-command:print U32 // gdb-check:$11 = 32 // gdbg-command:print 'basic_types_globals::U64' // gdbr-command:print U64 // gdb-check:$12 = 64 // gdbg-command:print 'basic_types_globals::F32' // gdbr-command:print F32 // gdb-check:$13 = 2.5 // gdbg-command:print 'basic_types_globals::F64' // gdbr-command:print F64 // gdb-check:$14 = 3.5 // gdb-command:continue #![allow(unused_variables)] #![feature(omit_gdb_pretty_printer_section)] #![omit_gdb_pretty_printer_section] // N.B. These are `mut` only so they don't constant fold away. 
static mut B: bool = false; static mut I: isize = -1; static mut C: char = 'a'; static mut I8: i8 = 68; static mut I16: i16 = -16; static mut I32: i32 = -32; static mut I64: i64 = -64; static mut U: usize = 1; static mut U8: u8 = 100; static mut U16: u16 = 16; static mut U32: u32 = 32; static mut U64: u64 = 64; static mut F32: f32 = 2.5; static mut F64: f64 = 3.5; fn main() { _zzz(); // #break let a = unsafe { (B, I, C, I8, I16, I32, I64, U, U8, U16, U32, U64, F32, F64) }; } fn _zzz() {()} ", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . Please test on a current nightly, and when goes out it should have the backport via . 
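The debuginfo tests above exercise `static mut` globals compiled with `-g -C lto`, which is the combination that regressed. As a hedged, self-contained sketch of the same shape (trimmed to two statics; names follow the test but the program itself is illustrative, not the original repro): the statics are read at runtime so they are not constant-folded away, mirroring the `N.B.` comment in the test.

```rust
// Minimal sketch of the kind of program the debuginfo test checks.
// Compile with `rustc -g -C lto repro.rs`, then inspect the binary with
// gdb or llvm-dwarfdump to confirm the statics still appear in DWARF.

// `mut` keeps the statics from being constant-folded away, as noted
// in the original test file.
static mut B: bool = false;
static mut I32: i32 = -32;

fn main() {
    // Read the statics so they stay live in the final binary.
    let (b, i) = unsafe { (B, I32) };
    println!("{} {}", b, i);
}
```

Running it prints `false -32`; the interesting part is not the output but whether the debugger can still resolve `B` and `I32` after LTO.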
It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: // Caveats - gdb prints any 8-bit value (meaning rust I8 and u8 values) // as its numerical value along with its associated ASCII char, there // doesn't seem to be any way around this. Also, gdb doesn't know // about UTF-32 character encoding and will print a rust char as only // its numerical value. // Caveat - gdb doesn't know about UTF-32 character encoding and will print a // rust char as only its numerical value. // min-lldb-version: 310 // ignore-gdb // Test temporarily ignored due to debuginfo tests being disabled, see PR 47155 // min-gdb-version: 8.0 // compile-flags:-g // gdb-command:run", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-27dd75f2bd2b57cfe6e866fc8cbcee0ffc72beba7c419da644790aa17b71607f", "query": "As of the 2021-09-23 nightly, rustc has stopped producing debug output for variables in a specific combination of circumstances: The variable is It is defined in a lib crate. The lib crate is included in a bin crate. The bin crate is built with . We noticed this after bumping past 09-23 to the 2021-10-27 nightly, when our debugger broke. The static is still being included in the ELF output (both allocated space, and represented correctly in the symtab), but is missing from DWARF. Bisecting indicates 2021-09-23 as the culprit (09-22 works). Here is a repro case that I've reduced that works on linux x86-64, at least, and probably other DWARF platforms. modify labels: +regression-from-stable-to-nightly -regression-untriaged\nBumping to regression-stable-to-beta, because 2021-09-23 is a 1.57 nightly. But would be good to confirm this reproduces on beta.\npoints at (). That being a rollup, looks the most relevant.\nAssigning priority as discussed in the of the Prioritization Working Group. label -I-prioritize +P-high\nI think this may be a duplicate of . 
Please test on a current nightly, and when goes out it should have the backport via . It's still possible that introduced its own problem, but maybe it just exacerbated the LLVM issue.\nIt looks to me like the problem still reoccurs on the current nightly: // gdbr-check:$3 = 97 'a' // gdbr-check:$3 = 97 // gdbg-command:print/d 'basic_types_globals::I8' // gdbr-command:print I8 // gdb-check:$4 = 68", "commid": "rust_pr_95685"}], "negative_passages": []} {"query_id": "q-en-rust-da3e893f9e97d0f0dee9720faa1a8c6cd3b3807cc50fa473adcec29852350692", "query": " $DIR/issue-90444.rs:3:16 | LL | fn from(_: fn((), (), &mut ())) -> Self { | ^^^^^^^^^^^^^^^^^^^ | | | types differ in mutability | help: change the parameter type to match the trait: `for<'r> fn((), (), &'r ())` | = note: expected fn pointer `fn(for<'r> fn((), (), &'r ())) -> A` found fn pointer `fn(for<'r> fn((), (), &'r mut ())) -> A` error[E0053]: method `from` has an incompatible type for trait --> $DIR/issue-90444.rs:11:16 | LL | fn from(_: fn((), (), u64)) -> Self { | ^^^^^^^^^^^^^^^ | | | expected `u32`, found `u64` | help: change the parameter type to match the trait: `fn((), (), u32)` | = note: expected fn pointer `fn(fn((), (), u32)) -> B` found fn pointer `fn(fn((), (), u64)) -> B` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0053`. ", "commid": "rust_pr_90646"}], "negative_passages": []} {"query_id": "q-en-rust-b497ffb2768fbe931338b55809aa60d52d95be9cb9ecca7274d463f200361997", "query": "This function is only used once, with the current session's directory. Is there a reason to keep it, or should it be replaced by ? 
Originally posted by in $DIR/issue-90871.rs:2:8 | LL | 2: n([u8; || 1]) | ^ expecting a type here because of type ascription error[E0308]: mismatched types --> $DIR/issue-90871.rs:2:15 | LL | 2: n([u8; || 1]) | ^^^^ expected `usize`, found closure | = note: expected type `usize` found closure `[closure@$DIR/issue-90871.rs:2:15: 2:17]` help: use parentheses to call this closure | LL | 2: n([u8; (|| 1)()]) | + +++ error: aborting due to 2 previous errors Some errors have detailed explanations: E0308, E0412. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_100105"}], "negative_passages": []} {"query_id": "q-en-rust-547a355dfa91914a25c47a235fcacabab72ec3b1e4f4ee0b845fa851b451547b", "query": "Consider the following code with custom DSTs: The generated assembly for x86_64 is as follows (with my comments ): This can be seen on . Why does the compiler check for zero alignment? It seems that it's impossible, as the alignment cannot be zero. :\nWhy alignment can't be zero ? Are you sure it's check for alignment ? As I read the asm using for ref: test if is zero, use 1 as default, move rcx if it was not zero. make sense for me but asm is not my field of expertise and even less vtable so I don't really understand why put 1 instead of just move rcx assuming the alignment should be correct but I guess I miss a crucial information about rust internal. I guess is why rust need to test since it's \"maybe\" sized\nAdding gets me So, at least as far as the compiler is concerned, there is no obvious reason it could not be a zero-sized type. I could believe this is a missed optimization, where the compiler could infer that because of the i8 field, it has to be aligned to at least 1, but I do not believe it is a bug. It seems to satisfy the current reasoning around alignment and DSTs.\nAs stated in , Yes. 
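The `help: use parentheses to call this closure` suggestion in the issue-90871 output above can be illustrated in a runtime context (note that an array length is a const context, where a closure call still would not compile, so this hedged sketch moves the same fix into ordinary code; all names here are illustrative):

```rust
fn main() {
    let make = || 1u8;

    // Without parentheses, the closure itself is the value and the types
    // mismatch (`closure` vs `u8`). Wrapping it in parentheses and calling
    // it, as the diagnostic suggests, yields the u8.
    let n: u8 = (|| 1u8)();

    assert_eq!(n, make());
    println!("{}", n);
}
```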
The references to consist of two pointers, one of with is a base pointer ( in the assembly above) and the second one ( in the assembly above) is the pointer to vtable. The entry of vtable with index (written as in assembly) is the alignment (as it can be seen ).\nHow does the compilation error above indicate that? Also, ZSTs has alignment 1, as you can check using Why it's not a bug? The compiler generates a seemingly unnecessary instruction, which is an optimization issue. I agree that the issue is quite minor, though, and most probably it doesn't cause observable slowdown. Maybe there are some reasons why the alignment in vtable can be zero, but it seems to contradict the documentation.\nI fail to see any explanation in the link that say that, is about the alignment of the vtable not its position or whatever registry contain it. For me it's make more sense it's the size of the object or something, why this code would need the vtable anyway ? It's probably search the alignement of to know the offset of the strucutre: Again just random guess.\nIt's the alignment of the type corresponding to this vtable, not the alignment of vtable itself. Yes, it is. The issue is that while calculating the offset dynamically, it does redundantly check whether the alignment is zero.\nwhy do you suppose rust internal doesn't need this ? having a zero alignment could mean something like unknown alignment ? I mean you first need to prove test for zero is useless before say it's seem useless. We don't know for sure what Rust really need to do here, the type is dyn, maybe Sized. That not trivial, one should perfectly know Rust internal to know if this check is useless. Sorry for my poor knowledge in this if my comment are just losing your time just ignore them, I just try to understand your point.\nI didn't dive into Rust internals code, but I know that the compiler doesn't always check for zero alignment. 
Here's the example: gives just But, even if I add the transparent wrapper, the check for zero is generated. See the following example: The generated assembly becomes much more complex: I don't have an idea on why the check happens. If it's omitted sometimes, then the compiler kind of knows that the alignment in vtables cannot be zero, yet it generates the check in a more complex example.\nI think the diff is caused by versus but you use transparent so here I agree with you it's look strange. You convince me.\nThis check is generated by the following code: and is used to determine the runtime alignment of the field, which is . In the general case, this check is required, because the alignment of may be 1. You can see this if you change the field in to an , which has 2-byte alignment: In the specific cases you've identified, where the alignment of is 1, I agree that it is unnecessary. When loading alignment from the vtable, we should be adding range metadata to tell LLVM that it's nonzero. (opened for this)", "positive_passages": [{"docid": "doc-en-rust-afaace0481a4bc69f3b44ac2eccd787ca37fe13b7902989809a339f919fa4cd6", "text": "use crate::meth; use crate::traits::*; use rustc_middle::ty::{self, Ty}; use rustc_target::abi::WrappingRange; pub fn size_and_align_of_dst<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>( bx: &mut Bx,", "commid": "rust_pr_91569"}], "negative_passages": []} {"query_id": "q-en-rust-547a355dfa91914a25c47a235fcacabab72ec3b1e4f4ee0b845fa851b451547b", "query": "Consider the following code with custom DSTs: The generated assembly for x86_64 is as follows (with my comments ): This can be seen on . Why does the compiler check for zero alignment? It seems that it's impossible, as the alignment cannot be zero. :\nWhy alignment can't be zero ? Are you sure it's check for alignment ? As I read the asm using for ref: test if is zero, use 1 as default, move rcx if it was not zero. 
make sense for me but asm is not my field of expertise and even less vtable so I don't really understand why put 1 instead of just move rcx assuming the alignment should be correct but I guess I miss a crucial information about rust internal. I guess is why rust need to test since it's \"maybe\" sized\nAdding gets me So, at least as far as the compiler is concerned, there is no obvious reason it could not be a zero-sized type. I could believe this is a missed optimization, where the compiler could infer that because of the i8 field, it has to be aligned to at least 1, but I do not believe it is a bug. It seems to satisfy the current reasoning around alignment and DSTs.\nAs stated in , Yes. The references to consist of two pointers, one of with is a base pointer ( in the assembly above) and the second one ( in the assembly above) is the pointer to vtable. The entry of vtable with index (written as in assembly) is the alignment (as it can be seen ).\nHow does the compilation error above indicate that? Also, ZSTs has alignment 1, as you can check using Why it's not a bug? The compiler generates a seemingly unnecessary instruction, which is an optimization issue. I agree that the issue is quite minor, though, and most probably it doesn't cause observable slowdown. Maybe there are some reasons why the alignment in vtable can be zero, but it seems to contradict the documentation.\nI fail to see any explanation in the link that say that, is about the alignment of the vtable not its position or whatever registry contain it. For me it's make more sense it's the size of the object or something, why this code would need the vtable anyway ? It's probably search the alignement of to know the offset of the strucutre: Again just random guess.\nIt's the alignment of the type corresponding to this vtable, not the alignment of vtable itself. Yes, it is. 
The issue is that while calculating the offset dynamically, it does redundantly check whether the alignment is zero.\nwhy do you suppose rust internal doesn't need this ? having a zero alignment could mean something like unknown alignment ? I mean you first need to prove test for zero is useless before say it's seem useless. We don't know for sure what Rust really need to do here, the type is dyn, maybe Sized. That not trivial, one should perfectly know Rust internal to know if this check is useless. Sorry for my poor knowledge in this if my comment are just losing your time just ignore them, I just try to understand your point.\nI didn't dive into Rust internals code, but I know that the compiler doesn't always check for zero alignment. Here's the example: gives just But, even if I add the transparent wrapper, the check for zero is generated. See the following example: The generated assembly becomes much more complex: I don't have an idea on why the check happens. If it's omitted sometimes, then the compiler kind of knows that the alignment in vtables cannot be zero, yet it generates the check in a more complex example.\nI think the diff is caused by versus but you use transparent so here I agree with you it's look strange. You convince me.\nThis check is generated by the following code: and is used to determine the runtime alignment of the field, which is . In the general case, this check is required, because the alignment of may be 1. You can see this if you change the field in to an , which has 2-byte alignment: In the specific cases you've identified, where the alignment of is 1, I agree that it is unnecessary. When loading alignment from the vtable, we should be adding range metadata to tell LLVM that it's nonzero. (opened for this)", "positive_passages": [{"docid": "doc-en-rust-7a0e1a4ef663fe17b69f557bb83876b3e36505edea7b79409c5161c4c6150f9e", "text": "} match t.kind() { ty::Dynamic(..) => { // load size/align from vtable // Load size/align from vtable. 
let vtable = info.unwrap(); ( meth::VirtualIndex::from_index(ty::COMMON_VTABLE_ENTRIES_SIZE) .get_usize(bx, vtable), meth::VirtualIndex::from_index(ty::COMMON_VTABLE_ENTRIES_ALIGN) .get_usize(bx, vtable), ) let size = meth::VirtualIndex::from_index(ty::COMMON_VTABLE_ENTRIES_SIZE) .get_usize(bx, vtable); let align = meth::VirtualIndex::from_index(ty::COMMON_VTABLE_ENTRIES_ALIGN) .get_usize(bx, vtable); // Alignment is always nonzero. bx.range_metadata(align, WrappingRange { start: 1, end: !0 }); (size, align) } ty::Slice(_) | ty::Str => { let unit = layout.field(bx, 0);", "commid": "rust_pr_91569"}], "negative_passages": []} {"query_id": "q-en-rust-547a355dfa91914a25c47a235fcacabab72ec3b1e4f4ee0b845fa851b451547b", "query": "Consider the following code with custom DSTs: The generated assembly for x86_64 is as follows (with my comments ): This can be seen on . Why does the compiler check for zero alignment? It seems that it's impossible, as the alignment cannot be zero. :\nWhy alignment can't be zero ? Are you sure it's check for alignment ? As I read the asm using for ref: test if is zero, use 1 as default, move rcx if it was not zero. make sense for me but asm is not my field of expertise and even less vtable so I don't really understand why put 1 instead of just move rcx assuming the alignment should be correct but I guess I miss a crucial information about rust internal. I guess is why rust need to test since it's \"maybe\" sized\nAdding gets me So, at least as far as the compiler is concerned, there is no obvious reason it could not be a zero-sized type. I could believe this is a missed optimization, where the compiler could infer that because of the i8 field, it has to be aligned to at least 1, but I do not believe it is a bug. It seems to satisfy the current reasoning around alignment and DSTs.\nAs stated in , Yes. 
The references to consist of two pointers, one of with is a base pointer ( in the assembly above) and the second one ( in the assembly above) is the pointer to vtable. The entry of vtable with index (written as in assembly) is the alignment (as it can be seen ).\nHow does the compilation error above indicate that? Also, ZSTs has alignment 1, as you can check using Why it's not a bug? The compiler generates a seemingly unnecessary instruction, which is an optimization issue. I agree that the issue is quite minor, though, and most probably it doesn't cause observable slowdown. Maybe there are some reasons why the alignment in vtable can be zero, but it seems to contradict the documentation.\nI fail to see any explanation in the link that say that, is about the alignment of the vtable not its position or whatever registry contain it. For me it's make more sense it's the size of the object or something, why this code would need the vtable anyway ? It's probably search the alignement of to know the offset of the strucutre: Again just random guess.\nIt's the alignment of the type corresponding to this vtable, not the alignment of vtable itself. Yes, it is. The issue is that while calculating the offset dynamically, it does redundantly check whether the alignment is zero.\nwhy do you suppose rust internal doesn't need this ? having a zero alignment could mean something like unknown alignment ? I mean you first need to prove test for zero is useless before say it's seem useless. We don't know for sure what Rust really need to do here, the type is dyn, maybe Sized. That not trivial, one should perfectly know Rust internal to know if this check is useless. Sorry for my poor knowledge in this if my comment are just losing your time just ignore them, I just try to understand your point.\nI didn't dive into Rust internals code, but I know that the compiler doesn't always check for zero alignment. 
Here's the example: gives just But, even if I add the transparent wrapper, the check for zero is generated. See the following example: The generated assembly becomes much more complex: I don't have an idea on why the check happens. If it's omitted sometimes, then the compiler kind of knows that the alignment in vtables cannot be zero, yet it generates the check in a more complex example.\nI think the diff is caused by versus but you use transparent so here I agree with you it's look strange. You convince me.\nThis check is generated by the following code: and is used to determine the runtime alignment of the field, which is . In the general case, this check is required, because the alignment of may be 1. You can see this if you change the field in to an , which has 2-byte alignment: In the specific cases you've identified, where the alignment of is 1, I agree that it is unnecessary. When loading alignment from the vtable, we should be adding range metadata to tell LLVM that it's nonzero. (opened for this)", "positive_passages": [{"docid": "doc-en-rust-2f4454abc8c2f0d98b3eedc95428cdd2913f9c73e9e6bfb87c6ccea874544957", "text": " // compile-flags: -O #![crate_type = \"lib\"] // This test checks that we annotate alignment loads from vtables with nonzero range metadata, // and that this allows LLVM to eliminate redundant `align >= 1` checks. 
pub trait Trait { fn f(&self); } pub struct WrapperWithAlign1 { x: u8, y: T } pub struct WrapperWithAlign2 { x: u16, y: T } pub struct Struct { _field: i8, dst: W, } // CHECK-LABEL: @eliminates_runtime_check_when_align_1 #[no_mangle] pub fn eliminates_runtime_check_when_align_1( x: &Struct> ) -> &WrapperWithAlign1 { // CHECK: load [[USIZE:i[0-9]+]], {{.+}} !range [[RANGE_META:![0-9]+]] // CHECK-NOT: icmp // CHECK-NOT: select // CHECK: ret &x.dst } // CHECK-LABEL: @does_not_eliminate_runtime_check_when_align_2 #[no_mangle] pub fn does_not_eliminate_runtime_check_when_align_2( x: &Struct> ) -> &WrapperWithAlign2 { // CHECK: [[X0:%[0-9]+]] = load [[USIZE]], {{.+}} !range [[RANGE_META]] // CHECK: [[X1:%[0-9]+]] = icmp {{.+}} [[X0]] // CHECK: [[X2:%[0-9]+]] = select {{.+}} [[X1]] // CHECK: ret &x.dst } // CHECK: [[RANGE_META]] = !{[[USIZE]] 1, [[USIZE]] 0} ", "commid": "rust_pr_91569"}], "negative_passages": []} {"query_id": "q-en-rust-b6488972e8ce63112cd8768af311936de620bbce7ebb0d0df674a58a6fa02972", "query": "I've rebased this morning and I can't build the repo anymore since landed. It seems there are missing tools in the LLVM CI tarball. AFAICT, these are the tools CI copies in the tarball: And these are the tools bootstrap tries to install in : being defined as: As you can see, there are many tools in that are not included in that step shown above. I'm on macOS, and here is the list of tools that I found in the downloaded LLVM CI tarball: Cc who authored .\nI think we should disable copying those tools under ci-llvm for now, and then decide if we want to include them in the ci-llvm artifact or not.\nmakes sense to me!\nActually the problem seems to be bigger, for now I disabled using the LLVM CI tarball locally and it still failed.\nWhat error are you getting with it disabled?\nExactly the same. I restarted the whole build after running , I'll report back once (if) the error reproduces again.\nAh! 
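The discussion and codegen test above are about the size/align loads from a `dyn` object's vtable and the nonzero `!range` metadata attached to the alignment load. A hedged sketch of the same runtime behaviour from safe code, using `std::mem::{size_of_val, align_of_val}` (the struct and trait names here are illustrative, not from the PR):

```rust
use std::mem::{align_of_val, size_of_val};

trait Trait {
    fn f(&self);
}

struct S(u16);
impl Trait for S {
    fn f(&self) {}
}

fn main() {
    let s = S(7);
    let d: &dyn Trait = &s;

    // For a trait object the size and alignment are not statically known;
    // they are loaded from the vtable at runtime. That load is the one the
    // codegen test annotates with nonzero range metadata.
    assert_eq!(size_of_val(d), 2);
    assert_eq!(align_of_val(d), 2);

    // Alignment can never be zero, so [1, usize::MAX] is a sound range.
    assert!(align_of_val(d) >= 1);
    println!("{} {}", size_of_val(d), align_of_val(d));
}
```

With the range metadata in place, LLVM can fold away `align >= 1` checks for 1-aligned tails, which is exactly the redundant `test`/`cmov` sequence the reporter saw in the assembly.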
The compiler profile sets to , my bad, I'll explicitly override that to in my config and the problem will probably go away. I should have noticed I didn't actually rebuild LLVM hah.\nI'll open a PR to make copying the tools conditional on LLVM being built locally.\nOr just call , like I suggested originally ... I'm not sure why dist is being treated differently from all the other build commands.\nThose tools don't exist in the downloaded archive, so won't help here", "positive_passages": [{"docid": "doc-en-rust-0dd44f224e8563f63506801d640485e3766631e1f73b7f082c3623ce35b2fb4c", "text": "// (e.g. the `bootimage` crate). for tool in LLVM_TOOLS { let tool_exe = exe(tool, target_compiler.host); builder.copy(&llvm_bin_dir.join(&tool_exe), &libdir_bin.join(&tool_exe)); let src_path = llvm_bin_dir.join(&tool_exe); // When using `donwload-ci-llvm`, some of the tools // may not exist, so skip trying to copy them. if src_path.exists() { builder.copy(&src_path, &libdir_bin.join(&tool_exe)); } } } }", "commid": "rust_pr_91720"}], "negative_passages": []} {"query_id": "q-en-rust-bfcef229268ab132206e13379ee4cf5df4e7dea3f90cab3fb01350431e065876", "query": "The following code works in 1.56.0 but crashes 1.57.0 as well as the most recent nightly — . HashStable for InferTy { fn hash_stable(&self, ctx: &mut CTX, hasher: &mut StableHasher) { use InferTy::*; discriminant(self).hash_stable(ctx, hasher); match self { TyVar(v) => v.as_u32().hash_stable(ctx, hasher), IntVar(v) => v.index.hash_stable(ctx, hasher),", "commid": "rust_pr_91892"}], "negative_passages": []} {"query_id": "q-en-rust-bfcef229268ab132206e13379ee4cf5df4e7dea3f90cab3fb01350431e065876", "query": "The following code works in 1.56.0 but crashes 1.57.0 as well as the most recent nightly — . 
// check-pass // incremental struct Struct(T); impl std::ops::Deref for Struct { type Target = dyn Fn(T); fn deref(&self) -> &Self::Target { unimplemented!() } } fn main() { let f = Struct(Default::default()); f(0); f(0); } ", "commid": "rust_pr_91892"}], "negative_passages": []} {"query_id": "q-en-rust-59263108c9524243f8bc03354294c2b77e1f7ac4cdc637a6f8ac7af96658fc7b", "query": "contains two sets of methods to access the Arc's wrapped object. and are tagged unsafe as \"it is possible to construct a circular reference among multiple Arcs by mutating the underlying data. This creates potential for deadlock, but worse, this will guarantee a memory leak of all involved Arcs.\" There are also and methods defined when the inner type is freezable as that guarantees the type can't contain a . However, it's still trivially easy to deadlock using the safe methods: It seems that the consensus in was that possibility of deadlock is inherently unsafe, so should and be removed? cc\nPerhaps access should fail when it is called by the same task twice? It would be nice if people didn't have to use unsafe blocks just to get normal stuff done.\nNested locks in the same task aren't the only way you can run into deadlock:\nI am aware of that, but it seems to me this particular error would be easy and cheap to catch, so if we aren't doing that yet, we probably should. Personally I don't think the possibility of a deadlock should require an unsafe block.\nConsensus in the past has been that deadlock is not unsafe (). You can deadlock with pipes too, with just two lines of code. The non- accessors for are unsafe for a totally different reason, which is documented in . Regarding \"cheap and easy to catch\" - I don't believe we should try to fail in the single-task deadlock case unless we can reliably fail in all deadlock cases (which is not out of the question but requires scheduler support, see ). 
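The incremental test above is compile-only (`check-pass`), with `deref` left as `unimplemented!()`. A hedged runnable variant of the same shape — the boxed-closure field is my addition so the `Deref`-driven call actually executes:

```rust
use std::ops::Deref;

// Variant of the incremental-ICE repro that can actually run: the wrapper
// stores a boxed closure for `deref` to hand out.
struct Struct<T>(Box<dyn Fn(T) -> T>);

impl<T> Deref for Struct<T> {
    type Target = dyn Fn(T) -> T;
    fn deref(&self) -> &Self::Target {
        &*self.0
    }
}

fn main() {
    let f = Struct(Box::new(|x: i32| x + 1));
    // Call syntax resolves through `Deref`, just as `f(0); f(0);`
    // does in the original repro.
    assert_eq!(f(0), 1);
    assert_eq!(f(41), 42);
    println!("{}", f(0));
}
```

The repeated call is the point: the original crash needed the `Deref`-to-`dyn Fn` resolution to be hashed stably across incremental sessions.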
A user who is used to seeing a failure message from their previous attempts which had deadlocked would be all the more confused when they find their cross-task deadlocking code does not fail.\nIs the possibility of memory leakage really considered unsafe? doesn't seem to share that assumption, but I may be missing something there.\nNeither or provides a way to leak memory via the safe constructors.\nWe currently don't officially consider leaks to be , but I think leaking past the lifetime of tasks with references is a very questionable thing to allow.\nThe box annihilator is not a good solution to rc reference cycles, but it is at least a solution. Cycles of MutexArcs will produce valgrind errors.\nnow provides a way to leak with the safe constructors.", "positive_passages": [{"docid": "doc-en-rust-0386c8ada77b6c294dc7fb3969fd5a7eadd377a04cf2cc4a3b83ceacaf574c41", "text": "* other tasks wishing to access the data will block until the closure * finishes running. * * The reason this function is 'unsafe' is because it is possible to * construct a circular reference among multiple Arcs by mutating the * underlying data. This creates potential for deadlock, but worse, this * will guarantee a memory leak of all involved Arcs. Using MutexArcs * inside of other Arcs is safe in absence of circular references. * * If you wish to nest MutexArcs, one strategy for ensuring safety at * runtime is to add a \"nesting level counter\" inside the stored data, and * when traversing the arcs, assert that they monotonically decrease.", "commid": "rust_pr_12336.0"}], "negative_passages": []} {"query_id": "q-en-rust-59263108c9524243f8bc03354294c2b77e1f7ac4cdc637a6f8ac7af96658fc7b", "query": "contains two sets of methods to access the Arc's wrapped object. and are tagged unsafe as \"it is possible to construct a circular reference among multiple Arcs by mutating the underlying data. 
This creates potential for deadlock, but worse, this will guarantee a memory leak of all involved Arcs.\" There are also and methods defined when the inner type is freezable as that guarantees the type can't contain a . However, it's still trivially easy to deadlock using the safe methods: It seems that the consensus in was that possibility of deadlock is inherently unsafe, so should and be removed? cc\nPerhaps access should fail when it is called by the same task twice? It would be nice if people didn't have to use unsafe blocks just to get normal stuff done.\nNested locks in the same task aren't the only way you can run into deadlock:\nI am aware of that, but it seems to me this particular error would be easy and cheap to catch, so if we aren't doing that yet, we probably should. Personally I don't think the possibility of a deadlock should require an unsafe block.\nConsensus in the past has been that deadlock is not unsafe (). You can deadlock with pipes too, with just two lines of code. The non- accessors for are unsafe for a totally different reason, which is documented in . Regarding \"cheap and easy to catch\" - I don't believe we should try to fail in the single-task deadlock case unless we can reliably fail in all deadlock cases (which is not out of the question but requires scheduler support, see ). A user who is used to seeing a failure message from their previous attempts which had deadlocked would be all the more confused when they find their cross-task deadlocking code does not fail.\nIs the possibility of memory leakage really considered unsafe? doesn't seem to share that assumption, but I may be missing something there.\nNeither or provides a way to leak memory via the safe constructors.\nWe currently don't officially consider leaks to be , but I think leaking past the lifetime of tasks with references is a very questionable thing to allow.\nThe box annihilator is not a good solution to rc reference cycles, but it is at least a solution. 
Cycles of MutexArcs will produce valgrind errors.\nnow provides a way to leak with the safe constructors.", "positive_passages": [{"docid": "doc-en-rust-64a933ce468271cb0dbc0944214402036bf9d8cb8d530a0eff0415777c4d12c0", "text": "* blocked on the mutex) will also fail immediately. */ #[inline] pub unsafe fn unsafe_access(&self, blk: |x: &mut T| -> U) -> U { pub fn access(&self, blk: |x: &mut T| -> U) -> U { let state = self.x.get(); // Borrowck would complain about this if the function were // not already unsafe. See borrow_rwlock, far below. (&(*state).lock).lock(|| { check_poison(true, (*state).failed); let _z = PoisonOnFail::new(&mut (*state).failed); blk(&mut (*state).data) }) unsafe { // Borrowck would complain about this if the code were // not already unsafe. See borrow_rwlock, far below. (&(*state).lock).lock(|| { check_poison(true, (*state).failed); let _z = PoisonOnFail::new(&mut (*state).failed); blk(&mut (*state).data) }) } } /// As unsafe_access(), but with a condvar, as sync::mutex.lock_cond(). /// As access(), but with a condvar, as sync::mutex.lock_cond(). #[inline] pub unsafe fn unsafe_access_cond(&self, blk: |x: &mut T, c: &Condvar| -> U) -> U { pub fn access_cond(&self, blk: |x: &mut T, c: &Condvar| -> U) -> U { let state = self.x.get(); (&(*state).lock).lock_cond(|cond| { check_poison(true, (*state).failed); let _z = PoisonOnFail::new(&mut (*state).failed); blk(&mut (*state).data, &Condvar {is_mutex: true, failed: &(*state).failed, cond: cond }) }) } } impl MutexArc { /** * As unsafe_access. * * The difference between access and unsafe_access is that the former * forbids mutexes to be nested. While unsafe_access can be used on * MutexArcs without freezable interiors, this safe version of access * requires the Freeze bound, which prohibits access on MutexArcs which * might contain nested MutexArcs inside. 
* * The purpose of this is to offer a safe implementation of MutexArc to be * used instead of RWArc in cases where no readers are needed and slightly * better performance is required. * * Both methods have the same failure behaviour as unsafe_access and * unsafe_access_cond. */ #[inline] pub fn access(&self, blk: |x: &mut T| -> U) -> U { unsafe { self.unsafe_access(blk) } } /// As unsafe_access_cond but safe and Freeze. #[inline] pub fn access_cond(&self, blk: |x: &mut T, c: &Condvar| -> U) -> U { unsafe { self.unsafe_access_cond(blk) } unsafe { (&(*state).lock).lock_cond(|cond| { check_poison(true, (*state).failed); let _z = PoisonOnFail::new(&mut (*state).failed); blk(&mut (*state).data, &Condvar {is_mutex: true, failed: &(*state).failed, cond: cond }) }) } } }", "commid": "rust_pr_12336.0"}], "negative_passages": []} {"query_id": "q-en-rust-59263108c9524243f8bc03354294c2b77e1f7ac4cdc637a6f8ac7af96658fc7b", "query": "contains two sets of methods to access the Arc's wrapped object. and are tagged unsafe as \"it is possible to construct a circular reference among multiple Arcs by mutating the underlying data. This creates potential for deadlock, but worse, this will guarantee a memory leak of all involved Arcs.\" There are also and methods defined when the inner type is freezable as that guarantees the type can't contain a . However, it's still trivially easy to deadlock using the safe methods: It seems that the consensus in was that possibility of deadlock is inherently unsafe, so should and be removed? cc\nPerhaps access should fail when it is called by the same task twice? It would be nice if people didn't have to use unsafe blocks just to get normal stuff done.\nNested locks in the same task aren't the only way you can run into deadlock:\nI am aware of that, but it seems to me this particular error would be easy and cheap to catch, so if we aren't doing that yet, we probably should. 
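The diff above (from rust PR 12336) drops the `unsafe` qualifier from `MutexArc::access` by moving the `unsafe` block inside the function body, so callers get a safe lock-then-run-closure API. That closure-based shape maps directly onto today's `std::sync::Mutex`. The sketch below is illustrative only — the `access` helper is a hypothetical name, not part of `std` — and it reflects the thread's consensus that deadlock is possible in safe code without being memory-unsafe:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Closure-based access in the style of the old MutexArc::access:
// lock the mutex, run `blk` on the data, unlock when the guard drops.
// (Hypothetical helper for illustration; not part of std.)
fn access<T, U>(arc: &Arc<Mutex<T>>, blk: impl FnOnce(&mut T) -> U) -> U {
    // `lock` returns Err if another thread panicked while holding the
    // lock — the modern analogue of the old poison check.
    let mut guard = arc.lock().expect("mutex poisoned by a panicking thread");
    blk(&mut guard)
}

fn main() {
    let arc = Arc::new(Mutex::new(1));
    let arc2 = Arc::clone(&arc);

    // Another task/thread mutating the shared data through the same API.
    let handle = thread::spawn(move || access(&arc2, |n| *n += 1));
    handle.join().unwrap();

    assert_eq!(access(&arc, |n| *n), 2);
    println!("final value: {}", access(&arc, |n| *n));
}
```

Note that nothing stops safe code from calling `access` recursively on the same mutex and deadlocking — which is exactly why the discussion concluded that deadlock alone does not justify an `unsafe` signature.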
Personally I don't think the possibility of a deadlock should require an unsafe block.\nConsensus in the past has been that deadlock is not unsafe (). You can deadlock with pipes too, with just two lines of code. The non- accessors for are unsafe for a totally different reason, which is documented in . Regarding \"cheap and easy to catch\" - I don't believe we should try to fail in the single-task deadlock case unless we can reliably fail in all deadlock cases (which is not out of the question but requires scheduler support, see ). A user who is used to seeing a failure message from their previous attempts which had deadlocked would be all the more confused when they find their cross-task deadlocking code does not fail.\nIs the possibility of memory leakage really considered unsafe? doesn't seem to share that assumption, but I may be missing something there.\nNeither or provides a way to leak memory via the safe constructors.\nWe currently don't officially consider leaks to be , but I think leaking past the lifetime of tasks with references is a very questionable thing to allow.\nThe box annihilator is not a good solution to rc reference cycles, but it is at least a solution. Cycles of MutexArcs will produce valgrind errors.\nnow provides a way to leak with the safe constructors.", "positive_passages": [{"docid": "doc-en-rust-2cfd74c485a73aa7bb3a9584535f5df76995159938ac5fbc00ab222477adc936", "text": "impl Clone for CowArc { /// Duplicate a Copy-on-write Arc. See arc::clone for more details. #[inline] fn clone(&self) -> CowArc { CowArc { x: self.x.clone() } }", "commid": "rust_pr_12336.0"}], "negative_passages": []} {"query_id": "q-en-rust-59263108c9524243f8bc03354294c2b77e1f7ac4cdc637a6f8ac7af96658fc7b", "query": "contains two sets of methods to access the Arc's wrapped object. and are tagged unsafe as \"it is possible to construct a circular reference among multiple Arcs by mutating the underlying data. 
This creates potential for deadlock, but worse, this will guarantee a memory leak of all involved Arcs.\" There are also and methods defined when the inner type is freezable as that guarantees the type can't contain a . However, it's still trivially easy to deadlock using the safe methods: It seems that the consensus in was that possibility of deadlock is inherently unsafe, so should and be removed? cc\nPerhaps access should fail when it is called by the same task twice? It would be nice if people didn't have to use unsafe blocks just to get normal stuff done.\nNested locks in the same task aren't the only way you can run into deadlock:\nI am aware of that, but it seems to me this particular error would be easy and cheap to catch, so if we aren't doing that yet, we probably should. Personally I don't think the possibility of a deadlock should require an unsafe block.\nConsensus in the past has been that deadlock is not unsafe (). You can deadlock with pipes too, with just two lines of code. The non- accessors for are unsafe for a totally different reason, which is documented in . Regarding \"cheap and easy to catch\" - I don't believe we should try to fail in the single-task deadlock case unless we can reliably fail in all deadlock cases (which is not out of the question but requires scheduler support, see ). A user who is used to seeing a failure message from their previous attempts which had deadlocked would be all the more confused when they find their cross-task deadlocking code does not fail.\nIs the possibility of memory leakage really considered unsafe? doesn't seem to share that assumption, but I may be missing something there.\nNeither or provides a way to leak memory via the safe constructors.\nWe currently don't officially consider leaks to be , but I think leaking past the lifetime of tasks with references is a very questionable thing to allow.\nThe box annihilator is not a good solution to rc reference cycles, but it is at least a solution. 
Cycles of MutexArcs will produce valgrind errors.\nnow provides a way to leak with the safe constructors.", "positive_passages": [{"docid": "doc-en-rust-ec02f9686599aa11b90648121c98a50b7a37aaf60bef82c20a7179ed8ec253d0", "text": "} #[test] fn test_unsafe_mutex_arc_nested() { unsafe { // Tests nested mutexes and access // to underlaying data. let arc = ~MutexArc::new(1); let arc2 = ~MutexArc::new(*arc); task::spawn(proc() { (*arc2).unsafe_access(|mutex| { (*mutex).access(|one| { assert!(*one == 1); }) fn test_mutex_arc_nested() { // Tests nested mutexes and access // to underlaying data. let arc = ~MutexArc::new(1); let arc2 = ~MutexArc::new(*arc); task::spawn(proc() { (*arc2).access(|mutex| { (*mutex).access(|one| { assert!(*one == 1); }) }); } }) }); } #[test]", "commid": "rust_pr_12336.0"}], "negative_passages": []} {"query_id": "q-en-rust-59263108c9524243f8bc03354294c2b77e1f7ac4cdc637a6f8ac7af96658fc7b", "query": "contains two sets of methods to access the Arc's wrapped object. and are tagged unsafe as \"it is possible to construct a circular reference among multiple Arcs by mutating the underlying data. This creates potential for deadlock, but worse, this will guarantee a memory leak of all involved Arcs.\" There are also and methods defined when the inner type is freezable as that guarantees the type can't contain a . However, it's still trivially easy to deadlock using the safe methods: It seems that the consensus in was that possibility of deadlock is inherently unsafe, so should and be removed? cc\nPerhaps access should fail when it is called by the same task twice? It would be nice if people didn't have to use unsafe blocks just to get normal stuff done.\nNested locks in the same task aren't the only way you can run into deadlock:\nI am aware of that, but it seems to me this particular error would be easy and cheap to catch, so if we aren't doing that yet, we probably should. 
extern crate sync;

use std::task;
use sync::MutexArc;

fn test_mutex_arc_nested() {
    let arc = ~MutexArc::new(1);
    let arc2 = ~MutexArc::new(*arc);
    task::spawn(proc() {
        (*arc2).access(|mutex| { //~ ERROR instantiating a type parameter with an incompatible type
        })
    });
}

fn main() {}
", "commid": "rust_pr_12336.0"}], "negative_passages": []}
{"query_id": "q-en-rust-b6b6e55be0c08a084250b1fbdfe5b6dc2e283acea53531fd5ae3f79025285d92", "query": "Feature gate: This is a tracking issue for scoped threads. RFC: https://rust- Documentation:
[x] RFC attempt 1:
[x] RFC attempt 2:
[x] Implementation:
[x] Change signatures a bit to remove the argument to the spawn closures:
[x] Fix soundness issue in implementation:
[x] Document lifetimes:
[x] Final comment period (FCP):
[x] Stabilization PR:
[x] Can we omit the argument to the functions given to ? That is, rather than . It's already possible by forcing the user to use instead, but that's not great. Maybe the language could be subtly changed to capture references or certain types by value rather than by reference(-to-reference). See also and the collapsed section in . Mostly answered in . Working idea in . Implementation in
[x] How to document the and lifetimes clearly without scaring people away.
Nominating this for team discussion for the unresolved question.
Awesome API, thanks again for taking care of driving this
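The last record's query is the tracking issue for scoped threads, which stabilized as `std::thread::scope` in Rust 1.63. A minimal sketch of the resulting API — note the closures given to `spawn` take no argument, which is the signature question the checklist above records as resolved, and the spawned threads may borrow non-`'static` locals because `scope` joins every thread before returning:

```rust
use std::thread;

fn main() {
    let mut counts = vec![0u32; 4];

    // Scoped threads may borrow data owned by the enclosing function;
    // all threads are guaranteed to be joined before `scope` returns,
    // so the &mut borrows of `counts` cannot outlive it.
    thread::scope(|s| {
        for (i, slot) in counts.iter_mut().enumerate() {
            // Each thread gets a disjoint &mut element via iter_mut().
            // The spawn closure takes no scope argument.
            s.spawn(move || *slot = (i as u32 + 1) * 10);
        }
    });

    assert_eq!(counts, vec![10, 20, 30, 40]);
    println!("{:?}", counts);
}
```

Before this API, the same borrow would have required `Arc` plus `'static` closures; the two lifetimes on `Scope` (the `'scope` and `'env` parameters mentioned in the documentation checklist) are what make the borrow sound.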