{"query_id": "q-en-rust-6bfd1043e57920f5ebdb2f39053599f3a97cc146682b8dfafd0f928d39f82647", "query": "I am attempting to use the () syntax to control how my test binary is compiled. I have created a repro at In , I tell to link 2 static libraries, one with (+whole-archive) and one without (-whole-archive). I add a nonsense link argument so I can inspect the produced link line. I expect to see linked without whole archive. Instead, I see both and libraries linked with --whole-archive: This may be related to the default documented . When I use nightly and specify it works as expected: However, I cannot use the nightly toolchain, so I cannot use , and seems inappropriate for a final binary crate types like cdylib, staticlib, and executables anyway. Tested with 1.62.1 stable and 1.64 nightly. Possibly related issues: ,\nThis is a bug introduced in The backward compatibility condition should look like and not .\nCould you submit a PR with this fix? (We can backport it to beta if it's done in time.)\nSure, I'll create a PR. I'm not sure what tests to include, however. I had been looking at that exact code and almost submitted a PR earlier, but I wasn't sure about the whole context of that code and all the compatibility needs.\nYou can try adding a test case with to . One with and , and another with and without any (the compatibility case).", "positive_passages": [{"docid": "doc-en-rust-4afe5857f53f951490166aa76a1b7c29ce180c19c88275a9279183844654065a", "text": "// be added explicitly if necessary, see the error in `fn link_rlib`) compiled // as an executable due to `--test`. Use whole-archive implicitly, like before // the introduction of native lib modifiers. || (bundle != Some(false) && sess.opts.test) || (whole_archive == None && bundle != Some(false) && sess.opts.test) { cmd.link_whole_staticlib( name,", "commid": "rust_pr_100068"}], "negative_passages": []} {"query_id": "q-en-rust-6bfd1043e57920f5ebdb2f39053599f3a97cc146682b8dfafd0f928d39f82647", "query": "I am attempting to use the () syntax to control how my test binary is compiled. I have created a repro at In , I tell to link 2 static libraries, one with (+whole-archive) and one without (-whole-archive). I add a nonsense link argument so I can inspect the produced link line. I expect to see linked without whole archive. Instead, I see both and libraries linked with --whole-archive: This may be related to the default documented . When I use nightly and specify it works as expected: However, I cannot use the nightly toolchain, so I cannot use , and seems inappropriate for a final binary crate types like cdylib, staticlib, and executables anyway. Tested with 1.62.1 stable and 1.64 nightly. Possibly related issues: ,\nThis is a bug introduced in The backward compatibility condition should look like and not .\nCould you submit a PR with this fix? (We can backport it to beta if it's done in time.)\nSure, I'll create a PR. I'm not sure what tests to include, however. I had been looking at that exact code and almost submitted a PR earlier, but I wasn't sure about the whole context of that code and all the compatibility needs.\nYou can try adding a test case with to . 
One with and , and another with and without any (the compatibility case).", "positive_passages": [{"docid": "doc-en-rust-624ae0ac265c5e384b2cc5141d19685b1cc3d46a07c1e1d230e838675c412dca", "text": "# ignore-cross-compile -- compiling C++ code does not work well when cross-compiling # This test case makes sure that native libraries are linked with --whole-archive semantics # when the `-bundle,+whole-archive` modifiers are applied to them. # This test case makes sure that native libraries are linked with appropriate semantics # when the `[+-]bundle,[+-]whole-archive` modifiers are applied to them. # # The test works by checking that the resulting executables produce the expected output, # part of which is emitted by otherwise unreferenced C code. If +whole-archive didn't work", "commid": "rust_pr_100068"}], "negative_passages": []} {"query_id": "q-en-rust-6bfd1043e57920f5ebdb2f39053599f3a97cc146682b8dfafd0f928d39f82647", "query": "I am attempting to use the () syntax to control how my test binary is compiled. I have created a repro at In , I tell to link 2 static libraries, one with (+whole-archive) and one without (-whole-archive). I add a nonsense link argument so I can inspect the produced link line. I expect to see linked without whole archive. Instead, I see both and libraries linked with --whole-archive: This may be related to the default documented . When I use nightly and specify it works as expected: However, I cannot use the nightly toolchain, so I cannot use , and seems inappropriate for a final binary crate types like cdylib, staticlib, and executables anyway. Tested with 1.62.1 stable and 1.64 nightly. Possibly related issues: ,\nThis is a bug introduced in The backward compatibility condition should look like and not .\nCould you submit a PR with this fix? (We can backport it to beta if it's done in time.)\nSure, I'll create a PR. I'm not sure what tests to include, however. I had been looking at that exact code and almost submitted a PR earlier, but I wasn't sure about the whole context of that code and all the compatibility needs.\nYou can try adding a test case with to . One with and , and another with and without any (the compatibility case).", "positive_passages": [{"docid": "doc-en-rust-b6731058b5b36a612c3849f7ce661953e5a368440fb315c251e35e53770074fd", "text": "-include ../../run-make-fulldeps/tools.mk all: $(TMPDIR)/$(call BIN,directly_linked) $(TMPDIR)/$(call BIN,indirectly_linked) $(TMPDIR)/$(call BIN,indirectly_linked_via_attr) all: $(TMPDIR)/$(call BIN,directly_linked) $(TMPDIR)/$(call BIN,directly_linked_test_plus_whole_archive) $(TMPDIR)/$(call BIN,directly_linked_test_minus_whole_archive) $(TMPDIR)/$(call BIN,indirectly_linked) $(TMPDIR)/$(call BIN,indirectly_linked_via_attr) $(call RUN,directly_linked) | $(CGREP) 'static-initializer.directly_linked.' $(call RUN,directly_linked_test_plus_whole_archive) --nocapture | $(CGREP) 'static-initializer.' $(call RUN,directly_linked_test_minus_whole_archive) --nocapture | $(CGREP) -v 'static-initializer.' $(call RUN,indirectly_linked) | $(CGREP) 'static-initializer.indirectly_linked.' $(call RUN,indirectly_linked_via_attr) | $(CGREP) 'static-initializer.native_lib_in_src.'", "commid": "rust_pr_100068"}], "negative_passages": []} {"query_id": "q-en-rust-6bfd1043e57920f5ebdb2f39053599f3a97cc146682b8dfafd0f928d39f82647", "query": "I am attempting to use the () syntax to control how my test binary is compiled. 
I have created a repro at In , I tell to link 2 static libraries, one with (+whole-archive) and one without (-whole-archive). I add a nonsense link argument so I can inspect the produced link line. I expect to see linked without whole archive. Instead, I see both and libraries linked with --whole-archive: This may be related to the default documented . When I use nightly and specify it works as expected: However, I cannot use the nightly toolchain, so I cannot use , and seems inappropriate for a final binary crate types like cdylib, staticlib, and executables anyway. Tested with 1.62.1 stable and 1.64 nightly. Possibly related issues: ,\nThis is a bug introduced in The backward compatibility condition should look like and not .\nCould you submit a PR with this fix? (We can backport it to beta if it's done in time.)\nSure, I'll create a PR. I'm not sure what tests to include, however. I had been looking at that exact code and almost submitted a PR earlier, but I wasn't sure about the whole context of that code and all the compatibility needs.\nYou can try adding a test case with to . One with and , and another with and without any (the compatibility case).", "positive_passages": [{"docid": "doc-en-rust-5c287c509077516e7d8920d4d582ea5496786a7d313c94b12f29b1ce5a9a9665", "text": "$(TMPDIR)/$(call BIN,directly_linked): $(call NATIVE_STATICLIB,c_static_lib_with_constructor) $(RUSTC) directly_linked.rs -l static:+whole-archive=c_static_lib_with_constructor # Native lib linked into test executable, +whole-archive $(TMPDIR)/$(call BIN,directly_linked_test_plus_whole_archive): $(call NATIVE_STATICLIB,c_static_lib_with_constructor) $(RUSTC) directly_linked_test_plus_whole_archive.rs --test -l static:+whole-archive=c_static_lib_with_constructor # Native lib linked into test executable, -whole-archive $(TMPDIR)/$(call BIN,directly_linked_test_minus_whole_archive): $(call NATIVE_STATICLIB,c_static_lib_with_constructor) $(RUSTC) directly_linked_test_minus_whole_archive.rs --test -l static:-whole-archive=c_static_lib_with_constructor # Native lib linked into RLIB via `-l static:-bundle,+whole-archive`, RLIB linked into executable $(TMPDIR)/$(call BIN,indirectly_linked): $(TMPDIR)/librlib_with_cmdline_native_lib.rlib $(RUSTC) indirectly_linked.rs", "commid": "rust_pr_100068"}], "negative_passages": []} {"query_id": "q-en-rust-6bfd1043e57920f5ebdb2f39053599f3a97cc146682b8dfafd0f928d39f82647", "query": "I am attempting to use the () syntax to control how my test binary is compiled. I have created a repro at In , I tell to link 2 static libraries, one with (+whole-archive) and one without (-whole-archive). I add a nonsense link argument so I can inspect the produced link line. I expect to see linked without whole archive. Instead, I see both and libraries linked with --whole-archive: This may be related to the default documented . When I use nightly and specify it works as expected: However, I cannot use the nightly toolchain, so I cannot use , and seems inappropriate for a final binary crate types like cdylib, staticlib, and executables anyway. Tested with 1.62.1 stable and 1.64 nightly. Possibly related issues: ,\nThis is a bug introduced in The backward compatibility condition should look like and not .\nCould you submit a PR with this fix? (We can backport it to beta if it's done in time.)\nSure, I'll create a PR. I'm not sure what tests to include, however. 
I had been looking at that exact code and almost submitted a PR earlier, but I wasn't sure about the whole context of that code and all the compatibility needs.\nYou can try adding a test case with to . One with and , and another with and without any (the compatibility case).", "positive_passages": [{"docid": "doc-en-rust-67471abdd0ddc9119e2079c17e37000dac506c26c58ff18e79b3e69938c381a3", "text": " use std::io::Write; #[test] fn test_thing() { print!(\"ran the test\"); std::io::stdout().flush().unwrap(); } ", "commid": "rust_pr_100068"}], "negative_passages": []} {"query_id": "q-en-rust-b9bac5eeea90d943d4c64a4113c95a2330a5912785a3a86724f8233e15b7e0db", "query": ": and then if I increase the recursion limit enough... adding a backtrace gives nothing new, however, compiling on nightly gives the following: { // Mark that we've failed to coerce the types here to suppress // any superfluous errors we might encounter while trying to // emit or provide suggestions on how to fix the initial error. fcx.set_tainted_by_errors(); let (expected, found) = if label_expression_as_expected { // In the case where this is a \"forced unit\", like // `break`, we want to call the `()` \"expected\"", "commid": "rust_pr_100261"}], "negative_passages": []} {"query_id": "q-en-rust-b9bac5eeea90d943d4c64a4113c95a2330a5912785a3a86724f8233e15b7e0db", "query": ": and then if I increase the recursion limit enough... adding a backtrace gives nothing new, however, compiling on nightly gives the following: #![recursion_limit = \"5\"] // To reduce noise //expect incompatible type error when ambiguous traits are in scope //and not an overflow error on the span in the main function. struct Ratio(T); pub trait Pow { fn pow(self) -> Self; } impl<'a, T> Pow for &'a Ratio where &'a T: Pow, { fn pow(self) -> Self { self } } fn downcast<'a, W: ?Sized>() -> std::io::Result<&'a W> { todo!() } struct Other; fn main() -> std::io::Result<()> { let other: Other = downcast()?;//~ERROR 28:24: 28:35: `?` operator has incompatible types Ok(()) } ", "commid": "rust_pr_100261"}], "negative_passages": []} {"query_id": "q-en-rust-b9bac5eeea90d943d4c64a4113c95a2330a5912785a3a86724f8233e15b7e0db", "query": ": and then if I increase the recursion limit enough... adding a backtrace gives nothing new, however, compiling on nightly gives the following: error[E0308]: `?` operator has incompatible types --> $DIR/issue-100246.rs:28:24 | LL | let other: Other = downcast()?; | ^^^^^^^^^^^ expected struct `Other`, found reference | = note: `?` operator cannot convert from `&_` to `Other` = note: expected struct `Other` found reference `&_` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_100261"}], "negative_passages": []} {"query_id": "q-en-rust-e5cbaff957b7dd56f98e56c1df85bed010864740e94247d6d1fc2a825701fbbc", "query": "Currently we have , should and also be implemented for where ? 
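The diff quoted above suppresses cascading diagnostics by calling `fcx.set_tainted_by_errors()` once the `?` coercion has already failed, so that only the original E0308 is reported. As a reader aid (not part of the original issue; the `downcast` signature here is simplified from the reproduction discussed in this thread), the sketch below shows the rough desugaring of the `?` operator and why its success arm has a reference type that cannot coerce to a plain struct:

```rust
use std::io;

// Simplified stand-in for the issue's `downcast` helper: it returns a
// reference wrapped in `io::Result`, so `downcast()?` yields a `&str`.
fn downcast<'a>() -> io::Result<&'a str> {
    Ok("hello")
}

fn main() -> io::Result<()> {
    // Rough desugaring of `let s = downcast()?;` — the success arm binds the
    // payload of the `Ok` variant (here `&str`). Writing
    // `let other: Other = downcast()?;` with a non-reference `Other` on the
    // left is what produces the "`?` operator has incompatible types" error
    // discussed above.
    let s: &str = match downcast() {
        Ok(v) => v,
        Err(e) => return Err(e),
    };
    println!("{s}");
    Ok(())
}
```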
$DIR/issue-100478.rs:34:5 | LL | three_diff(T2::new(0)); | ^^^^^^^^^^------------ | || | |an argument of type `T1` is missing | an argument of type `T3` is missing | note: function defined here --> $DIR/issue-100478.rs:30:4 | LL | fn three_diff(_a: T1, _b: T2, _c: T3) {} | ^^^^^^^^^^ ------ ------ ------ help: provide the arguments | LL | three_diff(/* T1 */, T2::new(0), /* T3 */); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-100478.rs:35:5 | LL | four_shuffle(T3::default(), T4::default(), T1::default(), T2::default()); | ^^^^^^^^^^^^ ------------- ------------- ------------- ------------- expected `T4`, found `T2` | | | | | | | expected `T3`, found `T1` | | expected `T2`, found `T4` | expected `T1`, found `T3` | note: function defined here --> $DIR/issue-100478.rs:31:4 | LL | fn four_shuffle(_a: T1, _b: T2, _c: T3, _d: T4) {} | ^^^^^^^^^^^^ ------ ------ ------ ------ help: did you mean | LL | four_shuffle(T1::default(), T2::default(), T3::default(), T4::default()); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-100478.rs:36:5 | LL | four_shuffle(T3::default(), T2::default(), T1::default(), T3::default()); | ^^^^^^^^^^^^ ------------- ------------- ------------- expected struct `T4`, found struct `T3` | | | | | expected `T3`, found `T1` | expected `T1`, found `T3` | note: function defined here --> $DIR/issue-100478.rs:31:4 | LL | fn four_shuffle(_a: T1, _b: T2, _c: T3, _d: T4) {} | ^^^^^^^^^^^^ ------ ------ ------ ------ help: swap these arguments | LL | four_shuffle(T1::default(), T2::default(), T3::default(), /* T4 */); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error[E0061]: this function takes 8 arguments but 7 arguments were supplied --> $DIR/issue-100478.rs:47:5 | LL | foo( | ^^^ ... LL | p3, p4, p5, p6, p7, p8, | -- an argument of type `Arc` is missing | note: function defined here --> $DIR/issue-100478.rs:29:4 | LL | fn foo(p1: T1, p2: Arc, p3: T3, p4: Arc, p5: T5, p6: T6, p7: T7, p8: Arc) {} | ^^^ ------ ----------- ------ ----------- ------ ------ ------ ----------- help: provide the argument | LL | foo(p1, /* Arc */, p3, p4, p5, p6, p7, p8); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error: aborting due to 4 previous errors Some errors have detailed explanations: E0061, E0308. For more information about an error, try `rustc --explain E0061`. 
", "commid": "rust_pr_100502"}], "negative_passages": []} {"query_id": "q-en-rust-3e82f335426135337f41dd99e376facdeaa80604c0592c4fefb3e18380ef265e", "query": " $DIR/issue-101097.rs:16:5 | LL | f(C, A, A, A, B, B, C); | ^ - - - - expected `C`, found `B` | | | | | | | argument of type `A` unexpected | | expected `B`, found `A` | expected `A`, found `C` | note: function defined here --> $DIR/issue-101097.rs:6:4 | LL | fn f( | ^ LL | a1: A, | ----- LL | a2: A, | ----- LL | b1: B, | ----- LL | b2: B, | ----- LL | c1: C, | ----- LL | c2: C, | ----- help: did you mean | LL | f(A, A, B, B, C, C); | ~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-101097.rs:17:5 | LL | f(C, C, A, A, B, B); | ^ | note: function defined here --> $DIR/issue-101097.rs:6:4 | LL | fn f( | ^ LL | a1: A, | ----- LL | a2: A, | ----- LL | b1: B, | ----- LL | b2: B, | ----- LL | c1: C, | ----- LL | c2: C, | ----- help: did you mean | LL | f(A, A, B, B, C, C); | ~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-101097.rs:18:5 | LL | f(A, A, D, D, B, B); | ^ - - ---- two arguments of type `C` and `C` are missing | | | | | argument of type `D` unexpected | argument of type `D` unexpected | note: function defined here --> $DIR/issue-101097.rs:6:4 | LL | fn f( | ^ LL | a1: A, | ----- LL | a2: A, | ----- LL | b1: B, | ----- LL | b2: B, | ----- LL | c1: C, | ----- LL | c2: C, | ----- help: did you mean | LL | f(A, A, B, B, /* C */, /* C */); | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-101097.rs:19:5 | LL | f(C, C, B, B, A, A); | ^ - - - - expected `C`, found `A` | | | | | | | expected `C`, found `A` | | expected `A`, found `C` | expected `A`, found `C` | note: function defined here --> $DIR/issue-101097.rs:6:4 | LL | fn f( | ^ LL | a1: A, | ----- LL | a2: A, | ----- LL | b1: B, | ----- LL | b2: B, | ----- LL | c1: C, | ----- LL | c2: C, | ----- help: did you mean | LL | f(A, A, B, B, C, C); | ~~~~~~~~~~~~~~~~~~ error[E0308]: arguments to this function are incorrect --> $DIR/issue-101097.rs:20:5 | LL | f(C, C, A, B, A, A); | ^ - - - - - expected `C`, found `A` | | | | | | | | | expected `C`, found `A` | | | expected struct `B`, found struct `A` | | expected `A`, found `C` | expected `A`, found `C` | note: function defined here --> $DIR/issue-101097.rs:6:4 | LL | fn f( | ^ LL | a1: A, | ----- LL | a2: A, | ----- LL | b1: B, | ----- LL | b2: B, | ----- LL | c1: C, | ----- LL | c2: C, | ----- help: did you mean | LL | f(A, A, /* B */, B, C, C); | ~~~~~~~~~~~~~~~~~~~~~~~~ error: aborting due to 5 previous errors Some errors have detailed explanations: E0061, E0308. For more information about an error, try `rustc --explain E0061`. ", "commid": "rust_pr_100502"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . 
I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-c2cc41b5bc4000bb98837f086956ce211a39cdddaa7469adca9ec6fd4ae4be5b", "text": "// NOTE: generics must be cleaned before args let generics = clean_generics(generics, cx); let args = clean_args_from_types_and_body_id(cx, sig.decl.inputs, body_id); let decl = clean_fn_decl_with_args(cx, sig.decl, args); let mut decl = clean_fn_decl_with_args(cx, sig.decl, args); if sig.header.is_async() { decl.output = decl.sugared_async_return_type(); } (generics, decl) }); Box::new(Function { decl, generics })", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-f23a1cc0dea2e83595057cc5195c1c3911ad470579c5a93c922a658daf25b7b7", "text": "///
Used to determine line-wrapping. /// * `indent`: The number of spaces to indent each successive line with, if line-wrapping is /// necessary. /// * `asyncness`: Whether the function is async or not. pub(crate) fn full_print<'a, 'tcx: 'a>( &'a self, header_len: usize, indent: usize, asyncness: hir::IsAsync, cx: &'a Context<'tcx>, ) -> impl fmt::Display + 'a + Captures<'tcx> { display_fn(move |f| self.inner_full_print(header_len, indent, asyncness, f, cx)) display_fn(move |f| self.inner_full_print(header_len, indent, f, cx)) } fn inner_full_print( &self, header_len: usize, indent: usize, asyncness: hir::IsAsync, f: &mut fmt::Formatter<'_>, cx: &Context<'_>, ) -> fmt::Result {", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-faa6ea1d5096d1f84fe72722d0489952d17549b0ab261aba2dc9a4dc3c055bc1", "text": "args_plain.push_str(\", ...\"); } let arrow_plain; let arrow = if let hir::IsAsync::Async = asyncness { let output = self.sugared_async_return_type(); arrow_plain = format!(\"{:#}\", output.print(cx)); if f.alternate() { arrow_plain.clone() } else { format!(\"{}\", output.print(cx)) } } else { arrow_plain = format!(\"{:#}\", self.output.print(cx)); if f.alternate() { arrow_plain.clone() } else { format!(\"{}\", self.output.print(cx)) } }; let arrow_plain = format!(\"{:#}\", self.output.print(cx)); let arrow = if f.alternate() { arrow_plain.clone() } else { format!(\"{}\", self.output.print(cx)) }; let declaration_len = header_len + args_plain.len() + arrow_plain.len(); let output = if declaration_len > 80 {", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-967156c9a20cb1a9d03b741d038f1c461180b7b735d43ed9d697a598ccd33427", "text": "href = href, name = name, generics = g.print(cx), decl = d.full_print(header_len, indent, header.asyncness, cx), decl = d.full_print(header_len, indent, cx), notable_traits = notable_traits_decl(d, cx), where_clause = print_where_clause(g, cx, indent, end_newline), )", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. 
For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-869961c083ad0a831bb9672af92dea916d6e5d58e33168d2048c5648f4e43fb3", "text": "name = name, generics = f.generics.print(cx), where_clause = print_where_clause(&f.generics, cx, 0, Ending::Newline), decl = f.decl.full_print(header_len, 0, header.asyncness, cx), decl = f.decl.full_print(header_len, 0, cx), notable_traits = notable_traits_decl(&f.decl, cx), ); });", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-5e1948c67505a572d665bc41af3ff1f6f40135c39f62af10cf93a291ad24b297", "query": "produces: But I think it should be. For the HTML backend, rustdoc \"resugars\" the return type to get the from I think we should move this to a pass. cc does this change seem like a good idea. modify labels: +T-rustdoc +A-rustdoc-json\nI'm not sure if it's worth adding a pass for this... Maybe handling this into a specific function shared between both formats could be a better idea?\nWe could call in . Or we could move this to when constructing . I'm not sure which makes most sense.\nIn directly seems like a good approach.", "positive_passages": [{"docid": "doc-en-rust-64243003a241e9945e8c6c48683f198525294751eb24da1f8ea3c962060eb8d2", "text": " // edition:2021 // ignore-tidy-linelength // Regression test for use std::future::Future; // @is \"$.index[*][?(@.name=='get_int')].inner.decl.output\" '{\"inner\": \"i32\", \"kind\": \"primitive\"}' // @is \"$.index[*][?(@.name=='get_int')].inner.header.async\" false pub fn get_int() -> i32 { 42 } // @is \"$.index[*][?(@.name=='get_int_async')].inner.decl.output\" '{\"inner\": \"i32\", \"kind\": \"primitive\"}' // @is \"$.index[*][?(@.name=='get_int_async')].inner.header.async\" true pub async fn get_int_async() -> i32 { 42 } // @is \"$.index[*][?(@.name=='get_int_future')].inner.decl.output.kind\" '\"impl_trait\"' // @is \"$.index[*][?(@.name=='get_int_future')].inner.decl.output.inner[0].trait_bound.trait.name\" '\"Future\"' // @is \"$.index[*][?(@.name=='get_int_future')].inner.decl.output.inner[0].trait_bound.trait.args.angle_bracketed.bindings[0].name\" '\"Output\"' // @is \"$.index[*][?(@.name=='get_int_future')].inner.decl.output.inner[0].trait_bound.trait.args.angle_bracketed.bindings[0].binding.equality.type\" '{\"inner\": \"i32\", \"kind\": \"primitive\"}' // @is \"$.index[*][?(@.name=='get_int_future')].inner.header.async\" false pub fn get_int_future() -> impl Future { async { 42 } } // @is \"$.index[*][?(@.name=='get_int_future_async')].inner.decl.output.kind\" '\"impl_trait\"' // @is \"$.index[*][?(@.name=='get_int_future_async')].inner.decl.output.inner[0].trait_bound.trait.name\" '\"Future\"' // @is \"$.index[*][?(@.name=='get_int_future_async')].inner.decl.output.inner[0].trait_bound.trait.args.angle_bracketed.bindings[0].name\" '\"Output\"' // @is \"$.index[*][?(@.name=='get_int_future_async')].inner.decl.output.inner[0].trait_bound.trait.args.angle_bracketed.bindings[0].binding.equality.type\" '{\"inner\": \"i32\", \"kind\": \"primitive\"}' // @is 
\"$.index[*][?(@.name=='get_int_future_async')].inner.header.async\" true pub async fn get_int_future_async() -> impl Future { async { 42 } } ", "commid": "rust_pr_101204"}], "negative_passages": []} {"query_id": "q-en-rust-96c24bdc694b3160ccef41740482e4325e7835f2dfcf3ecbdc7c7376edabaacd", "query": "I tried -ing this proc macro crate: rust /// pub extern crate procmacrocrate; /// /// #[macroexport] /// macrorules! foo { /// (path:ident)::) =( /// $crate::procmacrocrate::identity!{ /// path):: /// } /// ) /// } /// /// #[macroexport] /// macrorules! baz { /// () =( /// crate::BAR) /// ) /// } /// /// pub const BAR: u32 = 19; /// /// fn main(){ /// println!(\"{}\", crate::baz!()); /// } /// /// I expected to see this happen: The test completes successfully Instead, this happened: The test fails compilation with this error: It most recently worked on: : This also fails in 1.62 and 1.63. This appears to be fixed in the beta channel, but I could not find a mention of this bug or a related one. It's entirely possible that the bug will trigger again in future versions if this code isn't used in tests. $DIR/public-instead-of-pub-3.rs:3:12 | LL | public const X: i32 = 123; | ^^^^^ expected one of `!` or `::` | help: write `pub` instead of `public` to make the item public | LL | pub const X: i32 = 123; | ~~~ error: aborting due to previous error ", "commid": "rust_pr_101668"}], "negative_passages": []} {"query_id": "q-en-rust-71624e114117325039d54484a9c24292cbd8090a9b883929ac77b3d4e90427d3", "query": " $DIR/sugg-else-for-closure.rs:6:26 | LL | let _s = y.unwrap_or(|| x.split('.').nth(1).unwrap()); | --------- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `&str`, found closure | | | arguments to this function are incorrect | = note: expected reference `&str` found closure `[closure@$DIR/sugg-else-for-closure.rs:6:26: 6:28]` note: associated function defined here --> $SRC_DIR/core/src/option.rs:LL:COL | LL | pub const fn unwrap_or(self, default: T) -> T | ^^^^^^^^^ help: try calling `unwrap_or_else` instead | LL | let _s = y.unwrap_or_else(|| x.split('.').nth(1).unwrap()); | +++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_102441"}], "negative_passages": []} {"query_id": "q-en-rust-1311500c93d3f78dcec4f24006ed2071a21e0fa48bb42e6423709f35a4074c96", "query": "with synchronization assumes that does not panic. This assumption is however not correct. For instance, a panic occurs if the TLS key used to store the thread info , , , and . The code examples above all use intrusive lists and have to wait on a condition to before destroying the list nodes. If panics, the nodes will be dropped during stack unwinding without checking that condition, leading to undefined behaviour (some of these cases are in , so I'm marking this as a soundness problem with ). Fixing this would either require guarding against panics in each of these cases or ensuring that never panics. label +I-unsound +A-atomic +T-libs +T-libs-api\nThanks for bringing this up! I agree this is a problem. In general, the pattern with parking is: Unwinding would mean leaving that loop even though the condition isn't true. While code right after the loop isn't executed, there are many cases where leaving the current frame before the condition is true is problematic, as all your examples show. I think it's not reasonable to expect every user of to take this subtle pitfall into account. 
Instead, I believe we should guarantee that never unwinds.\nAll the situations in which can currently panic are very unusual things that are most likely unrecoverable errors. So I think it'd be fine to just never let unwind, and instead always on panic.\nI think Rust should allocate the needed resources eagerly when creating a new thread, to avoid this problem.\nIt actually does most of the time, as always creates a . However, if the thread is created through other means, the struct needs to be lazily allocated, which can of course fail. Either way, we also need to protect against the case where fails - it should not, but who knows what some obscure system might do (the UNIX parking implementation is used on systems like L4RE and 3DS/Horizon). In general, I think it is easier for maintainability if we just always convert panics to aborts in / rather than going through every system and abstraction to ensure no panics are caused. Still, this is possible to do in the future (as is returning instantly).\nI just tripped over an issue in this area. I'm trying to put together a userspace mutex for learning and it works fine apart from when there is a panic. If there has been a panic, then I get a panic when handling panic and fall over with this stack trace: Note that at 18, some destructors are run, that appear to be releasing Thread local storage. By the time I try to run , at 7, the thread is already invalid (although , I don't know that). I tried: But, I get the same problem. I presume because of raciness between the check for panicking and the panic actually starting.\nIt looks like you're implementing a global allocator using your thread::park_timeout-based mutex? That's going to be problematic, since the thread parking data is stored in a heap allocation (an ) that gets deallocated on thread destruction.\nThank you. I have an approach to the problem which works as a spinlock, but I was experimenting to see if I could do something more efficient using . It's one of those where I wasn't sure if it was interesting to report, but I thought I'd mention it.", "positive_passages": [{"docid": "doc-en-rust-ab2a4cfb2916c123bc3e270b24a6a2cc00afde724abfb69f7a8e92b44e6fb23b", "text": "use crate::fmt; use crate::io; use crate::marker::PhantomData; use crate::mem; use crate::mem::{self, forget}; use crate::num::NonZeroU64; use crate::num::NonZeroUsize; use crate::panic;", "commid": "rust_pr_102412"}], "negative_passages": []} {"query_id": "q-en-rust-1311500c93d3f78dcec4f24006ed2071a21e0fa48bb42e6423709f35a4074c96", "query": "with synchronization assumes that does not panic. This assumption is however not correct. For instance, a panic occurs if the TLS key used to store the thread info , , , and . The code examples above all use intrusive lists and have to wait on a condition to before destroying the list nodes. If panics, the nodes will be dropped during stack unwinding without checking that condition, leading to undefined behaviour (some of these cases are in , so I'm marking this as a soundness problem with ). Fixing this would either require guarding against panics in each of these cases or ensuring that never panics. label +I-unsound +A-atomic +T-libs +T-libs-api\nThanks for bringing this up! I agree this is a problem. In general, the pattern with parking is: Unwinding would mean leaving that loop even though the condition isn't true. 
While code right after the loop isn't executed, there are many cases where leaving the current frame before the condition is true is problematic, as all your examples show. I think it's not reasonable to expect every user of to take this subtle pitfall into account. Instead, I believe we should guarantee that never unwinds.\nAll the situations in which can currently panic are very unusual things that are most likely unrecoverable errors. So I think it'd be fine to just never let unwind, and instead always on panic.\nI think Rust should allocate the needed resources eagerly when creating a new thread, to avoid this problem.\nIt actually does most of the time, as always creates a . However, if the thread is created through other means, the struct needs to be lazily allocated, which can of course fail. Either way, we also need to protect against the case where fails - it should not, but who knows what some obscure system might do (the UNIX parking implementation is used on systems like L4RE and 3DS/Horizon). In general, I think it is easier for maintainability if we just always convert panics to aborts in / rather than going through every system and abstraction to ensure no panics are caused. Still, this is possible to do in the future (as is returning instantly).\nI just tripped over an issue in this area. I'm trying to put together a userspace mutex for learning and it works fine apart from when there is a panic. If there has been a panic, then I get a panic when handling panic and fall over with this stack trace: Note that at 18, some destructors are run, that appear to be releasing Thread local storage. By the time I try to run , at 7, the thread is already invalid (although , I don't know that). I tried: But, I get the same problem. I presume because of raciness between the check for panicking and the panic actually starting.\nIt looks like you're implementing a global allocator using your thread::park_timeout-based mutex? That's going to be problematic, since the thread parking data is stored in a heap allocation (an ) that gets deallocated on thread destruction.\nThank you. I have an approach to the problem which works as a spinlock, but I was experimenting to see if I could do something more efficient using . It's one of those where I wasn't sure if it was interesting to report, but I thought I'd mention it.", "positive_passages": [{"docid": "doc-en-rust-033faff9d8e0e7d243e93dd44d907588787c4077294b279c82e55eceee9d11c2", "text": "imp::Thread::sleep(dur) } /// Used to ensure that `park` and `park_timeout` do not unwind, as that can /// cause undefined behaviour if not handled correctly (see #102398 for context). struct PanicGuard; impl Drop for PanicGuard { fn drop(&mut self) { rtabort!(\"an irrecoverable error occurred while synchronizing threads\") } } /// Blocks unless or until the current thread's token is made available. /// /// A call to `park` does not guarantee that the thread will remain parked /// forever, and callers should be prepared for this possibility. /// forever, and callers should be prepared for this possibility. However, /// it is guaranteed that this function will not panic (it may abort the /// process if the implementation encounters some rare errors). /// /// # park and unpark ///", "commid": "rust_pr_102412"}], "negative_passages": []} {"query_id": "q-en-rust-1311500c93d3f78dcec4f24006ed2071a21e0fa48bb42e6423709f35a4074c96", "query": "with synchronization assumes that does not panic. This assumption is however not correct. 
For instance, a panic occurs if the TLS key used to store the thread info , , , and . The code examples above all use intrusive lists and have to wait on a condition to before destroying the list nodes. If panics, the nodes will be dropped during stack unwinding without checking that condition, leading to undefined behaviour (some of these cases are in , so I'm marking this as a soundness problem with ). Fixing this would either require guarding against panics in each of these cases or ensuring that never panics. label +I-unsound +A-atomic +T-libs +T-libs-api\nThanks for bringing this up! I agree this is a problem. In general, the pattern with parking is: Unwinding would mean leaving that loop even though the condition isn't true. While code right after the loop isn't executed, there are many cases where leaving the current frame before the condition is true is problematic, as all your examples show. I think it's not reasonable to expect every user of to take this subtle pitfall into account. Instead, I believe we should guarantee that never unwinds.\nAll the situations in which can currently panic are very unusual things that are most likely unrecoverable errors. So I think it'd be fine to just never let unwind, and instead always on panic.\nI think Rust should allocate the needed resources eagerly when creating a new thread, to avoid this problem.\nIt actually does most of the time, as always creates a . However, if the thread is created through other means, the struct needs to be lazily allocated, which can of course fail. Either way, we also need to protect against the case where fails - it should not, but who knows what some obscure system might do (the UNIX parking implementation is used on systems like L4RE and 3DS/Horizon). In general, I think it is easier for maintainability if we just always convert panics to aborts in / rather than going through every system and abstraction to ensure no panics are caused. Still, this is possible to do in the future (as is returning instantly).\nI just tripped over an issue in this area. I'm trying to put together a userspace mutex for learning and it works fine apart from when there is a panic. If there has been a panic, then I get a panic when handling panic and fall over with this stack trace: Note that at 18, some destructors are run, that appear to be releasing Thread local storage. By the time I try to run , at 7, the thread is already invalid (although , I don't know that). I tried: But, I get the same problem. I presume because of raciness between the check for panicking and the panic actually starting.\nIt looks like you're implementing a global allocator using your thread::park_timeout-based mutex? That's going to be problematic, since the thread parking data is stored in a heap allocation (an ) that gets deallocated on thread destruction.\nThank you. I have an approach to the problem which works as a spinlock, but I was experimenting to see if I could do something more efficient using . It's one of those where I wasn't sure if it was interesting to report, but I thought I'd mention it.", "positive_passages": [{"docid": "doc-en-rust-c8846382e071e573268ff724c03bf391504fe53c6e8a3d9e756c3b920d67ce53", "text": "/// [`thread::park_timeout`]: park_timeout #[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn park() { let guard = PanicGuard; // SAFETY: park_timeout is called on the parker owned by this thread. unsafe { current().inner.as_ref().parker().park(); } // No panic occurred, do not abort. 
forget(guard); } /// Use [`park_timeout`].", "commid": "rust_pr_102412"}], "negative_passages": []} {"query_id": "q-en-rust-1311500c93d3f78dcec4f24006ed2071a21e0fa48bb42e6423709f35a4074c96", "query": "with synchronization assumes that does not panic. This assumption is however not correct. For instance, a panic occurs if the TLS key used to store the thread info , , , and . The code examples above all use intrusive lists and have to wait on a condition to before destroying the list nodes. If panics, the nodes will be dropped during stack unwinding without checking that condition, leading to undefined behaviour (some of these cases are in , so I'm marking this as a soundness problem with ). Fixing this would either require guarding against panics in each of these cases or ensuring that never panics. label +I-unsound +A-atomic +T-libs +T-libs-api\nThanks for bringing this up! I agree this is a problem. In general, the pattern with parking is: Unwinding would mean leaving that loop even though the condition isn't true. While code right after the loop isn't executed, there are many cases where leaving the current frame before the condition is true is problematic, as all your examples show. I think it's not reasonable to expect every user of to take this subtle pitfall into account. Instead, I believe we should guarantee that never unwinds.\nAll the situations in which can currently panic are very unusual things that are most likely unrecoverable errors. So I think it'd be fine to just never let unwind, and instead always on panic.\nI think Rust should allocate the needed resources eagerly when creating a new thread, to avoid this problem.\nIt actually does most of the time, as always creates a . However, if the thread is created through other means, the struct needs to be lazily allocated, which can of course fail. Either way, we also need to protect against the case where fails - it should not, but who knows what some obscure system might do (the UNIX parking implementation is used on systems like L4RE and 3DS/Horizon). In general, I think it is easier for maintainability if we just always convert panics to aborts in / rather than going through every system and abstraction to ensure no panics are caused. Still, this is possible to do in the future (as is returning instantly).\nI just tripped over an issue in this area. I'm trying to put together a userspace mutex for learning and it works fine apart from when there is a panic. If there has been a panic, then I get a panic when handling panic and fall over with this stack trace: Note that at 18, some destructors are run, that appear to be releasing Thread local storage. By the time I try to run , at 7, the thread is already invalid (although , I don't know that). I tried: But, I get the same problem. I presume because of raciness between the check for panicking and the panic actually starting.\nIt looks like you're implementing a global allocator using your thread::park_timeout-based mutex? That's going to be problematic, since the thread parking data is stored in a heap allocation (an ) that gets deallocated on thread destruction.\nThank you. I have an approach to the problem which works as a spinlock, but I was experimenting to see if I could do something more efficient using . 
It's one of those where I wasn't sure if it was interesting to report, but I thought I'd mention it.", "positive_passages": [{"docid": "doc-en-rust-3dca63e4680d7674d4ba023584ba545c398ef9b0a050a9c2b2dd82d6af47a6aa", "text": "/// ``` #[stable(feature = \"park_timeout\", since = \"1.4.0\")] pub fn park_timeout(dur: Duration) { let guard = PanicGuard; // SAFETY: park_timeout is called on the parker owned by this thread. unsafe { current().inner.as_ref().parker().park_timeout(dur); } // No panic occurred, do not abort. forget(guard); } ////////////////////////////////////////////////////////////////////////////////", "commid": "rust_pr_102412"}], "negative_passages": []} {"query_id": "q-en-rust-9f1dd73f4796445c6efdda8bc49922a5942914396ceed377a31c5675239ab868", "query": "Since the nightly 2022-09-11, the following code doesn't compile anymore. $DIR/closure_wf_outlives.rs:14:27 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^^^^^^^^^^^^^^^^^^^ | note: lifetime parameter instantiated with the lifetime `'a` as defined here --> $DIR/closure_wf_outlives.rs:14:17 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^ note: but lifetime parameter must outlive the lifetime `'b` as defined here --> $DIR/closure_wf_outlives.rs:14:21 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^ error[E0495]: cannot infer an appropriate lifetime due to conflicting requirements --> $DIR/closure_wf_outlives.rs:27:27 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^^^^^^^^^^^^^^^^^^^ | note: first, the lifetime cannot outlive the lifetime `'a` as defined here... --> $DIR/closure_wf_outlives.rs:27:17 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^ note: ...so that the declared lifetime parameter bounds are satisfied --> $DIR/closure_wf_outlives.rs:27:27 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^^^^^^^^^^^^^^^^^^^ note: but, the lifetime must be valid for the lifetime `'b` as defined here... --> $DIR/closure_wf_outlives.rs:27:21 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^ note: ...so that the declared lifetime parameter bounds are satisfied --> $DIR/closure_wf_outlives.rs:27:27 | LL | type Opaque<'a, 'b> = impl Sized + 'a + 'b; | ^^^^^^^^^^^^^^^^^^^^ error[E0310]: the parameter type `T` may not live long enough --> $DIR/closure_wf_outlives.rs:54:22 | LL | type Opaque = impl Sized; | ^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds... | note: ...that is required by this bound --> $DIR/closure_wf_outlives.rs:59:12 | LL | T: 'static, | ^^^^^^^ help: consider adding an explicit lifetime bound... | LL | type Opaque = impl Sized; | +++++++++ error: aborting due to 3 previous errors Some errors have detailed explanations: E0310, E0478, E0495. For more information about an error, try `rustc --explain E0310`. ", "commid": "rust_pr_103008"}], "negative_passages": []} {"query_id": "q-en-rust-123b2c197f4fc8f552f62be77cf6b368374fd77ece27e9d8e579a32a46ad9247", "query": "While working on an experimental runtime and using a couple of experimental features I found that the compiler while in resolving the async call on a trait crash badly. This is a minimal reproducible example and this is the stacktrace generated N.B: I guess this happens with any runtime? 
BTW also if happens only with rio the compiler should not crash, but I do not think this is related to a specific runtime, but I guess instead, the runtime triggers the resolution of the futures $DIR/lint-global-asm-as-unsafe.rs:17:1 | LL | global_asm!(\"\"); | ^^^^^^^^^^^^^^^ | = note: using this macro is unsafe even though it does not need an `unsafe` block note: the lint level is defined here --> $DIR/lint-global-asm-as-unsafe.rs:2:9 | LL | #![deny(unsafe_code)] | ^^^^^^^^^^^ error: usage of `core::arch::global_asm` --> $DIR/lint-global-asm-as-unsafe.rs:13:9 | LL | global_asm!(\"\"); | ^^^^^^^^^^^^^^^ ... LL | unsafe_in_macro!(); | ------------------ in this macro invocation | = note: using this macro is unsafe even though it does not need an `unsafe` block = note: this error originates in the macro `unsafe_in_macro` (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 2 previous errors ", "commid": "rust_pr_121318"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. I'm guessing it's the same bug. : with: /// - if next_point reached the end of source, return span with lo = hi /// - if next_point reached the end of source, return a span exceeding the end of source, /// which means sm.span_to_snippet(next_point) will get `Err` /// - respect multi-byte characters pub fn next_point(&self, sp: Span) -> Span { if sp.is_dummy() {", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. I'm guessing it's the same bug. : with: if width == 0 { return Span::new(sp.hi(), sp.hi(), sp.ctxt(), None); } // If the width is 1, then the next span should only contain the next char besides current ending. // However, in the case of a multibyte character, where the width != 1, the next span should // span multiple bytes to include the whole character.", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. 
must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. I'm guessing it's the same bug. : with: end_index || end_index > source_len - 1 { debug!(\"find_width_of_character_at_span: source indexes are malformed\"); return 0; return 1; } let src = local_begin.sf.external_src.borrow();", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. I'm guessing it's the same bug. : with: // A non-empty span at the last byte should advance to create an empty // span pointing at the end of the file. // Reaching to the end of file, return a span that will get error with `span_to_snippet` let span = Span::with_root_ctxt(BytePos(4), BytePos(5)); let span = sm.next_point(span); assert_eq!(span.lo().0, 5); assert_eq!(span.hi().0, 5); assert_eq!(span.hi().0, 6); assert!(sm.span_to_snippet(span).is_err()); // Empty span pointing just past the last byte. // Reaching to the end of file, return a span that will get error with `span_to_snippet` let span = Span::with_root_ctxt(BytePos(5), BytePos(5)); let span = sm.next_point(span); assert_eq!(span.lo().0, 5); assert_eq!(span.hi().0, 5); assert_eq!(span.hi().0, 6); assert!(sm.span_to_snippet(span).is_err()); }", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. I'm guessing it's the same bug. : with: // error-pattern: this file contains an unclosed delimiter // error-pattern: expected value, found struct `R` struct R { } struct S { x: [u8; R ", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-03ec7c58198fb530752945cd24e77007b795aee2cc4950d7ed702597bdecc275", "query": "This malformed code () hangs the compiler: Found by fuzzing with a . must be defined as a named-field struct (not a tuple struct or unit struct), although it can have 0 fields. must be the last token to produce the hang: No opening brace (note that braces are required for literals of named-field structs) No closing delimiters Not even a comment If I understand correctly, the hang occurs in LateResolutionVisitor, with never finishing (while calling various functions such as and ) This also hangs. 
I'm guessing it's the same bug. : with: error: this file contains an unclosed delimiter --> $DIR/issue-103451.rs:5:15 | LL | struct S { | - unclosed delimiter LL | x: [u8; R | - ^ | | | unclosed delimiter error: this file contains an unclosed delimiter --> $DIR/issue-103451.rs:5:15 | LL | struct S { | - unclosed delimiter LL | x: [u8; R | - ^ | | | unclosed delimiter error[E0423]: expected value, found struct `R` --> $DIR/issue-103451.rs:5:13 | LL | struct R { } | ------------ `R` defined here LL | struct S { LL | x: [u8; R | ^ help: use struct literal syntax instead: `R {}` error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0423`. ", "commid": "rust_pr_103521"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: fn parse_fn_params(&mut self, req_name: ReqName) -> PResult<'a, Vec> { pub(super) fn parse_fn_params(&mut self, req_name: ReqName) -> PResult<'a, Vec> { let mut first_param = true; // Parse the arguments, starting out with `self` being allowed... let (mut params, _) = self.parse_paren_comma_seq(|p| {", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: PResult<'a, GenericBound> { let lifetime_defs = self.parse_late_bound_lifetime_defs()?; let path = if self.token.is_keyword(kw::Fn) let mut lifetime_defs = self.parse_late_bound_lifetime_defs()?; let mut path = if self.token.is_keyword(kw::Fn) && self.look_ahead(1, |tok| tok.kind == TokenKind::OpenDelim(Delimiter::Parenthesis)) && let Some(path) = self.recover_path_from_fn() {", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: if self.may_recover() && self.token == TokenKind::OpenDelim(Delimiter::Parenthesis) { self.recover_fn_trait_with_lifetime_params(&mut path, &mut lifetime_defs)?; } if has_parens { if self.token.is_like_plus() { // Someone has written something like `&dyn (Trait + Other)`. The correct code", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: /// Recover from `Fn`-family traits (Fn, FnMut, FnOnce) with lifetime arguments /// (e.g. `FnOnce<'a>(&'a str) -> bool`). Up to generic arguments have already /// been eaten. fn recover_fn_trait_with_lifetime_params( &mut self, fn_path: &mut ast::Path, lifetime_defs: &mut Vec, ) -> PResult<'a, ()> { let fn_path_segment = fn_path.segments.last_mut().unwrap(); let generic_args = if let Some(p_args) = &fn_path_segment.args { p_args.clone().into_inner() } else { // Normally it wouldn't come here because the upstream should have parsed // generic parameters (otherwise it's impossible to call this function). 
return Ok(()); }; let lifetimes = if let ast::GenericArgs::AngleBracketed(ast::AngleBracketedArgs { span: _, args }) = &generic_args { args.into_iter() .filter_map(|arg| { if let ast::AngleBracketedArg::Arg(generic_arg) = arg && let ast::GenericArg::Lifetime(lifetime) = generic_arg { Some(lifetime) } else { None } }) .collect() } else { Vec::new() }; // Only try to recover if the trait has lifetime params. if lifetimes.is_empty() { return Ok(()); } // Parse `(T, U) -> R`. let inputs_lo = self.token.span; let inputs: Vec<_> = self.parse_fn_params(|_| false)?.into_iter().map(|input| input.ty).collect(); let inputs_span = inputs_lo.to(self.prev_token.span); let output = self.parse_ret_ty(AllowPlus::No, RecoverQPath::No, RecoverReturnSign::No)?; let args = ast::ParenthesizedArgs { span: fn_path_segment.span().to(self.prev_token.span), inputs, inputs_span, output, } .into(); *fn_path_segment = ast::PathSegment { ident: fn_path_segment.ident, args, id: ast::DUMMY_NODE_ID }; // Convert parsed `<'a>` in `Fn<'a>` into `for<'a>`. let mut generic_params = lifetimes .iter() .map(|lt| GenericParam { id: lt.id, ident: lt.ident, attrs: ast::AttrVec::new(), bounds: Vec::new(), is_placeholder: false, kind: ast::GenericParamKind::Lifetime, colon_span: None, }) .collect::>(); lifetime_defs.append(&mut generic_params); let generic_args_span = generic_args.span(); let mut err = self.struct_span_err(generic_args_span, \"`Fn` traits cannot take lifetime parameters\"); let snippet = format!( \"for<{}> \", lifetimes.iter().map(|lt| lt.ident.as_str()).intersperse(\", \").collect::(), ); let before_fn_path = fn_path.span.shrink_to_lo(); err.multipart_suggestion( \"consider using a higher-ranked trait bound instead\", vec![(generic_args_span, \"\".to_owned()), (before_fn_path, snippet)], Applicability::MaybeIncorrect, ) .emit(); Ok(()) } pub(super) fn check_lifetime(&mut self) -> bool { self.expected_tokens.push(TokenType::Lifetime); self.token.is_lifetime()", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: // Test that Fn-family traits with lifetime parameters shouldn't compile and // we suggest the usage of higher-rank trait bounds instead. 
fn fa(_: impl Fn<'a>(&'a str) -> bool) {} //~^ ERROR `Fn` traits cannot take lifetime parameters fn fb(_: impl FnMut<'a, 'b>(&'a str, &'b str) -> bool) {} //~^ ERROR `Fn` traits cannot take lifetime parameters fn fc(_: impl std::fmt::Display + FnOnce<'a>(&'a str) -> bool + std::fmt::Debug) {} //~^ ERROR `Fn` traits cannot take lifetime parameters use std::ops::Fn as AliasedFn; fn fd(_: impl AliasedFn<'a>(&'a str) -> bool) {} //~^ ERROR `Fn` traits cannot take lifetime parameters fn fe(_: F) where F: Fn<'a>(&'a str) -> bool {} //~^ ERROR `Fn` traits cannot take lifetime parameters fn main() {} ", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2a4e34f7a7f5e673c258501e1673064b0874cf7d1e3d52e9d281a6a034de4", "query": "The following code: currently produces a heap of semi-helpful diagnostics: error: `Fn` traits cannot take lifetime parameters --> $DIR/hrtb-malformed-lifetime-generics.rs:4:17 | LL | fn fa(_: impl Fn<'a>(&'a str) -> bool) {} | ^^^^ | help: consider using a higher-ranked trait bound instead | LL - fn fa(_: impl Fn<'a>(&'a str) -> bool) {} LL + fn fa(_: impl for<'a> Fn(&'a str) -> bool) {} | error: `Fn` traits cannot take lifetime parameters --> $DIR/hrtb-malformed-lifetime-generics.rs:7:20 | LL | fn fb(_: impl FnMut<'a, 'b>(&'a str, &'b str) -> bool) {} | ^^^^^^^^ | help: consider using a higher-ranked trait bound instead | LL - fn fb(_: impl FnMut<'a, 'b>(&'a str, &'b str) -> bool) {} LL + fn fb(_: impl for<'a, 'b> FnMut(&'a str, &'b str) -> bool) {} | error: `Fn` traits cannot take lifetime parameters --> $DIR/hrtb-malformed-lifetime-generics.rs:10:41 | LL | fn fc(_: impl std::fmt::Display + FnOnce<'a>(&'a str) -> bool + std::fmt::Debug) {} | ^^^^ | help: consider using a higher-ranked trait bound instead | LL - fn fc(_: impl std::fmt::Display + FnOnce<'a>(&'a str) -> bool + std::fmt::Debug) {} LL + fn fc(_: impl std::fmt::Display + for<'a> FnOnce(&'a str) -> bool + std::fmt::Debug) {} | error: `Fn` traits cannot take lifetime parameters --> $DIR/hrtb-malformed-lifetime-generics.rs:14:24 | LL | fn fd(_: impl AliasedFn<'a>(&'a str) -> bool) {} | ^^^^ | help: consider using a higher-ranked trait bound instead | LL - fn fd(_: impl AliasedFn<'a>(&'a str) -> bool) {} LL + fn fd(_: impl for<'a> AliasedFn(&'a str) -> bool) {} | error: `Fn` traits cannot take lifetime parameters --> $DIR/hrtb-malformed-lifetime-generics.rs:17:27 | LL | fn fe(_: F) where F: Fn<'a>(&'a str) -> bool {} | ^^^^ | help: consider using a higher-ranked trait bound instead | LL - fn fe(_: F) where F: Fn<'a>(&'a str) -> bool {} LL + fn fe(_: F) where F: for<'a> Fn(&'a str) -> bool {} | error: aborting due to 5 previous errors ", "commid": "rust_pr_104531"}], "negative_passages": []} {"query_id": "q-en-rust-5d77a16141cfeca8bf93fe2f51dcb07c57ea47d456ca05c865361b5c7bfa2576", "query": "Today for no reason in particular I was curious about the behaviour of arrays when zipped together and then mapped into a single array, roughly: I wrote some tests for this and disassembled them in Godbolt to see what was going on behind the scenes and wow was a whole lot going on that seemed like it didn't need to be. So I wrote something kinda like what's in the stdlib for mapping and zipping arrays, just without the iterators, to see if I could do any better. 
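To make the comparison concrete, here is a small sketch of the two code shapes under discussion. The reporter's exact benchmark code is elided above, and the unstable `[T; N]::zip` method from the issue is modelled with stable APIs here, so treat these helper names as hypothetical. The first form materializes an intermediate array of pairs and then maps it, mirroring the `zip(...).map(...)` pattern; the second computes each output element directly by index, which is the shape the optimizer handles well.

/// "Zip then map": build an intermediate array of pairs, then map it.
fn zip_then_map<const N: usize>(x: [u32; N], y: [u32; N]) -> [u32; N] {
    let pairs: [(u32, u32); N] = std::array::from_fn(|i| (x[i], y[i]));
    pairs.map(|(a, b)| a + b)
}

/// Index-based single pass: no intermediate array of pairs.
fn map_by_index<const N: usize>(x: [u32; N], y: [u32; N]) -> [u32; N] {
    std::array::from_fn(|i| x[i] + y[i])
}

fn main() {
    let a = [1u32, 2, 3, 4];
    let b = [10u32, 20, 30, 40];
    assert_eq!(zip_then_map(a, b), map_by_index(a, b));
    assert_eq!(map_by_index(a, b), [11, 22, 33, 44]);
}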
Turns out I absolutely can: Maybe the design isn't as optimal (or as safe) as it could be, but it performs perfectly well in benchmarks (read: as fast as an unguarded version and anywhere between 3-25 times faster than the definition shown at the top of this issue. Here's a rough table of execution times averaged over the primitive ops (, , &|^`): ! Not really liking how things are looking for . So, question then is: is there something in Rust that needs to change to let these optimisations happen, or do I need to look at putting this in a library somewhere (or maybe stdlib wink wink nudge nudge?)? *The 25x comes from the benchmarks. benches at 11.230ns, and benches at 452.593ps, which seems suspicious but I can't seem to find anything wrong with the result.\nI suspect the issue here is that has the wrong shape. It returns but vector instructions generally want . isn't an iterator type that describes the desired iteration and then performs it once, rather it's one loop that creates an intermediate result and then a second loop that produces the final output. Perhaps we could massage the code into a shape that the optimizer can disentangle, but I think it would be better to offer something like your , albeit with a nicer name.\nThis is missing an actual reproducer... Tried a few sizes here: I assume the report is about the case where this doesn't get unrolled, like with N=128?\nI'm not entirely sure what you're looking for in a reproducer, but I have a benchmark program set up using to check out the performance of the various functions (see the \"Benchmark\" section at the bottom of this post). More or less I have as shown above (renamed to ), and both of these: Then I set up a lot of benchmarks. These test the speed of some primitive operations (, , , , , ) on arrays of lengths 8, 16, 32, 64, 128, 256, and 512 for types u8, u16, u32, and u64. This gives a total of 168 benchmarks per function, which reduces down to 28 per test after the primitive tests are averaged for each function, length, and type. The results of this are: ! This was run on the latest nightly as of 2022-12-21. I think this is pretty hard evidence that performance using is not where it should be. and are grouped so close together in the bottom section of the graph you can't visually see the difference. But for ? It's very obviously slower than the other two. Notice how it takes more time to map together two than it takes the other methods to map together two . Even if we just look at shorter inputs like was suggested, it doesn't look good for : ! These are just the results for lengths up through 64, and we can pretty clearly see that is just always slower. /// 'Zips up' two arrays into a single array of pairs. /// /// `zip()` returns a new array where every element is a tuple where the /// first element comes from the first array, and the second element comes /// from the second array. In other words, it zips two arrays together, /// into a single one. /// /// # Examples /// /// ``` /// #![feature(array_zip)] /// let x = [1, 2, 3]; /// let y = [4, 5, 6]; /// let z = x.zip(y); /// assert_eq!(z, [(1, 4), (2, 5), (3, 6)]); /// ``` #[unstable(feature = \"array_zip\", issue = \"80094\")] pub fn zip(self, rhs: [U; N]) -> [(T, U); N] { drain_array_with(self, |lhs| { drain_array_with(rhs, |rhs| from_trusted_iterator(crate::iter::zip(lhs, rhs))) }) } /// Returns a slice containing the entire array. Equivalent to `&s[..]`. 
#[stable(feature = \"array_as_slice\", since = \"1.57.0\")] #[rustc_const_stable(feature = \"array_as_slice\", since = \"1.57.0\")]", "commid": "rust_pr_112096"}], "negative_passages": []} {"query_id": "q-en-rust-5d77a16141cfeca8bf93fe2f51dcb07c57ea47d456ca05c865361b5c7bfa2576", "query": "Today for no reason in particular I was curious about the behaviour of arrays when zipped together and then mapped into a single array, roughly: I wrote some tests for this and disassembled them in Godbolt to see what was going on behind the scenes and wow was a whole lot going on that seemed like it didn't need to be. So I wrote something kinda like what's in the stdlib for mapping and zipping arrays, just without the iterators, to see if I could do any better. Turns out I absolutely can: Maybe the design isn't as optimal (or as safe) as it could be, but it performs perfectly well in benchmarks (read: as fast as an unguarded version and anywhere between 3-25 times faster than the definition shown at the top of this issue. Here's a rough table of execution times averaged over the primitive ops (, , &|^`): ! Not really liking how things are looking for . So, question then is: is there something in Rust that needs to change to let these optimisations happen, or do I need to look at putting this in a library somewhere (or maybe stdlib wink wink nudge nudge?)? *The 25x comes from the benchmarks. benches at 11.230ns, and benches at 452.593ps, which seems suspicious but I can't seem to find anything wrong with the result.\nI suspect the issue here is that has the wrong shape. It returns but vector instructions generally want . isn't an iterator type that describes the desired iteration and then performs it once, rather it's one loop that creates an intermediate result and then a second loop that produces the final output. Perhaps we could massage the code into a shape that the optimizer can disentangle, but I think it would be better to offer something like your , albeit with a nicer name.\nThis is missing an actual reproducer... Tried a few sizes here: I assume the report is about the case where this doesn't get unrolled, like with N=128?\nI'm not entirely sure what you're looking for in a reproducer, but I have a benchmark program set up using to check out the performance of the various functions (see the \"Benchmark\" section at the bottom of this post). More or less I have as shown above (renamed to ), and both of these: Then I set up a lot of benchmarks. These test the speed of some primitive operations (, , , , , ) on arrays of lengths 8, 16, 32, 64, 128, 256, and 512 for types u8, u16, u32, and u64. This gives a total of 168 benchmarks per function, which reduces down to 28 per test after the primitive tests are averaged for each function, length, and type. The results of this are: ! This was run on the latest nightly as of 2022-12-21. I think this is pretty hard evidence that performance using is not where it should be. and are grouped so close together in the bottom section of the graph you can't visually see the difference. But for ? It's very obviously slower than the other two. Notice how it takes more time to map together two than it takes the other methods to map together two . Even if we just look at shorter inputs like was suggested, it doesn't look good for : ! These are just the results for lengths up through 64, and we can pretty clearly see that is just always slower. 
#![feature(array_zip)] // CHECK-LABEL: @short_integer_map #[no_mangle]", "commid": "rust_pr_112096"}], "negative_passages": []} {"query_id": "q-en-rust-5d77a16141cfeca8bf93fe2f51dcb07c57ea47d456ca05c865361b5c7bfa2576", "query": "Today for no reason in particular I was curious about the behaviour of arrays when zipped together and then mapped into a single array, roughly: I wrote some tests for this and disassembled them in Godbolt to see what was going on behind the scenes and wow was a whole lot going on that seemed like it didn't need to be. So I wrote something kinda like what's in the stdlib for mapping and zipping arrays, just without the iterators, to see if I could do any better. Turns out I absolutely can: Maybe the design isn't as optimal (or as safe) as it could be, but it performs perfectly well in benchmarks (read: as fast as an unguarded version and anywhere between 3-25 times faster than the definition shown at the top of this issue. Here's a rough table of execution times averaged over the primitive ops (, , &|^`): ! Not really liking how things are looking for . So, question then is: is there something in Rust that needs to change to let these optimisations happen, or do I need to look at putting this in a library somewhere (or maybe stdlib wink wink nudge nudge?)? *The 25x comes from the benchmarks. benches at 11.230ns, and benches at 452.593ps, which seems suspicious but I can't seem to find anything wrong with the result.\nI suspect the issue here is that has the wrong shape. It returns but vector instructions generally want . isn't an iterator type that describes the desired iteration and then performs it once, rather it's one loop that creates an intermediate result and then a second loop that produces the final output. Perhaps we could massage the code into a shape that the optimizer can disentangle, but I think it would be better to offer something like your , albeit with a nicer name.\nThis is missing an actual reproducer... Tried a few sizes here: I assume the report is about the case where this doesn't get unrolled, like with N=128?\nI'm not entirely sure what you're looking for in a reproducer, but I have a benchmark program set up using to check out the performance of the various functions (see the \"Benchmark\" section at the bottom of this post). More or less I have as shown above (renamed to ), and both of these: Then I set up a lot of benchmarks. These test the speed of some primitive operations (, , , , , ) on arrays of lengths 8, 16, 32, 64, 128, 256, and 512 for types u8, u16, u32, and u64. This gives a total of 168 benchmarks per function, which reduces down to 28 per test after the primitive tests are averaged for each function, length, and type. The results of this are: ! This was run on the latest nightly as of 2022-12-21. I think this is pretty hard evidence that performance using is not where it should be. and are grouped so close together in the bottom section of the graph you can't visually see the difference. But for ? It's very obviously slower than the other two. Notice how it takes more time to map together two than it takes the other methods to map together two . Even if we just look at shorter inputs like was suggested, it doesn't look good for : ! These are just the results for lengths up through 64, and we can pretty clearly see that is just always slower. 
// CHECK-LABEL: @short_integer_zip_map #[no_mangle] pub fn short_integer_zip_map(x: [u32; 8], y: [u32; 8]) -> [u32; 8] { // CHECK: %[[A:.+]] = load <8 x i32> // CHECK: %[[B:.+]] = load <8 x i32> // CHECK: sub <8 x i32> %[[B]], %[[A]] // CHECK: store <8 x i32> x.zip(y).map(|(x, y)| x - y) } // This test is checking that LLVM can SRoA away a bunch of the overhead, // like fully moving the iterators to registers. Notably, previous implementations // of `map` ended up `alloca`ing the whole `array::IntoIterator`, meaning both a", "commid": "rust_pr_112096"}], "negative_passages": []} {"query_id": "q-en-rust-5d77a16141cfeca8bf93fe2f51dcb07c57ea47d456ca05c865361b5c7bfa2576", "query": "Today for no reason in particular I was curious about the behaviour of arrays when zipped together and then mapped into a single array, roughly: I wrote some tests for this and disassembled them in Godbolt to see what was going on behind the scenes and wow was a whole lot going on that seemed like it didn't need to be. So I wrote something kinda like what's in the stdlib for mapping and zipping arrays, just without the iterators, to see if I could do any better. Turns out I absolutely can: Maybe the design isn't as optimal (or as safe) as it could be, but it performs perfectly well in benchmarks (read: as fast as an unguarded version and anywhere between 3-25 times faster than the definition shown at the top of this issue. Here's a rough table of execution times averaged over the primitive ops (, , &|^`): ! Not really liking how things are looking for . So, question then is: is there something in Rust that needs to change to let these optimisations happen, or do I need to look at putting this in a library somewhere (or maybe stdlib wink wink nudge nudge?)? *The 25x comes from the benchmarks. benches at 11.230ns, and benches at 452.593ps, which seems suspicious but I can't seem to find anything wrong with the result.\nI suspect the issue here is that has the wrong shape. It returns but vector instructions generally want . isn't an iterator type that describes the desired iteration and then performs it once, rather it's one loop that creates an intermediate result and then a second loop that produces the final output. Perhaps we could massage the code into a shape that the optimizer can disentangle, but I think it would be better to offer something like your , albeit with a nicer name.\nThis is missing an actual reproducer... Tried a few sizes here: I assume the report is about the case where this doesn't get unrolled, like with N=128?\nI'm not entirely sure what you're looking for in a reproducer, but I have a benchmark program set up using to check out the performance of the various functions (see the \"Benchmark\" section at the bottom of this post). More or less I have as shown above (renamed to ), and both of these: Then I set up a lot of benchmarks. These test the speed of some primitive operations (, , , , , ) on arrays of lengths 8, 16, 32, 64, 128, 256, and 512 for types u8, u16, u32, and u64. This gives a total of 168 benchmarks per function, which reduces down to 28 per test after the primitive tests are averaged for each function, length, and type. The results of this are: ! This was run on the latest nightly as of 2022-12-21. I think this is pretty hard evidence that performance using is not where it should be. and are grouped so close together in the bottom section of the graph you can't visually see the difference. But for ? It's very obviously slower than the other two. 
Notice how it takes more time to map together two than it takes the other methods to map together two . Even if we just look at shorter inputs like was suggested, it doesn't look good for : ! These are just the results for lengths up through 64, and we can pretty clearly see that is just always slower. #![feature(array_zip)] // CHECK-LABEL: @auto_vectorize_direct #[no_mangle]", "commid": "rust_pr_112096"}], "negative_passages": []} {"query_id": "q-en-rust-5d77a16141cfeca8bf93fe2f51dcb07c57ea47d456ca05c865361b5c7bfa2576", "query": "Today for no reason in particular I was curious about the behaviour of arrays when zipped together and then mapped into a single array, roughly: I wrote some tests for this and disassembled them in Godbolt to see what was going on behind the scenes and wow was a whole lot going on that seemed like it didn't need to be. So I wrote something kinda like what's in the stdlib for mapping and zipping arrays, just without the iterators, to see if I could do any better. Turns out I absolutely can: Maybe the design isn't as optimal (or as safe) as it could be, but it performs perfectly well in benchmarks (read: as fast as an unguarded version and anywhere between 3-25 times faster than the definition shown at the top of this issue. Here's a rough table of execution times averaged over the primitive ops (, , &|^`): ! Not really liking how things are looking for . So, question then is: is there something in Rust that needs to change to let these optimisations happen, or do I need to look at putting this in a library somewhere (or maybe stdlib wink wink nudge nudge?)? *The 25x comes from the benchmarks. benches at 11.230ns, and benches at 452.593ps, which seems suspicious but I can't seem to find anything wrong with the result.\nI suspect the issue here is that has the wrong shape. It returns but vector instructions generally want . isn't an iterator type that describes the desired iteration and then performs it once, rather it's one loop that creates an intermediate result and then a second loop that produces the final output. Perhaps we could massage the code into a shape that the optimizer can disentangle, but I think it would be better to offer something like your , albeit with a nicer name.\nThis is missing an actual reproducer... Tried a few sizes here: I assume the report is about the case where this doesn't get unrolled, like with N=128?\nI'm not entirely sure what you're looking for in a reproducer, but I have a benchmark program set up using to check out the performance of the various functions (see the \"Benchmark\" section at the bottom of this post). More or less I have as shown above (renamed to ), and both of these: Then I set up a lot of benchmarks. These test the speed of some primitive operations (, , , , , ) on arrays of lengths 8, 16, 32, 64, 128, 256, and 512 for types u8, u16, u32, and u64. This gives a total of 168 benchmarks per function, which reduces down to 28 per test after the primitive tests are averaged for each function, length, and type. The results of this are: ! This was run on the latest nightly as of 2022-12-21. I think this is pretty hard evidence that performance using is not where it should be. and are grouped so close together in the bottom section of the graph you can't visually see the difference. But for ? It's very obviously slower than the other two. Notice how it takes more time to map together two than it takes the other methods to map together two . 
Even if we just look at shorter inputs like was suggested, it doesn't look good for : ! These are just the results for lengths up through 64, and we can pretty clearly see that is just always slower. // CHECK-LABEL: @auto_vectorize_array_zip_map // CHECK-LABEL: @auto_vectorize_array_from_fn #[no_mangle] pub fn auto_vectorize_array_zip_map(a: [f32; 4], b: [f32; 4]) -> [f32; 4] { pub fn auto_vectorize_array_from_fn(a: [f32; 4], b: [f32; 4]) -> [f32; 4] { // CHECK: load <4 x float> // CHECK: load <4 x float> // CHECK: fadd <4 x float> // CHECK: store <4 x float> a.zip(b).map(|(a, b)| a + b) std::array::from_fn(|i| a[i] + b[i]) }", "commid": "rust_pr_112096"}], "negative_passages": []} {"query_id": "q-en-rust-df45647081fdb09b78d288b55a3189f34dadd00f060fa349104e969fb56fc978", "query": "I tried this: I expected to see this happen: The second invocation takes essentially no time. Instead, this happened: The docs were completely rebuilt. The problem is that we remove the docs between each invocation: this is a regression from - do you think you'll have time to follow up? I think we can either add a stamp file to avoid rebuilding, or find some other way to avoid removing the whole directory - maybe we can only remove/copy the files, rather than everything? $DIR/uninhabited-irrefutable.rs:27:9 --> $DIR/uninhabited-irrefutable.rs:29:9 | LL | let Foo::D(_y, _z) = x; | ^^^^^^^^^^^^^^ pattern `Foo::A(_)` not covered", "commid": "rust_pr_111624"}], "negative_passages": []} {"query_id": "q-en-rust-b62d977fe0ef5e78465b3151072b546173520f9ec446843ddd5f002cf180ddd6", "query": "The following code compiles on nightly (): But if the wrapper type is moved into a module to make the field private, then it does not: (): Making the field makes it compile again. Is this expected? I suppose I could understand if it is; the private fields could change to make actually be constructable and therefore make the pattern refutable. But if that's the case, then I think the compiler should point this out, like . If indeed this is expected, is there any way for me as a library author to somehow convince the compiler that I will never change the fields to make the type constructable? I want users of my library to be able to elide match arms when I give them an . label +T-compiler +F-never_type +D-confusing +requires-nightly +S-bug-has-mcve\nlabel +F-exhaustive_patterns\nare we sure this is actually a bug? As I said, I could understand how it's not:\nI'm not sure either way - however at the very least, as you point out, it makes sense to have some better diagnostics\nThis is definitely intentional. But yeah, it should have a better error message. Not currently I don't think. It would probably require us to add an trait, eg. something like: This trait would then need to be a lang item so that the exhaustiveness check could make use of it. Adding this trait would require an RFC but I doubt there'll be much enthusiasm for adding another special trait for such a niche use-case. Alternatively, you could give an or method to expose the uninhabitedness of the inner value. It's not quite what you're asking for but it's still concise-ish:\nIs supposed to be \u201cinjected\u201d where an otherwise required match arm was omitted? Wouldn't that run into the same discussions project-deref-patterns had about impure code (code with side effects) inside patterns? 
Their existence would heavily hinder optimizations of match expressions (like translation to jump tables or reordering).\nIf the type implementing really is uninhabited then there won't actually be any impure code in the match. The call to would only need to be injected for types that fake uninhabitedness with an impl that panics/aborts. We could also rule that out be instead making -only like .\nMaking a trait isn't straightforward exactly because inhabitedness depends on visibility questions. It would probably have to mean \"Uninhabited as far as the outside world is concerned\", or it could use a trick like the in to make it depend on the module.", "positive_passages": [{"docid": "doc-en-rust-2e18fad3ae262a993f8c7965fbbe9f21491f454ca27c13f09bbc5fa48d19e63a", "text": "| LL | enum Foo { | ^^^ LL | LL | A(foo::SecretlyEmpty), | - not covered = note: pattern `Foo::A(_)` is currently uninhabited, but this variant contains private fields which may become inhabited in the future = note: the matched value is of type `Foo` help: you might want to use `let else` to handle the variant that isn't matched |", "commid": "rust_pr_111624"}], "negative_passages": []} {"query_id": "q-en-rust-7a51f8ba54fcc7ed7bf44847f69e92b5cfd226421781f8b96bc3b999a32f2a58", "query": "Found with a match c.kind() { ConstKind::Param(_) => {} ConstKind::Param(_) | ConstKind::Error(_) => {} _ => bug!(\"only ConstKind::Param should be encountered here, got {:#?}\", c), }, ConstantKind::Unevaluated(..) => self.required_consts.push(*constant),", "commid": "rust_pr_104233"}], "negative_passages": []} {"query_id": "q-en-rust-7a51f8ba54fcc7ed7bf44847f69e92b5cfd226421781f8b96bc3b999a32f2a58", "query": "Found with a fn f() -> impl Sized { 2.0E //~^ ERROR expected at least one digit in exponent } fn main() {} ", "commid": "rust_pr_104233"}], "negative_passages": []} {"query_id": "q-en-rust-7a51f8ba54fcc7ed7bf44847f69e92b5cfd226421781f8b96bc3b999a32f2a58", "query": "Found with a error: expected at least one digit in exponent --> $DIR/invalid-const-in-body.rs:2:5 | LL | 2.0E | ^^^^ error: aborting due to previous error ", "commid": "rust_pr_104233"}], "negative_passages": []} {"query_id": "q-en-rust-c74376d76cafcb7b4b6082a2dab8a1cfdb0b2e4337471905c0d8b3d353257b43", "query": "To reproduce, navigate to and click the the \"source\" link for any of the methods. Nothing at all happens when I do that. I tried Firefox and Chromium. This seems to be a fairly recent regression.\nIt's because it's under the for some reason. Let's see what changed in the last few days.\nI think it's because of I'll try to send a fix today.\nIt's because a to have a correct positioning of the tooltip on mobile devices.", "positive_passages": [{"docid": "doc-en-rust-dae98f47b72f867a1ae86d3396acd90f7db755d107d1641723860acdcef21c4a", "text": "font-weight: 600; margin: 0; padding: 0; /* position notable traits in mobile mode within the header */ position: relative; } #crate-search,", "commid": "rust_pr_104319"}], "negative_passages": []} {"query_id": "q-en-rust-c74376d76cafcb7b4b6082a2dab8a1cfdb0b2e4337471905c0d8b3d353257b43", "query": "To reproduce, navigate to and click the the \"source\" link for any of the methods. Nothing at all happens when I do that. I tried Firefox and Chromium. This seems to be a fairly recent regression.\nIt's because it's under the for some reason. 
Let's see what changed in the last few days.\nI think it's because of I'll try to send a fix today.\nIt's because a to have a correct positioning of the tooltip on mobile devices.", "positive_passages": [{"docid": "doc-en-rust-52a402a65bebc40e37708480f7def66d38ab55d6710e04ec4338220114639b84", "text": "// Check the impl items. assert-css: (\".impl-items .has-srclink .srclink\", {\"font-size\": \"16px\", \"font-weight\": 400}, ALL) assert-css: (\".impl-items .has-srclink .code-header\", {\"font-size\": \"16px\", \"font-weight\": 600}, ALL) // Check that we can click on source link store-document-property: (url, \"URL\") click: \".impl-items .has-srclink .srclink\" assert-document-property-false: {\"URL\": |url|} ", "commid": "rust_pr_104319"}], "negative_passages": []} {"query_id": "q-en-rust-9be5a1c11e6a03a00e799efc086f9abf6c0c19f1bf525256ccf0fce909442a6d", "query": "Currently all of its functions immediately . We need to either remove this or implement it. I vote for implementing it.", "positive_passages": [{"docid": "doc-en-rust-2dd7c5780b5cf6ddcfa3e671d81a0b610429615db0337950062f1bb5e7dd4687", "text": "//! //! * Should probably have something like this for strings. //! * Should they implement Closable? Would take extra state. use cmp::max; use cmp::min; use prelude::*; use super::*;", "commid": "rust_pr_10451.0"}], "negative_passages": []} {"query_id": "q-en-rust-9be5a1c11e6a03a00e799efc086f9abf6c0c19f1bf525256ccf0fce909442a6d", "query": "Currently all of its functions immediately . We need to either remove this or implement it. I vote for implementing it.", "positive_passages": [{"docid": "doc-en-rust-81b9bcac09820a958917105f529f452a13065b337f9c8ca17e82ce9df7c6fc89", "text": "} } // FIXME(#10432) impl Seek for MemWriter { fn tell(&self) -> u64 { self.pos as u64 } fn seek(&mut self, pos: i64, style: SeekStyle) { match style { SeekSet => { self.pos = pos as uint; } SeekEnd => { self.pos = self.buf.len() + pos as uint; } SeekCur => { self.pos += pos as uint; } } // compute offset as signed and clamp to prevent overflow let offset = match style { SeekSet => { 0 } SeekEnd => { self.buf.len() } SeekCur => { self.pos } } as i64; self.pos = max(0, offset+pos) as uint; } }", "commid": "rust_pr_10451.0"}], "negative_passages": []} {"query_id": "q-en-rust-9be5a1c11e6a03a00e799efc086f9abf6c0c19f1bf525256ccf0fce909442a6d", "query": "Currently all of its functions immediately . We need to either remove this or implement it. I vote for implementing it.", "positive_passages": [{"docid": "doc-en-rust-c5ba2db8bf97d4ecbb2da08434a93f95cd677a207080c47d98bd46693168cf2e", "text": "/// Writes to a fixed-size byte slice /// /// If a write will not fit in the buffer, it raises the `io_error` /// condition and does not write any data. pub struct BufWriter<'self> { priv buf: &'self mut [u8], priv pos: uint", "commid": "rust_pr_10451.0"}], "negative_passages": []} {"query_id": "q-en-rust-9be5a1c11e6a03a00e799efc086f9abf6c0c19f1bf525256ccf0fce909442a6d", "query": "Currently all of its functions immediately . We need to either remove this or implement it. 
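For orientation, here is a modern sketch of the seek-position arithmetic used by the fix below, written with today's std types; the names are hypothetical and this is not the historical `std::io` code itself. The base offset depends on the seek style, the signed delta is added to it, and the result is clamped at zero so it can never underflow (mirroring the `max(0, offset + pos)` in the patch).

#[derive(Clone, Copy)]
enum SeekStyle { Set, Cur, End }

fn new_position(style: SeekStyle, delta: i64, cur: usize, len: usize) -> usize {
    let base = match style {
        SeekStyle::Set => 0,
        SeekStyle::Cur => cur as i64,
        SeekStyle::End => len as i64,
    };
    // Clamp to zero instead of underflowing.
    (base + delta).max(0) as usize
}

fn main() {
    assert_eq!(new_position(SeekStyle::Set, 2, 0, 8), 2);
    assert_eq!(new_position(SeekStyle::Cur, -2, 3, 8), 1);
    assert_eq!(new_position(SeekStyle::End, -1, 0, 8), 7);
    assert_eq!(new_position(SeekStyle::Set, -5, 0, 8), 0); // clamped
}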
I vote for implementing it.", "positive_passages": [{"docid": "doc-en-rust-c613c67566ca31167d66a00878e411e62852494580c2e66f17bed7b457bcaa46", "text": "} impl<'self> Writer for BufWriter<'self> { fn write(&mut self, _buf: &[u8]) { fail!() } fn write(&mut self, buf: &[u8]) { // raises a condition if the entire write does not fit in the buffer let max_size = self.buf.len(); if self.pos >= max_size || (self.pos + buf.len()) > max_size { io_error::cond.raise(IoError { kind: OtherIoError, desc: \"Trying to write past end of buffer\", detail: None }); return; } fn flush(&mut self) { fail!() } vec::bytes::copy_memory(self.buf.mut_slice_from(self.pos), buf, buf.len()); self.pos += buf.len(); } } // FIXME(#10432) impl<'self> Seek for BufWriter<'self> { fn tell(&self) -> u64 { fail!() } fn tell(&self) -> u64 { self.pos as u64 } fn seek(&mut self, _pos: i64, _style: SeekStyle) { fail!() } fn seek(&mut self, pos: i64, style: SeekStyle) { // compute offset as signed and clamp to prevent overflow let offset = match style { SeekSet => { 0 } SeekEnd => { self.buf.len() } SeekCur => { self.pos } } as i64; self.pos = max(0, offset+pos) as uint; } }", "commid": "rust_pr_10451.0"}], "negative_passages": []} {"query_id": "q-en-rust-9be5a1c11e6a03a00e799efc086f9abf6c0c19f1bf525256ccf0fce909442a6d", "query": "Currently all of its functions immediately . We need to either remove this or implement it. I vote for implementing it.", "positive_passages": [{"docid": "doc-en-rust-713750027e213bf62249d1c3daf30a0355e948f12607a350f291890c6e4deaac", "text": "} #[test] fn test_buf_writer() { let mut buf = [0 as u8, ..8]; { let mut writer = BufWriter::new(buf); assert_eq!(writer.tell(), 0); writer.write([0]); assert_eq!(writer.tell(), 1); writer.write([1, 2, 3]); writer.write([4, 5, 6, 7]); assert_eq!(writer.tell(), 8); } assert_eq!(buf, [0, 1, 2, 3, 4, 5, 6, 7]); } #[test] fn test_buf_writer_seek() { let mut buf = [0 as u8, ..8]; { let mut writer = BufWriter::new(buf); assert_eq!(writer.tell(), 0); writer.write([1]); assert_eq!(writer.tell(), 1); writer.seek(2, SeekSet); assert_eq!(writer.tell(), 2); writer.write([2]); assert_eq!(writer.tell(), 3); writer.seek(-2, SeekCur); assert_eq!(writer.tell(), 1); writer.write([3]); assert_eq!(writer.tell(), 2); writer.seek(-1, SeekEnd); assert_eq!(writer.tell(), 7); writer.write([4]); assert_eq!(writer.tell(), 8); } assert_eq!(buf, [1, 3, 2, 0, 0, 0, 0, 4]); } #[test] fn test_buf_writer_error() { let mut buf = [0 as u8, ..2]; let mut writer = BufWriter::new(buf); writer.write([0]); let mut called = false; do io_error::cond.trap(|err| { assert_eq!(err.kind, OtherIoError); called = true; }).inside { writer.write([0, 0]); } assert!(called); } #[test] fn test_mem_reader() { let mut reader = MemReader::new(~[0, 1, 2, 3, 4, 5, 6, 7]); let mut buf = [];", "commid": "rust_pr_10451.0"}], "negative_passages": []} {"query_id": "q-en-rust-60121fc379df37aebbc4066801024ecdb08b92b61e49f5acda21d72ec5531588", "query": " $DIR/missing-clone-for-suggestion.rs:17:7 | LL | fn f(x: *mut u8) { | - move occurs because `x` has type `*mut u8`, which does not implement the `Copy` trait LL | g(x); | - value moved here LL | g(x); | ^ value used here after move | note: consider changing this parameter type in function `g` to borrow instead if owning the value isn't necessary --> $DIR/missing-clone-for-suggestion.rs:13:12 | LL | fn g(x: T) {} | - ^ this parameter takes ownership of the value | | | in this function error: aborting due to previous error For more information about this error, try `rustc --explain 
E0382`. ", "commid": "rust_pr_104956"}], "negative_passages": []} {"query_id": "q-en-rust-064780391279c45c18858b8e283685220eaff4d94686b1cc7d05c342308b7900", "query": "Given the following code: current output is: The last part of the suggestion is not valid Rust code and therefore should not be suggested like this. This problem occurs on the Playground on both stable and nightly with basically identical error messages. serde v1.0.147 requires such that will be . Otherwise the problem does not seem serde specific. $DIR/issue-104884-trait-impl-sugg-err.rs:13:10 | LL | #[derive(PartialOrd, AddImpl)] | ^^^^^^^^^^ no implementation for `PriorityQueue == PriorityQueue` | = help: the trait `PartialEq` is not implemented for `PriorityQueue` note: required by a bound in `PartialOrd` --> $SRC_DIR/core/src/cmp.rs:LL:COL | LL | pub trait PartialOrd: PartialEq { | ^^^^^^^^^^^^^^ required by this bound in `PartialOrd` = note: this error originates in the derive macro `PartialOrd` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0277]: the trait bound `PriorityQueue: Eq` is not satisfied --> $DIR/issue-104884-trait-impl-sugg-err.rs:13:22 | LL | #[derive(PartialOrd, AddImpl)] | ^^^^^^^ the trait `Eq` is not implemented for `PriorityQueue` | note: required by a bound in `Ord` --> $SRC_DIR/core/src/cmp.rs:LL:COL | LL | pub trait Ord: Eq + PartialOrd { | ^^ required by this bound in `Ord` = note: this error originates in the derive macro `AddImpl` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0277]: can't compare `T` with `T` --> $DIR/issue-104884-trait-impl-sugg-err.rs:13:22 | LL | #[derive(PartialOrd, AddImpl)] | ^^^^^^^ no implementation for `T < T` and `T > T` | note: required for `PriorityQueue` to implement `PartialOrd` --> $DIR/issue-104884-trait-impl-sugg-err.rs:13:10 | LL | #[derive(PartialOrd, AddImpl)] | ^^^^^^^^^^ note: required by a bound in `Ord` --> $SRC_DIR/core/src/cmp.rs:LL:COL | LL | pub trait Ord: Eq + PartialOrd { | ^^^^^^^^^^^^^^^^ required by this bound in `Ord` = note: this error originates in the derive macro `AddImpl` which comes from the expansion of the derive macro `PartialOrd` (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_104895"}], "negative_passages": []} {"query_id": "q-en-rust-0c3c098e61bfa494c57e13c1e0e02dff13b76459b048dd7ca0205fd914b464fe", "query": "Any ideas?\nRegressed with\ncc\nIs there more information available from the error? The module should not be part of the FreeBSD build. Also, an error in resolving doesn't make much sense to me because that module should exist regardless of whether the feature is enabled. Is it possible that there's either (1) another error further up in the build log, or (2) an existing incorrect dependency that's been exposed by the recent change?\nThe full log is available here:\nIt only happens if I build the documentation. Is it possible to fix this problem before rust 1.67 lands?\nReproduced with the following command: could you please see if the patch in fixes the build for you?\nYour patch fixes the issue. Many thanks!", "positive_passages": [{"docid": "doc-en-rust-250356a05983b669717f7cfcbaa12f367ed9335d34fda8c59ccf3fa7bd38a4ce", "text": "//! OS-specific networking functionality. // See cfg macros in `library/std/src/os/mod.rs` for why these platforms must // be special-cased during rustdoc generation. 
#[cfg(not(all( doc, any( all(target_arch = \"wasm32\", not(target_os = \"wasi\")), all(target_vendor = \"fortanix\", target_env = \"sgx\") ) )))] #[cfg(any(target_os = \"linux\", target_os = \"android\", doc))] pub(super) mod linux_ext;", "commid": "rust_pr_106618"}], "negative_passages": []} {"query_id": "q-en-rust-1ff5db68e78505d6ff39ccecd9a7ac285f37f8327f10f89bbd2cf48b62f76052", "query": "I tried this code (): I expected to see this happen: Instead, this happened: This is just a very specific instance where the suggestion to add some code, while \"technically correct\", is less useful than the better suggestion to remove some code. $DIR/issue-105494.rs:2:19 | LL | let _v: i32 = (1 as i32).to_string(); | --- ^^^^^^^^^^^^^^^^^^^^^^ expected `i32`, found struct `String` | | | expected due to this | help: try removing the method call | LL - let _v: i32 = (1 as i32).to_string(); LL + let _v: i32 = (1 as i32); | error[E0308]: mismatched types --> $DIR/issue-105494.rs:5:19 | LL | let _v: i32 = (1 as i128).to_string(); | --- ^^^^^^^^^^^^^^^^^^^^^^^ expected `i32`, found struct `String` | | | expected due to this error[E0308]: mismatched types --> $DIR/issue-105494.rs:7:20 | LL | let _v: &str = \"foo\".to_string(); | ---- ^^^^^^^^^^^^^^^^^ expected `&str`, found struct `String` | | | expected due to this | help: try removing the method call | LL - let _v: &str = \"foo\".to_string(); LL + let _v: &str = \"foo\"; | error[E0308]: mismatched types --> $DIR/issue-105494.rs:14:12 | LL | let mut path: String = \"/usr\".to_string(); | ------ expected due to this type ... LL | path = format!(\"{}/{}\", path, folder).as_str(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `String`, found `&str` | help: try removing the method call | LL - path = format!(\"{}/{}\", path, folder).as_str(); LL + path = format!(\"{}/{}\", path, folder); | error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_105872"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-6b67df3bb7d7b83e3d705ae9003cee4bbdc9ac458e44294e85f43a28c4e4abae", "text": ") -> OperandRef<'tcx, Bx::Value> { let tcx = bx.tcx(); let mut span_to_caller_location = |span: Span| { let mut span_to_caller_location = |mut span: Span| { // Remove `Inlined` marks as they pollute `expansion_cause`. 
while span.is_inlined() { span.remove_mark(); } let topmost = span.ctxt().outer_expn().expansion_cause().unwrap_or(span); let caller = tcx.sess.source_map().lookup_char_pos(topmost.lo()); let const_loc = tcx.const_caller_location((", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-c61f8bb0fad8bc377606b089ef41ebda0503983d6c9f48c8accbcaa03d83947f", "text": "location } pub(crate) fn location_triple_for_span(&self, span: Span) -> (Symbol, u32, u32) { pub(crate) fn location_triple_for_span(&self, mut span: Span) -> (Symbol, u32, u32) { // Remove `Inlined` marks as they pollute `expansion_cause`. while span.is_inlined() { span.remove_mark(); } let topmost = span.ctxt().outer_expn().expansion_cause().unwrap_or(span); let caller = self.tcx.sess.source_map().lookup_char_pos(topmost.lo()); (", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-5df73485d1b86fe1d7af04491f4d18b20400a015ffc84065c6ee561962f6d6d0", "text": "pub fn fresh_expansion(self, expn_id: LocalExpnId) -> Span { HygieneData::with(|data| { self.with_ctxt(data.apply_mark( SyntaxContext::root(), self.ctxt(), expn_id.to_expn_id(), Transparency::Transparent, ))", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-7cdea07721f18aabb3ccb3dad6765e24ba6572bb814fbfaf74db08532a2bd159", "text": "// run-pass // revisions: default mir-opt //[default] compile-flags: -Zinline-mir=no //[mir-opt] compile-flags: -Zmir-opt-level=4 macro_rules! 
caller_location_from_macro {", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-0523094a1a5c6164148f6efbbcb6388c9221d4f672c777688c49c9aeafeb0b99", "text": "fn main() { let loc = core::panic::Location::caller(); assert_eq!(loc.file(), file!()); assert_eq!(loc.line(), 10); assert_eq!(loc.line(), 11); assert_eq!(loc.column(), 15); // `Location::caller()` in a macro should behave similarly to `file!` and `line!`, // i.e. point to where the macro was invoked, instead of the macro itself. let loc2 = caller_location_from_macro!(); assert_eq!(loc2.file(), file!()); assert_eq!(loc2.line(), 17); assert_eq!(loc2.line(), 18); assert_eq!(loc2.column(), 16); }", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-b623b0321a8a9302a1bc9c7660b5c91d4ff40a6a2da9cd7efa1657ffc03b7bd4", "query": "In debug mode, this prints But with optimizations, this prints: With optimizations and , this prints: This check in the MIR inliner is hiding this problem from the track-caller UI tests: label +A-mir-opt +A-mir-opt-inlining :\nNevermind that, the panic here is correct actually :D\nIgnoring changes in the standard library and optimization level at which MIR inliner is enabled, bisection points to cc\nWG-prioritization assigning priority (). label -I-prioritize +P-medium", "positive_passages": [{"docid": "doc-en-rust-dbde42a5a4d7f289f9694b0ba7db768b72f3d89c7f725433f34a0be77d6359fe", "text": " // run-pass // revisions: default mir-opt //[default] compile-flags: -Zinline-mir=no //[mir-opt] compile-flags: -Zmir-opt-level=4 use std::panic::Location; macro_rules! f { () => { Location::caller() }; } #[inline(always)] fn g() -> &'static Location<'static> { f!() } fn main() { let loc = g(); assert_eq!(loc.line(), 16); assert_eq!(loc.column(), 5); } ", "commid": "rust_pr_109307"}], "negative_passages": []} {"query_id": "q-en-rust-366c068360ca0468ceeef52960e6e472989744f7b1bb35639b1dd02bd619dd7e", "query": "Given the following The current output is: The second error is not reasonable, since we don't allow associated items in auto trait. 
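As background, a sketch of the pattern auto traits do allow (hypothetical names, nightly-only since it needs `feature(auto_traits)`): the auto trait itself stays empty, and any methods live on an ordinary trait, for example attached through a blanket impl.

#![feature(auto_traits)]

auto trait Marker {}

trait Behaviour {
    fn describe(&self) -> &'static str {
        "has the marker"
    }
}

// Every `Marker` type automatically gets `Behaviour` through this blanket impl.
impl<T: Marker + ?Sized> Behaviour for T {}

fn main() {
    assert_eq!(42u32.describe(), "has the marker");
}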
Ideally the output should look like: $DIR/issue-105732.rs:9:14 error[E0599]: no method named `g` found for reference `&Self` in the current scope --> $DIR/issue-105732.rs:10:14 | LL | self.g(); | ^ | = note: the following trait bounds were not satisfied: `Self: Foo` which is required by `&Self: Foo` `&Self: Foo` = help: items from traits can only be used if the type parameter is bounded by the trait help: the following trait defines an item `g`, perhaps you need to add a supertrait for it: | LL | trait Bar: Foo { | +++++ | ^ help: there is a method with a similar name: `f` error: aborting due to 2 previous errors", "commid": "rust_pr_105817"}], "negative_passages": []} {"query_id": "q-en-rust-d7d50257a9ca89e3c7535785553153f8c4353d0859383e13af96a63f2120b1bd", "query": "Spawned off of I tried this code: I expected to see this happen: I expected the to be mapped to the local paths in the installed component, rather than being left as raw Instead, this happened: is showing up in the output. remapped-tests-ui/errors/remap-path-prefix-sysroot.rs:LL:COL | LL | self.thread.join().unwrap(); | ^^^^^^^^^^^ ------ `self.thread` moved due to this method call | | | move occurs because `self.thread` has type `JoinHandle<()>`, which does not implement the `Copy` trait | note: `JoinHandle::::join` takes ownership of the receiver `self`, which moves `self.thread` --> remapped/library/std/src/thread/mod.rs:LL:COL | LL | pub fn join(self) -> Result { | ^^^^ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0507`. ", "commid": "rust_pr_129687"}], "negative_passages": []} {"query_id": "q-en-rust-d7d50257a9ca89e3c7535785553153f8c4353d0859383e13af96a63f2120b1bd", "query": "Spawned off of I tried this code: I expected to see this happen: I expected the to be mapped to the local paths in the installed component, rather than being left as raw Instead, this happened: is showing up in the output. $DIR/remap-path-prefix-sysroot.rs:LL:COL | LL | self.thread.join().unwrap(); | ^^^^^^^^^^^ ------ `self.thread` moved due to this method call | | | move occurs because `self.thread` has type `JoinHandle<()>`, which does not implement the `Copy` trait | note: `JoinHandle::::join` takes ownership of the receiver `self`, which moves `self.thread` --> $SRC_DIR_REAL/std/src/thread/mod.rs:LL:COL | LL | pub fn join(self) -> Result { | ^^^^ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0507`. ", "commid": "rust_pr_129687"}], "negative_passages": []} {"query_id": "q-en-rust-c3488a970ba40a795bb1a5d917c87ef5d97643bff1aa0c14c564264a3c60cd8e", "query": "we should suggest cloning. Also, we should suggest $DIR/issue-106443-sugg-clone-for-arg.rs:11:9 | LL | foo(s); | --- ^ expected struct `S`, found `&S` | | | arguments to this function are incorrect | note: function defined here --> $DIR/issue-106443-sugg-clone-for-arg.rs:7:4 | LL | fn foo(_: S) {} | ^^^ ---- help: consider using clone here | LL | foo(s.clone()); | ++++++++ error[E0308]: mismatched types --> $DIR/issue-106443-sugg-clone-for-arg.rs:17:9 | LL | bar(t); | --- ^ expected struct `T`, found `&T` | | | arguments to this function are incorrect | note: function defined here --> $DIR/issue-106443-sugg-clone-for-arg.rs:14:4 | LL | fn bar(_: T) {} | ^^^ ---- error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. 
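For reference, a minimal stand-alone sketch of the situation these diagnostics describe (the names are illustrative, not the test's exact code): passing a shared reference where the callee takes the value by ownership is a type mismatch, and cloning the referenced value is the usual fix when giving up ownership of the original is not an option.

#[derive(Clone)]
struct S;

fn foo(_: S) {}

fn main() {
    let s = S;
    let r = &s;
    // foo(r);        // error[E0308]: expected `S`, found `&S`
    foo(r.clone());   // ok: clones the pointed-to value and passes it by value
    foo(s);           // also ok: hand over ownership of the original instead
}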
", "commid": "rust_pr_106497"}], "negative_passages": []} {"query_id": "q-en-rust-c3488a970ba40a795bb1a5d917c87ef5d97643bff1aa0c14c564264a3c60cd8e", "query": "we should suggest cloning. Also, we should suggest $DIR/issue-106443-sugg-clone-for-bound.rs:10:9 | LL | foo(s); | ^ the trait `X` is not implemented for `&T` | help: consider further restricting this bound | LL | fn bar(s: &T) { | +++++++ help: consider using clone here | LL | foo(s.clone()); | ++++++++ error[E0277]: the trait bound `&T: X` is not satisfied --> $DIR/issue-106443-sugg-clone-for-bound.rs:14:9 | LL | foo(s); | ^ the trait `X` is not implemented for `&T` | help: consider using clone here | LL | foo(s.clone()); | ++++++++ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_106497"}], "negative_passages": []} {"query_id": "q-en-rust-24124e0750f95ae4f33cea03a94482025c719c9cbef0591808b327ee06349625", "query": "There is a latent bug in rustdoc that hangs in when is enabled.\nFrom the thread on :\nInterestingly, also gets stuck. I believe that runs the beta (non-parallel) rustdoc. So that'd mean that rustdoc gets stuck when documenting parallel rustc, not that parallel rustdoc gets stuck when documenting rustc.\nBoth of the PRs that are blocked by this issue are changes that change the ast.\nIt seems it's not an infinite loop, but just bad (exponential?) time complexity somewhere. suggested trying out what happens when explicitly implementing Send and Sync for the new ast node, and that indeed 'fixes' the problem in So this issue has almost nothing to do with parallel compiler. It occurs in regular non-parallel rustdoc. It's just that with , the ast is Send and Sync, and apparently that takes a huge amount of time to prove when documenting rustcmiddle. (Oli is investigating.)\nData point: Adding the following snippet to will not stop it from compiling successfully, even though documenting the crate will never stop. This makes me think the issue is very much rustdoc related, as it appears to get stuck on proving .\nWG-prioritization assigning priority () (note: seems not a regression) label -I-prioritize +P-high\ncc\nand , https://rust-\n[Edit] I'm trying to solve this problem and what I'm currently observing is that the does not store all the information when using rustdoc. This causes rustdoc to recalculate many times when collecting auto trait impl and blanket trait impl. I guess it may be that rustdoc does not calculate the trait implementation on some data structures (such as ) in advance, which resulting in too deep recursion\nA revealing phenomenon: The following struct in will cause rustdoc to block while proving that is : But when the generics are instantiated, it will not cause blocking: So I guess it is that causes the values in to not be reused. Because this cache needs inserting (predictions, ).\ndoes this happen even with disabled?\nIf so, this is probably related to . This is how we determine the paramenv:\nI can't reproduce this issue , including my local test Have there been any recent changes?", "positive_passages": [{"docid": "doc-en-rust-a0c9b5642d393ee4d7e0fc0eacc548c331e68f9f73466c85156b20b9051386d6", "text": "names: FxHashMap, } // FIXME: Rustdoc has trouble proving Send/Sync for this. See #106930. 
#[cfg(parallel_compiler)] unsafe impl Sync for FormatArguments {} #[cfg(parallel_compiler)] unsafe impl Send for FormatArguments {} impl FormatArguments { pub fn new() -> Self { Self {", "commid": "rust_pr_114321"}], "negative_passages": []} {"query_id": "q-en-rust-24124e0750f95ae4f33cea03a94482025c719c9cbef0591808b327ee06349625", "query": "There is a latent bug in rustdoc that hangs in when is enabled.\nFrom the thread on :\nInterestingly, also gets stuck. I believe that runs the beta (non-parallel) rustdoc. So that'd mean that rustdoc gets stuck when documenting parallel rustc, not that parallel rustdoc gets stuck when documenting rustc.\nBoth of the PRs that are blocked by this issue are changes that change the ast.\nIt seems it's not an infinite loop, but just bad (exponential?) time complexity somewhere. suggested trying out what happens when explicitly implementing Send and Sync for the new ast node, and that indeed 'fixes' the problem in So this issue has almost nothing to do with parallel compiler. It occurs in regular non-parallel rustdoc. It's just that with , the ast is Send and Sync, and apparently that takes a huge amount of time to prove when documenting rustcmiddle. (Oli is investigating.)\nData point: Adding the following snippet to will not stop it from compiling successfully, even though documenting the crate will never stop. This makes me think the issue is very much rustdoc related, as it appears to get stuck on proving .\nWG-prioritization assigning priority () (note: seems not a regression) label -I-prioritize +P-high\ncc\nand , https://rust-\n[Edit] I'm trying to solve this problem and what I'm currently observing is that the does not store all the information when using rustdoc. This causes rustdoc to recalculate many times when collecting auto trait impl and blanket trait impl. I guess it may be that rustdoc does not calculate the trait implementation on some data structures (such as ) in advance, which resulting in too deep recursion\nA revealing phenomenon: The following struct in will cause rustdoc to block while proving that is : But when the generics are instantiated, it will not cause blocking: So I guess it is that causes the values in to not be reused. Because this cache needs inserting (predictions, ).\ndoes this happen even with disabled?\nIf so, this is probably related to . This is how we determine the paramenv:\nI can't reproduce this issue , including my local test Have there been any recent changes?", "positive_passages": [{"docid": "doc-en-rust-671f2e6c5b862d2a5697b9a65b12e9cecfece874cec96d261cac2d00934c36c4", "text": "cx: &mut DocContext<'_>, item_def_id: DefId, ) -> impl Iterator { // FIXME: To be removed once `parallel_compiler` bugs are fixed! // More information in . if cfg!(parallel_compiler) { return vec![].into_iter().chain(vec![].into_iter()); } let auto_impls = cx .sess() .prof", "commid": "rust_pr_114321"}], "negative_passages": []} {"query_id": "q-en-rust-7044be7c4f48dc57245cdf885183ad0f3ac22deda98c2eee71adeb2c53d36cb9", "query": "This is one of those ICEs that are terribly hard to minimize because they happen cache-dependently, [update: see below for repro instructions]. If nothing else, notice that there is a panic while trying to print the query stack; it might be desirable to look into making that not happen so as to be able to get more information. 
\u001b[0m\u001b[0m$DIR/multiline-multipart-suggestion.rs:4:34\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0mfn short(foo_bar: &Vec<&i32>) -> &i32 { \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m----------\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9m^\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9mexpected named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m= \u001b[0m\u001b[0m\u001b[1mhelp\u001b[0m\u001b[0m: this function's return type contains a borrowed value, but the signature does not say which one of `foo_bar`'s 2 lifetimes it is borrowed from\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;14mhelp\u001b[0m\u001b[0m: consider introducing a named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0mfn short\u001b[0m\u001b[0m\u001b[38;5;10m<'a>\u001b[0m\u001b[0m(foo_bar: &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mVec<&\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32>) -> &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32 { \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m++++\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m++\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m++\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m++\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;9merror[E0106]\u001b[0m\u001b[0m\u001b[1m: missing lifetime specifier\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m--> \u001b[0m\u001b[0m$DIR/multiline-multipart-suggestion.rs:11:6\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0m foo_bar: &Vec<&i32>,\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m----------\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0m something_very_long_so_that_the_line_will_wrap_around__________: i32,\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0m) -> &i32 {\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9m^\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9mexpected named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m= \u001b[0m\u001b[0m\u001b[1mhelp\u001b[0m\u001b[0m: this function's return type contains a borrowed value, but the signature does not say which one of `foo_bar`'s 2 lifetimes it is borrowed from\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;14mhelp\u001b[0m\u001b[0m: consider introducing a named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m~ \u001b[0m\u001b[0mfn 
long\u001b[0m\u001b[0m\u001b[38;5;10m<'a>\u001b[0m\u001b[0m( \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m~ \u001b[0m\u001b[0m foo_bar: &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mVec<&\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32>,\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0m something_very_long_so_that_the_line_will_wrap_around__________: i32,\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m~ \u001b[0m\u001b[0m) -> &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32 {\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;9merror[E0106]\u001b[0m\u001b[0m\u001b[1m: missing lifetime specifier\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m--> \u001b[0m\u001b[0m$DIR/multiline-multipart-suggestion.rs:16:29\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m\u001b[0m \u001b[0m\u001b[0m foo_bar: &Vec<&i32>) -> &i32 {\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m| \u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m----------\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9m^\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;9mexpected named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m= \u001b[0m\u001b[0m\u001b[1mhelp\u001b[0m\u001b[0m: this function's return type contains a borrowed value, but the signature does not say which one of `foo_bar`'s 2 lifetimes it is borrowed from\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;14mhelp\u001b[0m\u001b[0m: consider introducing a named lifetime parameter\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m~ \u001b[0m\u001b[0mfn long2\u001b[0m\u001b[0m\u001b[38;5;10m<'a>\u001b[0m\u001b[0m( \u001b[0m\u001b[1m\u001b[38;5;12mLL\u001b[0m\u001b[0m \u001b[0m\u001b[0m\u001b[38;5;10m~ \u001b[0m\u001b[0m foo_bar: &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mVec<&\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32>) -> &\u001b[0m\u001b[0m\u001b[38;5;10m'a \u001b[0m\u001b[0mi32 {\u001b[0m \u001b[0m \u001b[0m\u001b[0m\u001b[1m\u001b[38;5;12m|\u001b[0m \u001b[0m\u001b[1m\u001b[38;5;9merror\u001b[0m\u001b[0m\u001b[1m: aborting due to 3 previous errors\u001b[0m \u001b[0m\u001b[1mFor more information about this error, try `rustc --explain E0106`.\u001b[0m ", "commid": "rust_pr_108627"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: /// This visitor is used to go through only the \"top level\" of a item and not enter any sub /// item while looking for a given `Ident` which is stored into `item` if found. struct OneLevelVisitor<'hir> { /// Get DefId of of an item's user-visible parent. /// /// \"User-visible\" should account for re-exporting and inlining, which is why this function isn't /// just `tcx.parent(def_id)`. If the provided `path` has more than one path element, the `DefId` /// of the second-to-last will be given. 
/// /// ```text /// use crate::foo::Bar; /// ^^^ DefId of this item will be returned /// ``` /// /// If the provided path has only one item, `tcx.parent(def_id)` will be returned instead. fn get_path_parent_def_id( tcx: TyCtxt<'_>, def_id: DefId, path: &hir::UsePath<'_>, ) -> Option { if let [.., parent_segment, _] = &path.segments { match parent_segment.res { hir::def::Res::Def(_, parent_def_id) => Some(parent_def_id), _ if parent_segment.ident.name == kw::Crate => { // In case the \"parent\" is the crate, it'll give `Res::Err` so we need to // circumvent it this way. Some(tcx.parent(def_id)) } _ => None, } } else { // If the path doesn't have a parent, then the parent is the current module. Some(tcx.parent(def_id)) } } /// This visitor is used to find an HIR Item based on its `use` path. This doesn't use the ordinary /// name resolver because it does not walk all the way through a chain of re-exports. pub(crate) struct OneLevelVisitor<'hir> { map: rustc_middle::hir::map::Map<'hir>, item: Option<&'hir hir::Item<'hir>>, pub(crate) item: Option<&'hir hir::Item<'hir>>, looking_for: Ident, target_def_id: LocalDefId, } impl<'hir> OneLevelVisitor<'hir> { fn new(map: rustc_middle::hir::map::Map<'hir>, target_def_id: LocalDefId) -> Self { pub(crate) fn new(map: rustc_middle::hir::map::Map<'hir>, target_def_id: LocalDefId) -> Self { Self { map, item: None, looking_for: Ident::empty(), target_def_id } } fn reset(&mut self, looking_for: Ident) { self.looking_for = looking_for; pub(crate) fn find_target( &mut self, tcx: TyCtxt<'_>, def_id: DefId, path: &hir::UsePath<'_>, ) -> Option<&'hir hir::Item<'hir>> { let parent_def_id = get_path_parent_def_id(tcx, def_id, path)?; let parent = self.map.get_if_local(parent_def_id)?; // We get the `Ident` we will be looking for into `item`. self.looking_for = path.segments[path.segments.len() - 1].ident; // We reset the `item`. self.item = None; match parent { hir::Node::Item(parent_item) => { hir::intravisit::walk_item(self, parent_item); } hir::Node::Crate(m) => { hir::intravisit::walk_mod( self, m, tcx.local_def_id_to_hir_id(parent_def_id.as_local().unwrap()), ); } _ => return None, } self.item } }", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: let def_id = if let [.., parent_segment, _] = &path.segments { match parent_segment.res { hir::def::Res::Def(_, def_id) => def_id, _ if parent_segment.ident.name == kw::Crate => { // In case the \"parent\" is the crate, it'll give `Res::Err` so we need to // circumvent it this way. tcx.parent(item.owner_id.def_id.to_def_id()) } _ => break, } } else { // If the path doesn't have a parent, then the parent is the current module. tcx.parent(item.owner_id.def_id.to_def_id()) }; let Some(parent) = hir_map.get_if_local(def_id) else { break }; // We get the `Ident` we will be looking for into `item`. 
let looking_for = path.segments[path.segments.len() - 1].ident; visitor.reset(looking_for); match parent { hir::Node::Item(parent_item) => { hir::intravisit::walk_item(&mut visitor, parent_item); } hir::Node::Crate(m) => { hir::intravisit::walk_mod( &mut visitor, m, tcx.local_def_id_to_hir_id(def_id.as_local().unwrap()), ); } _ => break, } if let Some(i) = visitor.item { if let Some(i) = visitor.find_target(tcx, item.owner_id.def_id.to_def_id(), path) { item = i; } else { break;", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: use crate::clean::{cfg::Cfg, AttributesExt, NestedAttributesExt}; use crate::clean::{cfg::Cfg, AttributesExt, NestedAttributesExt, OneLevelVisitor}; use crate::core; /// This module is used to store stuff from Rust's AST in a more convenient", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: , glob: bool, please_inline: bool, path: &hir::UsePath<'_>, ) -> bool { debug!(\"maybe_inline_local res: {:?}\", res);", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: if !please_inline && let mut visitor = OneLevelVisitor::new(self.cx.tcx.hir(), res_did) && let Some(item) = visitor.find_target(self.cx.tcx, def_id.to_def_id(), path) && let item_def_id = item.owner_id.def_id && item_def_id != def_id && self .cx .cache .effective_visibilities .is_directly_public(self.cx.tcx, item_def_id.to_def_id()) && !inherits_doc_hidden(self.cx.tcx, item_def_id) { // The imported item is public and not `doc(hidden)` so no need to inline it. return false; } let ret = match tcx.hir().get_by_def_id(res_did) { Node::Item(&hir::Item { kind: hir::ItemKind::Mod(ref m), .. }) if glob => { let prev = mem::replace(&mut self.inlining, true);", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: path, ) { continue; }", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ed142ea33ae82432ec6babb8c90603e998254f08abc445aa748744b512cfc070", "query": "When running with the above code, the docs list the following items: // This test ensures that the `struct.B.html` only exists in `a`: // since `a::B` is public (and inlined too), `self::a::B` doesn't // need to be inlined as well. #![crate_name = \"foo\"] pub mod a { // @has 'foo/a/index.html' // Should only contain \"Structs\". 
// @count - '//*[@id=\"main-content\"]//*[@class=\"item-table\"]' 1 // @has - '//*[@id=\"structs\"]' 'Structs' // @has - '//*[@id=\"main-content\"]//a[@href=\"struct.A.html\"]' 'A' // @has - '//*[@id=\"main-content\"]//a[@href=\"struct.B.html\"]' 'B' mod b { pub struct B; } pub use self::b::B; pub struct A; } // @has 'foo/index.html' // @!has - '//*[@id=\"structs\"]' 'Structs' // @has - '//*[@id=\"reexports\"]' 'Re-exports' // @has - '//*[@id=\"modules\"]' 'Modules' // @has - '//*[@id=\"main-content\"]//*[@id=\"reexport.A\"]' 'pub use self::a::A;' // @has - '//*[@id=\"main-content\"]//*[@id=\"reexport.B\"]' 'pub use self::a::B;' // Should only contain \"Modules\" and \"Re-exports\". // @count - '//*[@id=\"main-content\"]//*[@class=\"item-table\"]' 2 pub use self::a::{A, B}; ", "commid": "rust_pr_108870"}], "negative_passages": []} {"query_id": "q-en-rust-ba365e2a00d046a6340b03c6f34194dd4376efd387ba57aa5b0b598f735cdf13", "query": "No response No response It could also recommend using $DIR/missing-writer.rs:5:21 | LL | write!(\"{}_{}\", x, y); | ^ | help: you might be missing a string literal to format with | LL | write!(\"{}_{}\", \"{} {}\", x, y); | ++++++++ error: format argument must be a string literal --> $DIR/missing-writer.rs:11:23 | LL | writeln!(\"{}_{}\", x, y); | ^ | help: you might be missing a string literal to format with | LL | writeln!(\"{}_{}\", \"{} {}\", x, y); | ++++++++ error[E0599]: cannot write into `&'static str` --> $DIR/missing-writer.rs:5:12 | LL | write!(\"{}_{}\", x, y); | -------^^^^^^^------- method not found in `&str` | note: must implement `io::Write`, `fmt::Write`, or have a `write_fmt` method --> $DIR/missing-writer.rs:5:12 | LL | write!(\"{}_{}\", x, y); | ^^^^^^^ help: a writer is needed before this format string --> $DIR/missing-writer.rs:5:12 | LL | write!(\"{}_{}\", x, y); | ^ error[E0599]: cannot write into `&'static str` --> $DIR/missing-writer.rs:11:14 | LL | writeln!(\"{}_{}\", x, y); | ---------^^^^^^^------- method not found in `&str` | note: must implement `io::Write`, `fmt::Write`, or have a `write_fmt` method --> $DIR/missing-writer.rs:11:14 | LL | writeln!(\"{}_{}\", x, y); | ^^^^^^^ help: a writer is needed before this format string --> $DIR/missing-writer.rs:11:14 | LL | writeln!(\"{}_{}\", x, y); | ^ error: aborting due to 4 previous errors For more information about this error, try `rustc --explain E0599`. 
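For the `write!`/`writeln!` diagnostics above, the first macro argument has to be a writer rather than a bare format string. A minimal sketch of the intended usage, writing into a `String` through `std::fmt::Write`; the variable names are illustrative only.

```rust
use std::fmt::Write as _; // brings `write_fmt` into scope for `String`

fn main() -> std::fmt::Result {
    let x = 1;
    let y = 2;

    // The writer comes first, then the format string and its arguments.
    let mut out = String::new();
    write!(out, "{}_{}", x, y)?;
    writeln!(out, " (x = {x}, y = {y})")?;

    print!("{out}"); // "1_2 (x = 1, y = 2)\n"
    Ok(())
}
```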
", "commid": "rust_pr_109149"}], "negative_passages": []} {"query_id": "q-en-rust-ba365e2a00d046a6340b03c6f34194dd4376efd387ba57aa5b0b598f735cdf13", "query": "No response No response It could also recommend using $SRC_DIR/std/src/io/buffered/bufwriter.rs:LL:COL error[E0599]: the method `write_fmt` exists for struct `BufWriter<&dyn Write>`, but its trait bounds were not satisfied --> $DIR/mut-borrow-needed-by-trait.rs:21:5 --> $DIR/mut-borrow-needed-by-trait.rs:21:14 | LL | writeln!(fp, \"hello world\").unwrap(); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ method cannot be called on `BufWriter<&dyn Write>` due to unsatisfied trait bounds | ---------^^---------------- method cannot be called on `BufWriter<&dyn Write>` due to unsatisfied trait bounds --> $SRC_DIR/std/src/io/buffered/bufwriter.rs:LL:COL | = note: doesn't satisfy `BufWriter<&dyn std::io::Write>: std::io::Write` | note: must implement `io::Write`, `fmt::Write`, or have a `write_fmt` method --> $DIR/mut-borrow-needed-by-trait.rs:21:14 | LL | writeln!(fp, \"hello world\").unwrap(); | ^^ = note: the following trait bounds were not satisfied: `&dyn std::io::Write: std::io::Write` which is required by `BufWriter<&dyn std::io::Write>: std::io::Write` = note: this error originates in the macro `writeln` (in Nightly builds, run with -Z macro-backtrace for more info) error: aborting due to 3 previous errors", "commid": "rust_pr_109149"}], "negative_passages": []} {"query_id": "q-en-rust-f5b9a0bc1d631184bc499912b9af76d18cef51b7d34b53cba9122630d693d80d", "query": " $DIR/region-error-ice-109072.rs:8:9 | LL | impl Lt<'missing> for () { | - ^^^^^^^^ undeclared lifetime | | | help: consider introducing lifetime `'missing` here: `<'missing>` error[E0261]: use of undeclared lifetime name `'missing` --> $DIR/region-error-ice-109072.rs:9:15 | LL | type T = &'missing (); | ^^^^^^^^ undeclared lifetime | help: consider introducing lifetime `'missing` here | LL | type T<'missing> = &'missing (); | ++++++++++ help: consider introducing lifetime `'missing` here | LL | impl<'missing> Lt<'missing> for () { | ++++++++++ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0261`. ", "commid": "rust_pr_109165"}], "negative_passages": []} {"query_id": "q-en-rust-b74b057116a0bffe5f15addd9c9d0658af5f97e482ab80300fba31628f5b4c91", "query": "or perhaps The bad help isn't related to the choice of or (even though the former needs to be followed by to be a statement and the latter doesn't, which could explain the source of the hint); also generates the help to add . %20%3D%3E%20%7B%3B%7D%3B%0A%7D%0A%0Afn%20main()%20%7B%0A%20%20%20%20let%20%20%3D%20statement!()%3B%0A%7D%0A) No response modify labels +D-incorrect $DIR/issue-109237.rs:2:12 | LL | () => {;}; | ^ expected expression ... 
LL | let _ = statement!(); | ------------ in this macro invocation | = note: the macro call doesn't expand to an expression, but it can expand to a statement = note: this error originates in the macro `statement` (in Nightly builds, run with -Z macro-backtrace for more info) help: surround the macro invocation with `{}` to interpret the expansion as a statement | LL | let _ = { statement!(); }; | ~~~~~~~~~~~~~~~~~ error: aborting due to previous error ", "commid": "rust_pr_109251"}], "negative_passages": []} {"query_id": "q-en-rust-6e8f1a7912391a8fa51065740fb48908aebd61f7c62ff74fba4cc728bf4c6347", "query": " $DIR/bad-subst-const-kind.rs:8:31 | LL | impl Q for [u8; N] { | ^ expected `usize`, found `u64` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_109336"}], "negative_passages": []} {"query_id": "q-en-rust-6e8f1a7912391a8fa51065740fb48908aebd61f7c62ff74fba4cc728bf4c6347", "query": " $DIR/bad-const-wf-doesnt-specialize.rs:9:1 error: the constant `N` is not of type `usize` --> $DIR/bad-const-wf-doesnt-specialize.rs:8:29 | LL | impl Copy for S {} | -------------------------------- first implementation here LL | impl Copy for S {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `S<_>` | ^^^^ | note: required by a bound in `S` --> $DIR/bad-const-wf-doesnt-specialize.rs:6:10 | LL | struct S; | ^^^^^^^^^^^^^^ required by this bound in `S` error: aborting due to previous error For more information about this error, try `rustc --explain E0119`. ", "commid": "rust_pr_109336"}], "negative_passages": []} {"query_id": "q-en-rust-e7a41aab1f227efe8e17023646406eeab6473f4a5911158fb879a0008d5e03fd", "query": " $DIR/issue-109299-1.rs:10:40 | LL | struct Lexer(T); | --------------- associated item `Cursor` not found for this struct ... LL | type X = impl for Fn() -> Lexer::Cursor; | ^^^^^^ associated item not found in `Lexer` | = note: the associated type was found for - `Lexer` error: aborting due to previous error For more information about this error, try `rustc --explain E0220`. ", "commid": "rust_pr_109423"}], "negative_passages": []} {"query_id": "q-en-rust-e7a41aab1f227efe8e17023646406eeab6473f4a5911158fb879a0008d5e03fd", "query": " $DIR/issue-109299.rs:6:12 | LL | impl Lexer<'d> { | - ^^ undeclared lifetime | | | help: consider introducing lifetime `'d` here: `<'d>` error: aborting due to previous error For more information about this error, try `rustc --explain E0261`. ", "commid": "rust_pr_109423"}], "negative_passages": []} {"query_id": "q-en-rust-73e1db0a697af541c379380a95407d98a5a0d88cbcb78bbb071d3e33ff6d9a1d", "query": "Given Rustdoc does not escape the ASCII angle brackets for the generic argument list of the generic associated type projection leading to the following butchered HTML output: I've noticed this while working on rustdoc integration for in . It's possible that the refactoring PR will fix this (need to check) or at least make it more pleasant to fix. Assigning myself for now since I'm gonna fix this otherwise. 
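Relating to the E0261 diagnostics above (use of an undeclared lifetime in an impl): the lifetime has to be introduced on the impl header before it can be used in the trait arguments or an associated type. A small sketch of that fix; the `Lexer`/`Lt` names follow the diagnostic, but the bodies are invented for illustration.

```rust
#![allow(dead_code)]

struct Lexer<'d> {
    input: &'d str,
}

trait Lt<'a> {
    type T;
}

// Declaring `'d` on the impl itself makes it available both in the
// trait arguments and in the associated type.
impl<'d> Lt<'d> for Lexer<'d> {
    type T = &'d str;
}

fn main() {
    let lexer = Lexer { input: "hello" };
    println!("{}", lexer.input);
}
```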
claim label T-rustdoc A-rustdoc-ui F-genericassociatedtypes $DIR/variance-computation-requires-equality.rs:3:12 | LL | #![feature(inherent_associated_types)] | ^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: see issue #8995 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_121864"}], "negative_passages": []} {"query_id": "q-en-rust-406a2986ff0ccb500658ee5abeb43b92da770a2e1e06ad8161c217272f3e9cc5", "query": "I tried this code (): I expected to see this happen: rustc outputs an error saying that I need to add the and traits on . Instead, this happened: which is jarring to see, and unhelpful, because HashMap does support Index, but only with particular trait bounds on . $DIR/bad-index-due-to-nested.rs:20:5 | LL | map[k] | ^^^ the trait `Hash` is not implemented for `K` | note: required by a bound in ` as Index<&K>>` --> $DIR/bad-index-due-to-nested.rs:9:8 | LL | K: Hash, | ^^^^ required by this bound in ` as Index<&K>>` help: consider restricting type parameter `K` | LL | fn index<'a, K: std::hash::Hash, V>(map: &'a HashMap, k: K) -> &'a V { | +++++++++++++++++ error[E0277]: the trait bound `V: Copy` is not satisfied --> $DIR/bad-index-due-to-nested.rs:20:5 | LL | map[k] | ^^^ the trait `Copy` is not implemented for `V` | note: required by a bound in ` as Index<&K>>` --> $DIR/bad-index-due-to-nested.rs:10:8 | LL | V: Copy, | ^^^^ required by this bound in ` as Index<&K>>` help: consider restricting type parameter `V` | LL | fn index<'a, K, V: std::marker::Copy>(map: &'a HashMap, k: K) -> &'a V { | +++++++++++++++++++ error[E0308]: mismatched types --> $DIR/bad-index-due-to-nested.rs:20:9 | LL | fn index<'a, K, V>(map: &'a HashMap, k: K) -> &'a V { | - this type parameter LL | map[k] | ^ | | | expected `&K`, found type parameter `K` | help: consider borrowing here: `&k` | = note: expected reference `&K` found type parameter `K` error[E0308]: mismatched types --> $DIR/bad-index-due-to-nested.rs:20:5 | LL | fn index<'a, K, V>(map: &'a HashMap, k: K) -> &'a V { | - this type parameter ----- expected `&'a V` because of return type LL | map[k] | ^^^^^^ | | | expected `&V`, found type parameter `V` | help: consider borrowing here: `&map[k]` | = note: expected reference `&'a V` found type parameter `V` error: aborting due to 4 previous errors Some errors have detailed explanations: E0277, E0308. For more information about an error, try `rustc --explain E0277`. ", "commid": "rust_pr_110432"}], "negative_passages": []} {"query_id": "q-en-rust-824f7b9eadcdce26f6e58499850e17b5d0f94e087a574994ca60f79d57cdd3f0", "query": " $DIR/return-dyn-type-mismatch-2.rs:7:5 | LL | fn foo() -> dyn Trait | ------------ expected `(dyn Trait + 'static)` because of return type ... LL | 42 | ^^ expected `dyn Trait`, found integer | = note: expected trait object `(dyn Trait + 'static)` found type `{integer}` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. 
", "commid": "rust_pr_112215"}], "negative_passages": []} {"query_id": "q-en-rust-824f7b9eadcdce26f6e58499850e17b5d0f94e087a574994ca60f79d57cdd3f0", "query": " $DIR/return-dyn-type-mismatch.rs:15:21 | LL | fn other_func() -> dyn TestTrait { | ------------------------- expected `(dyn TestTrait + 'static)` because of return type LL | match Self::func() { LL | None => None, | ^^^^ expected `dyn TestTrait`, found `Option<_>` | = note: expected trait object `(dyn TestTrait + 'static)` found enum `Option<_>` error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_112215"}], "negative_passages": []} {"query_id": "q-en-rust-d9f992325ae58992e5f029b7cfd10fb70539f1bfa16317464cc0c205a2f06ac0", "query": "All existing standard library documentation implicitly assumes that the APIs are being used between the start of a Rust and end of . For example does not document any indication that the function would panic. It does not need to document that, because the function cannot panic, as long as the call occurs within the duration of . However it's possible to observe a panic like this: (Related PR and discussion: ) In general using the standard library from an callback, or before through a static constructor, is UB: according to \"we can't really guarantee anything specific happens [...]; at least not in a cross-platform way.\" Is this worth calling out centrally as a caveat to all other documentation of the standard library? At the top level of the whole crate (it would perhaps be more prominent than it deserves), at the module level, or in the Reference? Certainly for , , , the expectation users need to have is that nothing in there will work outside of . Are there APIs it makes sense to carve out as being permissible outside of ? Stuff like , , , , etc. We'd maybe need to do research into how constructors and atexit are being used in the wild. For example the crate relies on , , and to be usable before : It seems obvious that those things should work but there isn't documentation which guarantees it. I assume that makes the crate technically unsound as written.\nIMHO, core types should work in whatever context but libstd is more dicey. It more explicitly has a runtime or three (e.g. Rust's, libc's and the OS's). So at a minimum anything in the modules , , , , , , , etc should be regarded with suspicion unless documented otherwise. In summary, my thoughts are: is fine depends on the allocator used assumes code is running in main (except for the and possibly stuff)\nCore types can still panic and panics do IO.\nHm... how much of an issue is that? Panics don't have to do I/O (e.g. if stderr is closed) so if platforms don't support that before or after main then it can be silently skipped, no?\nBut do we do that properly today? And it's not just IO but also panic hooks which in turn can rely on various other parts of std.\nI think it would be fine to document that e.g. after main they may want to set an empty (or aborting) panic hook if panicking is possible. I really don't want to be in a situation where the user is in a worse place than if they'd just used .\nCould we turn panics into immediate aborts after main? That would also avoid the backtrace printing and symbolication.\nI looked through the standard library, and is the only function that has limitation on usage before/after main (in this case, this is only a problem during TLS destruction). 
This function is also called from , and , which can therefore panic when called during TLS destruction. and might be able to be reworked to avoid this, but this is fundamentally a limitation of since it requires a reference to the current to work. The stack guard used by the stack overflow handler also accesses the same TLS slot as , but since the stack bounds don't have a this could be moved to a separate to properly support stack overflow handling during TLS destruction.\nIf we're going to make a statement at all then imo we should start out with something more cautious than making solid guarantees since this would be a pretty wide-ranging promise. before/after main behavior is not tested. It is best-effort and if someone relies on it they should have their own tests core and alloc are expected to work because they don't touch OS APIs or global state caveat: user code in any hookable API like panic handlers, global allocators, OOM handler or any future hooks. Especially panics affect a lot of code. we might replace hooks after main? most existing parts of std, backtrace and other things interacting with the OS are expected to work but there may be platform-specific edge-cases and the behavior may change in the future due to changing implementation details or new features that depend more runtime state (e.g. a std threadpool or async runtime) then finally a list of known limitations\nProposed documentation based on my previous comment:", "positive_passages": [{"docid": "doc-en-rust-e059731fcb212445e7b1affaf5f889a806c148aadabe6fafd7375378950b28ea", "text": "//! contains further primitive shared memory types, including [`atomic`] and //! [`mpsc`], which contains the channel types for message passing. //! //! # Use before and after `main()` //! //! Many parts of the standard library are expected to work before and after `main()`; //! but this is not guaranteed or ensured by tests. It is recommended that you write your own tests //! and run them on each platform you wish to support. //! This means that use of `std` before/after main, especially of features that interact with the //! OS or global state, is exempted from stability and portability guarantees and instead only //! provided on a best-effort basis. Nevertheless bug reports are appreciated. //! //! On the other hand `core` and `alloc` are most likely to work in such environments with //! the caveat that any hookable behavior such as panics, oom handling or allocators will also //! depend on the compatibility of the hooks. //! //! Some features may also behave differently outside main, e.g. stdio could become unbuffered, //! some panics might turn into aborts, backtraces might not get symbolicated or similar. //! //! Non-exhaustive list of known limitations: //! //! - after-main use of thread-locals, which also affects additional features: //! - [`thread::current()`] //! - [`thread::scope()`] //! - [`sync::mpsc`] //! - before-main stdio file descriptors are not guaranteed to be open on unix platforms //! //! //! [I/O]: io //! [`MIN`]: i32::MIN //! [`MAX`]: i32::MAX", "commid": "rust_pr_115247"}], "negative_passages": []} {"query_id": "q-en-rust-d9f992325ae58992e5f029b7cfd10fb70539f1bfa16317464cc0c205a2f06ac0", "query": "All existing standard library documentation implicitly assumes that the APIs are being used between the start of a Rust and end of . For example does not document any indication that the function would panic. 
It does not need to document that, because the function cannot panic, as long as the call occurs within the duration of . However it's possible to observe a panic like this: (Related PR and discussion: ) In general using the standard library from an callback, or before through a static constructor, is UB: according to \"we can't really guarantee anything specific happens [...]; at least not in a cross-platform way.\" Is this worth calling out centrally as a caveat to all other documentation of the standard library? At the top level of the whole crate (it would perhaps be more prominent than it deserves), at the module level, or in the Reference? Certainly for , , , the expectation users need to have is that nothing in there will work outside of . Are there APIs it makes sense to carve out as being permissible outside of ? Stuff like , , , , etc. We'd maybe need to do research into how constructors and atexit are being used in the wild. For example the crate relies on , , and to be usable before : It seems obvious that those things should work but there isn't documentation which guarantees it. I assume that makes the crate technically unsound as written.\nIMHO, core types should work in whatever context but libstd is more dicey. It more explicitly has a runtime or three (e.g. Rust's, libc's and the OS's). So at a minimum anything in the modules , , , , , , , etc should be regarded with suspicion unless documented otherwise. In summary, my thoughts are: is fine depends on the allocator used assumes code is running in main (except for the and possibly stuff)\nCore types can still panic and panics do IO.\nHm... how much of an issue is that? Panics don't have to do I/O (e.g. if stderr is closed) so if platforms don't support that before or after main then it can be silently skipped, no?\nBut do we do that properly today? And it's not just IO but also panic hooks which in turn can rely on various other parts of std.\nI think it would be fine to document that e.g. after main they may want to set an empty (or aborting) panic hook if panicking is possible. I really don't want to be in a situation where the user is in a worse place than if they'd just used .\nCould we turn panics into immediate aborts after main? That would also avoid the backtrace printing and symbolication.\nI looked through the standard library, and is the only function that has limitation on usage before/after main (in this case, this is only a problem during TLS destruction). This function is also called from , and , which can therefore panic when called during TLS destruction. and might be able to be reworked to avoid this, but this is fundamentally a limitation of since it requires a reference to the current to work. The stack guard used by the stack overflow handler also accesses the same TLS slot as , but since the stack bounds don't have a this could be moved to a separate to properly support stack overflow handling during TLS destruction.\nIf we're going to make a statement at all then imo we should start out with something more cautious than making solid guarantees since this would be a pretty wide-ranging promise. before/after main behavior is not tested. It is best-effort and if someone relies on it they should have their own tests core and alloc are expected to work because they don't touch OS APIs or global state caveat: user code in any hookable API like panic handlers, global allocators, OOM handler or any future hooks. Especially panics affect a lot of code. we might replace hooks after main? 
most existing parts of std, backtrace and other things interacting with the OS are expected to work but there may be platform-specific edge-cases and the behavior may change in the future due to changing implementation details or new features that depend more runtime state (e.g. a std threadpool or async runtime) then finally a list of known limitations\nProposed documentation based on my previous comment:", "positive_passages": [{"docid": "doc-en-rust-366aec0b367c74bc1466d97196e8ad65fb71bfe07fecb94b8fa0d700979176fc", "text": "//! [rust-discord]: https://discord.gg/rust-lang //! [array]: prim@array //! [slice]: prim@slice // To run std tests without x.py without ending up with two copies of std, Miri needs to be // able to \"empty\" this crate. See . // rustc itself never sets the feature, so this line has no effect there.", "commid": "rust_pr_115247"}], "negative_passages": []} {"query_id": "q-en-rust-d5996c0fe580fe2ad695645b8668e02f2408543010fad1a5d51a9f3384c3ebe7", "query": "Tello-edu The says use my build failed with error $DIR/issue-111189-resolution-ice.rs:6:7 | LL | /// #[rustfmt::skip] | ^^^^^^^^^^^^^ no item named `rustfmt` in scope | note: the lint level is defined here --> $DIR/issue-111189-resolution-ice.rs:4:9 | LL | #![deny(warnings)] | ^^^^^^^^ = note: `#[deny(rustdoc::broken_intra_doc_links)]` implied by `#[deny(warnings)]` error: unresolved link to `clippy::whatever` --> $DIR/issue-111189-resolution-ice.rs:8:7 | LL | /// #[clippy::whatever] | ^^^^^^^^^^^^^^^^ no item named `clippy` in scope error: aborting due to 2 previous errors ", "commid": "rust_pr_111195"}], "negative_passages": []} {"query_id": "q-en-rust-8c13487a41858b11fd7042fd43ee9123f1e9429ee59b15208c6c930d9e7b9c01", "query": "Inlining introduces unsized temporaries. This is problematic since unsized temporaries fail to uphold alignment requirements as described in . For example:\nThe unsized locals alignment issue itself was fixed in .", "positive_passages": [{"docid": "doc-en-rust-970fd1cc8c33d4c90054806998f776e87033df96b0cb21cd600bd82f75683380", "text": ") -> Result, &'static str> { let callee_attrs = self.tcx.codegen_fn_attrs(callsite.callee.def_id()); self.check_codegen_attributes(callsite, callee_attrs)?; let terminator = caller_body[callsite.block].terminator.as_ref().unwrap(); let TerminatorKind::Call { args, destination, .. } = &terminator.kind else { bug!() }; let destination_ty = destination.ty(&caller_body.local_decls, self.tcx).ty; for arg in args { if !arg.ty(&caller_body.local_decls, self.tcx).is_sized(self.tcx, self.param_env) { // We do not allow inlining functions with unsized params. Inlining these functions // could create unsized locals, which are unsound and being phased out. return Err(\"Call has unsized argument\"); } } self.check_mir_is_available(caller_body, &callsite.callee)?; let callee_body = try_instance_mir(self.tcx, callsite.callee.def)?; self.check_mir_body(callsite, callee_body, callee_attrs)?;", "commid": "rust_pr_111424"}], "negative_passages": []} {"query_id": "q-en-rust-8c13487a41858b11fd7042fd43ee9123f1e9429ee59b15208c6c930d9e7b9c01", "query": "Inlining introduces unsized temporaries. This is problematic since unsized temporaries fail to uphold alignment requirements as described in . For example:\nThe unsized locals alignment issue itself was fixed in .", "positive_passages": [{"docid": "doc-en-rust-c36d21d666231ba574ef740974355c6f136da39d03d3eff6860a4f140cb718f9", "text": "// Check call signature compatibility. 
// Normally, this shouldn't be required, but trait normalization failure can create a // validation ICE. let terminator = caller_body[callsite.block].terminator.as_ref().unwrap(); let TerminatorKind::Call { args, destination, .. } = &terminator.kind else { bug!() }; let destination_ty = destination.ty(&caller_body.local_decls, self.tcx).ty; let output_type = callee_body.return_ty(); if !util::is_subtype(self.tcx, self.param_env, output_type, destination_ty) { trace!(?output_type, ?destination_ty);", "commid": "rust_pr_111424"}], "negative_passages": []} {"query_id": "q-en-rust-8c13487a41858b11fd7042fd43ee9123f1e9429ee59b15208c6c930d9e7b9c01", "query": "Inlining introduces unsized temporaries. This is problematic since unsized temporaries fail to uphold alignment requirements as described in . For example:\nThe unsized locals alignment issue itself was fixed in .", "positive_passages": [{"docid": "doc-en-rust-38c104a35a93aa55bd87f282d5da1ce3109f458debafcfe71f68fb2c4fc7111d", "text": " - // MIR for `caller` before Inline + // MIR for `caller` after Inline fn caller(_1: Box<[i32]>) -> () { debug x => _1; // in scope 0 at $DIR/unsized_argument.rs:+0:11: +0:12 let mut _0: (); // return place in scope 0 at $DIR/unsized_argument.rs:+0:26: +0:26 let _2: (); // in scope 0 at $DIR/unsized_argument.rs:+1:5: +1:15 let mut _3: std::boxed::Box<[i32]>; // in scope 0 at $DIR/unsized_argument.rs:+1:13: +1:14 let mut _4: (); // in scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 let mut _5: (); // in scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 let mut _6: (); // in scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 let mut _7: *const [i32]; // in scope 0 at $DIR/unsized_argument.rs:+1:13: +1:14 bb0: { StorageLive(_2); // scope 0 at $DIR/unsized_argument.rs:+1:5: +1:15 StorageLive(_3); // scope 0 at $DIR/unsized_argument.rs:+1:13: +1:14 _3 = move _1; // scope 0 at $DIR/unsized_argument.rs:+1:13: +1:14 _7 = (((_3.0: std::ptr::Unique<[i32]>).0: std::ptr::NonNull<[i32]>).0: *const [i32]); // scope 0 at $DIR/unsized_argument.rs:+1:5: +1:15 _2 = callee(move (*_7)) -> [return: bb3, unwind: bb4]; // scope 0 at $DIR/unsized_argument.rs:+1:5: +1:15 // mir::Constant // + span: $DIR/unsized_argument.rs:9:5: 9:11 // + literal: Const { ty: fn([i32]) {callee}, val: Value() } } bb1: { StorageDead(_3); // scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 StorageDead(_2); // scope 0 at $DIR/unsized_argument.rs:+1:15: +1:16 _0 = const (); // scope 0 at $DIR/unsized_argument.rs:+0:26: +2:2 return; // scope 0 at $DIR/unsized_argument.rs:+2:2: +2:2 } bb2 (cleanup): { resume; // scope 0 at $DIR/unsized_argument.rs:+0:1: +2:2 } bb3: { _4 = alloc::alloc::box_free::<[i32], std::alloc::Global>(move (_3.0: std::ptr::Unique<[i32]>), move (_3.1: std::alloc::Global)) -> bb1; // scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 // mir::Constant // + span: $DIR/unsized_argument.rs:9:14: 9:15 // + literal: Const { ty: unsafe fn(Unique<[i32]>, std::alloc::Global) {alloc::alloc::box_free::<[i32], std::alloc::Global>}, val: Value() } } bb4 (cleanup): { _6 = alloc::alloc::box_free::<[i32], std::alloc::Global>(move (_3.0: std::ptr::Unique<[i32]>), move (_3.1: std::alloc::Global)) -> [return: bb2, unwind terminate]; // scope 0 at $DIR/unsized_argument.rs:+1:14: +1:15 // mir::Constant // + span: $DIR/unsized_argument.rs:9:14: 9:15 // + literal: Const { ty: unsafe fn(Unique<[i32]>, std::alloc::Global) {alloc::alloc::box_free::<[i32], std::alloc::Global>}, val: Value() } } } ", "commid": "rust_pr_111424"}], "negative_passages": []} {"query_id": 
"q-en-rust-8c13487a41858b11fd7042fd43ee9123f1e9429ee59b15208c6c930d9e7b9c01", "query": "Inlining introduces unsized temporaries. This is problematic since unsized temporaries fail to uphold alignment requirements as described in . For example:\nThe unsized locals alignment issue itself was fixed in .", "positive_passages": [{"docid": "doc-en-rust-f5670384519111571a174627ed7a9b93d09232caf73a74c317418d17943db88f", "text": " // needs-unwind #![feature(unsized_fn_params)] #[inline(always)] fn callee(y: [i32]) {} // EMIT_MIR unsized_argument.caller.Inline.diff fn caller(x: Box<[i32]>) { callee(*x); } fn main() { let b = Box::new([1]); caller(b); } ", "commid": "rust_pr_111424"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' /// Whether `check_binop` allows overloaded operators to be invoked. /// Whether `check_binop` is part of an assignment or not. /// Used to know wether we allow user overloads and to print /// better messages on error. #[deriving(Eq)] enum AllowOverloadedOperatorsFlag { AllowOverloadedOperators, DontAllowOverloadedOperators, enum IsBinopAssignment{ SimpleBinop, BinopAssignment, } #[deriving(Clone)]", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ', allow_overloaded_operators: AllowOverloadedOperatorsFlag is_binop_assignment: IsBinopAssignment ) { let tcx = fcx.ccx.tcx;", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' // Check for overloaded operators if allowed. // Check for overloaded operators if not an assignment. 
let result_t; if allow_overloaded_operators == AllowOverloadedOperators { if is_binop_assignment == SimpleBinop { result_t = check_user_binop(fcx, callee_id, expr,", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' format!(\"binary operation {} cannot be applied to type `{}`\", ast_util::binop_to_str(op), actual) format!(\"binary assignment operation {}= cannot be applied to type `{}`\", ast_util::binop_to_str(op), actual) }, lhs_t, None); check_expr(fcx, rhs); result_t = ty::mk_err(); }", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' AllowOverloadedOperators); SimpleBinop); let lhs_ty = fcx.expr_ty(lhs); let rhs_ty = fcx.expr_ty(rhs);", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' DontAllowOverloadedOperators); BinopAssignment); let lhs_t = fcx.expr_ty(lhs); let result_t = fcx.expr_ty(expr);", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
struct Foo; fn main() { let mut a = Foo; let ref b = Foo; a += *b; //~ Error: binary assignment operation += cannot be applied to type `Foo` } ", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-95f082732098635a4d151bb61c59ae073a54343cd88a8f571c47e8398f3b977c", "query": "OS: Ubuntu 12.04 x86 rustc -v rustc 0.9-pre ( 2013-12-24 01:56:30 -0800) host: i686-unknown-linux-gnu Full error message: 30:14 error: binary operation + cannot be applied to type v += move; ^~~~~ error: internal compiler error: no type for node 81: expr move (id=81) in fcx 0xb3970ef8 This message reflects a bug in the Rust compiler. We would appreciate a bug report: task 'rustc' failed at 'explicit failure', task ' let x: |int| -> int = |ref x| { x += 1; }; //~ ERROR binary operation + cannot be applied to type `&int` let x: |int| -> int = |ref x| { x += 1; }; //~ ERROR binary assignment operation += cannot be applied to type `&int` }", "commid": "rust_pr_11449.0"}], "negative_passages": []} {"query_id": "q-en-rust-df8429ebdc1c0dee59154d5ea0327cb539a6c624a5e9af5af41a4a1d82cd3cd7", "query": "This is with Rust 1.69 but was probably always the case. Example: Compiling with prints Similarly when providing nothing changes. I didn't really follow how collects the libraries but I think it's via the query in which claims so this should be included AFAIU. Also the rationale of was to provide all libraries needed to link to the resulting , which is not the case in the above example. CC who originally implemented this feature.\nIt works as expected if the attribute is used in a dependencies. It seems to me that the function, is forgetting to add the current native libs. Doing something like this correctly adds to the output (even when passed via the command line) but I don't know if it's the right thing to do (cc\nYeah, I think adding is the correct fix. That adds the same information as does for dependencies.\nBeware that is rather quirky. Originally Rust printed lines during build with information which link flags need to be to use the static library it builds, and is a blunt way of redirecting these \"warnings\". I don't know whether omission of is a bug or the old warnings just weren't meant to print them.\nI can't imagine it to be a deliberate omission as is the only way to know how to correctly link any arbitrary staticlib afaik.", "positive_passages": [{"docid": "doc-en-rust-198696f4d7a070c1ecaea73c381f28d7f2e3690457f22f3bc25ce9d36bb792d1", "text": "} } all_native_libs.extend_from_slice(&codegen_results.crate_info.used_libraries); if sess.opts.prints.contains(&PrintRequest::NativeStaticLibs) { print_native_static_libs(sess, &all_native_libs, &all_rust_dylibs); }", "commid": "rust_pr_111675"}], "negative_passages": []} {"query_id": "q-en-rust-df8429ebdc1c0dee59154d5ea0327cb539a6c624a5e9af5af41a4a1d82cd3cd7", "query": "This is with Rust 1.69 but was probably always the case. Example: Compiling with prints Similarly when providing nothing changes. I didn't really follow how collects the libraries but I think it's via the query in which claims so this should be included AFAIU. Also the rationale of was to provide all libraries needed to link to the resulting , which is not the case in the above example. CC who originally implemented this feature.\nIt works as expected if the attribute is used in a dependencies. It seems to me that the function, is forgetting to add the current native libs. 
Doing something like this correctly adds to the output (even when passed via the command line) but I don't know if it's the right thing to do (cc\nYeah, I think adding is the correct fix. That adds the same information as does for dependencies.\nBeware that is rather quirky. Originally Rust printed lines during build with information which link flags need to be to use the static library it builds, and is a blunt way of redirecting these \"warnings\". I don't know whether omission of is a bug or the old warnings just weren't meant to print them.\nI can't imagine it to be a deliberate omission as is the only way to know how to correctly link any arbitrary staticlib afaik.", "positive_passages": [{"docid": "doc-en-rust-1c95cae72d04d25b42e9e9c9783402ea3d879991936db311d030afb43a6aa6ff", "text": " include ../tools.mk # ignore-cross-compile # ignore-wasm all: $(RUSTC) --crate-type rlib -lbar_cli bar.rs $(RUSTC) foo.rs -lfoo_cli --crate-type staticlib --print native-static-libs 2>&1 | grep 'note: native-static-libs: ' | sed 's/note: native-static-libs: (.*)/1/' > $(TMPDIR)/libs.txt cat $(TMPDIR)/libs.txt | grep -F \"glib-2.0\" # in bar.rs cat $(TMPDIR)/libs.txt | grep -F \"systemd\" # in foo.rs cat $(TMPDIR)/libs.txt | grep -F \"bar_cli\" cat $(TMPDIR)/libs.txt | grep -F \"foo_cli\" ", "commid": "rust_pr_111675"}], "negative_passages": []} {"query_id": "q-en-rust-df8429ebdc1c0dee59154d5ea0327cb539a6c624a5e9af5af41a4a1d82cd3cd7", "query": "This is with Rust 1.69 but was probably always the case. Example: Compiling with prints Similarly when providing nothing changes. I didn't really follow how collects the libraries but I think it's via the query in which claims so this should be included AFAIU. Also the rationale of was to provide all libraries needed to link to the resulting , which is not the case in the above example. CC who originally implemented this feature.\nIt works as expected if the attribute is used in a dependencies. It seems to me that the function, is forgetting to add the current native libs. Doing something like this correctly adds to the output (even when passed via the command line) but I don't know if it's the right thing to do (cc\nYeah, I think adding is the correct fix. That adds the same information as does for dependencies.\nBeware that is rather quirky. Originally Rust printed lines during build with information which link flags need to be to use the static library it builds, and is a blunt way of redirecting these \"warnings\". I don't know whether omission of is a bug or the old warnings just weren't meant to print them.\nI can't imagine it to be a deliberate omission as is the only way to know how to correctly link any arbitrary staticlib afaik.", "positive_passages": [{"docid": "doc-en-rust-7c3aea9bb41886c6fff89843a03643be3bf628c305beddea7bb4520bdd884194", "text": " #[no_mangle] pub extern \"C\" fn my_bar_add(left: i32, right: i32) -> i32 { // Obviously makes no sense but... unsafe { g_free(std::ptr::null_mut()); } left + right } #[link(name = \"glib-2.0\")] extern \"C\" { fn g_free(p: *mut ()); } ", "commid": "rust_pr_111675"}], "negative_passages": []} {"query_id": "q-en-rust-df8429ebdc1c0dee59154d5ea0327cb539a6c624a5e9af5af41a4a1d82cd3cd7", "query": "This is with Rust 1.69 but was probably always the case. Example: Compiling with prints Similarly when providing nothing changes. I didn't really follow how collects the libraries but I think it's via the query in which claims so this should be included AFAIU. 
Also the rationale of was to provide all libraries needed to link to the resulting , which is not the case in the above example. CC who originally implemented this feature.\nIt works as expected if the attribute is used in a dependencies. It seems to me that the function, is forgetting to add the current native libs. Doing something like this correctly adds to the output (even when passed via the command line) but I don't know if it's the right thing to do (cc\nYeah, I think adding is the correct fix. That adds the same information as does for dependencies.\nBeware that is rather quirky. Originally Rust printed lines during build with information which link flags need to be to use the static library it builds, and is a blunt way of redirecting these \"warnings\". I don't know whether omission of is a bug or the old warnings just weren't meant to print them.\nI can't imagine it to be a deliberate omission as is the only way to know how to correctly link any arbitrary staticlib afaik.", "positive_passages": [{"docid": "doc-en-rust-80ea86fb944658fa96c736d07ed7c34ff0b3b309d9a8506cf98396b656af2c3a", "text": " extern crate bar; #[no_mangle] pub extern \"C\" fn my_foo_add(left: i32, right: i32) -> i32 { // Obviously makes no sense but... unsafe { init(std::ptr::null_mut()); } bar::my_bar_add(left, right) } #[link(name = \"systemd\")] extern \"C\" { fn init(p: *mut ()); } ", "commid": "rust_pr_111675"}], "negative_passages": []} {"query_id": "q-en-rust-cee077875e5d993b993a007e90fa0cfda7cd2684359386da511d80b45eb15eef", "query": " $DIR/issue-111879-0.rs:11:25 | LL | pub type Focus = &'a mut User; | ^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_111887"}], "negative_passages": []} {"query_id": "q-en-rust-cee077875e5d993b993a007e90fa0cfda7cd2684359386da511d80b45eb15eef", "query": " $DIR/issue-111879-1.rs:12:1 | LL | fn main(_: for<'a> fn(Foo::Assoc)) {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ incorrect number of function parameters | = note: expected fn pointer `fn()` found fn pointer `fn(for<'a> fn(Foo::Assoc))` error: aborting due to previous error For more information about this error, try `rustc --explain E0580`. ", "commid": "rust_pr_111887"}], "negative_passages": []} {"query_id": "q-en-rust-0432eac4bc58f53b9e676d5045a4052d51d569cdd5f8a088dae6dd3615f18318", "query": " $DIR/issue-112225-2.rs:13:9 | LL | let x = Default::default(); | ^ ... LL | async { x.0; }, | - type must be known at this point | help: consider giving `x` an explicit type | LL | let x: /* Type */ = Default::default(); | ++++++++++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0282`. ", "commid": "rust_pr_112266"}], "negative_passages": []} {"query_id": "q-en-rust-257b55440433e1a20c4b2ad2bd62bce694d537356ff1f13531c7a93c3e29b0d1", "query": "Sometimes when changing compiler internals, I run to make sure I didn't break tools and codegen backends. This also checks rust-analyzer by default, which is not very useful, I cannot break that. If we want to keep that, it would at least be useful to put it last so I can Ctrl-C out of it. Currently, rustfmt is checked after rust-analyzer. 
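As a sketch of the `--print native-static-libs` behaviour discussed above: a `staticlib` crate that pulls in a native library through a `#[link]` attribute, and the invocation that prints which `-l` flags a consumer of the resulting archive still needs. The crate contents and the choice of `libm` are illustrative assumptions, not taken from the report.

```rust
// lib.rs — compile with:
//   rustc --crate-type staticlib --print native-static-libs lib.rs
// rustc then emits a `note: native-static-libs: ...` line listing the
// native libraries the final link of the .a archive requires; per the
// discussion above this should include `m` from the attribute below as
// well as anything passed via `-l` on the command line.

#[link(name = "m")] // libm, used purely as a stand-in native library
extern "C" {
    fn cos(x: f64) -> f64;
}

#[no_mangle]
pub extern "C" fn my_cos(x: f64) -> f64 {
    unsafe { cos(x) }
}
```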
$DIR/parse-error.rs:11:9 | LL | asm!(); | ^^^^^^ error: asm template must be a string literal --> $DIR/parse-error.rs:13:14 | LL | asm!(foo); | ^^^ error: expected token: `,` --> $DIR/parse-error.rs:15:19 | LL | asm!(\"{}\" foo); | ^^^ expected `,` error: expected operand, clobber_abi, options, or additional template string --> $DIR/parse-error.rs:17:20 | LL | asm!(\"{}\", foo); | ^^^ expected operand, clobber_abi, options, or additional template string error: expected `(`, found `foo` --> $DIR/parse-error.rs:19:23 | LL | asm!(\"{}\", in foo); | ^^^ expected `(` error: expected `)`, found `foo` --> $DIR/parse-error.rs:21:27 | LL | asm!(\"{}\", in(reg foo)); | ^^^ expected `)` error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:23:27 | LL | asm!(\"{}\", in(reg)); | ^ expected expression error: expected register class or explicit register --> $DIR/parse-error.rs:25:26 | LL | asm!(\"{}\", inout(=) foo => bar); | ^ error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:27:37 | LL | asm!(\"{}\", inout(reg) foo =>); | ^ expected expression error: expected one of `!`, `,`, `.`, `::`, `?`, `{`, or an operator, found `=>` --> $DIR/parse-error.rs:29:32 | LL | asm!(\"{}\", in(reg) foo => bar); | ^^ expected one of 7 possible tokens error: expected a path for argument to `sym` --> $DIR/parse-error.rs:31:24 | LL | asm!(\"{}\", sym foo + bar); | ^^^^^^^^^ error: expected one of `)`, `att_syntax`, `may_unwind`, `nomem`, `noreturn`, `nostack`, `preserves_flags`, `pure`, `raw`, or `readonly`, found `foo` --> $DIR/parse-error.rs:33:26 | LL | asm!(\"\", options(foo)); | ^^^ expected one of 10 possible tokens error: expected one of `)` or `,`, found `foo` --> $DIR/parse-error.rs:35:32 | LL | asm!(\"\", options(nomem foo)); | ^^^ expected one of `)` or `,` error: expected one of `)`, `att_syntax`, `may_unwind`, `nomem`, `noreturn`, `nostack`, `preserves_flags`, `pure`, `raw`, or `readonly`, found `foo` --> $DIR/parse-error.rs:37:33 | LL | asm!(\"\", options(nomem, foo)); | ^^^ expected one of 10 possible tokens error: at least one abi must be provided as an argument to `clobber_abi` --> $DIR/parse-error.rs:44:30 | LL | asm!(\"\", clobber_abi()); | ^ error: expected string literal --> $DIR/parse-error.rs:46:30 | LL | asm!(\"\", clobber_abi(foo)); | ^^^ not a string literal error: expected one of `)` or `,`, found `foo` --> $DIR/parse-error.rs:48:34 | LL | asm!(\"\", clobber_abi(\"C\" foo)); | ^^^ expected one of `)` or `,` error: expected string literal --> $DIR/parse-error.rs:50:35 | LL | asm!(\"\", clobber_abi(\"C\", foo)); | ^^^ not a string literal error: expected string literal --> $DIR/parse-error.rs:52:30 | LL | asm!(\"\", clobber_abi(1)); | ^ not a string literal error: expected string literal --> $DIR/parse-error.rs:54:30 | LL | asm!(\"\", clobber_abi(())); | ^ not a string literal error: expected string literal --> $DIR/parse-error.rs:56:30 | LL | asm!(\"\", clobber_abi(uwu)); | ^^^ not a string literal error: expected string literal --> $DIR/parse-error.rs:58:30 | LL | asm!(\"\", clobber_abi({})); | ^ not a string literal error: expected string literal --> $DIR/parse-error.rs:60:30 | LL | asm!(\"\", clobber_abi(loop {})); | ^^^^ not a string literal error: expected string literal --> $DIR/parse-error.rs:62:30 | LL | asm!(\"\", clobber_abi(if)); | ^^ not a string literal error: expected string literal --> $DIR/parse-error.rs:64:30 | LL | asm!(\"\", clobber_abi(do)); | ^^ not a string literal error: expected string literal --> 
$DIR/parse-error.rs:66:30 | LL | asm!(\"\", clobber_abi(<)); | ^ not a string literal error: expected string literal --> $DIR/parse-error.rs:68:30 | LL | asm!(\"\", clobber_abi(.)); | ^ not a string literal error: duplicate argument named `a` --> $DIR/parse-error.rs:76:36 | LL | asm!(\"{a}\", a = const foo, a = const bar); | ------------- ^^^^^^^^^^^^^ duplicate argument | | | previously here error: argument never used --> $DIR/parse-error.rs:76:36 | LL | asm!(\"{a}\", a = const foo, a = const bar); | ^^^^^^^^^^^^^ argument never used | = help: if this argument is intentionally unused, consider using it in an asm comment: `\"/* {1} */\"` error: expected one of `clobber_abi`, `const`, `in`, `inlateout`, `inout`, `lateout`, `options`, `out`, or `sym`, found `\"\"` --> $DIR/parse-error.rs:82:29 | LL | asm!(\"\", options(), \"\"); | ^^ expected one of 9 possible tokens error: expected one of `clobber_abi`, `const`, `in`, `inlateout`, `inout`, `lateout`, `options`, `out`, or `sym`, found `\"{}\"` --> $DIR/parse-error.rs:84:33 | LL | asm!(\"{}\", in(reg) foo, \"{}\", out(reg) foo); | ^^^^ expected one of 9 possible tokens error: asm template must be a string literal --> $DIR/parse-error.rs:86:14 | LL | asm!(format!(\"{{{}}}\", 0), in(reg) foo); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: asm template must be a string literal --> $DIR/parse-error.rs:88:21 | LL | asm!(\"{1}\", format!(\"{{{}}}\", 0), in(reg) foo, out(reg) bar); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: _ cannot be used for input operands --> $DIR/parse-error.rs:90:28 | LL | asm!(\"{}\", in(reg) _); | ^ error: _ cannot be used for input operands --> $DIR/parse-error.rs:92:31 | LL | asm!(\"{}\", inout(reg) _); | ^ error: _ cannot be used for input operands --> $DIR/parse-error.rs:94:35 | LL | asm!(\"{}\", inlateout(reg) _); | ^ error: requires at least a template string argument --> $DIR/parse-error.rs:101:1 | LL | global_asm!(); | ^^^^^^^^^^^^^ error: asm template must be a string literal --> $DIR/parse-error.rs:103:13 | LL | global_asm!(FOO); | ^^^ error: expected token: `,` --> $DIR/parse-error.rs:105:18 | LL | global_asm!(\"{}\" FOO); | ^^^ expected `,` error: expected operand, options, or additional template string --> $DIR/parse-error.rs:107:19 | LL | global_asm!(\"{}\", FOO); | ^^^ expected operand, options, or additional template string error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:109:24 | LL | global_asm!(\"{}\", const); | ^ expected expression error: expected one of `,`, `.`, `?`, or an operator, found `FOO` --> $DIR/parse-error.rs:111:30 | LL | global_asm!(\"{}\", const(reg) FOO); | ^^^ expected one of `,`, `.`, `?`, or an operator error: expected one of `)`, `att_syntax`, or `raw`, found `FOO` --> $DIR/parse-error.rs:113:25 | LL | global_asm!(\"\", options(FOO)); | ^^^ expected one of `)`, `att_syntax`, or `raw` error: expected one of `)`, `att_syntax`, or `raw`, found `nomem` --> $DIR/parse-error.rs:115:25 | LL | global_asm!(\"\", options(nomem FOO)); | ^^^^^ expected one of `)`, `att_syntax`, or `raw` error: expected one of `)`, `att_syntax`, or `raw`, found `nomem` --> $DIR/parse-error.rs:117:25 | LL | global_asm!(\"\", options(nomem, FOO)); | ^^^^^ expected one of `)`, `att_syntax`, or `raw` error: expected string literal --> $DIR/parse-error.rs:120:29 | LL | global_asm!(\"\", 
clobber_abi(FOO)); | ^^^ not a string literal error: expected one of `)` or `,`, found `FOO` --> $DIR/parse-error.rs:122:33 | LL | global_asm!(\"\", clobber_abi(\"C\" FOO)); | ^^^ expected one of `)` or `,` error: expected string literal --> $DIR/parse-error.rs:124:34 | LL | global_asm!(\"\", clobber_abi(\"C\", FOO)); | ^^^ not a string literal error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:126:19 | LL | global_asm!(\"{}\", clobber_abi(\"C\"), const FOO); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:128:28 | LL | global_asm!(\"\", options(), clobber_abi(\"C\")); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:130:30 | LL | global_asm!(\"{}\", options(), clobber_abi(\"C\"), const FOO); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:132:17 | LL | global_asm!(\"\", clobber_abi(\"C\"), clobber_abi(\"C\")); | ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ error: duplicate argument named `a` --> $DIR/parse-error.rs:134:35 | LL | global_asm!(\"{a}\", a = const FOO, a = const BAR); | ------------- ^^^^^^^^^^^^^ duplicate argument | | | previously here error: argument never used --> $DIR/parse-error.rs:134:35 | LL | global_asm!(\"{a}\", a = const FOO, a = const BAR); | ^^^^^^^^^^^^^ argument never used | = help: if this argument is intentionally unused, consider using it in an asm comment: `\"/* {1} */\"` error: expected one of `clobber_abi`, `const`, `options`, or `sym`, found `\"\"` --> $DIR/parse-error.rs:137:28 | LL | global_asm!(\"\", options(), \"\"); | ^^ expected one of `clobber_abi`, `const`, `options`, or `sym` error: expected one of `clobber_abi`, `const`, `options`, or `sym`, found `\"{}\"` --> $DIR/parse-error.rs:139:30 | LL | global_asm!(\"{}\", const FOO, \"{}\", const FOO); | ^^^^ expected one of `clobber_abi`, `const`, `options`, or `sym` error: asm template must be a string literal --> $DIR/parse-error.rs:141:13 | LL | global_asm!(format!(\"{{{}}}\", 0), const FOO); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: asm template must be a string literal --> $DIR/parse-error.rs:143:20 | LL | global_asm!(\"{1}\", format!(\"{{{}}}\", 0), const FOO, const BAR); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:39:37 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", options(), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:71:44 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", clobber_abi(\"C\"), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:74:55 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", options(), clobber_abi(\"C\"), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:76:31 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... 
LL | asm!(\"{a}\", a = const foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:76:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", a = const foo, a = const bar); | ^^^ non-constant value error: aborting due to 63 previous errors For more information about this error, try `rustc --explain E0435`. ", "commid": "rust_pr_112683"}], "negative_passages": []} {"query_id": "q-en-rust-2d22586e38f2bb2d0f3acc6cebf65c3f3d4edf9c1956422d291baf849d44c41d", "query": " $DIR/parse-error.rs:11:9 | LL | asm!(); | ^^^^^^ error: asm template must be a string literal --> $DIR/parse-error.rs:13:14 | LL | asm!(foo); | ^^^ error: expected token: `,` --> $DIR/parse-error.rs:15:19 | LL | asm!(\"{}\" foo); | ^^^ expected `,` error: expected operand, clobber_abi, options, or additional template string --> $DIR/parse-error.rs:17:20 | LL | asm!(\"{}\", foo); | ^^^ expected operand, clobber_abi, options, or additional template string error: expected `(`, found `foo` --> $DIR/parse-error.rs:19:23 | LL | asm!(\"{}\", in foo); | ^^^ expected `(` error: expected `)`, found `foo` --> $DIR/parse-error.rs:21:27 | LL | asm!(\"{}\", in(reg foo)); | ^^^ expected `)` error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:23:27 | LL | asm!(\"{}\", in(reg)); | ^ expected expression error: expected register class or explicit register --> $DIR/parse-error.rs:25:26 | LL | asm!(\"{}\", inout(=) foo => bar); | ^ error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:27:37 | LL | asm!(\"{}\", inout(reg) foo =>); | ^ expected expression error: expected one of `!`, `,`, `.`, `::`, `?`, `{`, or an operator, found `=>` --> $DIR/parse-error.rs:29:32 | LL | asm!(\"{}\", in(reg) foo => bar); | ^^ expected one of 7 possible tokens error: expected a path for argument to `sym` --> $DIR/parse-error.rs:31:24 | LL | asm!(\"{}\", sym foo + bar); | ^^^^^^^^^ error: expected one of `)`, `att_syntax`, `may_unwind`, `nomem`, `noreturn`, `nostack`, `preserves_flags`, `pure`, `raw`, or `readonly`, found `foo` --> $DIR/parse-error.rs:33:26 | LL | asm!(\"\", options(foo)); | ^^^ expected one of 10 possible tokens error: expected one of `)` or `,`, found `foo` --> $DIR/parse-error.rs:35:32 | LL | asm!(\"\", options(nomem foo)); | ^^^ expected one of `)` or `,` error: expected one of `)`, `att_syntax`, `may_unwind`, `nomem`, `noreturn`, `nostack`, `preserves_flags`, `pure`, `raw`, or `readonly`, found `foo` --> $DIR/parse-error.rs:37:33 | LL | asm!(\"\", options(nomem, foo)); | ^^^ expected one of 10 possible tokens error: at least one abi must be provided as an argument to `clobber_abi` --> $DIR/parse-error.rs:41:30 | LL | asm!(\"\", clobber_abi()); | ^ error: expected string literal --> $DIR/parse-error.rs:43:30 | LL | asm!(\"\", clobber_abi(foo)); | ^^^ not a string literal error: expected one of `)` or `,`, found `foo` --> $DIR/parse-error.rs:45:34 | LL | asm!(\"\", clobber_abi(\"C\" foo)); | ^^^ expected one of `)` or `,` error: expected string literal --> $DIR/parse-error.rs:47:35 | LL | asm!(\"\", clobber_abi(\"C\", foo)); | ^^^ not a string literal error: duplicate argument named `a` --> $DIR/parse-error.rs:54:36 | LL | asm!(\"{a}\", a = const foo, a = const bar); | ------------- ^^^^^^^^^^^^^ duplicate argument | | | previously here error: argument never used --> $DIR/parse-error.rs:54:36 | LL | asm!(\"{a}\", a = const foo, a = const bar); | 
^^^^^^^^^^^^^ argument never used | = help: if this argument is intentionally unused, consider using it in an asm comment: `\"/* {1} */\"` error: explicit register arguments cannot have names --> $DIR/parse-error.rs:59:18 | LL | asm!(\"\", a = in(\"eax\") foo); | ^^^^^^^^^^^^^^^^^ error: positional arguments cannot follow named arguments or explicit register arguments --> $DIR/parse-error.rs:65:36 | LL | asm!(\"{1}\", in(\"eax\") foo, const bar); | ------------- ^^^^^^^^^ positional argument | | | explicit register argument error: expected one of `clobber_abi`, `const`, `in`, `inlateout`, `inout`, `lateout`, `options`, `out`, or `sym`, found `\"\"` --> $DIR/parse-error.rs:68:29 | LL | asm!(\"\", options(), \"\"); | ^^ expected one of 9 possible tokens error: expected one of `clobber_abi`, `const`, `in`, `inlateout`, `inout`, `lateout`, `options`, `out`, or `sym`, found `\"{}\"` --> $DIR/parse-error.rs:70:33 | LL | asm!(\"{}\", in(reg) foo, \"{}\", out(reg) foo); | ^^^^ expected one of 9 possible tokens error: asm template must be a string literal --> $DIR/parse-error.rs:72:14 | LL | asm!(format!(\"{{{}}}\", 0), in(reg) foo); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: asm template must be a string literal --> $DIR/parse-error.rs:74:21 | LL | asm!(\"{1}\", format!(\"{{{}}}\", 0), in(reg) foo, out(reg) bar); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: _ cannot be used for input operands --> $DIR/parse-error.rs:76:28 | LL | asm!(\"{}\", in(reg) _); | ^ error: _ cannot be used for input operands --> $DIR/parse-error.rs:78:31 | LL | asm!(\"{}\", inout(reg) _); | ^ error: _ cannot be used for input operands --> $DIR/parse-error.rs:80:35 | LL | asm!(\"{}\", inlateout(reg) _); | ^ error: requires at least a template string argument --> $DIR/parse-error.rs:87:1 | LL | global_asm!(); | ^^^^^^^^^^^^^ error: asm template must be a string literal --> $DIR/parse-error.rs:89:13 | LL | global_asm!(FOO); | ^^^ error: expected token: `,` --> $DIR/parse-error.rs:91:18 | LL | global_asm!(\"{}\" FOO); | ^^^ expected `,` error: expected operand, options, or additional template string --> $DIR/parse-error.rs:93:19 | LL | global_asm!(\"{}\", FOO); | ^^^ expected operand, options, or additional template string error: expected expression, found end of macro arguments --> $DIR/parse-error.rs:95:24 | LL | global_asm!(\"{}\", const); | ^ expected expression error: expected one of `,`, `.`, `?`, or an operator, found `FOO` --> $DIR/parse-error.rs:97:30 | LL | global_asm!(\"{}\", const(reg) FOO); | ^^^ expected one of `,`, `.`, `?`, or an operator error: expected one of `)`, `att_syntax`, or `raw`, found `FOO` --> $DIR/parse-error.rs:99:25 | LL | global_asm!(\"\", options(FOO)); | ^^^ expected one of `)`, `att_syntax`, or `raw` error: expected one of `)`, `att_syntax`, or `raw`, found `nomem` --> $DIR/parse-error.rs:101:25 | LL | global_asm!(\"\", options(nomem FOO)); | ^^^^^ expected one of `)`, `att_syntax`, or `raw` error: expected one of `)`, `att_syntax`, or `raw`, found `nomem` --> $DIR/parse-error.rs:103:25 | LL | global_asm!(\"\", options(nomem, FOO)); | ^^^^^ expected one of `)`, `att_syntax`, or `raw` error: expected string literal --> $DIR/parse-error.rs:106:29 | LL | global_asm!(\"\", clobber_abi(FOO)); | ^^^ not a string literal error: expected one of `)` or `,`, found `FOO` --> $DIR/parse-error.rs:108:33 
| LL | global_asm!(\"\", clobber_abi(\"C\" FOO)); | ^^^ expected one of `)` or `,` error: expected string literal --> $DIR/parse-error.rs:110:34 | LL | global_asm!(\"\", clobber_abi(\"C\", FOO)); | ^^^ not a string literal error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:112:19 | LL | global_asm!(\"{}\", clobber_abi(\"C\"), const FOO); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:114:28 | LL | global_asm!(\"\", options(), clobber_abi(\"C\")); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:116:30 | LL | global_asm!(\"{}\", options(), clobber_abi(\"C\"), const FOO); | ^^^^^^^^^^^^^^^^ error: `clobber_abi` cannot be used with `global_asm!` --> $DIR/parse-error.rs:118:17 | LL | global_asm!(\"\", clobber_abi(\"C\"), clobber_abi(\"C\")); | ^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^ error: duplicate argument named `a` --> $DIR/parse-error.rs:120:35 | LL | global_asm!(\"{a}\", a = const FOO, a = const BAR); | ------------- ^^^^^^^^^^^^^ duplicate argument | | | previously here error: argument never used --> $DIR/parse-error.rs:120:35 | LL | global_asm!(\"{a}\", a = const FOO, a = const BAR); | ^^^^^^^^^^^^^ argument never used | = help: if this argument is intentionally unused, consider using it in an asm comment: `\"/* {1} */\"` error: expected one of `clobber_abi`, `const`, `options`, or `sym`, found `\"\"` --> $DIR/parse-error.rs:123:28 | LL | global_asm!(\"\", options(), \"\"); | ^^ expected one of `clobber_abi`, `const`, `options`, or `sym` error: expected one of `clobber_abi`, `const`, `options`, or `sym`, found `\"{}\"` --> $DIR/parse-error.rs:125:30 | LL | global_asm!(\"{}\", const FOO, \"{}\", const FOO); | ^^^^ expected one of `clobber_abi`, `const`, `options`, or `sym` error: asm template must be a string literal --> $DIR/parse-error.rs:127:13 | LL | global_asm!(format!(\"{{{}}}\", 0), const FOO); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error: asm template must be a string literal --> $DIR/parse-error.rs:129:20 | LL | global_asm!(\"{1}\", format!(\"{{{}}}\", 0), const FOO, const BAR); | ^^^^^^^^^^^^^^^^^^^^ | = note: this error originates in the macro `format` (in Nightly builds, run with -Z macro-backtrace for more info) error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:39:37 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", options(), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:49:44 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", clobber_abi(\"C\"), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:52:55 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... LL | asm!(\"{}\", options(), clobber_abi(\"C\"), const foo); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:54:31 | LL | let mut foo = 0; | ----------- help: consider using `const` instead of `let`: `const foo` ... 
LL | asm!(\"{a}\", a = const foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:54:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", a = const foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:61:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", in(\"eax\") foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:63:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", in(\"eax\") foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/parse-error.rs:65:42 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{1}\", in(\"eax\") foo, const bar); | ^^^ non-constant value error: aborting due to 59 previous errors For more information about this error, try `rustc --explain E0435`. ", "commid": "rust_pr_112683"}], "negative_passages": []} {"query_id": "q-en-rust-2d22586e38f2bb2d0f3acc6cebf65c3f3d4edf9c1956422d291baf849d44c41d", "query": " $DIR/x86_64_parse_error.rs:11:18 | LL | asm!(\"\", a = in(\"eax\") foo); | ^^^^^^^^^^^^^^^^^ error: positional arguments cannot follow named arguments or explicit register arguments --> $DIR/x86_64_parse_error.rs:17:36 | LL | asm!(\"{1}\", in(\"eax\") foo, const bar); | ------------- ^^^^^^^^^ positional argument | | | explicit register argument error[E0435]: attempt to use a non-constant value in a constant --> $DIR/x86_64_parse_error.rs:13:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", in(\"eax\") foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/x86_64_parse_error.rs:15:46 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{a}\", in(\"eax\") foo, a = const bar); | ^^^ non-constant value error[E0435]: attempt to use a non-constant value in a constant --> $DIR/x86_64_parse_error.rs:17:42 | LL | let mut bar = 0; | ----------- help: consider using `const` instead of `let`: `const bar` ... LL | asm!(\"{1}\", in(\"eax\") foo, const bar); | ^^^ non-constant value error: aborting due to 5 previous errors For more information about this error, try `rustc --explain E0435`. ", "commid": "rust_pr_112683"}], "negative_passages": []} {"query_id": "q-en-rust-1acfb5f6c4ceba2b753deaee7612c0beda0a4c8cda6910d681cfb1c2273c0514", "query": "This code previously compiled fine, but as of ~2 weeks ago it began to fail on nightly and has continued to do so since. p.s. Is the output of suppose to be one day behind compared to the toolchain name? In any case, was the last working version, and onward trigger the error. modify labels: +regression-from-stable-to-nightly -regression-untriaged\ncc\nI can put up a fix for this. I suspected this might cause an issue but couldn't think how.", "positive_passages": [{"docid": "doc-en-rust-2f41e0f9b6bcc7d3f2e8b489f59cac8279ec27e72e7d71691f4be3bdcb9435e2", "text": "// They can denote both statically and dynamically-sized byte arrays. 
let mut pat_ty = ty; if let hir::ExprKind::Lit(Spanned { node: ast::LitKind::ByteStr(..), .. }) = lt.kind { if let ty::Ref(_, inner_ty, _) = *self.structurally_resolved_type(span, expected).kind() && self.structurally_resolved_type(span, inner_ty).is_slice() let expected = self.structurally_resolved_type(span, expected); if let ty::Ref(_, inner_ty, _) = expected.kind() && matches!(inner_ty.kind(), ty::Slice(_)) { let tcx = self.tcx; trace!(?lt.hir_id.local_id, \"polymorphic byte string lit\");", "commid": "rust_pr_113007"}], "negative_passages": []} {"query_id": "q-en-rust-1acfb5f6c4ceba2b753deaee7612c0beda0a4c8cda6910d681cfb1c2273c0514", "query": "This code previously compiled fine, but as of ~2 weeks ago it began to fail on nightly and has continued to do so since. p.s. Is the output of suppose to be one day behind compared to the toolchain name? In any case, was the last working version, and onward trigger the error. modify labels: +regression-from-stable-to-nightly -regression-untriaged\ncc\nI can put up a fix for this. I suspected this might cause an issue but couldn't think how.", "positive_passages": [{"docid": "doc-en-rust-142cbd2f441760273ac422bdd177f4553f113cdf8cdaf0c705fc1104388d6759", "text": " // check-pass fn load() -> Option { todo!() } fn main() { while let Some(tag) = load() { match &tag { b\"NAME\" => {} b\"DATA\" => {} _ => {} } } } ", "commid": "rust_pr_113007"}], "negative_passages": []} {"query_id": "q-en-rust-1acfb5f6c4ceba2b753deaee7612c0beda0a4c8cda6910d681cfb1c2273c0514", "query": "This code previously compiled fine, but as of ~2 weeks ago it began to fail on nightly and has continued to do so since. p.s. Is the output of suppose to be one day behind compared to the toolchain name? In any case, was the last working version, and onward trigger the error. modify labels: +regression-from-stable-to-nightly -regression-untriaged\ncc\nI can put up a fix for this. I suspected this might cause an issue but couldn't think how.", "positive_passages": [{"docid": "doc-en-rust-e05e3bbffe787a87195036424df449c0cf0e582aae929fc699d1dbb200a1e6d5", "text": "// compile-flags: -Ztrait-solver=next // check-pass // known-bug: rust-lang/trait-system-refactor-initiative#38 fn test(s: &[u8]) { match &s[0..3] {", "commid": "rust_pr_113007"}], "negative_passages": []} {"query_id": "q-en-rust-1acfb5f6c4ceba2b753deaee7612c0beda0a4c8cda6910d681cfb1c2273c0514", "query": "This code previously compiled fine, but as of ~2 weeks ago it began to fail on nightly and has continued to do so since. p.s. Is the output of suppose to be one day behind compared to the toolchain name? In any case, was the last working version, and onward trigger the error. modify labels: +regression-from-stable-to-nightly -regression-untriaged\ncc\nI can put up a fix for this. I suspected this might cause an issue but couldn't think how.", "positive_passages": [{"docid": "doc-en-rust-91bed283a7f7c6b994b03f5ef924283ba06b1d3bd1c2f99eda093c1c127b8cf4", "text": " error[E0271]: type mismatch resolving `[u8; 3] <: as SliceIndex<[u8]>>::Output` --> $DIR/slice-match-byte-lit.rs:6:9 | LL | match &s[0..3] { | -------- this expression has type `& as SliceIndex<[u8]>>::Output` LL | b\"uwu\" => {} | ^^^^^^ types differ error: aborting due to previous error For more information about this error, try `rustc --explain E0271`. 
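As a user-level illustration of the byte-string-pattern regression tracked by the records above (fixed in rust_pr_113007), here is a small sketch in the spirit of the check-pass test: matching a borrowed byte array against byte-string literal patterns must keep compiling even while the scrutinee's type is still being resolved. The concrete return type `Option<[u8; 4]>` is an assumption made only to keep the sketch self-contained.

fn load() -> Option<[u8; 4]> {
    // Stand-in for whatever produced the tag in the original report.
    Some(*b"NAME")
}

fn main() {
    while let Some(tag) = load() {
        // `&tag` is a `&[u8; 4]`; the patterns below are `&[u8; 4]` byte-string
        // literals, so this match must type-check. The fix resolves the expected
        // type before deciding whether the literal should be typed as a slice.
        match &tag {
            b"NAME" => {}
            b"DATA" => {}
            _ => break,
        }
    }
}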
", "commid": "rust_pr_113007"}], "negative_passages": []} {"query_id": "q-en-rust-a1f9e48424a28f1044908f1054404e5894533e8a9513858884452eb91ab78d95", "query": " $DIR/recursive-type-2.rs:11:1 | LL | fn main() { | ^^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0391`. ", "commid": "rust_pr_119323"}], "negative_passages": []} {"query_id": "q-en-rust-a7a7a05e2453994fc047bf893d5547ffec4f0be50f1937883768e4d0afbdbef0", "query": " $DIR/recursive-type-binding.rs:11:1 | LL | fn main() { | ^^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0391`. ", "commid": "rust_pr_119323"}], "negative_passages": []} {"query_id": "q-en-rust-a7a7a05e2453994fc047bf893d5547ffec4f0be50f1937883768e4d0afbdbef0", "query": " $DIR/recursive-type-coercion-from-never.rs:14:1 | LL | fn main() { | ^^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0391`. ", "commid": "rust_pr_119323"}], "negative_passages": []} {"query_id": "q-en-rust-a7a7a05e2453994fc047bf893d5547ffec4f0be50f1937883768e4d0afbdbef0", "query": " $DIR/infer-var-for-self-param.rs:5:14 | LL | let _ = (Default::default(),); | ^^^^^^^^^^^^^^^^ cannot call associated function of trait | help: use a fully-qualified path to a specific available implementation | LL | let _ = (::default(),); | +++++++++++++++++++ + error: aborting due to previous error For more information about this error, try `rustc --explain E0790`. ", "commid": "rust_pr_113651"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-1a71604bfd57d1faa92388168017a0cc276e044e362016071949636b49a601c1", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
use std::io; use std::vec; pub struct Container<'a> { reader: &'a mut Reader //~ ERROR explicit lifetime bound required } impl<'a> Container<'a> { pub fn wrap<'s>(reader: &'s mut Reader) -> Container<'s> { Container { reader: reader } } pub fn read_to(&mut self, vec: &mut [u8]) { self.reader.read(vec); } } pub fn for_stdin<'a>() -> Container<'a> { let mut r = io::stdin(); Container::wrap(&mut r as &mut Reader) } fn main() { let mut c = for_stdin(); let mut v = vec::Vec::from_elem(10, 0u8); c.read_to(v.as_mut_slice()); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-f6b8a865d478d697d0fe915083bcb9190b7a963854eca91dd247702e7429a4e8", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn blah() -> int { //~ ERROR not all control paths return a value 1i ; //~ NOTE consider removing this semicolon: } fn main() { } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-538ef0579806c1cd6cb4132aec4123ea6b80f2714da15bb250b2fbcc594e3780", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(struct_variant)] mod a { pub enum Enum { EnumStructVariant { x: u8, y: u8, z: u8 } } pub fn get_enum_struct_variant() -> () { EnumStructVariant { x: 1, y: 2, z: 3 } //~^ ERROR mismatched types: expected `()`, found `a::Enum` (expected (), found enum a::Enum) } } mod b { mod test { use a; fn test_enum_struct_variant() { let enum_struct_variant = ::a::get_enum_struct_variant(); match enum_struct_variant { a::EnumStructVariant { x, y, z } => { //~^ ERROR error: mismatched types: expected `()`, found a structure pattern } } } } } fn main() {} ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-981b13079f89610cb3380d7510b27ea87becc9cffe9e97da3d35926af069623e", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { let mut array = [1, 2, 3]; //~^ ERROR cannot determine a type for this local variable: cannot determine the type of this integ let pie_slice = array.slice(1, 2); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! 
flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-e4f0d7ab599171ebf7a247e1f6171be04c916806753bad3d3706cade46851d3d", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub fn foo(params: Option<&[&str]>) -> uint { params.unwrap().head().unwrap().len() } fn main() { let name = \"Foo\"; let msg = foo(Some(&[name.as_slice()])); //~^ ERROR mismatched types: expected `core::option::Option<&[&str]>` assert_eq!(msg, 3); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-2030b41956f1ac39fb2b551ebc6c5c90f2952cc30e33b3eb995a31630c9caf97", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { let v = &[]; //~ ERROR cannot determine a type for this local variable: unconstrained type let it = v.iter(); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! 
flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-235f3ce6fd3365a857a117f30c0a12690726a1aef3783508005c55dbc99fc383", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:explicit failure pub fn main() { fail!(); println!(\"{}\", 1i); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-2ee95bfe2a013b85a0bbbacbb8d9f9be332f6959bab06d607257453a4d55ba31", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:bad input fn main() { Some(\"foo\").unwrap_or(fail!(\"bad input\")).to_string(); } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-e534b90f5239087cc5cbda64b706cf483c4d23cd3617a475aab1bfa016eff022", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub mod two_tuple { trait T {} struct P<'a>(&'a T + 'a, &'a T + 'a); pub fn f<'a>(car: &'a T, cdr: &'a T) -> P<'a> { P(car, cdr) } } pub mod two_fields { trait T {} struct P<'a> { car: &'a T + 'a, cdr: &'a T + 'a } pub fn f<'a>(car: &'a T, cdr: &'a T) -> P<'a> { P{ car: car, cdr: cdr } } } fn main() {} ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-c79d4cad607d207620ee146367a32294c631c94113b787f23f428a2a62e802ee", "query": "Here is a very simple program which employs trait objects: : : : When this program is compiled and run, it fails with segmentation fault: This program is not correct. You can see that function produces a for arbitrary lifetime , but the local variable is valid only for body. A pointer to , however, is saved to the , and when it is accessed inside , a segmentation fault occurs. However, Rust still compiles it. I thought that borrowing checker should prevent this kind of errors. BTW, I can write or and it will compile either way, but I thought that when going from plain pointers to trait objects I should always specify exact kind of trait object. Not sure that I'm right here thought.\ncc\nSounds similar to ; potential duplicate?\nThis appears to be fixed: Original example code updated to current rust: : : :\nAwesome! flagging as needstest (not sure if one already exists)", "positive_passages": [{"docid": "doc-en-rust-1bc6d485bcab4deadb19de23dd6f11a5c83c4318e84ec74712df47e324969d38", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { if true { proc(_) {} } else { proc(_: &mut ()) {} }; } ", "commid": "rust_pr_17199.0"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? 
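The trait-object records above (the old Container/Reader segfault report and the rust_pr_17199.0 compile-fail tests) are about a borrow that must not escape the function owning the data. Below is a rough modern-syntax sketch of the sound shape of that API, written from the old test code rather than taken from it; `dyn std::io::Read` stands in for the pre-1.0 `Reader` trait, and the in-memory reader in `main` is only there to keep the example runnable.

use std::io::Read;

pub struct Container<'a> {
    // The explicit `+ 'a` bound ties the trait object's lifetime to the borrow,
    // which is what the old "explicit lifetime bound required" error enforced.
    reader: &'a mut (dyn Read + 'a),
}

impl<'a> Container<'a> {
    pub fn wrap(reader: &'a mut dyn Read) -> Container<'a> {
        Container { reader }
    }

    pub fn read_to(&mut self, buf: &mut [u8]) -> std::io::Result<usize> {
        self.reader.read(buf)
    }
}

fn main() -> std::io::Result<()> {
    // The reader is owned by the caller, so the container can never outlive it;
    // the original `for_stdin` returned a container borrowing a local, which is
    // exactly the pattern the borrow checker now rejects.
    let mut data: &[u8] = b"hello world";
    let mut c = Container::wrap(&mut data);
    let mut buf = [0u8; 5];
    let n = c.read_to(&mut buf)?;
    println!("read {n} bytes: {:?}", &buf[..n]);
    Ok(())
}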
If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-c93742c01435b298dec3a3f4ffcb977ddf28d4046a7e9993eeceacfe5d3ceacd", "text": "} _ => {} } // If someone calls a const fn, they can extract that call out into a separate constant (or a const // block in the future), so we check that to tell them that in the diagnostic. Does not affect typeck. let is_const_fn = match element.kind { // If someone calls a const fn or constructs a const value, they can extract that // out into a separate constant (or a const block in the future), so we check that // to tell them that in the diagnostic. Does not affect typeck. let is_constable = match element.kind { hir::ExprKind::Call(func, _args) => match *self.node_ty(func.hir_id).kind() { ty::FnDef(def_id, _) => tcx.is_const_fn(def_id), _ => false, ty::FnDef(def_id, _) if tcx.is_const_fn(def_id) => traits::IsConstable::Fn, _ => traits::IsConstable::No, }, _ => false, hir::ExprKind::Path(qpath) => { match self.typeck_results.borrow().qpath_res(&qpath, element.hir_id) { Res::Def(DefKind::Ctor(_, CtorKind::Const), _) => traits::IsConstable::Ctor, _ => traits::IsConstable::No, } } _ => traits::IsConstable::No, }; // If the length is 0, we don't create any elements, so we don't copy any. If the length is 1, we // don't copy that one element, we move it. Only check for Copy if the length is larger. if count.try_eval_target_usize(tcx, self.param_env).map_or(true, |len| len > 1) { let lang_item = self.tcx.require_lang_item(LangItem::Copy, None); let code = traits::ObligationCauseCode::RepeatElementCopy { is_const_fn }; let code = traits::ObligationCauseCode::RepeatElementCopy { is_constable, elt_type: element_ty, elt_span: element.span, elt_stmt_span: self .tcx .hir() .parent_iter(element.hir_id) .find_map(|(_, node)| match node { hir::Node::Item(it) => Some(it.span), hir::Node::Stmt(stmt) => Some(stmt.span), _ => None, }) .expect(\"array repeat expressions must be inside an item or statement\"), }; self.require_type_meets(element_ty, element.span, code, lang_item); } }", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? 
If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-9a3d8af1526c29d904e9d018f4f90ec1ef41262e8457f48336314b97f6606120", "text": "InlineAsmSized, /// `[expr; N]` requires `type_of(expr): Copy`. RepeatElementCopy { /// If element is a `const fn` we display a help message suggesting to move the /// function call to a new `const` item while saying that `T` doesn't implement `Copy`. is_const_fn: bool, /// If element is a `const fn` or const ctor we display a help message suggesting /// to move it to a new `const` item while saying that `T` doesn't implement `Copy`. is_constable: IsConstable, elt_type: Ty<'tcx>, elt_span: Span, /// Span of the statement/item in which the repeat expression occurs. We can use this to /// place a `const` declaration before it elt_stmt_span: Span, }, /// Types of fields (other than the last, except for packed structs) in a struct must be sized.", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-c8f096fe624823ffc5da71769dbca012ba890e15c37fea544213e2b0d2150df8", "text": "TypeAlias(InternedObligationCauseCode<'tcx>, Span, DefId), } /// Whether a value can be extracted into a const. /// Used for diagnostics around array repeat expressions. #[derive(Copy, Clone, Debug, PartialEq, Eq, HashStable, TyEncodable, TyDecodable)] pub enum IsConstable { No, /// Call to a const fn Fn, /// Use of a const ctor Ctor, } crate::TrivialTypeTraversalAndLiftImpls! { IsConstable, } /// The 'location' at which we try to perform HIR-based wf checking. /// This information is used to obtain an `hir::Ty`, which /// we can walk in order to obtain precise spans for any", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. 
One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-d4fb988ffa9ab1a523ae801f87119fce0d38643641c1b5ba3be55d3afb810156", "text": "use rustc_infer::infer::type_variable::{TypeVariableOrigin, TypeVariableOriginKind}; use rustc_infer::infer::{DefineOpaqueTypes, InferOk, LateBoundRegionConversionTime}; use rustc_middle::hir::map; use rustc_middle::traits::IsConstable; use rustc_middle::ty::error::TypeError::{self, Sorts}; use rustc_middle::ty::{ self, suggest_arbitrary_trait_bound, suggest_constraining_type_param, AdtKind,", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? 
If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-f0afcb08dbd6be256b094d8d512c8d07c51963425aefeaa69a959065e8c5dd43", "text": ")); } } ObligationCauseCode::RepeatElementCopy { is_const_fn } => { ObligationCauseCode::RepeatElementCopy { is_constable, elt_type, elt_span, elt_stmt_span } => { err.note( \"the `Copy` trait is required because this value will be copied for each element of the array\", ); if is_const_fn { err.help( \"consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position, like `const VAL: Type = const_fn();` and `let x = [VAL; 42];`\", ); let value_kind = match is_constable { IsConstable::Fn => Some(\"the result of the function call\"), IsConstable::Ctor => Some(\"the result of the constructor\"), _ => None }; let sm = tcx.sess.source_map(); if let Some(value_kind) = value_kind && let Ok(snip) = sm.span_to_snippet(elt_span) { let help_msg = format!( \"consider creating a new `const` item and initializing it with {value_kind} to be used in the repeat position\"); let indentation = sm.indentation_before(elt_stmt_span).unwrap_or_default(); err.multipart_suggestion(help_msg, vec![ (elt_stmt_span.shrink_to_lo(), format!(\"const ARRAY_REPEAT_VALUE: {elt_type} = {snip};n{indentation}\")), (elt_span, \"ARRAY_REPEAT_VALUE\".to_string()) ], Applicability::MachineApplicable); } if self.tcx.sess.is_nightly_build() && is_const_fn { if self.tcx.sess.is_nightly_build() && matches!(is_constable, IsConstable::Fn|IsConstable::Ctor) { err.help( \"create an inline `const` block, see RFC #2920 for more information\",", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? 
If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-5734842eadc6031254eeeab93379eb5162f4632469c112b6cc976e9f2bb4bf1d", "text": "| = note: required for `Option` to implement `Copy` = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position, like `const VAL: Type = const_fn();` and `let x = [VAL; 42];` = help: create an inline `const` block, see RFC #2920 for more information help: consider annotating `Bar` with `#[derive(Copy)]` | LL + #[derive(Copy)] LL | struct Bar; | help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position | LL ~ const ARRAY_REPEAT_VALUE: Option = no_copy(); LL ~ let _: [Option; 2] = [ARRAY_REPEAT_VALUE; 2]; | error: aborting due to previous error", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-806d3af03a5089fb114cd4fb06a0558ebe0784ea5214e16ca4c71b82a2c6e112", "text": "LL | #[derive(Copy, Clone)] | ^^^^ unsatisfied trait bound introduced in this `derive` macro = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position, like `const VAL: Type = const_fn();` and `let x = [VAL; 42];` = help: create an inline `const` block, see RFC #2920 for more information = note: this error originates in the derive macro `Copy` (in Nightly builds, run with -Z macro-backtrace for more info) help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position | LL ~ const ARRAY_REPEAT_VALUE: Foo = Foo(String::new()); LL ~ [ARRAY_REPEAT_VALUE; 4]; | error: aborting due to previous error", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. 
One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-27abea0331d1e2ba5e33bd7275e4be14534f986024a69583d926a6426f652d7f", "text": " static _MAYBE_STRINGS: [Option; 5] = [None; 5]; //~^ ERROR the trait bound `String: Copy` is not satisfied fn main() { // should hint to create an inline `const` block // or to create a new `const` item let strings: [String; 5] = [String::new(); 5]; let _strings: [String; 5] = [String::new(); 5]; //~^ ERROR the trait bound `String: Copy` is not satisfied let _maybe_strings: [Option; 5] = [None; 5]; //~^ ERROR the trait bound `String: Copy` is not satisfied println!(\"{:?}\", strings); }", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-aa216a4d72dd7620cc9f7da4be0f3a5a5569266c7e4a06837b181e2ff15a8eb4", "query": "I'm basing this suggestion on two things. One, that if you follow it, the code compiles: and two, that the diagnostic help you get when you try to use a in the repeat position suggests something very similar, see \"other cases.\" It would require the compiler to know that a value that is attempted to be repeated is able though, which I don't know whether it can, but I figured I'd suggest this, and you can reject it if it isn't possible to do. Alternatively, the help text could say something like \"if the value can be made , consider creating [...]\" I was inspired to make this issue based on the output you get when you try to use a as an array initializer. For instance, the code produces the output No response\nhey why im getting this error message all over my github repository ! whenever i commit some changes, The first changes i able to create pull request but after some other changes im not able to do that why im getting these error ! Please help\nI don't think your comment is related to this issue? 
If this is a problem you've encountered while trying to fix this issue, please reach out on instead.", "positive_passages": [{"docid": "doc-en-rust-5c052349a2e92c0c9acd0909e17469e330fb2a98a19809c6c56ab69ee2ecff7a", "text": "error[E0277]: the trait bound `String: Copy` is not satisfied --> $DIR/const-fn-in-vec.rs:4:33 --> $DIR/const-fn-in-vec.rs:1:47 | LL | let strings: [String; 5] = [String::new(); 5]; | ^^^^^^^^^^^^^ the trait `Copy` is not implemented for `String` LL | static _MAYBE_STRINGS: [Option; 5] = [None; 5]; | ^^^^ the trait `Copy` is not implemented for `String` | = note: required for `Option` to implement `Copy` = note: the `Copy` trait is required because this value will be copied for each element of the array = help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position, like `const VAL: Type = const_fn();` and `let x = [VAL; 42];` = help: create an inline `const` block, see RFC #2920 for more information help: consider creating a new `const` item and initializing it with the result of the constructor to be used in the repeat position | LL + const ARRAY_REPEAT_VALUE: Option = None; LL ~ static _MAYBE_STRINGS: [Option; 5] = [ARRAY_REPEAT_VALUE; 5]; | error[E0277]: the trait bound `String: Copy` is not satisfied --> $DIR/const-fn-in-vec.rs:7:34 | LL | let _strings: [String; 5] = [String::new(); 5]; | ^^^^^^^^^^^^^ the trait `Copy` is not implemented for `String` | = note: the `Copy` trait is required because this value will be copied for each element of the array = help: create an inline `const` block, see RFC #2920 for more information help: consider creating a new `const` item and initializing it with the result of the function call to be used in the repeat position | LL ~ const ARRAY_REPEAT_VALUE: String = String::new(); LL ~ let _strings: [String; 5] = [ARRAY_REPEAT_VALUE; 5]; | error[E0277]: the trait bound `String: Copy` is not satisfied --> $DIR/const-fn-in-vec.rs:9:48 | LL | let _maybe_strings: [Option; 5] = [None; 5]; | ^^^^ the trait `Copy` is not implemented for `String` | = note: required for `Option` to implement `Copy` = note: the `Copy` trait is required because this value will be copied for each element of the array = help: create an inline `const` block, see RFC #2920 for more information help: consider creating a new `const` item and initializing it with the result of the constructor to be used in the repeat position | LL ~ const ARRAY_REPEAT_VALUE: Option = None; LL ~ let _maybe_strings: [Option; 5] = [ARRAY_REPEAT_VALUE; 5]; | error: aborting due to previous error error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0277`.", "commid": "rust_pr_113925"}], "negative_passages": []} {"query_id": "q-en-rust-ae820a70e765bfedc54e6c09f78a14a8caa0d0a6213f4348fa2e0004c534e206", "query": " $DIR/no-inline-literals-out-of-range.rs:4:24 | LL | format_args!(\"{}\", 0x8f_i8); // issue #115423 | ^^^^^^^ | = note: the literal `0x8f_i8` (decimal `143`) does not fit into the type `i8` and will become `-113i8` = note: `#[deny(overflowing_literals)]` on by default help: consider using the type `u8` instead | LL | format_args!(\"{}\", 0x8f_u8); // issue #115423 | ~~~~~~~ help: to use as a negative number (decimal `-113`), consider using the type `u8` for the literal and cast it to `i8` | LL | format_args!(\"{}\", 0x8f_u8 as i8); // issue #115423 | ~~~~~~~~~~~~~ error: literal out of range for `u8` --> $DIR/no-inline-literals-out-of-range.rs:6:24 | LL | 
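Editorial note on the `[expr; N]` diagnostic discussed in the records above: the machine-applicable suggestion hoists the non-`Copy` repeat value into a `const` item, which the language accepts in the repeat position because a constant is evaluated anew for each element. A minimal sketch of the resulting code (the `ARRAY_REPEAT_VALUE` name mirrors the suggestion in the quoted stderr; this is an illustration, not part of any diff above):

```rust
// Sketch of the fix the diagnostic suggests: `Option<String>` is not `Copy`,
// but a path to a `const` item is accepted in the array repeat position.
const ARRAY_REPEAT_VALUE: Option<String> = None;

fn main() {
    let maybe_strings: [Option<String>; 5] = [ARRAY_REPEAT_VALUE; 5];
    assert!(maybe_strings.iter().all(|s| s.is_none()));
}
```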
format_args!(\"{}\", 0xffff_ffff_u8); // issue #116633 | ^^^^^^^^^^^^^^ help: consider using the type `u32` instead: `0xffff_ffff_u32` | = note: the literal `0xffff_ffff_u8` (decimal `4294967295`) does not fit into the type `u8` and will become `255u8` error: literal out of range for `usize` --> $DIR/no-inline-literals-out-of-range.rs:8:24 | LL | format_args!(\"{}\", 0xffff_ffff_ffff_ffff_ffff_usize); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the literal `0xffff_ffff_ffff_ffff_ffff_usize` (decimal `1208925819614629174706175`) does not fit into the type `usize` and will become `18446744073709551615usize` error: literal out of range for `isize` --> $DIR/no-inline-literals-out-of-range.rs:10:24 | LL | format_args!(\"{}\", 0x8000_0000_0000_0000_isize); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the literal `0x8000_0000_0000_0000_isize` (decimal `9223372036854775808`) does not fit into the type `isize` and will become `-9223372036854775808isize` error: literal out of range for `i32` --> $DIR/no-inline-literals-out-of-range.rs:12:24 | LL | format_args!(\"{}\", 0xffff_ffff); // treat unsuffixed literals as i32 | ^^^^^^^^^^^ | = note: the literal `0xffff_ffff` (decimal `4294967295`) does not fit into the type `i32` and will become `-1i32` = help: consider using the type `u32` instead help: to use as a negative number (decimal `-1`), consider using the type `u32` for the literal and cast it to `i32` | LL | format_args!(\"{}\", 0xffff_ffffu32 as i32); // treat unsuffixed literals as i32 | ~~~~~~~~~~~~~~~~~~~~~ error: aborting due to 5 previous errors ", "commid": "rust_pr_123935"}], "negative_passages": []} {"query_id": "q-en-rust-4a8637df5a22d3f556c08d695e4a61ccf2ef5b51bbc15b6848513fcdd67dcfe5", "query": " $DIR/statement-attribute-validation.rs:8:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:13:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:16:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:21:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:24:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:27:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:30:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:33:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: expected one of `(`, `,`, `::`, or `=`, found `-` --> $DIR/statement-attribute-validation.rs:36:16 | LL | #[allow(two-words)] | ^ expected one of `(`, `,`, `::`, or `=` error: aborting due to 9 previous errors ", "commid": "rust_pr_117092"}], "negative_passages": []} {"query_id": "q-en-rust-5ccc57192d29a4b37d8b57474671d040c0d8a19b87129709f3e10fc68d3efafd", "query": "snippet: Version information Command: $DIR/async-unwrap-suggestion.rs:5:16 | LL | return Ok(6); | ^^^^^ 
expected `i32`, found `Result<{integer}, _>` | = note: expected type `i32` found enum `Result<{integer}, _>` error[E0308]: mismatched types --> $DIR/async-unwrap-suggestion.rs:15:16 | LL | return s; | ^ expected `i32`, found `Result<{integer}, _>` | = note: expected type `i32` found enum `Result<{integer}, _>` help: consider using `Result::expect` to unwrap the `Result<{integer}, _>` value, panicking if the value is a `Result::Err` | LL | return s.expect(\"REASON\"); | +++++++++++++++++ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_117152"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-2b6c21dae8e97f3a809e786cc909055a24de6d92be38a615b3b1562f693396f4", "text": "(\"macro_registrar\", Active), (\"log_syntax\", Active), (\"trace_macros\", Active), (\"simd\", Active), // These are used to test this portion of the compiler, they don't actually // mean anything", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-d361d7982083c8599313f542ba59b77f020916572534e799be58455f9757c25b", "text": "} } ast::ItemStruct(..) => { if attr::contains_name(i.attrs, \"simd\") { self.gate_feature(\"simd\", i.span, \"SIMD types are experimental and possibly buggy\"); } } _ => {} }", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-e07a7e4520fba5df2a5e48d96ba853b46368b3c5ed42e97c1247421f976869ca", "text": "html_favicon_url = \"http://www.rust-lang.org/favicon.ico\", html_root_url = \"http://static.rust-lang.org/doc/master\")]; #[feature(macro_rules, globs, asm, managed_boxes, thread_local, link_args)]; #[feature(macro_rules, globs, asm, managed_boxes, thread_local, link_args, simd)]; // Don't link to std. We are std. 
#[no_std]; #[deny(non_camel_case_types)]; #[deny(missing_doc)]; #[allow(unknown_features)]; // When testing libstd, bring in libuv as the I/O backend so tests can print // things and all of the std::io tests have an I/O interface to run on top", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-a00949b5156a9bdb82aa770e1673980bbd5da333b3c4d1abd76f426bef677256", "text": "#[allow(non_camel_case_types)]; #[experimental] #[simd] pub struct i8x16(i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8, i8); #[experimental] #[simd] pub struct i16x8(i16, i16, i16, i16, i16, i16, i16, i16); #[experimental] #[simd] pub struct i32x4(i32, i32, i32, i32); #[experimental] #[simd] pub struct i64x2(i64, i64); #[experimental] #[simd] pub struct u8x16(u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8, u8); #[experimental] #[simd] pub struct u16x8(u16, u16, u16, u16, u16, u16, u16, u16); #[experimental] #[simd] pub struct u32x4(u32, u32, u32, u32); #[experimental] #[simd] pub struct u64x2(u64, u64); #[experimental] #[simd] pub struct f32x4(f32, f32, f32, f32); #[experimental] #[simd] pub struct f64x2(f64, f64);", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-073539fdab5a94b55341b71b0b2cc61e0d8f6b469dba2a365007de708ad56854", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #[simd] pub struct i64x2(i64, i64); //~ ERROR: SIMD types are experimental fn main() {} No newline at end of file", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-1aa171def1caf30e63eec6d8868e0df9a329d3cf29e944939f490f26fccc7db6", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// xfail-test FIXME #11741 tuple structs ignore stability attributes #[deny(experimental)]; use std::unstable::simd; fn main() { let _ = simd::i64x2(0, 0); //~ ERROR: experimental } ", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-296b3c4e8b2d1068de09e15abc924d1b7012ec9e8d8f9cf8b8a8908e0b6a60f1", "text": " #[feature(simd)]; #[simd] struct vec4(T, T, T, T); //~ ERROR SIMD vector cannot be generic", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-d2c29efb5f088414bc0fc937c514a12fc9c77fd7a3449d78e8287df0a40533a1", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. #[allow(experimental)]; use std::unstable::simd::{i32x4, f32x4}; fn test_int(e: i32) -> i32 {", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-1b59ef5cc98fd65231ac26fa328d3a9d511b1b774043a7606f2fb6f4628dfd63", "query": "Our support isn't very complete at all, and details of the interface may change.\nNominating.\nAccepted for P-backcompat-lang.\nI'm working on this", "positive_passages": [{"docid": "doc-en-rust-c72797c1a104285be06b986035a3d21ec4acc777b2878820e36d6655c4744512", "text": " // xfail-fast feature doesn't work #[feature(simd)]; #[simd] struct RGBA { r: f32,", "commid": "rust_pr_11738"}], "negative_passages": []} {"query_id": "q-en-rust-4baf0e8d5127fe6f958b823add21bed6dcb30db386b7af7c0c25faafe018ab65", "query": "File: /tmp/icemaker/issue--2.rs auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/issue-50814-2.rs:16:24 | LL | const BAR: usize = [5, 6, 7][T::BOO]; | ^^^^^^^^^^^^^^^^^ index out of bounds: the length is 3 but the index is 42 note: erroneous constant encountered --> $DIR/issue-50814-2.rs:20:6 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^ note: erroneous constant encountered --> $DIR/issue-50814-2.rs:20:5 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_117438"}], "negative_passages": []} {"query_id": "q-en-rust-4baf0e8d5127fe6f958b823add21bed6dcb30db386b7af7c0c25faafe018ab65", "query": "File: /tmp/icemaker/issue--2.rs auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/issue-50814-2.rs:16:24 | LL | const BAR: usize = [5, 6, 7][T::BOO]; | ^^^^^^^^^^^^^^^^^ index out of bounds: the length is 3 but the index is 42 note: erroneous constant encountered --> $DIR/issue-50814-2.rs:20:6 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^ note: the above error was encountered while instantiating `fn foo::<()>` --> $DIR/issue-50814-2.rs:32:22 | LL | println!(\"{:x}\", foo::<()>() as *const usize as usize); | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. 
", "commid": "rust_pr_117438"}], "negative_passages": []} {"query_id": "q-en-rust-4baf0e8d5127fe6f958b823add21bed6dcb30db386b7af7c0c25faafe018ab65", "query": "File: /tmp/icemaker/issue--2.rs auto-reduced (treereduce-rust): original: Version information ` Command: $DIR/issue-50814-2.rs:14:24 | LL | const BAR: usize = [5, 6, 7][T::BOO]; | ^^^^^^^^^^^^^^^^^ index out of bounds: the length is 3 but the index is 42 note: erroneous constant encountered --> $DIR/issue-50814-2.rs:18:6 | LL | & as Foo>::BAR | ^^^^^^^^^^^^^^^^^^^^^ note: the above error was encountered while instantiating `fn foo::<()>` --> $DIR/issue-50814-2.rs:30:22 | LL | println!(\"{:x}\", foo::<()>() as *const usize as usize); | ^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0080`. ", "commid": "rust_pr_117438"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. 
So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-83fd8e06f93d779b3b16ba557148cac731769dd10db8fd30ae608db28225081e", "text": "// - use std::lazy for `Lazy` // - use std::cell for `OnceCell` // Once they get stabilized and reach beta. use build_helper::ci::CiEnv; use clap::ValueEnum; use once_cell::sync::{Lazy, OnceCell};", "commid": "rust_pr_117471"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. 
If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-4aa688f32ab4e345431b78eed199f7c68b17b1ca81f4200ddd4333e60df86c90", "text": "self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if cmd == \"doc\" || cmd == \"rustdoc\" // FIXME: We shouldn't need to check this. // ref https://github.com/rust-lang/rust/issues/117430#issuecomment-1788160523 && !CiEnv::is_ci() { let my_out = match mode { // This is the intended out directory for compiler documentation. Mode::Rustc | Mode::ToolRustc => self.compiler_doc_out(target),", "commid": "rust_pr_117471"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. 
I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-9e5d7289011f8999ce5432013c75bc5f0889ffe3e477e5bc1000ae042533557a", "text": "} fn run(self, builder: &Builder<'_>) -> Option { if builder.config.dry_run() { return None; } // This prevents rustc_codegen_cranelift from being built for \"dist\" // or \"install\" on the stable/beta channels. It is not yet stable and // should not be included.", "commid": "rust_pr_117535"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. 
This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. 
I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-f7148cabb0c81dcad86db8f280d30f9622240bc4fd88a102779d71fe0c48243d", "text": "return None; } if !builder.config.rust_codegen_backends.contains(&self.backend) { return None; } if self.backend == \"cranelift\" { if !target_supports_cranelift_backend(self.compiler.host) { builder.info(\"target not supported by rustc_codegen_cranelift. skipping\");", "commid": "rust_pr_117535"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). 
I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-9ad1bf6728eeb25be9bb4b7cc6be73050042ce36d9af53001fdd49c0d505f498", "text": "let backends_dst = PathBuf::from(\"lib\").join(&backends_rel); let backend_name = format!(\"rustc_codegen_{}\", backend); let mut found_backend = false; for backend in fs::read_dir(&backends_src).unwrap() { let file_name = backend.unwrap().file_name(); if file_name.to_str().unwrap().contains(&backend_name) { tarball.add_file(backends_src.join(file_name), &backends_dst, 0o644); found_backend = true; } } assert!(found_backend); Some(tarball.generate()) }", "commid": "rust_pr_117535"}], "negative_passages": []} {"query_id": "q-en-rust-d6dae8bfbff2793654dde557c924fe35a834853420e545d610f32bbce3d5dc08", "query": "Bootstrap no longer documents any / crates or in CI. now only contains a meager 20 or so crates. This has to have regressed fairly recently.\nIt regressed in the latest nightly, we have CI tests on checking for some of these and the last daily test succeeded.\nIf I'm reading the commit history right there were a number of PRs that touched one of CI, bootstrap or rustdoc itself:\nThis appears to have regressed in ... but I'm pretty confused by that.\nSince mostly allowed to have an effect, it could be that the issue is in .\ncc and also maybe you happen to have an idea too\nI don't have a clue how either PR could have affected building docs.\nI would do quick check between the recently merged PRs as there are not many. It seems not to be so hard to find and fix the problem. But I can only do that later tonight.. I don't have access to my computers right now.\nOk, so the problem is when we use as a backend with llvm, documentation outputs generated by these backends seem to conflict with each other during the invocations. It appears that one of them(most likely cranelift) is not generating the compiler documentation. As we purge the previously generated documentation output at we end up with output that doesn't have the compiler documentation. I am currently unsure of the exact reason why cranelift is causing this issue. I will open a temporary fix PR and investigate further once I get more time.\nIndeed the search at is completely useless right now. If a fix would take more than a day, reverting the offending PR should also be considered, given that those docs are quite crucial for rustc contributors.\nFix PR is already open at\nYeah I've seen that, but I can't tell if that's the lowest-risk approach. :shrug: Reverting (or un-setting on whatever builder builds the docs) is something we know will obviously work.\nI am quite sure that PR contains no risk, but I am fine with reverting codegen environment variable updates in the runner containers\nI'm fine either way but can't approve your PR since I don't understand the context of it. 
So if we can have it reviewed fast, fine, but otherwise - the revert is trivial to review.\ndid fix it locally, but seems didn't fix it on actual environment..\nTo fix this, we can either disable cranelift on the gh action step which we build nightly documentation or revert until we find out and fix the problem of compiler documentations with cranelift.\nWhich one is that?\nI guess (never worked on the infra side before) these ones\nThose steps just do the uploading, the docs are most likely created before. We need to figure out which runner is doing this and disable cranelift there.\nIt's done in multiple place(creating and uploading). I think the question is which one is getting deployed to\nIt's been 3 days without docs, clearly this is not entirely trivial to fix. Let's revert.\nThe docs are back.\nWhile the immediate issue is addressed, do we need an issue to track the underlying problem?\nWe could. At the same time, and are also still working on this, e.g. in this as it's quite important to fix. There's little chance we'll forget about this, and it should hopefully be fixed very soon.\nThe underlying problem is that symlinks are being dropped in a wrong way between invocations. I can easily fix this by copying the document outputs instead of linking them, but I would prefer to invest a couple more hours in fixing the linking issue rather than replacing it.", "positive_passages": [{"docid": "doc-en-rust-365bd2403390a99634ef3f46b12c42107b6d707d2268c5f1c7e82130896d7d56", "text": "--env DIST_TRY_BUILD --env PR_CI_JOB --env OBJDIR_ON_HOST=\"$objdir\" --env CODEGEN_BACKENDS --init --rm rust-ci ", "commid": "rust_pr_117535"}], "negative_passages": []} {"query_id": "q-en-rust-ace39226b51ad31e13c14f96f45156ae3341265c894564acf4b574a4ed031b3e", "query": " $DIR/transmute_infinitely_recursive_type.rs:22:5 | LL | struct ExplicitlyPadded(ExplicitlyPadded); | ^^^^^^^^^^^^^^^^^^^^^^^ ---------------- recursive without indirection | help: insert some indirection (e.g., a `Box`, `Rc`, or `&`) to break the cycle | LL | struct ExplicitlyPadded(Box); | ++++ + error[E0391]: cycle detected when computing layout of `should_pad_explicitly_packed_field::ExplicitlyPadded` | = note: ...which immediately requires computing layout of `should_pad_explicitly_packed_field::ExplicitlyPadded` again = note: cycle used when evaluating trait selection obligation `(): core::mem::transmutability::BikeshedIntrinsicFrom` = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error: aborting due to 2 previous errors Some errors have detailed explanations: E0072, E0391. For more information about an error, try `rustc --explain E0072`. ", "commid": "rust_pr_119772"}], "negative_passages": []} {"query_id": "q-en-rust-bfba33a2679ab995bc29d0ccd9958ccd2a8ee7f65354e51091142b6cddfd8f42", "query": "When a type parameter is expected, but an unconstrained associated type is found rustc suggests a syntactically incorrect fix with a misleading help message about trait bounds instead of associated type constraints. 
No response", "positive_passages": [{"docid": "doc-en-rust-c106d33bf791487f1d4557754cfa78c124576b9daab36c1d26eb3a4bd8bc2aba", "text": "span, if span_to_replace.is_some() { constraint.clone() } else if constraint.starts_with(\"<\") { constraint.to_string() } else if bound_list_non_empty { format!(\" + {constraint}\") } else {", "commid": "rust_pr_117505"}], "negative_passages": []} {"query_id": "q-en-rust-bfba33a2679ab995bc29d0ccd9958ccd2a8ee7f65354e51091142b6cddfd8f42", "query": "When a type parameter is expected, but an unconstrained associated type is found rustc suggests a syntactically incorrect fix with a misleading help message about trait bounds instead of associated type constraints. No response", "positive_passages": [{"docid": "doc-en-rust-4430da1a9ecec1ca8935ec32d78a057df45ce708599cf7ed8117a02702b0f8e8", "text": " // run-rustfix pub trait MyTrait { type T; fn bar(self) -> Self::T; } pub fn foo, B>(a: A) -> B { return a.bar(); //~ ERROR mismatched types } fn main() {} ", "commid": "rust_pr_117505"}], "negative_passages": []} {"query_id": "q-en-rust-bfba33a2679ab995bc29d0ccd9958ccd2a8ee7f65354e51091142b6cddfd8f42", "query": "When a type parameter is expected, but an unconstrained associated type is found rustc suggests a syntactically incorrect fix with a misleading help message about trait bounds instead of associated type constraints. No response", "positive_passages": [{"docid": "doc-en-rust-0ce9c4794787edf9e01bb9acdd93eeae3eb0fd5ec44db2fdb22d668a3a69fde0", "text": " // run-rustfix pub trait MyTrait { type T; fn bar(self) -> Self::T; } pub fn foo(a: A) -> B { return a.bar(); //~ ERROR mismatched types } fn main() {} ", "commid": "rust_pr_117505"}], "negative_passages": []} {"query_id": "q-en-rust-bfba33a2679ab995bc29d0ccd9958ccd2a8ee7f65354e51091142b6cddfd8f42", "query": "When a type parameter is expected, but an unconstrained associated type is found rustc suggests a syntactically incorrect fix with a misleading help message about trait bounds instead of associated type constraints. No response", "positive_passages": [{"docid": "doc-en-rust-0ffc4093c075ea0ef590b05a7b8f3a284873bb61d4be1ceb17ef4778347bb8f3", "text": " error[E0308]: mismatched types --> $DIR/restrict-assoc-type-of-generic-bound.rs:9:12 | LL | pub fn foo(a: A) -> B { | - - expected `B` because of return type | | | expected this type parameter LL | return a.bar(); | ^^^^^^^ expected type parameter `B`, found associated type | = note: expected type parameter `B` found associated type `::T` help: consider further restricting this bound | LL | pub fn foo, B>(a: A) -> B { | +++++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_117505"}], "negative_passages": []} {"query_id": "q-en-rust-e5c85c30961d6d5f4adea13e5825c75314dc982282852638ad0eadaa8e94ee94", "query": "Reported by a lz4flex user I created a small repo to demonstrate on a minimal example. BorrowedCursor::ensureinit from this PR seems exhibit exponential cost. The program creates a Vec and reads it in chunks of 32kb. Size is specified as a command line argument (100MB-400MB). 
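In the record above, the quoted fix from rust_pr_117505 lost its angle-bracketed generics during extraction: `pub fn foo, B>(a: A) -> B` is the remnant of a signature that constrains the associated type (the seven-character `+++++++` suggestion span in the stderr corresponds to `<T = B>`). Reconstructed, the corrected suggestion is an associated-type constraint rather than an extra trait bound; a self-contained sketch:

```rust
pub trait MyTrait {
    type T;
    fn bar(self) -> Self::T;
}

// `A: MyTrait<T = B>` pins the associated type to the second type parameter,
// so `a.bar()` type-checks as returning `B`.
pub fn foo<A: MyTrait<T = B>, B>(a: A) -> B {
    a.bar()
}

fn main() {}
```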
(If your default is 1.73) $DIR/negative-coherence-placeholder-region-constraints-on-unification.rs:21:1 | LL | impl FnMarker for fn(T) {} | ------------------------------------------- first implementation here LL | impl FnMarker for fn(&T) {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `fn(&_)` | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #56105 = note: this behavior recently changed as a result of a bug fix; see rust-lang/rust#56105 for details note: the lint level is defined here --> $DIR/negative-coherence-placeholder-region-constraints-on-unification.rs:4:11 | LL | #![forbid(coherence_leak_check)] | ^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_117994"}], "negative_passages": []} {"query_id": "q-en-rust-1f22b06224aecd994d3e19d133edc7f3e152837884d6a163ec0313c23c6e85a5", "query": " $DIR/ice-118285-fn-ptr-value.rs:1:25 | LL | struct Checked bool>; | ^^^^^^^^^^^^^^^^^ | = note: the only supported types are integers, `bool` and `char` error: aborting due to 1 previous error ", "commid": "rust_pr_118498"}], "negative_passages": []} {"query_id": "q-en-rust-66fc9963abdafc9e6929ffd62b615687417c9bbafc1ca99f5237423c8cbe5f5c", "query": " $DIR/double-opaque-parent-predicates.rs:3:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_125501"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. use std::mem::ManuallyDrop; use std::path::Path; use std::slice; use std::sync::Arc;", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! 
Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. module_llvm: ModuleLlvm { llmod_raw, llcx, tm }, module_llvm: ModuleLlvm { llmod_raw, llcx, tm: ManuallyDrop::new(tm) }, name: thin_module.name().to_string(), kind: ModuleKind::Regular, };", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. use std::mem::ManuallyDrop; mod back { pub mod archive;", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. // independent from llcx and llmod_raw, resources get disposed by drop impl tm: OwnedTargetMachine, // This field is `ManuallyDrop` because it is important that the `TargetMachine` // is disposed prior to the `Context` being disposed otherwise UAFs can occur. 
tm: ManuallyDrop, } unsafe impl Send for ModuleLlvm {}", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. ModuleLlvm { llmod_raw, llcx, tm: create_target_machine(tcx, mod_name) } ModuleLlvm { llmod_raw, llcx, tm: ManuallyDrop::new(create_target_machine(tcx, mod_name)), } } }", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. ModuleLlvm { llmod_raw, llcx, tm: create_informational_target_machine(tcx.sess) } ModuleLlvm { llmod_raw, llcx, tm: ManuallyDrop::new(create_informational_target_machine(tcx.sess)), } } }", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. 
I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. Ok(ModuleLlvm { llmod_raw, llcx, tm }) Ok(ModuleLlvm { llmod_raw, llcx, tm: ManuallyDrop::new(tm) }) } }", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-6770c5f0b8cf54cfa362ead35ff2dc6e166ce639bdd994e6d6c84ce8c58cecd7", "query": "After using command random dependencies throw error. I'm 100% sure that dependencies I'm trying to build are valid because at every compile attempt different dependency fails e.g. tomledit, crossbeam-channel, actix-router_. Sometimes none of imported dependencies fail and out of 5 attempts I was able to build project succesfully twice (I deleted folder after each attempt). Example output: some dependencies (in my case 23) command Improve build scripts on Windows ARM?\nThanks for the report! Transferred to rust-lang/rust since this is a compiler issue. I'm able to reproduce, though I am also using Parallels on an aarch64 mac. I do not have native Windows ARM hardware to test, so I'm not sure if this is unique to the M2 or Parallels. If someone has native hardware, it might be good to help with testing to see if it is a general problem with the target. This appears to be a regression, and I have bisected to cc\nI'm seeing the same issue intermittently on a project that uses . It looks like we might have a UAF on an LLVM object. ManuallyDrop::drop(&mut self.tm); llvm::LLVMContextDispose(&mut *(self.llcx as *mut _)); } }", "commid": "rust_pr_118464"}], "negative_passages": []} {"query_id": "q-en-rust-bb362b77d060fbff6f15f1aca8ee4aefe6a5aabddbb70f9d2c17b2ae0c16dbb3", "query": "I tried the following code: output after running for a very long time: modify rustc as follows results in the following output: changing the invocation to actually prints the warning at each power of two of steps. This warning should not be affected by . It's whole point is to be repeated multiple times. cc $DIR/evade-deduplication-issue-118612.rs:8:5 | LL | / loop { LL | | LL | | LL | | ... | LL | | } LL | | } | |_____^ the const evaluator is currently interpreting this expression | help: the constant being evaluated --> $DIR/evade-deduplication-issue-118612.rs:6:1 | LL | const FOO: () = { | ^^^^^^^^^^^^^ warning: constant evaluation is taking a long time --> $DIR/evade-deduplication-issue-118612.rs:8:5 | LL | / loop { LL | | LL | | LL | | ... | LL | | } LL | | } | |_____^ the const evaluator is currently interpreting this expression | help: the constant being evaluated --> $DIR/evade-deduplication-issue-118612.rs:6:1 | LL | const FOO: () = { | ^^^^^^^^^^^^^ warning: constant evaluation is taking a long time --> $DIR/evade-deduplication-issue-118612.rs:8:5 | LL | / loop { LL | | LL | | LL | | ... | LL | | } LL | | } | |_____^ the const evaluator is currently interpreting this expression | help: the constant being evaluated --> $DIR/evade-deduplication-issue-118612.rs:6:1 | LL | const FOO: () = { | ^^^^^^^^^^^^^ warning: constant evaluation is taking a long time --> $DIR/evade-deduplication-issue-118612.rs:8:5 | LL | / loop { LL | | LL | | LL | | ... 
| LL | | } LL | | } | |_____^ the const evaluator is currently interpreting this expression | help: the constant being evaluated --> $DIR/evade-deduplication-issue-118612.rs:6:1 | LL | const FOO: () = { | ^^^^^^^^^^^^^ warning: constant evaluation is taking a long time --> $DIR/evade-deduplication-issue-118612.rs:8:5 | LL | / loop { LL | | LL | | LL | | ... | LL | | } LL | | } | |_____^ the const evaluator is currently interpreting this expression | help: the constant being evaluated --> $DIR/evade-deduplication-issue-118612.rs:6:1 | LL | const FOO: () = { | ^^^^^^^^^^^^^ warning: 5 warnings emitted ", "commid": "rust_pr_130665"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-1517812cc3d4bcecc301f0e461544b1c229d684387ac68b2e870982a3512173d", "text": "} .item-info .stab { /* This min-height is needed to unify the height of the stab elements because some of them have emojis. */ min-height: 36px; display: flex; display: block; padding: 3px; margin-bottom: 5px; align-items: center; vertical-align: text-bottom; } .item-name .stab { margin-left: 0.3125em;", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). 
It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-586ce0416eb419aa018dd4e26c65caaea40bc8bdbe54d52568feb5babd7ddf94", "text": "color: var(--stab-code-color); } .stab .emoji { .stab .emoji, .item-info .stab::before { font-size: 1.25rem; } .stab .emoji { margin-right: 0.3rem; } .item-info .stab::before { /* ensure badges with emoji and without it have same height */ content: \"0\"; width: 0; display: inline-block; color: transparent; } /* Black one-pixel outline around emoji shapes */ .emoji { text-shadow: 1px 0 0 black, -1px 0 0 black, 0 1px 0 black, 0 1px 0 black, 0 -1px 0 black; }", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-03b9cc3b2e867d3d80dfd868e7d53f1730d4f6574fe7cd4f44ce804b823b36c6", "text": "assert-size: (\".item-info .stab\", {\"width\": 289}) assert-position: (\".item-info .stab\", {\"x\": 245}) // We check that the display of the feature elements is not broken. It serves as regression // test for . 
set-window-size: (850, 800) store-position: ( \"//*[@class='stab portability']//code[text()='Win32_System']\", {\"x\": first_line_x, \"y\": first_line_y}, ) store-position: ( \"//*[@class='stab portability']//code[text()='Win32_System_Diagnostics']\", {\"x\": second_line_x, \"y\": second_line_y}, ) assert: |first_line_x| != |second_line_x| && |first_line_x| == 516 && |second_line_x| == 272 assert: |first_line_y| != |second_line_y| && |first_line_y| == 688 && |second_line_y| == 711 // Now we ensure that they're not rendered on the same line. set-window-size: (1100, 800) go-to: \"file://\" + |DOC_PATH| + \"/lib2/trait.Trait.html\" // We first ensure that there are two item info on the trait. assert-count: (\"#main-content > .item-info .stab\", 2)", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-4e88570f0a164e9449a66ed4b8497bc66e20e4599c09dde7e230a652790e5ba9", "text": "[lib] path = \"lib.rs\" [features] Win32 = [\"Win32_System\"] Win32_System = [\"Win32_System_Diagnostics\"] Win32_System_Diagnostics = [\"Win32_System_Diagnostics_Debug\"] Win32_System_Diagnostics_Debug = [] default = [\"Win32\"] [dependencies] implementors = { path = \"./implementors\" } http = { path = \"./http\" }", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . 
The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-4d4cff39333a45042e5e8c83fd0c7590706c462d1b74995643bd9429ed91af1f", "text": "// ignore-tidy-linelength #![feature(doc_cfg)] #![feature(doc_auto_cfg)] pub mod another_folder; pub mod another_mod;", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-99077e573b66fe6cdd65f52fbb5afc4e298862e14019edacdbc646dcd0ab9929", "query": "! by\nI'll try to change the display because it's really not great.\nI wonder whether the formatting is the issue rather than the algorithm used to discover the required features. The formatting goes awry in the example above due to the large number of (redundant) features: That's technically correct, but not entirely helpful to a developer either. ' has the following entries in its table: To use , the only feature a client needs to enable is . The remaining features are automatically enabled recursively via the It would seem reasonable to prune the feature list by eliminating those features that are implicitly enabled. Doing this solves two issues: It makes the formatting a non-issue (here). It declutters the documentation, showing only information that clients can act upon. I'm not aware of a way to cancel automatic enablement of dependent features, so dropping this information from the documentation should be inconsequential in practice. If the formatting is being reworked nonetheless, here's what I'd like to see: Enclose the feature names in ASCII double quotes () for convenient copy-pasting. The crate to prevent them from being converted to smart quotes. Make the list comma-separated (again, for convenient copy-pasting). I understand that (as used currently) carries semantic information that needs to be reflected elsewhere. If is being used, it can remain as-is. We commonly don't need to copy-paste disjunct features.", "positive_passages": [{"docid": "doc-en-rust-4b910d22a0deca1c1b31a3ba768925bebc6ce5e1204e22d84f25f25d1b694f88", "text": "/// Some documentation /// # A Heading pub fn a_method(&self) {} #[cfg(all( feature = \"Win32\", feature = \"Win32_System\", feature = \"Win32_System_Diagnostics\", feature = \"Win32_System_Diagnostics_Debug\" ))] pub fn lot_of_features() {} } #[doc(cfg(feature = \"foo-method\"))]", "commid": "rust_pr_118677"}], "negative_passages": []} {"query_id": "q-en-rust-8b9bbd277c3a57aab240e64a1cdf09ae73d0a46dadb48d2c33520c4c56ed096a", "query": " $DIR/check-normalized-sig-for-wf.rs:7:7 | LL | s: String, | - binding `s` declared here ... 
LL | f(&s).0 | --^^- | | | | | borrowed value does not live long enough | argument requires that `s` is borrowed for `'static` LL | LL | } | - `s` dropped here while still borrowed error[E0521]: borrowed data escapes outside of function --> $DIR/check-normalized-sig-for-wf.rs:15:5 | LL | fn extend(input: &T) -> &'static T { | ----- - let's call the lifetime of this reference `'1` | | | `input` is a reference that is only valid in the function body ... LL | n(input).0 | ^^^^^^^^ | | | `input` escapes the function body here | argument requires that `'1` must outlive `'static` error[E0521]: borrowed data escapes outside of function --> $DIR/check-normalized-sig-for-wf.rs:23:5 | LL | fn extend_mut<'a, T>(input: &'a mut T) -> &'static mut T { | -- ----- `input` is a reference that is only valid in the function body | | | lifetime `'a` defined here ... LL | n(input).0 | ^^^^^^^^ | | | `input` escapes the function body here | argument requires that `'a` must outlive `'static` error: aborting due to 3 previous errors Some errors have detailed explanations: E0521, E0597. For more information about an error, try `rustc --explain E0521`. ", "commid": "rust_pr_118882"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: list_stem: bool, // The whole `use` item item: &Item, vis: ty::Visibility,", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: if nested { // Top level use tree reuses the item's id and list stems reuse their parent // use tree's ids, so in both cases their visibilities are already filled. if nested && !list_stem { self.r.feed_visibility(self.r.local_def_id(id), vis); }", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. 
The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: tree, id, &prefix, true, // The whole `use` item tree, id, &prefix, true, false, // The whole `use` item item, vis, root_span, ); }", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: true, // The whole `use` item item, ty::Visibility::Restricted(", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: false, // The whole `use` item item, vis,", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-e8e2e36b2d4c090e245367cce765ce1549ce6c4128b2e0ab36855e13b8a630f1", "query": "I have a that has started to fail running clippy, due to a compiler error compiling wayland-protocols. The dependency chain is down from iced thru \"clipboardwayland\" (0.2.0) to \"smithay-clipboard\" (0.6.6) and to \"wayland-protocols\" v0.29.5 \"clipboardwayland\" 0.2.0 is the latest version on , but it depends on \"smithay-clipboard\" 0.6.6, while 0.7.0 is the latest version on wayland-protocols 0.29.5 : See also: // check-pass // edition: 2018 mod outer { mod inner { pub mod inner2 {} } pub(crate) use inner::{}; pub(crate) use inner::{{}}; pub(crate) use inner::{inner2::{}}; pub(crate) use inner::{inner2::{{}}}; } fn main() {} ", "commid": "rust_pr_119168"}], "negative_passages": []} {"query_id": "q-en-rust-1209c82f8d947834081c0667d5b25abcf68c97b75d8ff7eea9ee7b35edada938", "query": " $DIR/union.rs:1:12 | LL | #![feature(dyn_star)] | ^^^^^^^^ | = note: see issue #102425 for more information = note: `#[warn(incomplete_features)]` on by default error[E0277]: `Union` needs to have the same ABI as a pointer --> $DIR/union.rs:14:9 | LL | bar(Union { x: 0usize }); | ^^^^^^^^^^^^^^^^^^^ `Union` needs to be a pointer-like type | = help: the trait `PointerLike` is not implemented for `Union` error: aborting due to 1 previous error; 1 warning emitted For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_119708"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. 
This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . It uses to enable emission of these types, regardless of the global configuration it looks like. 
This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-f5a97b8f0e390eab6bfab2f9b3468517a1861d95d79ee3d498005dd36b656ca5", "text": "- [wasm32-wasip1](platform-support/wasm32-wasip1.md) - [wasm32-wasip1-threads](platform-support/wasm32-wasip1-threads.md) - [wasm32-wasip2](platform-support/wasm32-wasip2.md) - [wasm32-unknown-unknown](platform-support/wasm32-unknown-unknown.md) - [wasm64-unknown-unknown](platform-support/wasm64-unknown-unknown.md) - [*-win7-windows-msvc](platform-support/win7-windows-msvc.md) - [x86_64-fortanix-unknown-sgx](platform-support/x86_64-fortanix-unknown-sgx.md)", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. 
That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . It uses to enable emission of these types, regardless of the global configuration it looks like. This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-216389043891c4ac8ce218cfdba629e44982157ea68bad4479077bbe88784d37", "text": "[`thumbv8m.main-none-eabi`](platform-support/thumbv8m.main-none-eabi.md) | * | Bare Armv8-M Mainline [`thumbv8m.main-none-eabihf`](platform-support/thumbv8m.main-none-eabi.md) | * | Bare Armv8-M Mainline, hardfloat `wasm32-unknown-emscripten` | \u2713 | WebAssembly via Emscripten `wasm32-unknown-unknown` | \u2713 | WebAssembly [`wasm32-unknown-unknown`](platform-support/wasm32-unknown-unknown.md) | \u2713 | WebAssembly `wasm32-wasi` | \u2713 | WebAssembly with WASI (undergoing a [rename to `wasm32-wasip1`][wasi-rename]) [`wasm32-wasip1`](platform-support/wasm32-wasip1.md) | \u2713 | WebAssembly with WASI [`wasm32-wasip1-threads`](platform-support/wasm32-wasip1-threads.md) | \u2713 | WebAssembly with WASI Preview 1 and threads", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. 
As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . It uses to enable emission of these types, regardless of the global configuration it looks like. This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-1f9e47b025f2d2032fc9b1822395a7a6f0fdd51bceba8315e8f24a2f72c79a80", "text": " # `wasm32-unknown-unknown` **Tier: 2** The `wasm32-unknown-unknown` target is a WebAssembly compilation target which does not import any functions from the host for the standard library. This is the \"minimal\" WebAssembly in the sense of making the fewest assumptions about the host environment. This target is often used when compiling to the web or JavaScript environments as there is no standard for what functions can be imported on the web. This target can also be useful for creating minimal or bare-bones WebAssembly binaries. The `wasm32-unknown-unknown` target has support for the Rust standard library but many parts of the standard library do not work and return errors. For example `println!` does nothing, `std::fs` always return errors, and `std::thread::spawn` will panic. There is no means by which this can be overridden. For a WebAssembly target that more fully supports the standard library see the [`wasm32-wasip1`](./wasm32-wasip1.md) or [`wasm32-wasip2`](./wasm32-wasip2.md) targets. The `wasm32-unknown-unknown` target has full support for the `core` and `alloc` crates. It additionally supports the `HashMap` type in the `std` crate, although hash maps are not randomized like they are on other platforms. One existing user of this target (please feel free to edit and expand this list too) is the [`wasm-bindgen` project](https://github.com/rustwasm/wasm-bindgen) which facilitates Rust code interoperating with JavaScript code. Note, though, that not all uses of `wasm32-unknown-unknown` are using JavaScript and the web. ## Target maintainers When this target was added to the compiler platform-specific documentation here was not maintained at that time. This means that the list below is not exhaustive and there are more interested parties in this target. 
That being said since when this document was last updated those interested in maintaining this target are: - Alex Crichton, https://github.com/alexcrichton ## Requirements This target is cross-compiled. The target includes support for `std` itself, but as mentioned above many pieces of functionality that require an operating system do not work and will return errors. This target currently has no equivalent in C/C++. There is no C/C++ toolchain for this target. While interop is theoretically possible it's recommended to instead use one of: * `wasm32-unknown-emscripten` - for web-based use cases the Emscripten toolchain is typically chosen for running C/C++. * [`wasm32-wasip1`](./wasm32-wasip1.md) - the wasi-sdk toolchain is used to compile C/C++ on this target and can interop with Rust code. WASI works on the web so far as there's no blocker, but an implementation of WASI APIs must be either chosen or reimplemented. This target has no build requirements beyond what's in-tree in the Rust repository. Linking binaries requires LLD to be enabled for the `wasm-ld` driver. This target uses the `dlmalloc` crate as the default global allocator. ## Building the target Building this target can be done by: * Configure the `wasm32-unknown-unknown` target to get built. * Configure LLD to be built. * Ensure the `WebAssembly` target backend is not disabled in LLVM. These are all controlled through `config.toml` options. It should be possible to build this target on any platform. ## Building Rust programs Rust programs can be compiled by adding this target via rustup: ```sh $ rustup target add wasm32-unknown-unknown ``` and then compiling with the target: ```sh $ rustc foo.rs --target wasm32-unknown-unknown $ file foo.wasm ``` ## Cross-compilation This target can be cross-compiled from any host. ## Testing This target is not tested in CI for the rust-lang/rust repository. Many tests must be disabled to run on this target and failures are non-obvious because `println!` doesn't work in the standard library. It's recommended to test the `wasm32-wasip1` target instead for WebAssembly compatibility. ## Conditionally compiling code It's recommended to conditionally compile code for this target with: ```text #[cfg(all(target_family = \"wasm\", target_os = \"unknown\"))] ``` Note that there is no way to tell via `#[cfg]` whether code will be running on the web or not. ## Enabled WebAssembly features WebAssembly is an evolving standard which adds new features such as new instructions over time. This target's default set of supported WebAssembly features will additionally change over time. The `wasm32-unknown-unknown` target inherits the default settings of LLVM which typically matches the default settings of Emscripten as well. Changes to WebAssembly go through a [proposals process][proposals] but reaching the final stage (stage 5) does not automatically mean that the feature will be enabled in LLVM and Rust by default. At this time the general guidance is that features must be present in most engines for a \"good chunk of time\" before they're enabled in LLVM by default. There is currently no exact number of months or engines that are required to enable features by default. 
[proposals]: https://github.com/WebAssembly/proposals As of the time of this writing the proposals that are enabled by default (the `generic` CPU in LLVM terminology) are: * `multivalue` * `mutable-globals` * `reference-types` * `sign-ext` If you're compiling WebAssembly code for an engine that does not support a feature in LLVM's default feature set then the feature must be disabled at compile time. Note, though, that enabled features may be used in the standard library or precompiled libraries shipped via rustup. This means that not only does your own code need to be compiled with the correct set of flags but the Rust standard library additionally must be recompiled. Compiling all code for the initial release of WebAssembly looks like: ```sh $ export RUSTFLAGS=-Ctarget-cpu=mvp $ cargo +nightly build -Zbuild-std=panic_abort,std --target wasm32-unknown-unknown ``` Here the `mvp` \"cpu\" is a placeholder in LLVM for disabling all supported features by default. Cargo's `-Zbuild-std` feature, a Nightly Rust feature, is then used to recompile the standard library in addition to your own code. This will produce a binary that uses only the original WebAssembly features by default and no proposals since its inception. To enable individual features it can be done with `-Ctarget-feature=+foo`. Available features for Rust code itself are documented in the [reference] and can also be found through: ```sh $ rustc -Ctarget-feature=help --target wasm32-unknown-unknown ``` You'll need to consult your WebAssembly engine's documentation to learn more about the supported WebAssembly features the engine has. [reference]: https://doc.rust-lang.org/reference/attributes/codegen.html#wasm32-or-wasm64 Note that it is still possible for Rust crates and libraries to enable WebAssembly features on a per-function level. This means that the build command above may not be sufficient to disable all WebAssembly features. If the final binary still has SIMD instructions, for example, the function in question will need to be found and the crate in question will likely contain something like: ```rust,ignore (not-always-compiled-to-wasm) #[target_feature(enable = \"simd128\")] fn foo() { // ... } ``` In this situation there is no compiler flag to disable emission of SIMD instructions and the crate must instead be modified to not include this function at compile time either by default or through a Cargo feature. For crate authors it's recommended to avoid `#[target_feature(enable = \"...\")]` except where necessary and instead use: ```rust,ignore (not-always-compiled-to-wasm) #[cfg(target_feature = \"simd128\")] fn foo() { // ... } ``` That is to say instead of enabling target features it's recommended to conditionally compile code instead. This is notably different to the way native platforms such as x86_64 work, and this is due to the fact that WebAssembly binaries must only contain code the engine understands. Native binaries work so long as the CPU doesn't execute unknown code dynamically at runtime. ", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! 
I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . It uses to enable emission of these types, regardless of the global configuration it looks like. This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-b4ca640f7e5d05cd2276b4836b8decfc60f429fe7eec3450650c181d8dc91dc7", "text": "Prior to Rust 1.80 the `target_env = \"p1\"` key was not set. Currently the `target_feature = \"atomics\"` is Nightly-only. Note that the precise `#[cfg]` necessary to detect this target may change as the target becomes more stable. 
## Enabled WebAssembly features The default set of WebAssembly features enabled for compilation includes two more features in addition to that which [`wasm32-unknown-unknown`](./wasm32-unknown-unknown.md) enables: * `bulk-memory` * `atomics` For more information about features see the documentation for [`wasm32-unknown-unknown`](./wasm32-unknown-unknown.md), but note that the `mvp` CPU in LLVM does not support this target as it's required that `bulk-memory`, `atomics`, and `mutable-globals` are all enabled. ", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . 
It uses to enable emission of these types, regardless of the global configuration it looks like. This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-a7a0e47a62b72c6be00e604fea5a87e04d07904872750479fd6ded004bf8d993", "text": "Note that the `target_env = \"p1\"` condition first appeared in Rust 1.80. Prior to Rust 1.80 the `target_env` condition was not set. ## Enabled WebAssembly features The default set of WebAssembly features enabled for compilation is currently the same as [`wasm32-unknown-unknown`](./wasm32-unknown-unknown.md). See the documentation there for more information. ", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-a321c149c86596aedf4386c8f5b8461cf240b24dc5584925481c063084544314", "query": "and WebAssembly book () In Rust some requirements for WebAssembly targets. This makes Wasm binaries compiled with Rust (at least with default options/out of the box std) incompatible with older runtime environments, in particular not-so-old iOS safari (). Here is for example the support for Safari iOS 15.5 simulator: ! I searched but did not find any place where these requirements are written down. I think that this is a problem.\nIn principle Rust can produce binaries that only target the core subset of wasm 1.0 just fine. You do need to set some additional options with and build your own standard libraries though. But point taken, is missing dedicated pages on the wasm target that could contain this information.\nThe 2nd sentence right after the one you quoted outlines exactly what you need to do to take advantage of rustc\u2019s capability to emit outputs to the wasm 1.0 spec (or any superset of it:) Make sure to do this and report back if you're still seeing the unexpected instructions/types/other entities after rustc is told to produce wasm binaries the way you need them to be. AFAIK, not for WASM.\nSorry, I realized I misunderstood your answer and deleted my post while you were answering it. Do you know how to show which features are enabled by default for a given target? I searched and did not find. I'm trying to discover whether is enabled by default on nowadays. As it seems to be because I am seeing types in the emitted wasm, even though I disabled the feature by \"\" in an environment variable RUSTFLAGS. It would be surprising to me that is built with that feature by default on wasm32, but maybe it is.\nHm, I'm actually not sure why you might be seeing simd128 in your modules. As far as I can tell it is not enabled by default for any wasm target. In order to print out features used you should look for lines in: For what it is worth I\u2019ve been using this exact target at my $DAYJOB with a pretty primitive runtime and it has been working largely OOB if my memory serves me right. So it indeed could have something to do with your dependencies possibly using something like ? Not sure!\nIt seems that the was built with as I see this WASM function: What I am not sure is whether 1. this was integrated with the standard library and then imported indirectly to my project, or 2. somehow one of the dependency managed to enable it by itself. I tend to favor 1 but I'm not sure. 
That is why it would be great for such things to be documented :-)\nFor reference, I asked on the about how the std lib is built.\nI confirm that as of version 1.75, the standard library is not using any WebAssembly SIMD instructions. Here is my test: which shows nothing, so they get pulled in by the memchr package through some of my dependencies.\nYou yourself pointed out which crate produces the v128 types: . It uses to enable emission of these types, regardless of the global configuration it looks like. This is usually fine for other architectures, but rarely for wasm, so it might make sense to file an issue over there.\nI fully agree and I just opened an issue there: BurntSushi/memchr\nAs a heads up I've proposed that close this issue with documentation of how to target MVP wasm.", "positive_passages": [{"docid": "doc-en-rust-3ee5fbaaec92ccdf8fbdeec8b4fbc842f5d784c746b41435822826f3042587d0", "text": "```text #[cfg(all(target_os = \"wasi\", target_env = \"p2\"))] ``` ## Enabled WebAssembly features The default set of WebAssembly features enabled for compilation is currently the same as [`wasm32-unknown-unknown`](./wasm32-unknown-unknown.md). See the documentation there for more information. ", "commid": "rust_pr_128511"}], "negative_passages": []} {"query_id": "q-en-rust-4575c1e9a850e55f76599eec2344524cc46de05796c9fdf15607bbdc9fa746eb", "query": "Reading the compiler code (specifically the one responsible for mir output) I found these s where probably expected . $DIR/issue-120444-1.rs:10:13 | LL | /// [`Vfs`][crate::Vfs] | ----- ^^^^^^^^^^ explicit target is redundant | | | because label contains path that resolves to same destination | = note: when a link's destination is not specified, the label is used to resolve intra-doc links note: the lint level is defined here --> $DIR/issue-120444-1.rs:3:9 | LL | #![deny(rustdoc::redundant_explicit_links)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: remove explicit link target | LL | /// [`Vfs`] | ~~~~~~~ error: aborting due to 1 previous error ", "commid": "rust_pr_120702"}], "negative_passages": []} {"query_id": "q-en-rust-37633f465d467b6aedf5403b2d938b7c6172aee3aa986668824267adc7c8a1e6", "query": " $DIR/issue-120444-2.rs:10:13 | LL | /// [`Vfs`][crate::Vfs] | ----- ^^^^^^^^^^ explicit target is redundant | | | because label contains path that resolves to same destination | = note: when a link's destination is not specified, the label is used to resolve intra-doc links note: the lint level is defined here --> $DIR/issue-120444-2.rs:3:9 | LL | #![deny(rustdoc::redundant_explicit_links)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: remove explicit link target | LL | /// [`Vfs`] | ~~~~~~~ error: aborting due to 1 previous error ", "commid": "rust_pr_120702"}], "negative_passages": []} {"query_id": "q-en-rust-1e7639d323e44931dc05aa7e023f1372448209a69239fe00cbda3db07686f887", "query": "The commit a check that stars firing due to the changes to the default target layouts in LLVM 18 ():\nHaving this issue on as well: Why it's trying to compile for a 32-bit target is clearly beyond me because I'm using an entirely 64-bit host \u2015 since happens to be the author of the crate this is failing on (namely, the 32-bit BIOS version of the bootloader crate which I explicitly avoid depending on for my intendedly UEFI-only kernel), pinging him for assistance.\nI think this issue thread is about some failing tests in the Rust repo. Your issue seems to be specific to the crate, which required some updates to the custom targets it is using. 
I published a new version already, which should fix the issue. If you still have issues with the crate after a , please open an issue in the repository.", "positive_passages": [{"docid": "doc-en-rust-b53be2f8e483b6864032ff0d8f638bd0474d74e2bac56c18251b8945aef93672", "text": "\"arch\": \"x86_64\", \"cpu\": \"x86-64\", \"crt-static-respected\": true, \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\", \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128\", \"dynamic-linking\": true, \"env\": \"gnu\", \"has-rpath\": true,", "commid": "rust_pr_120529"}], "negative_passages": []} {"query_id": "q-en-rust-1e7639d323e44931dc05aa7e023f1372448209a69239fe00cbda3db07686f887", "query": "The commit a check that stars firing due to the changes to the default target layouts in LLVM 18 ():\nHaving this issue on as well: Why it's trying to compile for a 32-bit target is clearly beyond me because I'm using an entirely 64-bit host \u2015 since happens to be the author of the crate this is failing on (namely, the 32-bit BIOS version of the bootloader crate which I explicitly avoid depending on for my intendedly UEFI-only kernel), pinging him for assistance.\nI think this issue thread is about some failing tests in the Rust repo. Your issue seems to be specific to the crate, which required some updates to the custom targets it is using. I published a new version already, which should fix the issue. If you still have issues with the crate after a , please open an issue in the repository.", "positive_passages": [{"docid": "doc-en-rust-a47f45326d665d88440f337adc838c3c9e08a0c03fdadc0eb8672871d5a4b122", "text": "\"arch\": \"x86_64\", \"cpu\": \"x86-64\", \"crt-static-respected\": true, \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\", \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128\", \"dynamic-linking\": true, \"env\": \"gnu\", \"executables\": true,", "commid": "rust_pr_120529"}], "negative_passages": []} {"query_id": "q-en-rust-1e7639d323e44931dc05aa7e023f1372448209a69239fe00cbda3db07686f887", "query": "The commit a check that stars firing due to the changes to the default target layouts in LLVM 18 ():\nHaving this issue on as well: Why it's trying to compile for a 32-bit target is clearly beyond me because I'm using an entirely 64-bit host \u2015 since happens to be the author of the crate this is failing on (namely, the 32-bit BIOS version of the bootloader crate which I explicitly avoid depending on for my intendedly UEFI-only kernel), pinging him for assistance.\nI think this issue thread is about some failing tests in the Rust repo. Your issue seems to be specific to the crate, which required some updates to the custom targets it is using. I published a new version already, which should fix the issue. 
If you still have issues with the crate after a , please open an issue in the repository.", "positive_passages": [{"docid": "doc-en-rust-0b5de7963f6f9795dcdacb065060c6ddf345e190c8e963dd20430700cb5d9c51", "text": "{ \"data-layout\": \"e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-f64:32:64-f80:32-n8:16:32-S128\", \"data-layout\": \"e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128\", \"linker-flavor\": \"gcc\", \"llvm-target\": \"i686-unknown-linux-gnu\", \"target-endian\": \"little\",", "commid": "rust_pr_120529"}], "negative_passages": []} {"query_id": "q-en-rust-1e7639d323e44931dc05aa7e023f1372448209a69239fe00cbda3db07686f887", "query": "The commit a check that stars firing due to the changes to the default target layouts in LLVM 18 ():\nHaving this issue on as well: Why it's trying to compile for a 32-bit target is clearly beyond me because I'm using an entirely 64-bit host \u2015 since happens to be the author of the crate this is failing on (namely, the 32-bit BIOS version of the bootloader crate which I explicitly avoid depending on for my intendedly UEFI-only kernel), pinging him for assistance.\nI think this issue thread is about some failing tests in the Rust repo. Your issue seems to be specific to the crate, which required some updates to the custom targets it is using. I published a new version already, which should fix the issue. If you still have issues with the crate after a , please open an issue in the repository.", "positive_passages": [{"docid": "doc-en-rust-5ec8c07e73b2a4e3d398a7d4036023ba9bd8c392e097168d2e477118ff4dcf40", "text": "{ \"pre-link-args\": {\"gcc\": [\"-m64\"]}, \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-f80:128-n8:16:32:64-S128\", \"data-layout\": \"e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128\", \"linker-flavor\": \"gcc\", \"llvm-target\": \"x86_64-unknown-linux-gnu\", \"target-endian\": \"little\",", "commid": "rust_pr_120529"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). 
label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-362cffde54b5f5f429c848babcb2c36958d881fab9d5b03cab1fdf4d0ca10800", "text": "} } fn install_llvm_file(builder: &Builder<'_>, source: &Path, destination: &Path) { fn install_llvm_file( builder: &Builder<'_>, source: &Path, destination: &Path, install_symlink: bool, ) { if builder.config.dry_run() { return; } builder.install(source, destination, 0o644); if source.is_symlink() { // If we have a symlink like libLLVM-18.so -> libLLVM.so.18.1, install the target of the // symlink, which is what will actually get loaded at runtime. builder.install(&t!(fs::canonicalize(source)), destination, 0o644); if install_symlink { // If requested, also install the symlink. This is used by download-ci-llvm. let full_dest = destination.join(source.file_name().unwrap()); builder.copy(&source, &full_dest); } } else { builder.install(&source, destination, 0o644); } } /// Maybe add LLVM object files to the given destination lib-dir. Allows either static or dynamic linking. /// /// Returns whether the files were actually copied. fn maybe_install_llvm(builder: &Builder<'_>, target: TargetSelection, dst_libdir: &Path) -> bool { fn maybe_install_llvm( builder: &Builder<'_>, target: TargetSelection, dst_libdir: &Path, install_symlink: bool, ) -> bool { // If the LLVM was externally provided, then we don't currently copy // artifacts into the sysroot. This is not necessarily the right // choice (in particular, it will require the LLVM dylib to be in", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-4348c4ad16871c215f49427ce9673ec43a964e9793e7b167250141b1b06025af", "text": "} else { PathBuf::from(file) }; install_llvm_file(builder, &file, dst_libdir); install_llvm_file(builder, &file, dst_libdir, install_symlink); } !builder.config.dry_run() } else {", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. 
If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-d0c8fbb72849790c20aafa37de0a0fe7ff5904b41f18fa63ba9100d652b37223", "text": "// dynamically linked; it is already included into librustc_llvm // statically. if builder.llvm_link_shared() { maybe_install_llvm(builder, target, &dst_libdir); maybe_install_llvm(builder, target, &dst_libdir, false); } }", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-988a4cb06e4a0195fd1e253172c761c66a4b13bec8e08e2dcc34e7c6dcdccf66", "text": "let mut tarball = Tarball::new(builder, \"rust-dev\", &target.triple); tarball.set_overlay(OverlayKind::LLVM); // LLVM requires a shared object symlink to exist on some platforms. tarball.permit_symlinks(true); builder.ensure(crate::core::build_steps::llvm::Llvm { target });", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-6de69da82fe35cd2a7b67aff0af8e3d039a00a26bed7a218bcc8bc427ebfc44d", "text": "// of `rustc-dev` to support the inherited `-lLLVM` when using the // compiler libraries. 
let dst_libdir = tarball.image_dir().join(\"lib\"); maybe_install_llvm(builder, target, &dst_libdir); maybe_install_llvm(builder, target, &dst_libdir, true); let link_type = if builder.llvm_link_shared() { \"dynamic\" } else { \"static\" }; t!(std::fs::write(tarball.image_dir().join(\"link-type.txt\"), link_type), dst_libdir);", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-b9cc0f1354b1e8317757ce1a9c902047e480fcc816ebc8d0b67176dd393efbc8", "text": "let out_dir = builder.llvm_out(target); let mut llvm_config_ret_dir = builder.llvm_out(builder.config.build); if (!builder.config.build.is_msvc() || builder.ninja()) && !builder.config.llvm_from_ci { llvm_config_ret_dir.push(\"build\"); } llvm_config_ret_dir.push(\"bin\"); let build_llvm_config = llvm_config_ret_dir.join(exe(\"llvm-config\", builder.config.build)); let llvm_cmake_dir = out_dir.join(\"lib/cmake/llvm\");", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-d762eda0e70bc3639fe4fa8f8abbfc759725be3b1f0c36ab643fa5315004e487", "text": " Subproject commit 9ea7f739f257b049a65deeb1f2455bb2ea021cfa Subproject commit 7973f3560287d750500718314a0fd4025bd8ac0e ", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-d25f843e7f82d83410e5a1e10596361d3de145d222cc233ee80318ec3e9d6aa7", "query": "Reproduction: Miri and prints But prints I reported this to LLVM but was told that the IR generated from MIR has alignment issues. 
I really don't know much about LLVM IR but I suspect the problem is The second element of () is not guaranteed to be 16-byte aligned cc\nCan you reproduce this without using u128/i128? And/or does this reproduce with LLVM 18?\nIt's still reproducible on nikic's llvm-18 branch. If i replace all with then it no longer reproduces, but of course I don't know if there are other examples out there that triggers the same bug without i128\nAh; what I meant is can you reproduce this issue using a type which in Rust has the same layout as , but which is not ? Perhaps ?\nYes, still reproducible after replacing it with a struct (both zero-sized and with members)\nWG-prioritization assigning priority (). label -I-prioritize +P-high", "positive_passages": [{"docid": "doc-en-rust-354431a59a9300c89b53bead029639a778e8049b38ecbd9640c4a5f7c0fc9df6", "text": "})?; let libdir = env.build_artifacts().join(\"stage2\").join(\"lib\"); let llvm_lib = io::find_file_in_dir(&libdir, \"libLLVM\", \".so\")?; // The actual name will be something like libLLVM.so.18.1-rust-dev. let llvm_lib = io::find_file_in_dir(&libdir, \"libLLVM.so\", \"\")?; log::info!(\"Optimizing {llvm_lib} with BOLT\");", "commid": "rust_pr_121395"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-969fd0cc1d039310f255de29def1aa40c109144ab184014af1342ab19594a5f3", "text": "resolve_consider_marking_as_pub = consider marking `{$ident}` as `pub` in the imported module resolve_consider_move_macro_position = consider moving the definition of `{$ident}` before this call resolve_const_not_member_of_trait = const `{$const_}` is not a member of trait `{$trait_}` .label = not a member of trait `{$trait_}`", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. 
(The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-f51ff4fe3b760bcb7bb6a654329771a166581053a6e8c77764db87e774206317", "text": "attempt to use a non-constant value in a constant .suggestion = try using `Self` resolve_macro_defined_later = a macro with the same name exists, but it appears later at here resolve_macro_expected_found = expected {$expected}, found {$found} `{$macro_path}`", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-9101940946c343904bc40b46fe0cbdb723d1f291ac9c41e32b6b4c29bfaf0f1b", "text": "use thin_vec::{thin_vec, ThinVec}; use crate::errors::{AddedMacroUse, ChangeImportBinding, ChangeImportBindingSuggestion}; use crate::errors::{ConsiderAddingADerive, ExplicitUnsafeTraits, MaybeMissingMacroRulesName}; use crate::errors::{ ConsiderAddingADerive, ExplicitUnsafeTraits, MacroDefinedLater, MacroSuggMovePosition, MaybeMissingMacroRulesName, }; use crate::imports::{Import, ImportKind}; use crate::late::{PatternSource, Rib}; use crate::{errors as errs, BindingKey};", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. 
(The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-93c1673ef84ee86430ef5b733ff5869de94f3a556d23516ab31711e68fc8253a", "text": "return; } let unused_macro = self.unused_macros.iter().find_map(|(def_id, (_, unused_ident))| { if unused_ident.name == ident.name { Some((def_id.clone(), unused_ident.clone())) } else { None } }); if let Some((def_id, unused_ident)) = unused_macro { let scope = self.local_macro_def_scopes[&def_id]; let parent_nearest = parent_scope.module.nearest_parent_mod(); if Some(parent_nearest) == scope.opt_def_id() { err.subdiagnostic(self.dcx(), MacroDefinedLater { span: unused_ident.span }); err.subdiagnostic(self.dcx(), MacroSuggMovePosition { span: ident.span, ident }); return; } } if self.macro_names.contains(&ident.normalize_to_macros_2_0()) { err.subdiagnostic(self.dcx(), AddedMacroUse); return;", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-c853a297983c54e9e2ef7e5fabee5f808ba821041840cc22e36f04323f48752a", "text": "} #[derive(Subdiagnostic)] #[note(resolve_macro_defined_later)] pub(crate) struct MacroDefinedLater { #[primary_span] pub(crate) span: Span, } #[derive(Subdiagnostic)] #[label(resolve_consider_move_macro_position)] pub(crate) struct MacroSuggMovePosition { #[primary_span] pub(crate) span: Span, pub(crate) ident: Ident, } #[derive(Subdiagnostic)] #[note(resolve_missing_macro_rules_name)] pub(crate) struct MaybeMissingMacroRulesName { #[primary_span]", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. 
(The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-1cda3c2f87f32d37b2a43829f62ef353d341638181c30e01dd2dada9cec18b0f", "text": " mod demo { fn hello() { something_later!(); //~ ERROR cannot find macro `something_later` in this scope } macro_rules! something_later { () => { println!(\"successfully expanded!\"); }; } } fn main() {} ", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-9bade7b6c526bae56a4c988e5028989593e107fe060105395125f61a378aa677", "text": " error: cannot find macro `something_later` in this scope --> $DIR/defined-later-issue-121061-2.rs:3:9 | LL | something_later!(); | ^^^^^^^^^^^^^^^ consider moving the definition of `something_later` before this call | note: a macro with the same name exists, but it appears later at here --> $DIR/defined-later-issue-121061-2.rs:6:18 | LL | macro_rules! something_later { | ^^^^^^^^^^^^^^^ error: aborting due to 1 previous error ", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-e3df51e71a74896f4d81e03b17ccbe35bd943262809543c72aa8de00fddf58ba", "text": " fn main() { something_later!(); //~ ERROR cannot find macro `something_later` in this scope } macro_rules! something_later { () => { println!(\"successfully expanded!\"); }; } ", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-79689aef23903b9e3abb96cb3d5138ccb7c801ce7711cd1b39f84779d5d700a1", "query": "Often times, the actual error of the attempted call and the warning of the unused macro can be apart and/or soaked in other errors. 
Due to this, it might be desirable to couple a help message addressing this specific situation of a later definition. This might also be useful advice for beginners who are just starting out with Rust and might not be aware of the horrors of declarative macros using , and as such could appreciate a helping hand by the compiler here. No response_ I tried to keep the suggested output close to the one for similarly named macros/variables, but that might not be applicable here and instead deserve its own section under the code excerpt. Open for bikeshedding. Like all of this, actually. Hmm. (The message is the same on nightly 2024-02-12.)\nlabel +D-terse +D-newcomer-roadblock\nThank you!!", "positive_passages": [{"docid": "doc-en-rust-d6c427851beb6b862bee3f647ceeb310a1dbb5b3817cd283baa744f509b6a4c8", "text": " error: cannot find macro `something_later` in this scope --> $DIR/defined-later-issue-121061.rs:2:5 | LL | something_later!(); | ^^^^^^^^^^^^^^^ consider moving the definition of `something_later` before this call | note: a macro with the same name exists, but it appears later at here --> $DIR/defined-later-issue-121061.rs:5:14 | LL | macro_rules! something_later { | ^^^^^^^^^^^^^^^ error: aborting due to 1 previous error ", "commid": "rust_pr_121130"}], "negative_passages": []} {"query_id": "q-en-rust-f3fbecf67faffd5a91872fb5dc902a154b4c8dcfc6ac73a0e7b4360d7c80fe87", "query": " $DIR/restrict_type_hir.rs:31:5 | LL | _: i32, | ^^^^^^ error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:32:5 | LL | _: MyI32, | ^^^^^^^^ error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:33:5 | LL | _: BadEnum, | ^^^^^^^^^^ error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:34:5 | LL | _: BadEnum2, | ^^^^^^^^^^^ error: named type of unnamed field must have `#[repr(C)]` representation --> $DIR/restrict_type_hir.rs:36:5 | LL | _: dep::BadStruct, | ^^^^^^^^^^^^^^^^^ unnamed field defined here | ::: $DIR/auxiliary/dep.rs:4:1 | LL | pub struct BadStruct(()); | -------------------- `BadStruct` defined here | help: add `#[repr(C)]` to this struct --> $DIR/auxiliary/dep.rs:4:1 | LL + #[repr(C)] LL | pub struct BadStruct(()); | error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:37:5 | LL | _: dep::BadEnum, | ^^^^^^^^^^^^^^^ error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:38:5 | LL | _: dep::BadEnum2, | ^^^^^^^^^^^^^^^^ error: unnamed fields can only have struct or union types --> $DIR/restrict_type_hir.rs:39:5 | LL | _: dep::BadAlias, | ^^^^^^^^^^^^^^^^ error: aborting due to 8 previous errors ", "commid": "rust_pr_121198"}], "negative_passages": []} {"query_id": "q-en-rust-9e6952b0e4c733e43b487b22139c7b9d8f25234713803d47bad52f4aa9eae59a", "query": " $DIR/normalize-conflicting-impls.rs:10:8 | LL | R::Value: DimName, | ^^^^^ associated type `Value` not found error[E0119]: conflicting implementations of trait `Allocator<_, ()>` for type `DefaultAllocator` --> $DIR/normalize-conflicting-impls.rs:14:1 | LL | / impl Allocator for DefaultAllocator LL | | where LL | | R::Value: DimName, | |______________________- first implementation here ... LL | impl Allocator for DefaultAllocator {} | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `DefaultAllocator` error: aborting due to 2 previous errors Some errors have detailed explanations: E0119, E0220. For more information about an error, try `rustc --explain E0119`. 
", "commid": "rust_pr_121181"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-b5b64d9060e5f594e3b32ab3eee59689eec2b863179801e322b57b5090ca15eb", "text": "#![allow(rustc::untranslatable_diagnostic)] use either::Either; use hir::ClosureKind; use rustc_data_structures::captures::Captures; use rustc_data_structures::fx::FxIndexSet; use rustc_errors::{codes::*, struct_span_code_err, Applicability, Diag, MultiSpan};", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-409f47c5f5bcfe300e2012c02feae4be96eff16a78e69b11439659d177d53052", "text": "} else if let UseSpans::FnSelfUse { kind: CallKind::Normal { .. }, .. } = move_spans { // We already suggest cloning for these cases in `explain_captures`. } else if let UseSpans::ClosureUse { closure_kind: ClosureKind::Coroutine(CoroutineKind::Desugared(_, CoroutineSource::Block)), args_span: _, capture_kind_span: _, path_span, } = move_spans { self.suggest_cloning(err, ty, expr, path_span); } else if self.suggest_hoisting_call_outside_loop(err, expr) { // The place where the the type moves would be misleading to suggest clone. // #121466", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-3c1f3cf9eec3d0b3d7bb299de038256e5923808814b5c4bd38dba351969290f4", "text": "} // FIXME: We make sure that this is a normal top-level binding, // but we could suggest `todo!()` for all uninitalized bindings in the pattern pattern // but we could suggest `todo!()` for all uninitialized bindings in the pattern pattern if let hir::StmtKind::Let(hir::LetStmt { span, ty, init: None, pat, .. }) = &ex.kind && let hir::PatKind::Binding(..) = pat.kind", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-cef54e1b91a40dbb60ff5b7e8a2ca89461d3cd4460fa89b306d83ef9312c90a4", "text": "true } /// In a move error that occurs on a call wihtin a loop, we try to identify cases where cloning /// In a move error that occurs on a call within a loop, we try to identify cases where cloning /// the value would lead to a logic error. 
We infer these cases by seeing if the moved value is /// part of the logic to break the loop, either through an explicit `break` or if the expression /// is part of a `while let`.", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-a60addf36c3f7bda8274153931fb465bd6d9337e6d669790ce1b791cc0c97b6e", "text": "{ // FIXME: We could check that the call's *parent* takes `&mut val` to make the // suggestion more targeted to the `mk_iter(val).next()` case. Maybe do that only to // check for wheter to suggest `let value` or `let mut value`. // check for whether to suggest `let value` or `let mut value`. let span = in_loop.span; if !finder.found_breaks.is_empty()", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-d2bfcd116f9e9ecff13cfaad405983acbf185d1e159a055bc5da099f71524a4a", "text": " //@ edition:2021 async fn clone_async_block(value: String) { for _ in 0..10 { async { //~ ERROR: use of moved value: `value` [E0382] drop(value); //~^ HELP: consider cloning the value if the performance cost is acceptable }.await } } fn main() {} ", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-d78a28c80280dbfbe670760a941d50922ed04307f37282d74793ada34680c2fd", "query": "I expect that the invocation to be for the value that is moved, but for some reason it is doing it for the entire closure. No response No response\nlabel +E-needs-mcve\nMCVE or", "positive_passages": [{"docid": "doc-en-rust-daa75c6ad631ae958e1b60eda720a89948583d5e3b97c42d25d4c08747cd02a9", "text": " error[E0382]: use of moved value: `value` --> $DIR/cloning-in-async-block-121547.rs:5:9 | LL | async fn clone_async_block(value: String) { | ----- move occurs because `value` has type `String`, which does not implement the `Copy` trait LL | for _ in 0..10 { | -------------- inside of this loop LL | / async { LL | | drop(value); | | ----- use occurs due to use in coroutine LL | | LL | | }.await | |_________^ value moved here, in previous iteration of loop | help: consider cloning the value if the performance cost is acceptable | LL | drop(value.clone()); | ++++++++ error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0382`. 
", "commid": "rust_pr_122589"}], "negative_passages": []} {"query_id": "q-en-rust-577b2d31afb6271cb2335741b57a534d2c858d544dfcacd2268b43d5423d6d50", "query": " $DIR/double-opaque-parent-predicates.rs:3:12 | LL | #![feature(generic_const_exprs)] | ^^^^^^^^^^^^^^^^^^^ | = note: see issue #76560 for more information = note: `#[warn(incomplete_features)]` on by default warning: 1 warning emitted ", "commid": "rust_pr_125501"}], "negative_passages": []} {"query_id": "q-en-rust-8ed5c24ee95af5aba0d841835b1f32eafc32b117d84ce6fd4b4ba7cfd1c0c488", "query": "I'm referring to this particular line Even though in environments Global as a structure exists and is defined, and it only shows up in PhantomData, the use of the structure is most likely undefined if attribute is never used. Yes, there will be no calls to Global as it's only a phantom, and execution will not be affected, but just seems ill-defined at least. For completeness: compilation of code that uses with custom allocator under is not affected, and there is no sign of runtime bugs.\nDoes this matter? It holds it directly:\nMy issue is the opposite - it should be in the PhantomData too, but not implicit\nI understand, but it already includes for dropck, so why should it include it in the too?\nTo avoid inclusion of Global that has no logical relation to the BTreeMap instance with custom allocator\nIf it's accepted as a bug I can quickly fix it, at least it passes tests", "positive_passages": [{"docid": "doc-en-rust-1d26127a37297f94c5109c4bf93b1b5634ee1d3e989db258f792953871186984", "text": "/// `ManuallyDrop` to control drop order (needs to be dropped after all the nodes). pub(super) alloc: ManuallyDrop, // For dropck; the `Box` avoids making the `Unpin` impl more strict than before _marker: PhantomData>, _marker: PhantomData>, } #[stable(feature = \"btree_drop\", since = \"1.7.0\")]", "commid": "rust_pr_121847"}], "negative_passages": []} {"query_id": "q-en-rust-71921e8f73441c8434b3cfedd6afab93d0976d307613359b606ab1ca6d6d6338", "query": "Trying to build Miri against latest rustc artifacts fails: Regression range: Likely cause: Cc\nHow does miri link against rust?\nJust a bunch of :\nGiven that rustc CI still works and only builds against the distributed rustc toolchain are affected, it seems likely that the issue is related to the rustc-dev component -- maybe that doesn't contain all the required files any more?\nReproducing instructions: checkout out to install the right toolchain (that branch uses ) -\nThe necessary symlink is indeed not part of rustc-dev (only rust-dev). We could add it, but it's not obvious to me why it is needed. I'm not familiar with how rustc generates that linker invocation, but the fact that it explicitly includes in it seems wrong to me ( is a dependency of and just linking against should be sufficient).\nThere is no component? At least doesn't list one and comes back empty. I don't know anything about this linking magic either.^^ Cc\nBut evidently the symlink is needed... so in the mean time it'd be good to unstuck this by adding the symlink I think. I expect this will break out-of-tree builds of Clippy as well, and rustfmt -- basically every tool that links against rustc.\nAh, it's not a real component, just the tarball that download-ci-llvm uses. Main problem is that we have checks against shipping symlinks, and I expect they exist for good reason...\nAh, yeah probably Windows doesn't like them. 
Why can't rustc_driver link against the correct .so file, i.e., why is a symlink needed in the first place?\nI think this is the reason why it gets to the linker line: This is because llvm-config returns something like and the symlink is needed to resolve that to . And it seems like rustc just embeds the original name in the dylib, rather than the symlink-resolved variant, so the symlink is still needed. Possibly that's a bug? Similar to how shared objects embed the resolved name, maybe rustc should do the same for its own metadata. Though independent of general rustc behavior, I guess we could explicitly resolve the symlink in the for rustcllvm. That might be the most straightforward fix.\nIf llvm-config inherently relies on a symlink, how do they support Windows?\nThat would break compiling an rlib which states that it links against a certain dylib when said dylib is not available on the host, right? Currently you only need dylibs to be available when actually linking (that is using a crate type other than rlib or staticlib). It includes , not . Even if just was passed, has a for and thus needs to be present in . Edit: Forgot it is supposed to be in llvm-tools-preview, not in rustc-dev. By the way has never been a symlink afaik. Instead a separate copy was shipped in both the rustc and llvm-tools-preview components. The first for rustc itself to link against and the second for user code to link against. Just like we ship in both the rustc and rustc-dev components. Both copies just happened to be identical.\nI just looked at the latest nightly and is present in both and as it should and has the correct filename for it. also has as .\nI got confused about when got merged. I though it was already included in the latest nightly. I just looked at the llvm-preview component for that PR and it includes rather than the that is expected. Is it possible to rename it back and patch up the ? There is no way to link against with all linkers without a symlink, while for that is not a problem.\nWhy can't rustc_driver include ?\nBecause the linker will prefix the name with and postfix it with , so would cause the linker to look for rather than . (notice the different location of ) The only portable way to link against is to create a symlink from to , set the to and then tell the linker to link against . But on Windows we can't use symlinks. While on linux is possible, this is not the case on most other targets.\nWindows doesn't have a concept of sonames, so I don't think a symlink would be used there. I don't know what the dylib is called there. We use static linking there anyway. So I think we wouldn't actually ship a symlink on anything but Linux. More problematic is the other part of this comment: If rustup-toolchain-install-master can't deal with symlinks, that's also a problem for Linux.\nAny platform except Windows you mean?\nAs far as I can tell, dist-x86_64-linux is the only configuration where we link LLVM dynamically. (Well and the llvm-16/llvm-17 images, but that's non-dist and a different situation anyway.)\nCI also starts with this exact same issue.\nSo to summarize, I see a few options here: Undo the library name change by carrying a patch in our LLVM fork. Implies carrying a long-term patch to LLVM, though a fairly small one. Undo the library name change after the fact. As pointed out, this isn't just a matter of moving the files, we also have to fix up the DTSONAME, which doesn't appear straightforward. Would require adding a dependency on patchelf? 
Ship the symlink in rustup components like rustc-dev, on Linux only. May require updates to tooling like rustup-toolchain-install-master. Change the way rustc links against upstream dylib dependencies, by resolving library names. Requires replicating linker search path logic and may break other other things. Not sure if this is possible. Same as 4, but only do this in rustllvms Again, not sure about the consequences, but at least this is more limited. I think 3 is probably ideal long term, but depends on just how broken symlink support is right now. Is this a matter of \"we'll copy the file instead of symlinking\" or \"rustup-toolchain-install-master is going to abort\"? Maybe knows. The easiest fix would be 1.\nThere's another fix, which is to ship a with content\nOooh, that's a great idea!\nI got klint CI working with the fix:\nOption 6, convince LLVM that this way of naming libraries is a bad idea and causes problems? I can't tell to what extent this is caused by Rust idiosyncrasies vs LLVM doing something strange. Wait, a .so file can be just a text file...?!?\nBasically, LLVM used to do it's own thing here, and now follows the standard convention for shared object naming on Linux. This turns out to be quite inconvenient for us, but I don't think it would be reasonable to ask LLVM to undo the change. (Though, imho, this change really should not have been done in an rc3 release...) It can be a linker script. Somewhat unusual, but things like or are often linker scripts.\nWell, if that linker script solution works that would be great. :)\nThis is now beginning to block Miri development. If someone reading along knows enough about linkers to approve that would be great. :)", "positive_passages": [{"docid": "doc-en-rust-58b695892a42094c6ebe18ace508ecb5a64a62206f3e15b0936425f11a353752", "text": "// If we have a symlink like libLLVM-18.so -> libLLVM.so.18.1, install the target of the // symlink, which is what will actually get loaded at runtime. builder.install(&t!(fs::canonicalize(source)), destination, 0o644); let full_dest = destination.join(source.file_name().unwrap()); if install_symlink { // If requested, also install the symlink. This is used by download-ci-llvm. let full_dest = destination.join(source.file_name().unwrap()); // For download-ci-llvm, also install the symlink, to match what LLVM does. Using a // symlink is fine here, as this is not a rustup component. builder.copy(&source, &full_dest); } else { // Otherwise, replace the symlink with an equivalent linker script. This is used when // projects like miri link against librustc_driver.so. We don't use a symlink, as // these are not allowed inside rustup components. let link = t!(fs::read_link(source)); t!(std::fs::write(full_dest, format!(\"INPUT({})n\", link.display()))); } } else { builder.install(&source, destination, 0o644);", "commid": "rust_pr_121967"}], "negative_passages": []} {"query_id": "q-en-rust-755a3d6418d79c56655c5c091f7f880eabdba224dcb17a37080548980bc499e7", "query": "Fuzzer generated custom MIR This code has UB under Stacked Borrows, but UB-free under Tree Borrows. Right: Wrong: While it looks like an LLVM issue, I couldn't get an IR-only reproduction. 
It's possible that rustc is producing IR with UBs On latest nightly cc $DIR/recover-colon-instead-of-eq-in-local.rs:2:32 --> $DIR/recover-colon-instead-of-eq-in-local.rs:5:32 | LL | let _: std::env::temp_dir().join(&self, push: Box); | - ^ expected one of `!`, `+`, `->`, `::`, `;`, or `=` | | | while parsing the type for `_` error: expected one of `!`, `+`, `->`, `::`, `;`, or `=`, found `.` --> $DIR/recover-colon-instead-of-eq-in-local.rs:9:32 | LL | let _: std::env::temp_dir().join(\"foo\"); | - ^ expected one of `!`, `+`, `->`, `::`, `;`, or `=`", "commid": "rust_pr_122115"}], "negative_passages": []} {"query_id": "q-en-rust-03215a0feb4c97eac70a6e292ef69458786f60dda734323ec1741125efb59cb8", "query": " $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:30 | LL | trait Trait { | ^^^ not found in this scope warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 | LL | trait Trait { | ^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see = note: `#[warn(bare_trait_objects)]` on by default help: if this is an object-safe trait, use `dyn` | LL | trait Trait { | +++ error[E0391]: cycle detected when computing type of `Trait::N` --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 | LL | trait Trait { | ^^^^^ | = note: ...which immediately requires computing type of `Trait::N` again note: cycle used when computing explicit predicates of trait `Trait` --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:1 | LL | trait Trait { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error[E0391]: cycle detected when computing type of `Trait::N` --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:13 | LL | trait Trait { | ^^^^^^^^^^^^^^^^^^^^ | = note: ...which immediately requires computing type of `Trait::N` again note: cycle used when computing explicit predicates of trait `Trait` --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:1 | LL | trait Trait { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information error: `(dyn Trait<{const error}> + 'static)` is forbidden as the type of a const generic parameter --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 | LL | trait Trait { | ^^^^^ | = note: the only supported types are integers, `bool` and `char` error: aborting due to 4 previous errors; 1 warning emitted Some errors have detailed explanations: E0391, E0425. For more information about an error, try `rustc --explain E0391`. ", "commid": "rust_pr_122370"}], "negative_passages": []} {"query_id": "q-en-rust-03ba8c452c29f931f5be8f250ddd68dab841c0c61567a7ac4063fd185f697748", "query": " $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:18 | LL | trait Trait { | - first use of `N` ... 
LL | fn fnc(&self) -> Trait { | ^ already used error[E0425]: cannot find value `bar` in this scope --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:30 | LL | trait Trait { | ^^^ not found in this scope error[E0423]: expected value, found builtin type `u32` --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:29 | LL | fn fnc(&self) -> Trait { | ^^^ not a value error[E0425]: cannot find value `bar` in this scope --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:26:9 | LL | bar | ^^^ not found in this scope warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 |", "commid": "rust_pr_122927"}], "negative_passages": []} {"query_id": "q-en-rust-03ba8c452c29f931f5be8f250ddd68dab841c0c61567a7ac4063fd185f697748", "query": " $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:12 | LL | fn fnc(&self) -> Trait { | ^^^^^^^^^^^^^^^^^^^^ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:21 | LL | fn fnc(&self) -> Trait { | ^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see help: if this is an object-safe trait, use `dyn` | LL | fn fnc(&self) -> Trait { | +++ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:44 | LL | fn fnc(&self) -> Trait { | ^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see help: if this is an object-safe trait, use `dyn` | LL | fn fnc(&self) -> dyn Trait { | +++ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 | LL | trait Trait { | ^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` help: if this is an object-safe trait, use `dyn` | LL | trait Trait { | +++ error[E0038]: the trait `Trait` cannot be made into an object --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 | LL | trait Trait { | ^^^^^ `Trait` cannot be made into an object | note: for a trait to be \"object safe\" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:8 | LL | trait Trait { | ----- this trait cannot be made into an object... ... LL | fn fnc(&self) -> Trait { | ^^^ ...because method `fnc` has generic type parameters = help: consider moving `fnc` to another trait error[E0038]: the trait `Trait` cannot be made into an object --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:13 | LL | trait Trait { | ^^^^^^^^^^^^^^^^^^^^ `Trait` cannot be made into an object | note: for a trait to be \"object safe\" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:8 | LL | trait Trait { | ----- this trait cannot be made into an object... ... 
LL | fn fnc(&self) -> Trait { | ^^^ ...because method `fnc` has generic type parameters = help: consider moving `fnc` to another trait error: `(dyn Trait<{const error}> + 'static)` is forbidden as the type of a const generic parameter --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:22 |", "commid": "rust_pr_122927"}], "negative_passages": []} {"query_id": "q-en-rust-03ba8c452c29f931f5be8f250ddd68dab841c0c61567a7ac4063fd185f697748", "query": " $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:44 | LL | trait Trait { | ----- in this trait ... LL | fn fnc(&self) -> Trait { | ^^^^^ | help: you might have meant to use `Self` to refer to the implementing type | LL | fn fnc(&self) -> Self { | ~~~~ warning: trait objects without an explicit `dyn` are deprecated --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:21 | LL | fn fnc(&self) -> Trait { | ^^^^^ | = warning: this is accepted in the current edition (Rust 2015) but is a hard error in Rust 2021! = note: for more information, see = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` help: if this is an object-safe trait, use `dyn` | LL | fn fnc(&self) -> Trait { | +++ error[E0038]: the trait `Trait` cannot be made into an object --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:21 | LL | fn fnc(&self) -> Trait { | ^^^^^ `Trait` cannot be made into an object | note: for a trait to be \"object safe\" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:8 | LL | trait Trait { | ----- this trait cannot be made into an object... ... LL | fn fnc(&self) -> Trait { | ^^^ ...because method `fnc` has generic type parameters = help: consider moving `fnc` to another trait error[E0038]: the trait `Trait` cannot be made into an object --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:1:13 | LL | trait Trait { | ^^^^^^^^^^^^^^^^^^^^ `Trait` cannot be made into an object | note: for a trait to be \"object safe\" it needs to allow building a vtable to allow the call to be resolvable dynamically; for more information visit --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:8 | LL | trait Trait { | ----- this trait cannot be made into an object... ... LL | fn fnc(&self) -> Trait { | ^^^ ...because method `fnc` has generic type parameters = help: consider moving `fnc` to another trait = note: duplicate diagnostic emitted due to `-Z deduplicate-diagnostics=no` error: `(dyn Trait<{const error}> + 'static)` is forbidden as the type of a const generic parameter --> $DIR/ice-hir-wf-check-anon-const-issue-122199.rs:13:21 | LL | fn fnc(&self) -> Trait { | ^^^^^ | = note: the only supported types are integers, `bool` and `char` error: aborting due to 14 previous errors; 5 warnings emitted Some errors have detailed explanations: E0391, E0425. For more information about an error, try `rustc --explain E0391`. Some errors have detailed explanations: E0038, E0391, E0403, E0423, E0425. For more information about an error, try `rustc --explain E0038`. ", "commid": "rust_pr_122927"}], "negative_passages": []} {"query_id": "q-en-rust-4733d423b5dd4012ab23e5a41054aebd0158996f28caeb8dd4c733c98a0a9341", "query": "The compiler allows this code when it shouldn't The following code is buggy: Invalid string contains, as the name suggests, an invalid string; sometimes I get a string filled with , sometimes just garbage, and usually it will eventually assert that the string contains invalid characters. 
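The failure mode described above — later comments in the thread put it as "the first gets dropped when the second one is assigned" while an old borrow is still in use — is exactly what the borrow checker rejects today. Below is a minimal assumed sketch of that pattern (not the reporter's original snippet, which is not reproduced in this record); uncommenting the marked line reproduces the rejection.

fn main() {
    let mut s = String::from("first");
    let view: &str = &s; // borrow of the current contents of `s`
    // s = String::from("second"); // error[E0506]: cannot assign to `s` because it is borrowed
    assert_eq!(view, "first"); // the borrow is still live here, which is what forbids the assignment above
    s = String::from("second"); // fine once no reference into the old String remains
    assert_eq!(s, "second");
}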
It appears that the first gets dropped when the second one is assigned? Either the first is supposed to be dropped as happens now, and the borrow checker should forbid this code (since the old no longer exists, cannot have a lifetime), or it should let the first live until the end of the current scope, and make the above work.\nWhat platform are you running this on? This code compiles and prints for me on OSX\nI hadn't actually tried in my own rustc, I created the minimized version on rusti: Inspired by an actual bug that I got after trying to read some stuff from a file. It seems what I posted originally does actually compile and work properly on my system as well, I guess I should have tested that... I'll try and find and find a minimized example tomorrow.\nis getting old at this point, and I suspect that this was fixed by\nThis one asserts (sometimes, it's random) for real on my system, most recent build of master (debian 64-bit, rustc 0.10-pre 2014-02-12 14:51:48 -0800) Assuming the executable is named bug: run with Removing the line that says gives me the output: Otherwise it appears random, usually asserting:\nQuite plausibly due to , though I haven't dug in in detail.\nIt seems the bug persists even without the shadowing so perhaps this is related to , except for the 'match' part?\nIt appears that we allow this code to compile: Nominating, that's bad.\nAny suggestions for a better name for this issue? The current name is misleading. Perhaps something along the lines of \"Stale pointers can be created in safe code\"?\nUpdated title and description\ncc me\ncc\nInterestingly enough, the compiler rejects this code: It appears that the type ascription is throwing off the compiler?\nAnother example taken from\n0, P-backcompat-lang\nThe compiler correctly rejects:\ncc me\nhas taken a major detour to fix other two more fundamental bugs first. On top of that fix, the root cause of this bug is also becoming clearer. Case 1: Case 2: Case 3: The difference here is that the RHS is directly assignable to LHS in case 1 but not in case 2 + 3 so takes different code paths. And in latter two cases, there's bug in which upstream and downstream code inside disagree with the order of two parameters so two regions are merged incorrectly. The fix is rather simple. But the variance inference adds another twist to case 3. The result of variance inference can be used to do type parameter substitution, which is not enabled yet. The net effect is two negatives become positive; the current rejects that too.It did compile on top of , which in itself is incorrect. 
So I guess the fix for this bug must wait for the PR to land.", "positive_passages": [{"docid": "doc-en-rust-51addcf701c99d0cc6e215043500e871039c907816b052c9dd86000f3f7eb841", "text": "use middle::freevars; use middle::pat_util; use middle::ty; use middle::typeck::MethodCall; use middle::typeck::{MethodCall, MethodObject, MethodOrigin, MethodParam}; use middle::typeck::{MethodStatic}; use middle::typeck; use syntax::ast; use syntax::codemap::{Span}; use util::ppaux::Repr; use std::gc::Gc; use syntax::ast; use syntax::codemap::Span; /////////////////////////////////////////////////////////////////////////// // The Delegate trait", "commid": "rust_pr_15313"}], "negative_passages": []} {"query_id": "q-en-rust-4733d423b5dd4012ab23e5a41054aebd0158996f28caeb8dd4c733c98a0a9341", "query": "The compiler allows this code when it shouldn't The following code is buggy: Invalid string contains, as the name suggests, an invalid string; sometimes I get a string filled with , sometimes just garbage, and usually it will eventually assert that the string contains invalid characters. It appears that the first gets dropped when the second one is assigned? Either the first is supposed to be dropped as happens now, and the borrow checker should forbid this code (since the old no longer exists, cannot have a lifetime), or it should let the first live until the end of the current scope, and make the above work.\nWhat platform are you running this on? This code compiles and prints for me on OSX\nI hadn't actually tried in my own rustc, I created the minimized version on rusti: Inspired by an actual bug that I got after trying to read some stuff from a file. It seems what I posted originally does actually compile and work properly on my system as well, I guess I should have tested that... I'll try and find and find a minimized example tomorrow.\nis getting old at this point, and I suspect that this was fixed by\nThis one asserts (sometimes, it's random) for real on my system, most recent build of master (debian 64-bit, rustc 0.10-pre 2014-02-12 14:51:48 -0800) Assuming the executable is named bug: run with Removing the line that says gives me the output: Otherwise it appears random, usually asserting:\nQuite plausibly due to , though I haven't dug in in detail.\nIt seems the bug persists even without the shadowing so perhaps this is related to , except for the 'match' part?\nIt appears that we allow this code to compile: Nominating, that's bad.\nAny suggestions for a better name for this issue? The current name is misleading. Perhaps something along the lines of \"Stale pointers can be created in safe code\"?\nUpdated title and description\ncc me\ncc\nInterestingly enough, the compiler rejects this code: It appears that the type ascription is throwing off the compiler?\nAnother example taken from\n0, P-backcompat-lang\nThe compiler correctly rejects:\ncc me\nhas taken a major detour to fix other two more fundamental bugs first. On top of that fix, the root cause of this bug is also becoming clearer. Case 1: Case 2: Case 3: The difference here is that the RHS is directly assignable to LHS in case 1 but not in case 2 + 3 so takes different code paths. And in latter two cases, there's bug in which upstream and downstream code inside disagree with the order of two parameters so two regions are merged incorrectly. The fix is rather simple. But the variance inference adds another twist to case 3. The result of variance inference can be used to do type parameter substitution, which is not enabled yet. 
The net effect is two negatives become positive; the current rejects that too.It did compile on top of , which in itself is incorrect. So I guess the fix for this bug must wait for the PR to land.", "positive_passages": [{"docid": "doc-en-rust-865d0fe2d197203290d0d84bc5c50ccc2517bf935f2ba43c34cd0e8874291a37", "text": "WriteAndRead, // x += y } enum OverloadedCallType { FnOverloadedCall, FnMutOverloadedCall, FnOnceOverloadedCall, } impl OverloadedCallType { fn from_trait_id(tcx: &ty::ctxt, trait_id: ast::DefId) -> OverloadedCallType { for &(maybe_function_trait, overloaded_call_type) in [ (tcx.lang_items.fn_once_trait(), FnOnceOverloadedCall), (tcx.lang_items.fn_mut_trait(), FnMutOverloadedCall), (tcx.lang_items.fn_trait(), FnOverloadedCall) ].iter() { match maybe_function_trait { Some(function_trait) if function_trait == trait_id => { return overloaded_call_type } _ => continue, } } tcx.sess.bug(\"overloaded call didn't map to known function trait\") } fn from_method_id(tcx: &ty::ctxt, method_id: ast::DefId) -> OverloadedCallType { let method_descriptor = match tcx.methods.borrow_mut().find(&method_id) { None => { tcx.sess.bug(\"overloaded call method wasn't in method map\") } Some(ref method_descriptor) => (*method_descriptor).clone(), }; let impl_id = match method_descriptor.container { ty::TraitContainer(_) => { tcx.sess.bug(\"statically resolved overloaded call method belonged to a trait?!\") } ty::ImplContainer(impl_id) => impl_id, }; let trait_ref = match ty::impl_trait_ref(tcx, impl_id) { None => { tcx.sess.bug(\"statically resolved overloaded call impl didn't implement a trait?!\") } Some(ref trait_ref) => (*trait_ref).clone(), }; OverloadedCallType::from_trait_id(tcx, trait_ref.def_id) } fn from_method_origin(tcx: &ty::ctxt, origin: &MethodOrigin) -> OverloadedCallType { match *origin { MethodStatic(def_id) => { OverloadedCallType::from_method_id(tcx, def_id) } MethodParam(ref method_param) => { OverloadedCallType::from_trait_id(tcx, method_param.trait_id) } MethodObject(ref method_object) => { OverloadedCallType::from_trait_id(tcx, method_object.trait_id) } } } } /////////////////////////////////////////////////////////////////////////// // The ExprUseVisitor type //", "commid": "rust_pr_15313"}], "negative_passages": []} {"query_id": "q-en-rust-4733d423b5dd4012ab23e5a41054aebd0158996f28caeb8dd4c733c98a0a9341", "query": "The compiler allows this code when it shouldn't The following code is buggy: Invalid string contains, as the name suggests, an invalid string; sometimes I get a string filled with , sometimes just garbage, and usually it will eventually assert that the string contains invalid characters. It appears that the first gets dropped when the second one is assigned? Either the first is supposed to be dropped as happens now, and the borrow checker should forbid this code (since the old no longer exists, cannot have a lifetime), or it should let the first live until the end of the current scope, and make the above work.\nWhat platform are you running this on? This code compiles and prints for me on OSX\nI hadn't actually tried in my own rustc, I created the minimized version on rusti: Inspired by an actual bug that I got after trying to read some stuff from a file. It seems what I posted originally does actually compile and work properly on my system as well, I guess I should have tested that... 
I'll try and find and find a minimized example tomorrow.\nis getting old at this point, and I suspect that this was fixed by\nThis one asserts (sometimes, it's random) for real on my system, most recent build of master (debian 64-bit, rustc 0.10-pre 2014-02-12 14:51:48 -0800) Assuming the executable is named bug: run with Removing the line that says gives me the output: Otherwise it appears random, usually asserting:\nQuite plausibly due to , though I haven't dug in in detail.\nIt seems the bug persists even without the shadowing so perhaps this is related to , except for the 'match' part?\nIt appears that we allow this code to compile: Nominating, that's bad.\nAny suggestions for a better name for this issue? The current name is misleading. Perhaps something along the lines of \"Stale pointers can be created in safe code\"?\nUpdated title and description\ncc me\ncc\nInterestingly enough, the compiler rejects this code: It appears that the type ascription is throwing off the compiler?\nAnother example taken from\n0, P-backcompat-lang\nThe compiler correctly rejects:\ncc me\nhas taken a major detour to fix other two more fundamental bugs first. On top of that fix, the root cause of this bug is also becoming clearer. Case 1: Case 2: Case 3: The difference here is that the RHS is directly assignable to LHS in case 1 but not in case 2 + 3 so takes different code paths. And in latter two cases, there's bug in which upstream and downstream code inside disagree with the order of two parameters so two regions are merged incorrectly. The fix is rather simple. But the variance inference adds another twist to case 3. The result of variance inference can be used to do type parameter substitution, which is not enabled yet. The net effect is two negatives become positive; the current rejects that too.It did compile on top of , which in itself is incorrect. So I guess the fix for this bug must wait for the PR to land.", "positive_passages": [{"docid": "doc-en-rust-e41921f7dc90fed33c65fc9fa5c99a12f8e8b577902ebfe76a5bf30cd256609d", "text": "} } _ => { match self.tcx() .method_map .borrow() .find(&MethodCall::expr(call.id)) { Some(_) => { // FIXME(#14774, pcwalton): Implement this. let overloaded_call_type = match self.tcx() .method_map .borrow() .find(&MethodCall::expr(call.id)) { Some(ref method_callee) => { OverloadedCallType::from_method_origin( self.tcx(), &method_callee.origin) } None => { self.tcx().sess.span_bug( callee.span, format!(\"unexpected callee type {}\", callee_ty.repr(self.tcx())).as_slice()); callee_ty.repr(self.tcx())).as_slice()) } }; match overloaded_call_type { FnMutOverloadedCall => { self.borrow_expr(callee, ty::ReScope(call.id), ty::MutBorrow, ClosureInvocation); } FnOverloadedCall => { self.borrow_expr(callee, ty::ReScope(call.id), ty::ImmBorrow, ClosureInvocation); } FnOnceOverloadedCall => self.consume_expr(callee), } } }", "commid": "rust_pr_15313"}], "negative_passages": []} {"query_id": "q-en-rust-4733d423b5dd4012ab23e5a41054aebd0158996f28caeb8dd4c733c98a0a9341", "query": "The compiler allows this code when it shouldn't The following code is buggy: Invalid string contains, as the name suggests, an invalid string; sometimes I get a string filled with , sometimes just garbage, and usually it will eventually assert that the string contains invalid characters. It appears that the first gets dropped when the second one is assigned? 
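The overloaded-call handling added in the patch fragments above distinguishes three cases: an Fn-style call takes a shared borrow of the callee, an FnMut-style call takes a unique borrow, and an FnOnce-style call moves it. Here is a small present-day sketch of those semantics — an assumed example, not the compiler's own test, which follows a little further below.

// Calling a callable borrows or consumes the callee according to the Fn* trait used.
fn call_fn<F: Fn() -> i32>(f: &F) -> i32 { f() }            // shared borrow of the callee
fn call_fn_mut<F: FnMut() -> i32>(f: &mut F) -> i32 { f() } // unique borrow of the callee
fn call_fn_once<F: FnOnce() -> i32>(f: F) -> i32 { f() }    // moves the callee

fn main() {
    let x = 2;
    assert_eq!(call_fn(&|| x * 3), 6);

    let mut y = 0;
    let mut bump = || { y += 1; y };
    assert_eq!(call_fn_mut(&mut bump), 1);

    let s = String::from("hi");
    let consume = move || s.len() as i32;
    assert_eq!(call_fn_once(consume), 2);
    // `consume` has been moved; calling it again would be a use-after-move error,
    // the same class of error the `SFnOnce` case in the added test exercises.
}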
Either the first is supposed to be dropped as happens now, and the borrow checker should forbid this code (since the old no longer exists, cannot have a lifetime), or it should let the first live until the end of the current scope, and make the above work.\nWhat platform are you running this on? This code compiles and prints for me on OSX\nI hadn't actually tried in my own rustc, I created the minimized version on rusti: Inspired by an actual bug that I got after trying to read some stuff from a file. It seems what I posted originally does actually compile and work properly on my system as well, I guess I should have tested that... I'll try and find and find a minimized example tomorrow.\nis getting old at this point, and I suspect that this was fixed by\nThis one asserts (sometimes, it's random) for real on my system, most recent build of master (debian 64-bit, rustc 0.10-pre 2014-02-12 14:51:48 -0800) Assuming the executable is named bug: run with Removing the line that says gives me the output: Otherwise it appears random, usually asserting:\nQuite plausibly due to , though I haven't dug in in detail.\nIt seems the bug persists even without the shadowing so perhaps this is related to , except for the 'match' part?\nIt appears that we allow this code to compile: Nominating, that's bad.\nAny suggestions for a better name for this issue? The current name is misleading. Perhaps something along the lines of \"Stale pointers can be created in safe code\"?\nUpdated title and description\ncc me\ncc\nInterestingly enough, the compiler rejects this code: It appears that the type ascription is throwing off the compiler?\nAnother example taken from\n0, P-backcompat-lang\nThe compiler correctly rejects:\ncc me\nhas taken a major detour to fix other two more fundamental bugs first. On top of that fix, the root cause of this bug is also becoming clearer. Case 1: Case 2: Case 3: The difference here is that the RHS is directly assignable to LHS in case 1 but not in case 2 + 3 so takes different code paths. And in latter two cases, there's bug in which upstream and downstream code inside disagree with the order of two parameters so two regions are merged incorrectly. The fix is rather simple. But the variance inference adds another twist to case 3. The result of variance inference can be used to do type parameter substitution, which is not enabled yet. The net effect is two negatives become positive; the current rejects that too.It did compile on top of , which in itself is incorrect. So I guess the fix for this bug must wait for the PR to land.", "positive_passages": [{"docid": "doc-en-rust-e0f2477cc0a13132c6a6871d91660c7b5abddfe131ead6c62af5f4e16fc766b1", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![feature(overloaded_calls)] use std::ops::{Fn, FnMut, FnOnce}; struct SFn { x: int, y: int, } impl Fn<(int,),int> for SFn { fn call(&self, (z,): (int,)) -> int { self.x * self.y * z } } struct SFnMut { x: int, y: int, } impl FnMut<(int,),int> for SFnMut { fn call_mut(&mut self, (z,): (int,)) -> int { self.x * self.y * z } } struct SFnOnce { x: String, } impl FnOnce<(String,),uint> for SFnOnce { fn call_once(self, (z,): (String,)) -> uint { self.x.len() + z.len() } } fn f() { let mut s = SFn { x: 1, y: 2, }; let sp = &mut s; s(3); //~ ERROR cannot borrow `s` as immutable because it is also borrowed as mutable //~^ ERROR cannot borrow `s` as immutable because it is also borrowed as mutable } fn g() { let s = SFnMut { x: 1, y: 2, }; s(3); //~ ERROR cannot borrow immutable local variable `s` as mutable } fn h() { let s = SFnOnce { x: \"hello\".to_string(), }; s(\" world\".to_string()); s(\" world\".to_string()); //~ ERROR use of moved value: `s` } fn main() {} ", "commid": "rust_pr_15313"}], "negative_passages": []} {"query_id": "q-en-rust-5b9b45c15e00d0609696237e7acbadfe50d57c1ab9d01256267cd29d0ab8431d", "query": "When compiling my feature and is included I got the attached ICE. The following is the unstable configuration is the repository this occurs on. I will get a minimal verifiable example setup shortly. @feature_gate = sym::diagnostic_namespace; } declare_lint! {", "commid": "rust_pr_122482"}], "negative_passages": []} {"query_id": "q-en-rust-5b9b45c15e00d0609696237e7acbadfe50d57c1ab9d01256267cd29d0ab8431d", "query": "When compiling my feature and is included I got the attached ICE. The following is the unstable configuration is the repository this occurs on. I will get a minimal verifiable example setup shortly. #![deny(unknown_or_malformed_diagnostic_attributes)] #[diagnostic::unknown_attribute] //~^ERROR unknown diagnostic attribute struct Foo; fn main() {} ", "commid": "rust_pr_122482"}], "negative_passages": []} {"query_id": "q-en-rust-5b9b45c15e00d0609696237e7acbadfe50d57c1ab9d01256267cd29d0ab8431d", "query": "When compiling my feature and is included I got the attached ICE. The following is the unstable configuration is the repository this occurs on. I will get a minimal verifiable example setup shortly. error: unknown diagnostic attribute --> $DIR/deny_malformed_attribute.rs:3:15 | LL | #[diagnostic::unknown_attribute] | ^^^^^^^^^^^^^^^^^ | note: the lint level is defined here --> $DIR/deny_malformed_attribute.rs:1:9 | LL | #![deny(unknown_or_malformed_diagnostic_attributes)] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 1 previous error ", "commid": "rust_pr_122482"}], "negative_passages": []} {"query_id": "q-en-rust-98682eef439ad1af5fb6e38052e9af8b7066e32cf311ad5853e57d481185fd17", "query": " $DIR/synthetic-hir-has-parent.rs:7:9 | LL | String: Copy; | ^^^^^^^^^^^^ the trait `Copy` is not implemented for `String` | = help: see issue #48214 help: add `#![feature(trivial_bounds)]` to the crate attributes to enable | LL + #![feature(trivial_bounds)] | error[E0277]: the trait bound `String: Copy` is not satisfied --> $DIR/synthetic-hir-has-parent.rs:4:18 | LL | fn demo() -> impl Foo | ^^^^^^^^ the trait `Copy` is not implemented for `String` | = help: see issue #48214 help: add `#![feature(trivial_bounds)]` to the crate attributes to enable | LL + #![feature(trivial_bounds)] | error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0277`. 
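For context on the `diagnostic::` attribute namespace that the ICE record a little earlier in this block revolves around: the namespace carries hints for error reporting, and misspelled or malformed entries are what the `unknown_or_malformed_diagnostic_attributes` lint flags. The sketch below shows ordinary, well-formed usage; the trait and message text are made up for illustration and are unrelated to the ICE itself.

// `#[diagnostic::on_unimplemented]` customises the error printed when a bound on this
// trait is unsatisfied; unknown attributes under `diagnostic::` only trigger a lint.
#[diagnostic::on_unimplemented(
    message = "`{Self}` cannot be frobnicated",
    note = "implement `Frobnicate` for `{Self}` or use a supported wrapper type"
)]
trait Frobnicate {
    fn frobnicate(&self) -> u32;
}

impl Frobnicate for u32 {
    fn frobnicate(&self) -> u32 { *self + 1 }
}

fn main() {
    assert_eq!(3u32.frobnicate(), 4);
}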
", "commid": "rust_pr_123218"}], "negative_passages": []} {"query_id": "q-en-rust-3107a3df464aecec6021315f3fd8cf1c2c1b0097818c5877dba79a2700415494", "query": " $DIR/ensure-overriding-bindings-in-pattern-with-ty-err-doesnt-ice.rs:2:31 | LL | let str::<{fn str() { let str::T>>::as_bytes; }}, T>::as_bytes; | ^^^^^^^^^^^^^^^^^^ arbitrary expressions are not allowed in patterns error[E0412]: cannot find type `T` in this scope --> $DIR/ensure-overriding-bindings-in-pattern-with-ty-err-doesnt-ice.rs:2:55 | LL | let str::<{fn str() { let str::T>>::as_bytes; }}, T>::as_bytes; | ^ not found in this scope error[E0109]: type and const arguments are not allowed on builtin type `str` --> $DIR/ensure-overriding-bindings-in-pattern-with-ty-err-doesnt-ice.rs:2:15 | LL | let str::<{fn str() { let str::T>>::as_bytes; }}, T>::as_bytes; | --- ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^ type and const arguments not allowed | | | not allowed on builtin type `str` | help: primitive type `str` doesn't have generic parameters | LL - let str::<{fn str() { let str::T>>::as_bytes; }}, T>::as_bytes; LL + let str::as_bytes; | error[E0533]: expected unit struct, unit variant or constant, found associated function `str<, T>::as_bytes` --> $DIR/ensure-overriding-bindings-in-pattern-with-ty-err-doesnt-ice.rs:2:9 | LL | let str::<{fn str() { let str::T>>::as_bytes; }}, T>::as_bytes; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ not a unit struct, unit variant or constant error: aborting due to 4 previous errors Some errors have detailed explanations: E0109, E0412, E0533. For more information about an error, try `rustc --explain E0109`. ", "commid": "rust_pr_123202"}], "negative_passages": []} {"query_id": "q-en-rust-6c8c5870718a88b6f3e4292dcab7f49bf57fb7979e82b768e293019d6bd04832", "query": " $DIR/binary-op-suggest-deref.rs:60:13", "commid": "rust_pr_123085"}], "negative_passages": []} {"query_id": "q-en-rust-6c8c5870718a88b6f3e4292dcab7f49bf57fb7979e82b768e293019d6bd04832", "query": " $DIR/binops.rs:7:7", "commid": "rust_pr_123085"}], "negative_passages": []} {"query_id": "q-en-rust-6c8c5870718a88b6f3e4292dcab7f49bf57fb7979e82b768e293019d6bd04832", "query": " $DIR/dont-ice-on-invalid-lifetime-in-macro-definition.rs:5:17 | LL | e:: | ^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: consider inserting whitespace here | LL | e:: | + error: aborting due to 1 previous error ", "commid": "rust_pr_123223"}], "negative_passages": []} {"query_id": "q-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "query": " $DIR/lex-bad-str-literal-as-char-3.rs:5:21 | LL | println!('hello world'); | ^^^^^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: if you meant to write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error[E0762]: unterminated character literal --> $DIR/lex-bad-str-literal-as-char-3.rs:5:26 |", "commid": "rust_pr_123223"}], "negative_passages": []} {"query_id": "q-en-rust-eff82a61886b25b0f1dd4cf2300b1ef5995dc7fc7eccfb7efad8700de136ebcb", "query": " $DIR/lex-bad-str-literal-as-char-4.rs:4:25 | LL | println!('hello world'); | ^^^^^ unknown prefix | = note: prefixed identifiers and literals are reserved since Rust 2021 help: if you meant to write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error[E0762]: unterminated character literal --> $DIR/lex-bad-str-literal-as-char-4.rs:4:30 | LL | println!('hello world'); | ^^^ | help: if you meant to 
write a string literal, use double quotes | LL | println!(\"hello world\"); | ~ ~ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0762`. ", "commid": "rust_pr_123223"}], "negative_passages": []} {"query_id": "q-en-rust-b98989eba2174d54f8a300e0582a74e7d85cb3454c126c43fe80b95afe2d317f", "query": "In chrono we have support for behind a feature flag. As a workaround for I the line: It now gives a new error: cc $DIR/rustc-decodable-issue-123156.rs:10:10 | LL | #[derive(RustcDecodable)] | ^^^^^^^^^^^^^^ | = warning: this was previously accepted by the compiler but is being phased out; it will become a hard error in a future release! = note: for more information, see issue #64266 ", "commid": "rust_pr_123182"}], "negative_passages": []} {"query_id": "q-en-rust-efa79a06be0c64ba2d3dd8f41fb7769a86cafd7c35b18bcc52ffa18c13f61d4f", "query": " $DIR/unimplemented_pat.rs:9:15 | LL | type Always = pattern_type!(Option is Some(_)); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: see issue #123646 for more information = help: add `#![feature(pattern_types)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error[E0658]: pattern types are unstable --> $DIR/unimplemented_pat.rs:12:16 | LL | type Binding = pattern_type!(Option is x); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: see issue #123646 for more information = help: add `#![feature(pattern_types)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. ", "commid": "rust_pr_123648"}], "negative_passages": []} {"query_id": "q-en-rust-a666fb22d41f56bd6cefa0c279a3bba76ae4ff2bd1f686a9d5d5fbf61056133a", "query": " $DIR/leaking-unnameables.rs:8:18 | LL | pub fn f() -> _ { | ^ | | | not allowed in type signatures | help: replace with the correct return type: `fn()` error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0121`. ", "commid": "rust_pr_123931"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. use crate::meth::load_vtable; use crate::mir::operand::OperandValue; use crate::mir::place::PlaceRef; use crate::traits::*;", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. let ptr_align = bx.data_layout().pointer_align.abi; let vtable_byte_offset = u64::try_from(entry_idx).unwrap() * ptr_size.bytes(); let gep = bx.inbounds_ptradd(old_info, bx.const_usize(vtable_byte_offset)); let new_vptr = bx.load(bx.type_ptr(), gep, ptr_align); bx.nonnull_metadata(new_vptr); // VTable loads are invariant. 
bx.set_invariant_load(new_vptr); new_vptr load_vtable(bx, old_info, bx.type_ptr(), vtable_byte_offset, source, true) } else { old_info }", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. let ptr_align = bx.data_layout().pointer_align.abi; let vtable_byte_offset = self.0 * ptr_size.bytes(); if bx.cx().sess().opts.unstable_opts.virtual_function_elimination && bx.cx().sess().lto() == Lto::Fat { let typeid = bx .typeid_metadata(typeid_for_trait_ref(bx.tcx(), expect_dyn_trait_in_self(ty))) .unwrap(); let func = bx.type_checked_load(llvtable, vtable_byte_offset, typeid); func } else { let gep = bx.inbounds_ptradd(llvtable, bx.const_usize(vtable_byte_offset)); let ptr = bx.load(llty, gep, ptr_align); // VTable loads are invariant. bx.set_invariant_load(ptr); if nonnull { bx.nonnull_metadata(ptr); } ptr } load_vtable(bx, llvtable, llty, vtable_byte_offset, ty, nonnull) } pub(crate) fn get_optional_fn>(", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. ty: Ty<'tcx>, ) -> Bx::Value { // Load the data pointer from the object. debug!(\"get_int({:?}, {:?})\", llvtable, self); let llty = bx.type_isize(); let ptr_size = bx.data_layout().pointer_size; let ptr_align = bx.data_layout().pointer_align.abi; let vtable_byte_offset = self.0 * ptr_size.bytes(); let gep = bx.inbounds_ptradd(llvtable, bx.const_usize(vtable_byte_offset)); let ptr = bx.load(llty, gep, ptr_align); // VTable loads are invariant. bx.set_invariant_load(ptr); ptr load_vtable(bx, llvtable, llty, vtable_byte_offset, ty, false) } } /// This takes a valid `self` receiver type and extracts the principal trait /// ref of the type. fn expect_dyn_trait_in_self(ty: Ty<'_>) -> ty::PolyExistentialTraitRef<'_> { /// ref of the type. Return `None` if there is no principal trait. fn dyn_trait_in_self(ty: Ty<'_>) -> Option> { for arg in ty.peel_refs().walk() { if let GenericArgKind::Type(ty) = arg.unpack() && let ty::Dynamic(data, _, _) = ty.kind() { return data.principal().expect(\"expected principal trait object\"); return data.principal(); } }", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. /// Call this function whenever you need to load a vtable. 
pub(crate) fn load_vtable<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>>( bx: &mut Bx, llvtable: Bx::Value, llty: Bx::Type, vtable_byte_offset: u64, ty: Ty<'tcx>, nonnull: bool, ) -> Bx::Value { let ptr_align = bx.data_layout().pointer_align.abi; if bx.cx().sess().opts.unstable_opts.virtual_function_elimination && bx.cx().sess().lto() == Lto::Fat { if let Some(trait_ref) = dyn_trait_in_self(ty) { let typeid = bx.typeid_metadata(typeid_for_trait_ref(bx.tcx(), trait_ref)).unwrap(); let func = bx.type_checked_load(llvtable, vtable_byte_offset, typeid); return func; } else if nonnull { bug!(\"load nonnull value from a vtable without a principal trait\") } } let gep = bx.inbounds_ptradd(llvtable, bx.const_usize(vtable_byte_offset)); let ptr = bx.load(llty, gep, ptr_align); // VTable loads are invariant. bx.set_invariant_load(ptr); if nonnull { bx.nonnull_metadata(ptr); } ptr } ", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. ty::COMMON_VTABLE_ENTRIES_ALIGN, _ => bug!(), }; let value = meth::VirtualIndex::from_index(idx).get_usize(bx, vtable); let value = meth::VirtualIndex::from_index(idx).get_usize(bx, vtable, callee_ty); match name { // Size is always <= isize::MAX. sym::vtable_size => {", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. .get_usize(bx, vtable); .get_usize(bx, vtable, t); let align = meth::VirtualIndex::from_index(ty::COMMON_VTABLE_ENTRIES_ALIGN) .get_usize(bx, vtable); .get_usize(bx, vtable, t); // Size is always <= isize::MAX. let size_bound = bx.data_layout().ptr_sized_integer().signed_max() as u128;", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. //@ known-bug: #123955 //@ compile-flags: -Clto -Zvirtual-function-elimination //@ only-x86_64 pub fn main() { _ = Box::new(()) as Box; } ", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . : The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. //@ known-bug: #124092 //@ compile-flags: -Zvirtual-function-elimination=true -Clto=true //@ only-x86_64 const X: for<'b> fn(&'b ()) = |&()| (); fn main() { let dyn_debug = Box::new(X) as Box as Box; } ", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-e257d21ba2f0e111738fdd71cf46049e455d6d844700ec5b2ed1fd625c0738a3", "query": "This happens when compiling the crate with . 
: The backtrace and output here are obtained during cross-compilation (host , target , build-std), but the panic also happens when the host is Windows and without build-std. //@ build-pass //@ compile-flags: -Zvirtual-function-elimination=true -Clto=true //@ only-x86_64 //@ no-prefer-dynamic // issue #123955 pub fn test0() { _ = Box::new(()) as Box; } // issue #124092 const X: for<'b> fn(&'b ()) = |&()| (); pub fn test1() { let _dyn_debug = Box::new(X) as Box as Box; } fn main() {} ", "commid": "rust_pr_130734"}], "negative_passages": []} {"query_id": "q-en-rust-302b2ccfb7b6bd017eb1eca60631ba722e152c7414543a25ebaa5a74c1ed2292", "query": " $DIR/account-for-lifetimes-in-closure-suggestion.rs:13:22 | LL | Thing.enter_scope(|ctx| { | --- | | | has type `TwoThings<'_, '1>` | has type `TwoThings<'2, '_>` LL | SameLifetime(ctx); | ^^^ this usage requires that `'1` must outlive `'2` | = note: requirement occurs because of the type `TwoThings<'_, '_>`, which makes the generic argument `'_` invariant = note: the struct `TwoThings<'a, 'b>` is invariant over the parameter `'a` = help: see for more information about variance error: aborting due to 1 previous error ", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-7bdc2a1076403b14b3b7ffe6e2607f626dc344c6eab1abe343328494516dcb5c", "query": " $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:19:16 | LL | type Bar = BarImpl<'a, 'b, T>; | ^^^^^^^^^^^^^^^^^^ | note: lifetime parameter instantiated with the lifetime `'a` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:6 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ note: but lifetime parameter must outlive the lifetime `'b` as defined here --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:14:10 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | ^^ error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:23:21 | LL | self.enter_scope(|ctx| { | --- | | | has type `&'1 mut FooImpl<'_, '_, T>` | has type `&mut FooImpl<'2, '_, T>` LL | BarImpl(ctx); | ^^^ this usage requires that `'1` must outlive `'2` error: lifetime may not live long enough --> $DIR/lifetime-not-long-enough-suggestion-regression-test-124563.rs:22:9 | LL | impl<'a, 'b, T> Foo for FooImpl<'a, 'b, T> | -- -- lifetime `'b` defined here | | | lifetime `'a` defined here ... LL | / self.enter_scope(|ctx| { LL | | BarImpl(ctx); LL | | }); | |__________^ argument requires that `'a` must outlive `'b` | = help: consider adding the following bound: `'a: 'b` = note: requirement occurs because of a mutable reference to `FooImpl<'_, '_, T>` = note: mutable references are invariant over their type parameter = help: see for more information about variance error: aborting due to 3 previous errors For more information about this error, try `rustc --explain E0478`. 
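The invariance notes in the diagnostics just above ("requirement occurs because of a mutable reference…", "mutable references are invariant over their type parameter") can be made concrete. The sketch below is an assumed illustration, not one of the quoted test files: an inner lifetime may shrink behind a shared reference because `&T` and `Vec<T>` are covariant, while the `&mut` counterpart is rejected, which is exactly what surfaces as the "`'1` must outlive `'2`" errors.

// Covariance lets the inner lifetime shrink behind a shared reference.
fn shorten_shared<'short, 'long: 'short>(v: &'short Vec<&'long u32>) -> &'short Vec<&'short u32> {
    v // OK: `&T` and `Vec<T>` are covariant in `T`
}

// The `&mut` counterpart does not compile, because `&mut T` is invariant in `T`:
// fn shorten_mut<'short, 'long: 'short>(v: &'short mut Vec<&'long u32>)
//     -> &'short mut Vec<&'short u32> { v } // error: lifetime may not live long enough

fn main() {
    let x = 1u32;
    let v = vec![&x];
    let shorter: &Vec<&u32> = shorten_shared(&v);
    assert_eq!(*shorter[0], 1);
}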
", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-7bdc2a1076403b14b3b7ffe6e2607f626dc344c6eab1abe343328494516dcb5c", "query": " $DIR/regions-escape-method.rs:15:13 --> $DIR/regions-escape-method.rs:16:13 | LL | s.f(|p| p) | -- ^ returning this value requires that `'1` must outlive `'2` | || | |return type of closure is &'2 i32 | has type `&'1 i32` | help: dereference the return value | LL | s.f(|p| *p) | + error: aborting due to 1 previous error", "commid": "rust_pr_126884"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. 
:eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-565bd6cd5e8ced10c98131b2e2d9328a83c983a8a0d96136fdcf7f4624882679", "text": ".bounds = `impl` may be usable in bounds, etc. from outside the expression, which might e.g. make something constructible that previously wasn't, because it's still on a publicly-visible type .exception = items in an anonymous const item (`const _: () = {\"{\"} ... {\"}\"}`) are treated as in the same scope as the anonymous const's declaration .const_anon = use a const-anon item to suppress this lint .macro_to_change = the {$macro_kind} `{$macro_to_change}` defines the non-local `impl`, and may need to be changed lint_non_local_definitions_impl_move_help = move the `impl` block outside of this {$body_kind_descr} {$depth ->", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. 
We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-cca80fb88481799c40c6ae58c426046dc4ba3476162309d43f6032ee34b8ae76", "text": "has_trait: bool, self_ty_str: String, of_trait_str: Option, macro_to_change: Option<(String, &'static str)>, }, MacroRules { depth: u32,", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. 
The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-070ebed415772ca29ce3aaba01377e2be59849ec06a2d002f36339c83627a327", "text": "has_trait, self_ty_str, of_trait_str, macro_to_change, } => { diag.primary_message(fluent::lint_non_local_definitions_impl); diag.arg(\"depth\", depth);", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . 
Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-5eac3c3a8da1ec9bff7bee2aeddbf4184d7b7c43a8fdb2d4e2a113beabd47779", "text": "diag.arg(\"of_trait_str\", of_trait_str); } if let Some((macro_to_change, macro_kind)) = macro_to_change { diag.arg(\"macro_to_change\", macro_to_change); diag.arg(\"macro_kind\", macro_kind); diag.note(fluent::lint_macro_to_change); } if let Some(cargo_update) = cargo_update { diag.subdiagnostic(&diag.dcx, cargo_update); } if has_trait { diag.note(fluent::lint_bounds); diag.note(fluent::lint_with_trait);", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. 
Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-859dca01415cab21a9fe8a3bba23df7e78327b99c08f8f8d812fbeb8f98c87a1", "text": "); } if let Some(cargo_update) = cargo_update { diag.subdiagnostic(&diag.dcx, cargo_update); } if let Some(const_anon) = const_anon { diag.note(fluent::lint_exception); if let Some(const_anon) = const_anon {", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. 
(And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-0c9702e7f6dcefa2ef230b41f5349d14492408bb8cfb859f75bfd68259429fbd", "text": "Some((cx.tcx.def_span(parent), may_move)) }; let macro_to_change = if let ExpnKind::Macro(kind, name) = item.span.ctxt().outer_expn_data().kind { Some((name.to_string(), kind.descr())) } else { None }; cx.emit_span_lint( NON_LOCAL_DEFINITIONS, ms,", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. 
It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. 
labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-ed7468e5aa346a5ca366dda2c15199a08f3a14ff091645d86ea4446fa7b89b86", "text": "move_to, may_remove, has_trait: impl_.of_trait.is_some(), macro_to_change, }, ) }", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. 
should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-d9e6f549e7b1dfdb0ea062d3c39b1d8cd6a095264e9bef8656cb2962c57dc8f9", "text": "| `Debug` is not local | move the `impl` block outside of this constant `_IMPL_DEBUG` | = note: the macro `non_local_macro::non_local_impl` defines the non-local `impl`, and may need to be changed = note: the macro `non_local_macro::non_local_impl` may come from an old version of the `non_local_macro` crate, try updating your dependency with `cargo update -p non_local_macro` = note: `impl` may be usable in bounds, etc. from outside the expression, which might e.g. make something constructible that previously wasn't, because it's still on a publicly-visible type = note: an `impl` is never scoped, even when it is nested inside an item, as it may impact type checking outside of that item, which can be the case if neither the trait or the self type are at the same nesting level as the `impl` = note: the macro `non_local_macro::non_local_impl` may come from an old version of the `non_local_macro` crate, try updating your dependency with `cargo update -p non_local_macro` = note: items in an anonymous const item (`const _: () = { ... }`) are treated as in the same scope as the anonymous const's declaration = note: this lint may become deny-by-default in the edition 2024 and higher, see the tracking issue = note: `#[warn(non_local_definitions)]` on by default", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-013281199ccacac056a1093c87cd26a70b2986c0a786184b9521d8b35df6824e", "query": "I tried this code: I expected to see this happen: no warnings Instead, this happened: : cc\nIt looks like there is an upstream issue about this , but it seems like this shouldn't be an issue for dependents as long as is on edition < 2024.\nIt is expected for the lint to lint on macro-generated code. It is also expected for the lint to lint on all editions (it is not a edition dependent lint).\nWhat am I meant to do as a downstream of a macro crate that hasn't fixed it then? I don't want to be ing a FCW, but there's nothing else I can do to clear this warning from my code. (And that implies that the planned \"hard error in 2027\" actually means for all editions, not just Edition 2027?)\nAllow the lint. Report the issue upstream. Consider switching to an unaffected crate. That's understandable, but until the upstream crate is fixed I don't have a better solution. Note, that it is quite unlikely that this point that the lint will become deny-by-default in Rust 2024. No, if hard-error there is, it would only be for edition 2027 and higher, not below.\nWhy?\nDetecting and ignoring macro-generated code for a lint where possible is the expected behavior of rustc.\nBecause, in particular for macro-generated code, we can't lint at the source since, well it is only generated when using the proc-macro or . Yes, but we must still emit the warning in macro-generated code, since the goal is to make the lint deny-by-default and then hard-error in a future edition, and without the warn it would go straight to hard-error which is not good.\nIs there no ability to enable the lint when in the same crate/workspace but disable it for \"foreign\" macro-generated spans?\nYes, but can't rely on this. Also, not all macros have tests. The lint is only warn-by-default so it wouldn't break any test yet. 
If you mean: \"don't lint on external macro\", that is possible for but not for proc-macro since they are always external. We did an crater run, to evaluate the impact, before merging the lint in We had many cases with derive-macro, most of them (82.2%) where fixable by a , the rest would need manual intervention, the crate in question here was detected, and I reported the issue upstream with the fix that would fix the issue.\nthat was why I was thinking of trying to reason cross-workspace, but hmm, true. And actually, proc macros are worse: they can , so at least some are going to almost certainly slip through either way.\nI would love if we had the ability to do that, but unfortunately doesn't have that information. Workspaces are a Cargo-only concept. :eyes:\nI was hoping that we could do something where cargo helps us out here by describing which things we pay attention to, but I am fully prepared to believe that such is not tractable, even if only because no one has figured out if the infra for doing such a thing is even possible.\nYeah, unfortunately there isn't really anything else the compiler can do, it would be great if we could lint at the macro definition but due to the way they work that isn't possible, we are therefore forced to do the second \"best\" thing and lint at the usage. should be make it clearer that the \"macro needs to change\". So let's reclassify this issue as not-a-bug but as a discussion. labels -C-bug +C-discussion", "positive_passages": [{"docid": "doc-en-rust-8b0a70585003a2e2e29314e34e97883ba31000a63a21a2f293208cf4c9583214", "text": "LL | m!(); | ---- in this macro invocation | = note: the macro `m` defines the non-local `impl`, and may need to be changed = note: `impl` may be usable in bounds, etc. from outside the expression, which might e.g. make something constructible that previously wasn't, because it's still on a publicly-visible type = note: an `impl` is never scoped, even when it is nested inside an item, as it may impact type checking outside of that item, which can be the case if neither the trait or the self type are at the same nesting level as the `impl` = note: this lint may become deny-by-default in the edition 2024 and higher, see the tracking issue ", "commid": "rust_pr_125722"}], "negative_passages": []} {"query_id": "q-en-rust-570a9691acc477bf528d6f6f53c3573ed02493ad3b53f3482e57fab39c1e8078", "query": " $DIR/variadic-ffi-nested-syntactic-fail.rs:8:21 | LL | fn f3() where for<> ...: {} | ^^^ error[E0308]: mismatched types --> $DIR/variadic-ffi-nested-syntactic-fail.rs:8:33 --> $DIR/variadic-ffi-nested-syntactic-fail.rs:12:33 | LL | let _recovery_witness: () = 0; | -- ^ expected `()`, found integer | | | expected due to this error: aborting due to 3 previous errors error: aborting due to 4 previous errors Some errors have detailed explanations: E0308, E0743. For more information about an error, try `rustc --explain E0308`.", "commid": "rust_pr_125863"}], "negative_passages": []} {"query_id": "q-en-rust-d86f98e29e96907eb74e2979df8b01395c41a9d6fde295b825e0d20753d1bba3", "query": "Take a look at the docs for : Rustdoc is making good choices for whether to wrap the declarations ! But when it wraps, it doesn't include a trailing comma on the last parameter. I think it should, because the default style guide wants a comma there. (And rustdoc should continue not putting a trailing comma when the declaration is shown as a single line.) 
$DIR/repeat_expr_hack_gives_right_generics.rs:16:17 | LL | bar::<{ [1; N] }>(); | ^ cannot perform const operation using `N` | = help: const parameters may only be used as standalone arguments, i.e. `N` = help: add `#![feature(generic_const_exprs)]` to allow generic const expressions error: aborting due to 1 previous error ", "commid": "rust_pr_126228"}], "negative_passages": []} {"query_id": "q-en-rust-fc2a9d191b7e45621a13f76a3e5eb8239321455cc918a82840ebb9a4ee9c73d4", "query": " $DIR/refine-resolution-errors.rs:9:6 | LL | impl Mirror for () { | ^ unconstrained type parameter error[E0282]: type annotations needed --> $DIR/refine-resolution-errors.rs:15:5 | LL | async fn first() -> <() as Mirror>::Assoc; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ cannot infer type error: aborting due to 2 previous errors Some errors have detailed explanations: E0207, E0282. For more information about an error, try `rustc --explain E0207`. ", "commid": "rust_pr_126968"}], "negative_passages": []} {"query_id": "q-en-rust-ca843159dc6af61f9432fdda611782921d9eb4e3694e5800c5c222d7719f5ab3", "query": "Reviewing PR had some oversight regarding valid utf8 checks. This was graciously brought to our (meaning myself and attention by in . Fix should really just be changing to setting to , unless we decide there may be some merit to iterating again through the buffer.\nlabel T-libs C-cleanup O-windows\nlabel -needs-triage", "positive_passages": [{"docid": "doc-en-rust-36024c19c57e7d874954ce34c6dce60a0bea0a3a2ad4c88b32cf8dfbe6bacf97", "text": "#[inline] pub(crate) fn extend_from_slice(&mut self, other: &[u8]) { self.bytes.extend_from_slice(other); self.is_known_utf8 = self.is_known_utf8 || self.next_surrogate(0).is_none(); self.is_known_utf8 = false; } }", "commid": "rust_pr_126980"}], "negative_passages": []} {"query_id": "q-en-rust-ca843159dc6af61f9432fdda611782921d9eb4e3694e5800c5c222d7719f5ab3", "query": "Reviewing PR had some oversight regarding valid utf8 checks. This was graciously brought to our (meaning myself and attention by in . Fix should really just be changing to setting to , unless we decide there may be some merit to iterating again through the buffer.\nlabel T-libs C-cleanup O-windows\nlabel -needs-triage", "positive_passages": [{"docid": "doc-en-rust-b9d416c132b27bef98ca696c9bc400aba1846c0767e2bc26588ea0715d0c476a", "text": "string.push(CodePoint::from_u32(0xD800).unwrap()); check_utf8_boundary(&string, 3); } #[test] fn wobbled_wtf8_plus_bytes_isnt_utf8() { let mut string: Wtf8Buf = unsafe { Wtf8::from_bytes_unchecked(b\"xEDxA0x80\").to_owned() }; assert!(!string.is_known_utf8); string.extend_from_slice(b\"some utf-8\"); assert!(!string.is_known_utf8); } #[test] fn wobbled_wtf8_plus_str_isnt_utf8() { let mut string: Wtf8Buf = unsafe { Wtf8::from_bytes_unchecked(b\"xEDxA0x80\").to_owned() }; assert!(!string.is_known_utf8); string.push_str(\"some utf-8\"); assert!(!string.is_known_utf8); } #[test] fn unwobbly_wtf8_plus_utf8_is_utf8() { let mut string: Wtf8Buf = Wtf8Buf::from_str(\"hello world\"); assert!(string.is_known_utf8); string.push_str(\"some utf-8\"); assert!(string.is_known_utf8); } ", "commid": "rust_pr_126980"}], "negative_passages": []} {"query_id": "q-en-rust-47be984a5b8a3caa9175c3154faab0ff317eb8b4f51d6c594e7c9f5765357010", "query": "Running with doesn't include a trailing so the next shell prompt gets smooshed on: I can't imagine this is intentional since all other options seem to include the newline. 
$DIR/assoc-ty.rs:10:10 | LL | auto trait Trait { | ----- auto traits cannot have associated items LL | LL | type Output; | -----^^^^^^- help: remove these associated items error[E0658]: auto traits are experimental and possibly buggy --> $DIR/assoc-ty.rs:8:1 | LL | / auto trait Trait { LL | | LL | | type Output; LL | | LL | | } | |_^ | = note: see issue #13231 for more information = help: add `#![feature(auto_traits)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error[E0308]: mismatched types --> $DIR/assoc-ty.rs:15:36 | LL | let _: <() as Trait>::Output = (); | --------------------- ^^ expected associated type, found `()` | | | expected due to this | = note: expected associated type `<() as Trait>::Output` found unit type `()` = help: consider constraining the associated type `<() as Trait>::Output` to `()` or calling a method that returns `<() as Trait>::Output` = note: for more information, visit https://doc.rust-lang.org/book/ch19-03-advanced-traits.html error: aborting due to 3 previous errors Some errors have detailed explanations: E0308, E0380, E0658. For more information about an error, try `rustc --explain E0308`. ", "commid": "rust_pr_128160"}], "negative_passages": []} {"query_id": "q-en-rust-434fa91b7700f2b0069d21c005dfd262a9c2d63c25183fd854d60e6b9ec8ee38", "query": " $DIR/assoc-ty.rs:10:10 | LL | auto trait Trait { | ----- auto traits cannot have associated items LL | LL | type Output; | -----^^^^^^- help: remove these associated items error[E0658]: auto traits are experimental and possibly buggy --> $DIR/assoc-ty.rs:8:1 | LL | / auto trait Trait { LL | | LL | | type Output; LL | | LL | | } | |_^ | = note: see issue #13231 for more information = help: add `#![feature(auto_traits)]` to the crate attributes to enable = note: this compiler was built on YYYY-MM-DD; consider upgrading it if it is out of date error[E0308]: mismatched types --> $DIR/assoc-ty.rs:15:36 | LL | let _: <() as Trait>::Output = (); | --------------------- ^^ types differ | | | expected due to this | = note: expected associated type `<() as Trait>::Output` found unit type `()` = help: consider constraining the associated type `<() as Trait>::Output` to `()` or calling a method that returns `<() as Trait>::Output` = note: for more information, visit https://doc.rust-lang.org/book/ch19-03-advanced-traits.html error: aborting due to 3 previous errors Some errors have detailed explanations: E0308, E0380, E0658. For more information about an error, try `rustc --explain E0308`. 
", "commid": "rust_pr_128160"}], "negative_passages": []} {"query_id": "q-en-rust-434fa91b7700f2b0069d21c005dfd262a9c2d63c25183fd854d60e6b9ec8ee38", "query": " $DIR/issue-101477-enum.rs:6:7", "commid": "rust_pr_127835"}], "negative_passages": []} {"query_id": "q-en-rust-a7aaa3fd74686a30b6cfd7d26d89bdc2670056406cb578d376a945fbd70871b8", "query": " $DIR/unicode-double-equals-recovery.rs:1:16 | LL | const A: usize \u2a75 2; | ^ | help: Unicode character '\u2a75' (Two Consecutive Equals Signs) looks like '==' (Double Equals Sign), but it is not | LL | const A: usize == 2; | ~~ error: unexpected `==` --> $DIR/unicode-double-equals-recovery.rs:1:16 | LL | const A: usize \u2a75 2; | ^ | help: try using `=` instead | LL | const A: usize = 2; | ~ error: aborting due to 2 previous errors ", "commid": "rust_pr_127835"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2ccac77ff640b5510cea6aaf177c51e7ecadb7b7e11b62e665fbbe6563f72", "query": "I tried this code: I expected to see this happen: compilation failure Instead, this happened: compilation succeeded $DIR/deriving-smart-pointer-neg.rs:59:5 | LL | #[pointee] | ^^^^^^^^^^ error: the `#[pointee]` attribute may only be used on generic parameters --> $DIR/deriving-smart-pointer-neg.rs:66:74 | LL | struct PointeeInTypeConstBlock<'a, T: ?Sized = [u32; const { struct UhOh<#[pointee] T>(T); 10 }]> { | ^^^^^^^^^^ error: the `#[pointee]` attribute may only be used on generic parameters --> $DIR/deriving-smart-pointer-neg.rs:76:34 | LL | const V: u32 = { struct UhOh<#[pointee] T>(T); 10 }> | ^^^^^^^^^^ error: the `#[pointee]` attribute may only be used on generic parameters --> $DIR/deriving-smart-pointer-neg.rs:85:56 | LL | ptr: PointeeInConstConstBlock<'a, T, { struct UhOh<#[pointee] T>(T); 0 }> | ^^^^^^^^^^ error[E0392]: lifetime parameter `'a` is never used --> $DIR/deriving-smart-pointer-neg.rs:15:16 |", "commid": "rust_pr_128721"}], "negative_passages": []} {"query_id": "q-en-rust-cbc2ccac77ff640b5510cea6aaf177c51e7ecadb7b7e11b62e665fbbe6563f72", "query": "I tried this code: I expected to see this happen: compilation failure Instead, this happened: compilation succeeded $DIR/move-out-of-ref.rs:11:9 | LL | *x; | ^^ move occurs because `*x` has type `Ty`, which does not implement the `Copy` trait | note: if `Ty` implemented `Clone`, you could clone the value --> $DIR/move-out-of-ref.rs:7:1 | LL | struct Ty; | ^^^^^^^^^ consider implementing `Clone` for this type ... LL | *x; | -- you could clone this value error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0507`. ", "commid": "rust_pr_129101"}], "negative_passages": []} {"query_id": "q-en-rust-fd4b9178da4f15488f38ac7bd4288c7039930b69a242140e35c6eff8a1d5823f", "query": " $DIR/tainted-body-2.rs:9:5 | LL | missing; | ^^^^^^^ not found in this scope error: aborting due to 1 previous error For more information about this error, try `rustc --explain E0425`. 
", "commid": "rust_pr_129677"}], "negative_passages": []} {"query_id": "q-en-rust-47f681e79359d4538a29b4fe6560472bcd9c6a462701bf8783cc73cb7468c995", "query": " $DIR/generics.rs:17:43 --> $DIR/generics.rs:18:43 | LL | f2: extern \"C-cmse-nonsecure-call\" fn(impl Copy, u32, u32, u32) -> u64, | ^^^^^^^^^", "commid": "rust_pr_130064"}], "negative_passages": []} {"query_id": "q-en-rust-47f681e79359d4538a29b4fe6560472bcd9c6a462701bf8783cc73cb7468c995", "query": " $DIR/generics.rs:19:9 --> $DIR/generics.rs:20:9 | LL | f3: extern \"C-cmse-nonsecure-call\" fn(T, u32, u32, u32) -> u64, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error[E0798]: function pointers with the `\"C-cmse-nonsecure-call\"` ABI cannot contain generics in their type --> $DIR/generics.rs:20:9 --> $DIR/generics.rs:21:9 | LL | f4: extern \"C-cmse-nonsecure-call\" fn(Wrapper, u32, u32, u32) -> u64, | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 5 previous errors error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via registers --> $DIR/generics.rs:27:73 | LL | type WithTraitObject = extern \"C-cmse-nonsecure-call\" fn(&dyn Trait) -> &dyn Trait; | ^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via registers --> $DIR/generics.rs:31:62 | LL | extern \"C-cmse-nonsecure-call\" fn(&'static dyn Trait) -> &'static dyn Trait; | ^^^^^^^^^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0798]: return value of `\"C-cmse-nonsecure-call\"` function too large to pass via registers --> $DIR/generics.rs:38:62 | LL | extern \"C-cmse-nonsecure-call\" fn(WrapperTransparent) -> WrapperTransparent; | ^^^^^^^^^^^^^^^^^^ this type doesn't fit in the available registers | = note: functions with the `\"C-cmse-nonsecure-call\"` ABI must pass their result via the available return registers = note: the result must either be a (transparently wrapped) i64, u64 or f64, or be at most 4 bytes in size error[E0045]: C-variadic function must have a compatible calling convention, like `C` or `cdecl` --> $DIR/generics.rs:41:20 | LL | type WithVarArgs = extern \"C-cmse-nonsecure-call\" fn(u32, ...); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ C-variadic function must have a compatible calling convention error: aborting due to 9 previous errors Some errors have detailed explanations: E0412, E0562, E0798. For more information about an error, try `rustc --explain E0412`. Some errors have detailed explanations: E0045, E0412, E0562, E0798. For more information about an error, try `rustc --explain E0045`. ", "commid": "rust_pr_130064"}], "negative_passages": []} {"query_id": "q-en-rust-f52e7ee3c960a16d1e294bb18931bf322af166e07d550e5721135999631b8d63", "query": "When running any bootstrap command, it complains that it can't figure out whether my master branch is outdated. 
Of course it does, as it just YOLOs into , which is just a file and not a directory for worktrees: it should probably not do that (i don't have a suggestion on what to do instead) regressed by\ni would say adding a bit of error handling that checks if it's in a worktree doesn't sound too hard. there's two options from there: emit the warning in a worktree. this is basically trivial, but if someone does 100% of their work in a worktree, they may end up with an outdated upstream and not know it back to a more reliable method when in a worktree", "positive_passages": [{"docid": "doc-en-rust-65ad448162ffbe927c855a43f2d3d58f89d00cc5813678b331e74ee120ab5043", "text": "cmd_finder.must_have(s); } // this warning is useless in CI, // and CI probably won't have the right branches anyway. if !build_helper::ci::CiEnv::is_ci() { if let Err(e) = warn_old_master_branch(&build.config.git_config(), &build.config.src) .map_err(|e| e.to_string()) { eprintln!(\"unable to check if upstream branch is old: {e}\"); } } warn_old_master_branch(&build.config.git_config(), &build.config.src); }", "commid": "rust_pr_130121"}], "negative_passages": []} {"query_id": "q-en-rust-f52e7ee3c960a16d1e294bb18931bf322af166e07d550e5721135999631b8d63", "query": "When running any bootstrap command, it complains that it can't figure out whether my master branch is outdated. Of course it does, as it just YOLOs into , which is just a file and not a directory for worktrees: it should probably not do that (i don't have a suggestion on what to do instead) regressed by\ni would say adding a bit of error handling that checks if it's in a worktree doesn't sound too hard. there's two options from there: emit the warning in a worktree. this is basically trivial, but if someone does 100% of their work in a worktree, they may end up with an outdated upstream and not know it back to a more reliable method when in a worktree", "positive_passages": [{"docid": "doc-en-rust-7c53f53a5b1696c94e40c40866af87881c690de0050e4903796f7df2720f1441", "text": "/// /// This can result in formatting thousands of files instead of a dozen, /// so we should warn the user something is wrong. pub fn warn_old_master_branch( config: &GitConfig<'_>, git_dir: &Path, ) -> Result<(), Box> { use std::time::Duration; const WARN_AFTER: Duration = Duration::from_secs(60 * 60 * 24 * 10); let updated_master = updated_master_branch(config, Some(git_dir))?; let branch_path = git_dir.join(\".git/refs/remotes\").join(&updated_master); match std::fs::metadata(branch_path) { Ok(meta) => { if meta.modified()?.elapsed()? > WARN_AFTER { eprintln!(\"warning: {updated_master} has not been updated in 10 days\"); } else { return Ok(()); pub fn warn_old_master_branch(config: &GitConfig<'_>, git_dir: &Path) { if crate::ci::CiEnv::is_ci() { // this warning is useless in CI, // and CI probably won't have the right branches anyway. 
return; } // this will be overwritten by the actual name, if possible let mut updated_master = \"the upstream master branch\".to_string(); match warn_old_master_branch_(config, git_dir, &mut updated_master) { Ok(branch_is_old) => { if !branch_is_old { return; } // otherwise fall through and print the rest of the warning } Err(err) => { eprintln!(\"warning: unable to check if {updated_master} is old due to error: {err}\")", "commid": "rust_pr_130121"}], "negative_passages": []} {"query_id": "q-en-rust-f52e7ee3c960a16d1e294bb18931bf322af166e07d550e5721135999631b8d63", "query": "When running any bootstrap command, it complains that it can't figure out whether my master branch is outdated. Of course it does, as it just YOLOs into , which is just a file and not a directory for worktrees: it should probably not do that (i don't have a suggestion on what to do instead) regressed by\ni would say adding a bit of error handling that checks if it's in a worktree doesn't sound too hard. there's two options from there: emit the warning in a worktree. this is basically trivial, but if someone does 100% of their work in a worktree, they may end up with an outdated upstream and not know it back to a more reliable method when in a worktree", "positive_passages": [{"docid": "doc-en-rust-33c1852cb498b6cb3cd43fedbfe5045f43b48324a451b338090af43f12e00932", "text": "} eprintln!( \"warning: {updated_master} is used to determine if files have been modifiedn warning: if it is not updated, this may cause files to be needlessly reformatted\" warning: if it is not updated, this may cause files to be needlessly reformatted\" ); Ok(()) } pub fn warn_old_master_branch_( config: &GitConfig<'_>, git_dir: &Path, updated_master: &mut String, ) -> Result> { use std::time::Duration; *updated_master = updated_master_branch(config, Some(git_dir))?; let branch_path = git_dir.join(\".git/refs/remotes\").join(&updated_master); const WARN_AFTER: Duration = Duration::from_secs(60 * 60 * 24 * 10); let meta = match std::fs::metadata(&branch_path) { Ok(meta) => meta, Err(err) => { let gcd = git_common_dir(&git_dir)?; if branch_path.starts_with(&gcd) { return Err(Box::new(err)); } std::fs::metadata(Path::new(&gcd).join(\"refs/remotes\").join(&updated_master))? } }; if meta.modified()?.elapsed()? > WARN_AFTER { eprintln!(\"warning: {updated_master} has not been updated in 10 days\"); Ok(true) } else { Ok(false) } } fn git_common_dir(dir: &Path) -> Result { output_result(Command::new(\"git\").arg(\"-C\").arg(dir).arg(\"rev-parse\").arg(\"--git-common-dir\")) .map(|x| x.trim().to_string()) }", "commid": "rust_pr_130121"}], "negative_passages": []} {"query_id": "q-en-rust-ffbf4b19508569359278bf43b31306e21c0a9b22e5beef3011e3a4741306c12a", "query": " T must be Copy // Captures of variable the given id by a closure (span is the", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. 
would also work.", "positive_passages": [{"docid": "doc-en-rust-4fdfa15654a64cfcadaa9b33a890fe0405ce5126501d704aef598646b728f704", "text": "// Remember return type so that regionck can access it later. let fn_sig_tys: Vec = arg_tys.iter() .chain([ret_ty].iter()) .map(|&ty| ty) .collect(); arg_tys.iter().chain([ret_ty].iter()).map(|&ty| ty).collect(); debug!(\"fn-sig-map: fn_id={} fn_sig_tys={}\", fn_id, fn_sig_tys.repr(tcx)); inherited.fn_sig_map .borrow_mut() .insert(fn_id, fn_sig_tys); inherited.fn_sig_map.borrow_mut().insert(fn_id, fn_sig_tys); { let mut visit = GatherLocalsVisitor { fcx: &fcx, };", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-8ebbd18b7595738d2bf5bf1d6e8f9a4fff225678dc8cd0ecc0328191edce81f6", "text": "visit.visit_block(body); } fcx.require_type_is_sized(ret_ty, decl.output.span, traits::ReturnType); check_block_with_expected(&fcx, body, ExpectHasType(ret_ty));", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-a497242f111f376b5d82038348f459aebf21277bb68683b74471fbaaff1e129a", "text": "traits::RepeatVec => { tcx.sess.span_note( obligation.cause.span, format!( \"the `Copy` trait is required because the repeated element will be copied\").as_slice()); \"the `Copy` trait is required because the repeated element will be copied\"); } traits::VariableType(_) => { tcx.sess.span_note( obligation.cause.span, \"all local variables must have a statically known size\"); } traits::ReturnType => { tcx.sess.span_note( obligation.cause.span, \"the return type of a function must have a statically known size\"); } traits::AssignmentLhsSized => { tcx.sess.span_note( obligation.cause.span,", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-dac159af613437add442839fa2b649c5f8c5e7e713c2c024ecad252a9d0dc4c1", "text": "#![no_std] #![feature(lang_items)] #[lang=\"sized\"] pub trait Sized for Sized? 
{} #[lang=\"fail\"] fn fail(_: &(&'static str, &'static str, uint)) -> ! { loop {} }", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-4b6e0bebba756f77eb9fc09ba44f473a365ac2166f3aa2d411d5578bfe168024", "text": "C([Box]), } fn c(c:char) -> A { B(c) //~ ERROR cannot move a value of type A: the size of A cannot be statically determined fn c(c:char) { B(c); //~^ ERROR cannot move a value of type A: the size of A cannot be statically determined } pub fn main() {}", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-45922678c6cd755ce6ecfaa0e30bc4c92cbbe26c66d7b2eec5bc0a0213c1a5c5", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub trait AbstractRenderer {} fn _create_render(_: &()) -> AbstractRenderer //~^ ERROR: the trait `core::kinds::Sized` is not implemented { match 0u { _ => unimplemented!() } } fn main() { } ", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. 
would also work.", "positive_passages": [{"docid": "doc-en-rust-41396f09f31ee4ad3b6759a18710504407a874f360bd9cb5cf912304b2bb9f20", "text": "r: A+'static } fn new_struct(r: A+'static) -> Struct { fn new_struct(r: A+'static) -> Struct { //~^ ERROR the trait `core::kinds::Sized` is not implemented //~^ ERROR the trait `core::kinds::Sized` is not implemented Struct { r: r } //~^ ERROR the trait `core::kinds::Sized` is not implemented", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-c6d37609080ed01aa382e18532048947735ab7c069f088308a4746692743ad50", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(globs)] #![feature(globs, lang_items)] #![no_std] // makes debugging this test *a lot* easier (during resolve) #[lang = \"sized\"] pub trait Sized for Sized? {} // Test to make sure that private items imported through globs remain private // when they're used.", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd05abfbd828719249c06e3e881d256200cd0a3ba0015f94126fbb769c26b16e", "query": "The following code causes an ICE: Removing the parameter or changing it to be a value instead of a reference stops the crash with an error saying I need to specify a lifetime, which makes me think this is a problem related to inferred lifetimes.\nIt isn't, you just have to specify a lifetime: ICE-s as well. Returning an unsized type should be a typeck error.\nhappens to work because it implies , i.e. would also work.", "positive_passages": [{"docid": "doc-en-rust-d82663572df538e91ca980c6f04f2038f2784a99b3856e925a9d798ef81db781", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(lang_items)] #![no_std] #[lang=\"sized\"] pub trait Sized for Sized? {} // error-pattern:requires `start` lang_item fn main() {}", "commid": "rust_pr_18142"}], "negative_passages": []} {"query_id": "q-en-rust-cd23f2dc8ade7f64c33f7f58763d77a5aa2361a998b89e5bf240c542af80e801", "query": "There are 3 cases of this undefined behaviour in : There is no guarantee that and have the same layout under the new rules. This needs to manually copy over the fields one by one which would compile to the same code as long as a feature like struct randomization wasn't enabled. Since vectors expose , it could be switched to make use of that. However, this is a pervasive issue with in the standard libraries.\nI would like to fix this one. However, I don't understand why you would need to use . 
If we need to copy everything, why don't we do it directly to a ?\nthe fields of Vec are private, so you can't just use", "positive_passages": [{"docid": "doc-en-rust-69660b82d833da962773a9a699579d327f34831cf8a71fba5d46a974b45ee7c5", "text": "#[inline] unsafe fn into_ascii_nocheck(self) -> Vec { let v: Vec = mem::transmute(self); v.into_ascii_nocheck() self.into_bytes().into_ascii_nocheck() } }", "commid": "rust_pr_18366"}], "negative_passages": []} {"query_id": "q-en-rust-cd23f2dc8ade7f64c33f7f58763d77a5aa2361a998b89e5bf240c542af80e801", "query": "There are 3 cases of this undefined behaviour in : There is no guarantee that and have the same layout under the new rules. This needs to manually copy over the fields one by one which would compile to the same code as long as a feature like struct randomization wasn't enabled. Since vectors expose , it could be switched to make use of that. However, this is a pervasive issue with in the standard libraries.\nI would like to fix this one. However, I don't understand why you would need to use . If we need to copy everything, why don't we do it directly to a ?\nthe fields of Vec are private, so you can't just use", "positive_passages": [{"docid": "doc-en-rust-47fa2209c66d571d58ac52aaddf708f32354d91ecde0eab556e6e443b87e1801", "text": "#[inline] unsafe fn into_ascii_nocheck(self) -> Vec { mem::transmute(self) let v = Vec::from_raw_parts(self.len(), self.capacity(), mem::transmute(self.as_ptr())); // We forget `self` to avoid freeing it at the end of the scope // Otherwise, the returned `Vec` would point to freed memory mem::forget(self); v } }", "commid": "rust_pr_18366"}], "negative_passages": []} {"query_id": "q-en-rust-cd23f2dc8ade7f64c33f7f58763d77a5aa2361a998b89e5bf240c542af80e801", "query": "There are 3 cases of this undefined behaviour in : There is no guarantee that and have the same layout under the new rules. This needs to manually copy over the fields one by one which would compile to the same code as long as a feature like struct randomization wasn't enabled. Since vectors expose , it could be switched to make use of that. However, this is a pervasive issue with in the standard libraries.\nI would like to fix this one. However, I don't understand why you would need to use . 
If we need to copy everything, why don't we do it directly to a ?\nthe fields of Vec are private, so you can't just use", "positive_passages": [{"docid": "doc-en-rust-b170396564709477e0384f80b6ffac920af2dbd97b6019b987565a7c60975b40", "text": "impl IntoBytes for Vec { fn into_bytes(self) -> Vec { unsafe { mem::transmute(self) } unsafe { let v = Vec::from_raw_parts(self.len(), self.capacity(), mem::transmute(self.as_ptr())); // We forget `self` to avoid freeing it at the end of the scope // Otherwise, the returned `Vec` would point to freed memory mem::forget(self); v } } }", "commid": "rust_pr_18366"}], "negative_passages": []} {"query_id": "q-en-rust-fe98417ce7651a0cc784cfae77ba85048609653c84ffb97932236e1ac345f94f", "query": "On the latest nightly (previously it worked fine) Rust fails to compile this code: It emits this error message: I managed to strip it down to this example: It fails to compile with this error: If is replaced with , then it compiles fine, so the error seems to be related to matching on s.\nSorry for the breakage, fix on its way.", "positive_passages": [{"docid": "doc-en-rust-713cf682d473c55416addf4a25f6b43219978f5443c57292a0008d399214a9db", "text": "let const_did = tcx.def_map.borrow().get_copy(&pat.id).def_id(); let const_pty = ty::lookup_item_type(tcx, const_did); fcx.write_ty(pat.id, const_pty.ty); demand::eqtype(fcx, pat.span, expected, const_pty.ty); demand::suptype(fcx, pat.span, expected, const_pty.ty); } ast::PatIdent(bm, ref path, ref sub) if pat_is_binding(&tcx.def_map, pat) => { let typ = fcx.local_ty(pat.span, pat.id);", "commid": "rust_pr_18356"}], "negative_passages": []} {"query_id": "q-en-rust-fe98417ce7651a0cc784cfae77ba85048609653c84ffb97932236e1ac345f94f", "query": "On the latest nightly (previously it worked fine) Rust fails to compile this code: It emits this error message: I managed to strip it down to this example: It fails to compile with this error: If is replaced with , then it compiles fine, so the error seems to be related to matching on s.\nSorry for the breakage, fix on its way.", "positive_passages": [{"docid": "doc-en-rust-e25ce2f84f88c079057c38fee1f03493d65325b9820361792f887a7332fc1b56", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. const X: &'static str = \"12345\"; fn test(s: String) -> bool { match s.as_slice() { X => true, _ => false } } fn main() { assert!(test(\"12345\".to_string())); } ", "commid": "rust_pr_18356"}], "negative_passages": []} {"query_id": "q-en-rust-f6536168d9c99082c98b2bf4805cbe548ced7755b8cbe07e0278d4335413ca6e", "query": "Right now the Int and Integer traits imply PartialOrd but not Ord. Is there a reason for this? It seems like integers do have a total order, not just partial. I asked on IRC and people seemed to think Int should imply Ord. If this is correct I can send a pull request?\nYup :)\nI have two questions: May I ask what is the trait? Couldn't find any in the std doc: Did you mean () when you wrote \"the trait\"? But it doesn't even inherit . Am I missing something?\nThe trait is in .\nAh, it was the big int code separated from the std lib! Thanks. I also overlooked implemented , implying . 
Sorry for the noise!", "positive_passages": [{"docid": "doc-en-rust-1219ea0f09812c049c41268f600f0fceda2851fb602e3dc3a9d97f9fe8bf0c9f", "text": "use {uint, u8, u16, u32, u64}; use {f32, f64}; use clone::Clone; use cmp::{PartialEq, PartialOrd}; use cmp::{Ord, PartialEq, PartialOrd}; use kinds::Copy; use mem::size_of; use ops::{Add, Sub, Mul, Div, Rem, Neg};", "commid": "rust_pr_18795"}], "negative_passages": []} {"query_id": "q-en-rust-f6536168d9c99082c98b2bf4805cbe548ced7755b8cbe07e0278d4335413ca6e", "query": "Right now the Int and Integer traits imply PartialOrd but not Ord. Is there a reason for this? It seems like integers do have a total order, not just partial. I asked on IRC and people seemed to think Int should imply Ord. If this is correct I can send a pull request?\nYup :)\nI have two questions: May I ask what is the trait? Couldn't find any in the std doc: Did you mean () when you wrote \"the trait\"? But it doesn't even inherit . Am I missing something?\nThe trait is in .\nAh, it was the big int code separated from the std lib! Thanks. I also overlooked implemented , implying . Sorry for the noise!", "positive_passages": [{"docid": "doc-en-rust-19c01a8e5197c9036e765b10f47271a27c49115ef3045f4cb597bfd899de8de6", "text": "/// A primitive signed or unsigned integer equipped with various bitwise /// operators, bit counting methods, and endian conversion functions. pub trait Int: Primitive + Ord + CheckedAdd + CheckedSub + CheckedMul", "commid": "rust_pr_18795"}], "negative_passages": []} {"query_id": "q-en-rust-adff20f233002c7b57c07999a8fc258646c5e1fe8387693c136d76fb260bed53", "query": "In we had the submitter change a to an as follows: We felt that better communicated the intent of \"this code should never panic assuming that the implementation is correct\" (as opposed to , which communicates \"this code should never panic assuming that the user passes in the correct inputs\" (cf. indexing, division) and is generally to be avoided in the stdlib). However, I think it's a shame that by expressing the intent more clearly in the code we're simultaneously losing information regarding which invariant was violated in order to reach the unreachable code. Note that this isn't really a huge deal, because we can just move the \"Invalid SearchStack\" bit over into a comment next to . It's also not really a huge deal for the theoretical bug reporter who's filing an issue upon hitting this code, since still includes a filename and line number in its output. However, given the precedent of optional messages set by , I don't think it would be a stretch to imagine that could also have an optional message. Usage for the typical case would be the same: could optionally look like this:\nI'll have a go at implementing this.", "positive_passages": [{"docid": "doc-en-rust-6f97680750bf67eb00f18a5df6ff0ea74ce080f64bd0207e2167d714bc0ace37", "text": "/// ``` #[macro_export] macro_rules! unreachable( () => (panic!(\"internal error: entered unreachable code\")) () => ({ panic!(\"internal error: entered unreachable code\") }); ($msg:expr) => ({ unreachable!(\"{}\", $msg) }); ($fmt:expr, $($arg:tt)*) => ({ panic!(concat!(\"internal error: entered unreachable code: \", $fmt), $($arg)*) }); ) /// A standardised placeholder for marking unfinished code. 
It panics with the", "commid": "rust_pr_18867"}], "negative_passages": []} {"query_id": "q-en-rust-adff20f233002c7b57c07999a8fc258646c5e1fe8387693c136d76fb260bed53", "query": "In we had the submitter change a to an as follows: We felt that better communicated the intent of \"this code should never panic assuming that the implementation is correct\" (as opposed to , which communicates \"this code should never panic assuming that the user passes in the correct inputs\" (cf. indexing, division) and is generally to be avoided in the stdlib). However, I think it's a shame that by expressing the intent more clearly in the code we're simultaneously losing information regarding which invariant was violated in order to reach the unreachable code. Note that this isn't really a huge deal, because we can just move the \"Invalid SearchStack\" bit over into a comment next to . It's also not really a huge deal for the theoretical bug reporter who's filing an issue upon hitting this code, since still includes a filename and line number in its output. However, given the precedent of optional messages set by , I don't think it would be a stretch to imagine that could also have an optional message. Usage for the typical case would be the same: could optionally look like this:\nI'll have a go at implementing this.", "positive_passages": [{"docid": "doc-en-rust-83feb6a6979a9cf0cdca92c81d10b25502618683a67a1b629e6e330722013eab", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:internal error: entered unreachable code: 6 is not prime fn main() { unreachable!(\"{} is not {}\", 6u32, \"prime\"); } ", "commid": "rust_pr_18867"}], "negative_passages": []} {"query_id": "q-en-rust-adff20f233002c7b57c07999a8fc258646c5e1fe8387693c136d76fb260bed53", "query": "In we had the submitter change a to an as follows: We felt that better communicated the intent of \"this code should never panic assuming that the implementation is correct\" (as opposed to , which communicates \"this code should never panic assuming that the user passes in the correct inputs\" (cf. indexing, division) and is generally to be avoided in the stdlib). However, I think it's a shame that by expressing the intent more clearly in the code we're simultaneously losing information regarding which invariant was violated in order to reach the unreachable code. Note that this isn't really a huge deal, because we can just move the \"Invalid SearchStack\" bit over into a comment next to . It's also not really a huge deal for the theoretical bug reporter who's filing an issue upon hitting this code, since still includes a filename and line number in its output. However, given the precedent of optional messages set by , I don't think it would be a stretch to imagine that could also have an optional message. Usage for the typical case would be the same: could optionally look like this:\nI'll have a go at implementing this.", "positive_passages": [{"docid": "doc-en-rust-0f2bf779d733788feb2b1df03b0c7a371b162b1ae32b062f151edc764319cc2f", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:internal error: entered unreachable code fn main() { unreachable!() } ", "commid": "rust_pr_18867"}], "negative_passages": []} {"query_id": "q-en-rust-adff20f233002c7b57c07999a8fc258646c5e1fe8387693c136d76fb260bed53", "query": "In we had the submitter change a to an as follows: We felt that better communicated the intent of \"this code should never panic assuming that the implementation is correct\" (as opposed to , which communicates \"this code should never panic assuming that the user passes in the correct inputs\" (cf. indexing, division) and is generally to be avoided in the stdlib). However, I think it's a shame that by expressing the intent more clearly in the code we're simultaneously losing information regarding which invariant was violated in order to reach the unreachable code. Note that this isn't really a huge deal, because we can just move the \"Invalid SearchStack\" bit over into a comment next to . It's also not really a huge deal for the theoretical bug reporter who's filing an issue upon hitting this code, since still includes a filename and line number in its output. However, given the precedent of optional messages set by , I don't think it would be a stretch to imagine that could also have an optional message. Usage for the typical case would be the same: could optionally look like this:\nI'll have a go at implementing this.", "positive_passages": [{"docid": "doc-en-rust-d6774eab12f7a46676a70ed1556fcbf2b9b4a24067052b11de00394c67f6c2f3", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:internal error: entered unreachable code: uhoh fn main() { unreachable!(\"uhoh\") } ", "commid": "rust_pr_18867"}], "negative_passages": []} {"query_id": "q-en-rust-adff20f233002c7b57c07999a8fc258646c5e1fe8387693c136d76fb260bed53", "query": "In we had the submitter change a to an as follows: We felt that better communicated the intent of \"this code should never panic assuming that the implementation is correct\" (as opposed to , which communicates \"this code should never panic assuming that the user passes in the correct inputs\" (cf. indexing, division) and is generally to be avoided in the stdlib). However, I think it's a shame that by expressing the intent more clearly in the code we're simultaneously losing information regarding which invariant was violated in order to reach the unreachable code. Note that this isn't really a huge deal, because we can just move the \"Invalid SearchStack\" bit over into a comment next to . It's also not really a huge deal for the theoretical bug reporter who's filing an issue upon hitting this code, since still includes a filename and line number in its output. However, given the precedent of optional messages set by , I don't think it would be a stretch to imagine that could also have an optional message. 
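A minimal usage sketch of the message-carrying `unreachable!` arms quoted above, written in current Rust syntax rather than the 2014-era code of the patch (the `digit_to_char` function is hypothetical and serves only to illustrate the macro):

```rust
fn digit_to_char(d: u8) -> char {
    match d {
        0..=9 => (b'0' + d) as char,
        // The invariant "d is a single decimal digit" is internal to this
        // code, so violating it is a bug here, not bad user input — which
        // is exactly the case `unreachable!` with a message is meant for.
        _ => unreachable!("{} is not a single decimal digit", d),
    }
}

fn main() {
    assert_eq!(digit_to_char(7), '7');
}
```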
Usage for the typical case would be the same: could optionally look like this:\nI'll have a go at implementing this.", "positive_passages": [{"docid": "doc-en-rust-913ef8690369dcd0be20e4bccce72831286f7bfa74d327d8b9658b608f9aae27", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:internal error: entered unreachable code fn main() { unreachable!() } ", "commid": "rust_pr_18867"}], "negative_passages": []} {"query_id": "q-en-rust-441ebef3a07a8650d7e608ebfdd137451433511bbf66d8572d140d97523f439c", "query": "Backtrace:\nWell, this is interesting. It's broken all the way back to 0.10. I think it's been a bug in the type id implementation since it first landed. Here's a smaller test case:\nis there a particular reason explicitly checks that all regions it encounters are ? Trans doesn't seem to go out of its way to erase late-bound regions in fn type signatures, which results in this ICE. I could add a call to in trans to fix this, but it would be cheaper to just ignore regions during hashing.\nEnded up catching me\nIt seems we actually want to take late-bound regions into account when hashing. For example, the following should hold:", "positive_passages": [{"docid": "doc-en-rust-1d2f25a34c4d865b1d9eed7d4ff667a8324effc948f10bdb491886539247a2e9", "text": "/// context it's calculated within. This is used by the `type_id` intrinsic. pub fn hash_crate_independent(tcx: &ctxt, ty: Ty, svh: &Svh) -> u64 { let mut state = sip::SipState::new(); macro_rules! byte( ($b:expr) => { ($b as u8).hash(&mut state) } ); macro_rules! hash( ($e:expr) => { $e.hash(&mut state) } ); let region = |_state: &mut sip::SipState, r: Region| { match r { ReStatic => {} ReEmpty | ReEarlyBound(..) | ReLateBound(..) | ReFree(..) | ReScope(..) | ReInfer(..) => { tcx.sess.bug(\"non-static region found when hashing a type\") helper(tcx, ty, svh, &mut state); return state.result(); fn helper(tcx: &ctxt, ty: Ty, svh: &Svh, state: &mut sip::SipState) { macro_rules! byte( ($b:expr) => { ($b as u8).hash(state) } ); macro_rules! hash( ($e:expr) => { $e.hash(state) } ); let region = |state: &mut sip::SipState, r: Region| { match r { ReStatic => {} ReLateBound(db, BrAnon(i)) => { db.hash(state); i.hash(state); } ReEmpty | ReEarlyBound(..) | ReLateBound(..) | ReFree(..) | ReScope(..) | ReInfer(..) 
=> { tcx.sess.bug(\"unexpected region found when hashing a type\") } } } }; let did = |state: &mut sip::SipState, did: DefId| { let h = if ast_util::is_local(did) { svh.clone() } else { tcx.sess.cstore.get_crate_hash(did.krate) }; h.as_str().hash(state); did.node.hash(state); }; let mt = |state: &mut sip::SipState, mt: mt| { mt.mutbl.hash(state); }; ty::walk_ty(ty, |ty| { match ty.sty { ty_bool => byte!(2), ty_char => byte!(3), ty_int(i) => { byte!(4); hash!(i); } ty_uint(u) => { byte!(5); hash!(u); } ty_float(f) => { byte!(6); hash!(f); } ty_str => { byte!(7); } ty_enum(d, _) => { byte!(8); did(&mut state, d); } ty_uniq(_) => { byte!(9); } ty_vec(_, Some(n)) => { byte!(10); n.hash(&mut state); } ty_vec(_, None) => { byte!(11); } ty_ptr(m) => { byte!(12); mt(&mut state, m); } ty_rptr(r, m) => { byte!(13); region(&mut state, r); mt(&mut state, m); } ty_bare_fn(ref b) => { byte!(14); hash!(b.unsafety); hash!(b.abi); let did = |state: &mut sip::SipState, did: DefId| { let h = if ast_util::is_local(did) { svh.clone() } else { tcx.sess.cstore.get_crate_hash(did.krate) }; h.as_str().hash(state); did.node.hash(state); }; let mt = |state: &mut sip::SipState, mt: mt| { mt.mutbl.hash(state); }; let fn_sig = |state: &mut sip::SipState, sig: &FnSig| { let sig = anonymize_late_bound_regions(tcx, sig); for a in sig.inputs.iter() { helper(tcx, *a, svh, state); } if let ty::FnConverging(output) = sig.output { helper(tcx, output, svh, state); } ty_closure(ref c) => { byte!(15); hash!(c.unsafety); hash!(c.onceness); hash!(c.bounds); match c.store { UniqTraitStore => byte!(0), RegionTraitStore(r, m) => { byte!(1) region(&mut state, r); assert_eq!(m, ast::MutMutable); }; maybe_walk_ty(ty, |ty| { match ty.sty { ty_bool => byte!(2), ty_char => byte!(3), ty_int(i) => { byte!(4); hash!(i); } ty_uint(u) => { byte!(5); hash!(u); } ty_float(f) => { byte!(6); hash!(f); } ty_str => { byte!(7); } ty_enum(d, _) => { byte!(8); did(state, d); } ty_uniq(_) => { byte!(9); } ty_vec(_, Some(n)) => { byte!(10); n.hash(state); } ty_vec(_, None) => { byte!(11); } ty_ptr(m) => { byte!(12); mt(state, m); } ty_rptr(r, m) => { byte!(13); region(state, r); mt(state, m); } ty_bare_fn(ref b) => { byte!(14); hash!(b.unsafety); hash!(b.abi); fn_sig(state, &b.sig); return false; } ty_closure(ref c) => { byte!(15); hash!(c.unsafety); hash!(c.onceness); hash!(c.bounds); match c.store { UniqTraitStore => byte!(0), RegionTraitStore(r, m) => { byte!(1); region(state, r); assert_eq!(m, ast::MutMutable); } } fn_sig(state, &c.sig); return false; } } ty_trait(box TyTrait { ref principal, bounds }) => { byte!(17); did(&mut state, principal.def_id); hash!(bounds); } ty_struct(d, _) => { byte!(18); did(&mut state, d); } ty_tup(ref inner) => { byte!(19); hash!(inner.len()); } ty_param(p) => { byte!(20); hash!(p.idx); did(&mut state, p.def_id); } ty_open(_) => byte!(22), ty_infer(_) => unreachable!(), ty_err => byte!(23), ty_unboxed_closure(d, r, _) => { byte!(24); did(&mut state, d); region(&mut state, r); } } }); ty_trait(box TyTrait { ref principal, bounds }) => { byte!(17); did(state, principal.def_id); hash!(bounds); let principal = anonymize_late_bound_regions(tcx, principal); for subty in principal.substs.types.iter() { helper(tcx, *subty, svh, state); } state.result() return false; } ty_struct(d, _) => { byte!(18); did(state, d); } ty_tup(ref inner) => { byte!(19); hash!(inner.len()); } ty_param(p) => { byte!(20); hash!(p.idx); did(state, p.def_id); } ty_open(_) => byte!(22), ty_infer(_) => unreachable!(), ty_err => byte!(23), 
ty_unboxed_closure(d, r, _) => { byte!(24); did(state, d); region(state, r); } } true }); } } impl Variance {", "commid": "rust_pr_19821"}], "negative_passages": []} {"query_id": "q-en-rust-441ebef3a07a8650d7e608ebfdd137451433511bbf66d8572d140d97523f439c", "query": "Backtrace:\nWell, this is interesting. It's broken all the way back to 0.10. I think it's been a bug in the type id implementation since it first landed. Here's a smaller test case:\nis there a particular reason explicitly checks that all regions it encounters are ? Trans doesn't seem to go out of its way to erase late-bound regions in fn type signatures, which results in this ICE. I could add a call to in trans to fix this, but it would be cheaper to just ignore regions during hashing.\nEnded up catching me\nIt seems we actually want to take late-bound regions into account when hashing. For example, the following should hold:", "positive_passages": [{"docid": "doc-en-rust-1476952c0e31f4d798aba2a64caf6b80cd9dc9449c5bbee1b353525a93565783", "text": "replace_late_bound_regions(tcx, value, |_, _| ty::ReStatic).0 } /// Rewrite any late-bound regions so that they are anonymous. Region numbers are /// assigned starting at 1 and increasing monotonically in the order traversed /// by the fold operation. /// /// The chief purpose of this function is to canonicalize regions so that two /// `FnSig`s or `TraitRef`s which are equivalent up to region naming will become /// structurally identical. For example, `for<'a, 'b> fn(&'a int, &'b int)` and /// `for<'a, 'b> fn(&'b int, &'a int)` will become identical after anonymization. pub fn anonymize_late_bound_regions<'tcx, HR>(tcx: &ctxt<'tcx>, sig: &HR) -> HR where HR: HigherRankedFoldable<'tcx> { let mut counter = 0; replace_late_bound_regions(tcx, sig, |_, db| { counter += 1; ReLateBound(db, BrAnon(counter)) }).0 } /// Replaces the late-bound-regions in `value` that are bound by `value`. pub fn replace_late_bound_regions<'tcx, HR, F>( tcx: &ty::ctxt<'tcx>,", "commid": "rust_pr_19821"}], "negative_passages": []} {"query_id": "q-en-rust-441ebef3a07a8650d7e608ebfdd137451433511bbf66d8572d140d97523f439c", "query": "Backtrace:\nWell, this is interesting. It's broken all the way back to 0.10. I think it's been a bug in the type id implementation since it first landed. Here's a smaller test case:\nis there a particular reason explicitly checks that all regions it encounters are ? Trans doesn't seem to go out of its way to erase late-bound regions in fn type signatures, which results in this ICE. I could add a call to in trans to fix this, but it would be cheaper to just ignore regions during hashing.\nEnded up catching me\nIt seems we actually want to take late-bound regions into account when hashing. For example, the following should hold:", "positive_passages": [{"docid": "doc-en-rust-8bf125453043bb6024ea7a1b0717f03edbaf63c1056c55d119d7cb767c6c80de", "text": "fn print_poly_trait_ref(&mut self, t: &ast::PolyTraitRef) -> IoResult<()> { if !t.bound_lifetimes.is_empty() { try!(word(&mut self.s, \"for<\")); let mut comma = false; for lifetime_def in t.bound_lifetimes.iter() { if comma { try!(self.word_space(\",\")) } try!(self.print_lifetime_def(lifetime_def)); comma = true; } try!(word(&mut self.s, \">\")); }", "commid": "rust_pr_19821"}], "negative_passages": []} {"query_id": "q-en-rust-441ebef3a07a8650d7e608ebfdd137451433511bbf66d8572d140d97523f439c", "query": "Backtrace:\nWell, this is interesting. It's broken all the way back to 0.10. 
I think it's been a bug in the type id implementation since it first landed. Here's a smaller test case:\nis there a particular reason explicitly checks that all regions it encounters are ? Trans doesn't seem to go out of its way to erase late-bound regions in fn type signatures, which results in this ICE. I could add a call to in trans to fix this, but it would be cheaper to just ignore regions during hashing.\nEnded up catching me\nIt seems we actually want to take late-bound regions into account when hashing. For example, the following should hold:", "positive_passages": [{"docid": "doc-en-rust-0f1969ccea2bc6f79dacef9394c7cb524472df090530ce94f76790a27d768461", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that type IDs correctly account for higher-rank lifetimes // Also acts as a regression test for an ICE (issue #19791) #![feature(unboxed_closures)] use std::intrinsics::TypeId; fn main() { // Bare fns { let a = TypeId::of::(); let b = TypeId::of:: fn(&'static int, &'a int)>(); let c = TypeId::of:: fn(&'a int, &'b int)>(); let d = TypeId::of:: fn(&'b int, &'a int)>(); assert!(a != b); assert!(a != c); assert!(a != d); assert!(b != c); assert!(b != d); assert_eq!(c, d); // Make sure De Bruijn indices are handled correctly let e = TypeId::of:: fn(fn(&'a int) -> &'a int)>(); let f = TypeId::of:: fn(&'a int) -> &'a int)>(); assert!(e != f); } // Stack closures { let a = TypeId::of::<|&'static int, &'static int|>(); let b = TypeId::of:: |&'static int, &'a int|>(); let c = TypeId::of:: |&'a int, &'b int|>(); let d = TypeId::of:: |&'b int, &'a int|>(); assert!(a != b); assert!(a != c); assert!(a != d); assert!(b != c); assert!(b != d); assert_eq!(c, d); // Make sure De Bruijn indices are handled correctly let e = TypeId::of:: |(|&'a int| -> &'a int)|>(); let f = TypeId::of::<|for<'a> |&'a int| -> &'a int|>(); assert!(e != f); } // Boxed unboxed closures { let a = TypeId::of::>(); let b = TypeId::of:: Fn(&'static int, &'a int)>>(); let c = TypeId::of:: Fn(&'a int, &'b int)>>(); let d = TypeId::of:: Fn(&'b int, &'a int)>>(); assert!(a != b); assert!(a != c); assert!(a != d); assert!(b != c); assert!(b != d); assert_eq!(c, d); // Make sure De Bruijn indices are handled correctly let e = TypeId::of:: Fn(Box &'a int>)>>(); let f = TypeId::of:: Fn(&'a int) -> &'a int>)>>(); assert!(e != f); } // Raw unboxed closures // Note that every unboxed closure has its own anonymous type, // so no two IDs should equal each other, even when compatible { let a = id(|&: _: &int, _: &int| {}); let b = id(|&: _: &int, _: &int| {}); assert!(a != b); } fn id(_: T) -> TypeId { TypeId::of::() } } ", "commid": "rust_pr_19821"}], "negative_passages": []} {"query_id": "q-en-rust-d6392429c9ca3ddf6485fdce86811f23b297387c932b1f5454c68cb1a82088f1", "query": "Info located at: Looking at the source code in libcollections there doesn't seem to be such a method alltogether.\nThis would be fixed by The new way is", "positive_passages": [{"docid": "doc-en-rust-3c70c66683ce3fe5106a5733f90e5cbbf52aefa73655ed4cd137b11264f1188b", "text": "* [Iterators](iterators.md) * [Generics](generics.md) * [Traits](traits.md) * [Threads](threads.md) * [Concurrency](concurrency.md) * [Error Handling](error-handling.md) * 
[Documentation](documentation.md) * [III: Advanced Topics](advanced.md)", "commid": "rust_pr_21152"}], "negative_passages": []} {"query_id": "q-en-rust-d6392429c9ca3ddf6485fdce86811f23b297387c932b1f5454c68cb1a82088f1", "query": "Info located at: Looking at the source code in libcollections there doesn't seem to be such a method alltogether.\nThis would be fixed by The new way is", "positive_passages": [{"docid": "doc-en-rust-710f60b94f246edd8d1926b4bd6b40261fd3c12211008028d7bde02d2c8b9226", "text": " % Concurrency Concurrency and parallelism are incredibly important topics in computer science, and are also a hot topic in industry today. Computers are gaining more and more cores, yet many programmers aren't prepared to fully utilize them. Rust's memory safety features also apply to its concurrency story too. Even concurrent Rust programs must be memory safe, having no data races. Rust's type system is up to the task, and gives you powerful ways to reason about concurrent code at compile time. Before we talk about the concurrency features that come with Rust, it's important to understand something: Rust is low-level enough that all of this is provided by the standard library, not by the language. This means that if you don't like some aspect of the way Rust handles concurrency, you can implement an alternative way of doing things. [mio](https://github.com/carllerche/mio) is a real-world example of this principle in action. ## Background: `Send` and `Sync` Concurrency is difficult to reason about. In Rust, we have a strong, static type system to help us reason about our code. As such, Rust gives us two traits to help us make sense of code that can possibly be concurrent. ### `Send` The first trait we're going to talk about is [`Send`](../std/marker/trait.Send.html). When a type `T` implements `Send`, it indicates to the compiler that something of this type is able to have ownership transferred safely between threads. This is important to enforce certain restrictions. For example, if we have a channel connecting two threads, we would want to be able to send some data down the channel and to the other thread. Therefore, we'd ensure that `Send` was implemented for that type. In the opposite way, if we were wrapping a library with FFI that isn't threadsafe, we wouldn't want to implement `Send`, and so the compiler will help us enforce that it can't leave the current thread. ### `Sync` The second of these two trait is called [`Sync`](../std/marker/trait.Sync.html). When a type `T` implements `Sync`, it indicates to the compiler that something of this type has no possibility of introducing memory unsafety when used from multiple threads concurrently. For example, sharing immutable data with an atomic reference count is threadsafe. Rust provides a type like this, `Arc`, and it implements `Sync`, so that it could be safely shared between threads. These two traits allow you to use the type system to make strong guarantees about the properties of your code under concurrency. Before we demonstrate why, we need to learn how to create a concurrent Rust program in the first place! ## Threads Rust's standard library provides a library for 'threads', which allow you to run Rust code in parallel. Here's a basic example of using `Thread`: ``` use std::thread::Thread; fn main() { Thread::scoped(|| { println!(\"Hello from a thread!\"); }); } ``` The `Thread::scoped()` method accepts a closure, which is executed in a new thread. 
It's called `scoped` because this thread returns a join guard: ``` use std::thread::Thread; fn main() { let guard = Thread::scoped(|| { println!(\"Hello from a thread!\"); }); // guard goes out of scope here } ``` When `guard` goes out of scope, it will block execution until the thread is finished. If we didn't want this behaviour, we could use `Thread::spawn()`: ``` use std::thread::Thread; use std::old_io::timer; use std::time::Duration; fn main() { Thread::spawn(|| { println!(\"Hello from a thread!\"); }); timer::sleep(Duration::milliseconds(50)); } ``` Or call `.detach()`: ``` use std::thread::Thread; use std::old_io::timer; use std::time::Duration; fn main() { let guard = Thread::scoped(|| { println!(\"Hello from a thread!\"); }); guard.detach(); timer::sleep(Duration::milliseconds(50)); } ``` We need to `sleep` here because when `main()` ends, it kills all of the running threads. [`scoped`](std/thread/struct.Builder.html#method.scoped) has an interesting type signature: ```text fn scoped<'a, T, F>(self, f: F) -> JoinGuard<'a, T> where T: Send + 'a, F: FnOnce() -> T, F: Send + 'a ``` Specifically, `F`, the closure that we pass to execute in the new thread. It has two restrictions: It must be a `FnOnce` from `()` to `T`. Using `FnOnce` allows the closure to take ownership of any data it mentions from the parent thread. The other restriction is that `F` must be `Send`. We aren't allowed to transfer this ownership unless the type thinks that's okay. Many languages have the ability to execute threads, but it's wildly unsafe. There are entire books about how to prevent errors that occur from shared mutable state. Rust helps out with its type system here as well, by preventing data races at compile time. Let's talk about how you actually share things between threads. ## Safe Shared Mutable State Due to Rust's type system, we have a concept that sounds like a lie: \"safe shared mutable state.\" Many programmers agree that shared mutable state is very, very bad. Someone once said this: > Shared mutable state is the root of all evil. Most languages attempt to deal > with this problem through the 'mutable' part, but Rust deals with it by > solving the 'shared' part. The same [ownership system](ownership.html) that helps prevent using pointers incorrectly also helps rule out data races, one of the worst kinds of concurrency bugs. As an example, here is a Rust program that would have a data race in many languages. It will not compile: ```ignore use std::thread::Thread; use std::old_io::timer; use std::time::Duration; fn main() { let mut data = vec![1u32, 2, 3]; for i in 0 .. 2 { Thread::spawn(move || { data[i] += 1; }); } timer::sleep(Duration::milliseconds(50)); } ``` This gives us an error: ```text 12:17 error: capture of moved value: `data` data[i] += 1; ^~~~ ``` In this case, we know that our code _should_ be safe, but Rust isn't sure. And it's actually not safe: if we had a reference to `data` in each thread, and the thread takes ownership of the reference, we have three owners! That's bad. We can fix this by using the `Arc` type, which is an atomic reference counted pointer. The 'atomic' part means that it's safe to share across threads. `Arc` assumes one more property about its contents to ensure that it is safe to share across threads: it assumes its contents are `Sync`. But in our case, we want to be able to mutate the value. We need a type that can ensure only one person at a time can mutate what's inside. For that, we can use the `Mutex` type. Here's the second version of our code. 
It still doesn't work, but for a different reason: ```ignore use std::thread::Thread; use std::old_io::timer; use std::time::Duration; use std::sync::Mutex; fn main() { let mut data = Mutex::new(vec![1u32, 2, 3]); for i in 0 .. 2 { let data = data.lock().unwrap(); Thread::spawn(move || { data[i] += 1; }); } timer::sleep(Duration::milliseconds(50)); } ``` Here's the error: ```text :11:9: 11:22 error: the trait `core::marker::Send` is not implemented for the type `std::sync::mutex::MutexGuard<'_, collections::vec::Vec>` [E0277] :11 Thread::spawn(move || { ^~~~~~~~~~~~~ :11:9: 11:22 note: `std::sync::mutex::MutexGuard<'_, collections::vec::Vec>` cannot be sent between threads safely :11 Thread::spawn(move || { ^~~~~~~~~~~~~ ``` You see, [`Mutex`](std/sync/struct.Mutex.html) has a [`lock`](http://doc.rust-lang.org/nightly/std/sync/struct.Mutex.html#method.lock) method which has this signature: ```ignore fn lock(&self) -> LockResult> ``` If we [look at the code for MutexGuard](https://github.com/rust-lang/rust/blob/ca4b9674c26c1de07a2042cb68e6a062d7184cef/src/libstd/sync/mutex.rs#L172), we'll see this: ```ignore __marker: marker::NoSend, ``` Because our guard is `NoSend`, it's not `Send`. Which means we can't actually transfer the guard across thread boundaries, which gives us our error. We can use `Arc` to fix this. Here's the working version: ``` use std::sync::{Arc, Mutex}; use std::thread::Thread; use std::old_io::timer; use std::time::Duration; fn main() { let data = Arc::new(Mutex::new(vec![1u32, 2, 3])); for i in (0us..2) { let data = data.clone(); Thread::spawn(move || { let mut data = data.lock().unwrap(); data[i] += 1; }); } timer::sleep(Duration::milliseconds(50)); } ``` We now call `clone()` on our `Arc`, which increases the internal count. This handle is then moved into the new thread. Let's examine the body of the thread more closely: ``` # use std::sync::{Arc, Mutex}; # use std::thread::Thread; # use std::old_io::timer; # use std::time::Duration; # fn main() { # let data = Arc::new(Mutex::new(vec![1u32, 2, 3])); # for i in (0us..2) { # let data = data.clone(); Thread::spawn(move || { let mut data = data.lock().unwrap(); data[i] += 1; }); # } # } ``` First, we call `lock()`, which acquires the mutex's lock. Because this may fail, it returns an `Result`, and because this is just an example, we `unwrap()` it to get a reference to the data. Real code would have more robust error handling here. We're then free to mutate it, since we have the lock. This timer bit is a bit awkward, however. We have picked a reasonable amount of time to wait, but it's entirely possible that we've picked too high, and that we could be taking less time. It's also possible that we've picked too low, and that we aren't actually finishing this computation. Rust's standard library provides a few more mechanisms for two threads to synchronize with each other. Let's talk about one: channels. ## Channels Here's a version of our code that uses channels for synchronization, rather than waiting for a specific time: ``` use std::sync::{Arc, Mutex}; use std::thread::Thread; use std::sync::mpsc; fn main() { let data = Arc::new(Mutex::new(0u32)); let (tx, rx) = mpsc::channel(); for _ in (0..10) { let (data, tx) = (data.clone(), tx.clone()); Thread::spawn(move || { let mut data = data.lock().unwrap(); *data += 1; tx.send(()); }); } for _ in 0 .. 10 { rx.recv(); } } ``` We use the `mpsc::channel()` method to construct a new channel. We just `send` a simple `()` down the channel, and then wait for ten of them to come back. 
While this channel is just sending a generic signal, we can send any data that is `Send` over the channel! ``` use std::sync::{Arc, Mutex}; use std::thread::Thread; use std::sync::mpsc; fn main() { let (tx, rx) = mpsc::channel(); for _ in range(0, 10) { let tx = tx.clone(); Thread::spawn(move || { let answer = 42u32; tx.send(answer); }); } rx.recv().ok().expect(\"Could not recieve answer\"); } ``` A `u32` is `Send` because we can make a copy. So we create a thread, ask it to calculate the answer, and then it `send()`s us the answer over the channel. ## Panics A `panic!` will crash the currently executing thread. You can use Rust's threads as a simple isolation mechanism: ``` use std::thread::Thread; let result = Thread::scoped(move || { panic!(\"oops!\"); }).join(); assert!(result.is_err()); ``` Our `Thread` gives us a `Result` back, which allows us to check if the thread has panicked or not. ", "commid": "rust_pr_21152"}], "negative_passages": []} {"query_id": "q-en-rust-d6392429c9ca3ddf6485fdce86811f23b297387c932b1f5454c68cb1a82088f1", "query": "Info located at: Looking at the source code in libcollections there doesn't seem to be such a method alltogether.\nThis would be fixed by The new way is", "positive_passages": [{"docid": "doc-en-rust-3df348ac45a59bf82c443fa76a0f24e60a2b3cc37dd1d5e85051d634142285fb", "text": " % The Rust Threads and Communication Guide **NOTE** This guide is badly out of date and needs to be rewritten. # Introduction Rust provides safe concurrent abstractions through a number of core library primitives. This guide will describe the concurrency model in Rust, how it relates to the Rust type system, and introduce the fundamental library abstractions for constructing concurrent programs. Threads provide failure isolation and recovery. When a fatal error occurs in Rust code as a result of an explicit call to `panic!()`, an assertion failure, or another invalid operation, the runtime system destroys the entire thread. Unlike in languages such as Java and C++, there is no way to `catch` an exception. Instead, threads may monitor each other to see if they panic. Threads use Rust's type system to provide strong memory safety guarantees. In particular, the type system guarantees that threads cannot induce a data race from shared mutable state. # Basics At its simplest, creating a thread is a matter of calling the `spawn` function with a closure argument. `spawn` executes the closure in the new thread. ```{rust,ignore} # use std::thread::spawn; // Print something profound in a different thread using a named function fn print_message() { println!(\"I am running in a different thread!\"); } spawn(print_message); // Alternatively, use a `move ||` expression instead of a named function. // `||` expressions evaluate to an unnamed closure. The `move` keyword // indicates that the closure should take ownership of any variables it // touches. spawn(move || println!(\"I am also running in a different thread!\")); ``` In Rust, a thread is not a concept that appears in the language semantics. Instead, Rust's type system provides all the tools necessary to implement safe concurrency: particularly, ownership. The language leaves the implementation details to the standard library. The `spawn` function has the type signature: `fn spawn(f: F)`. This indicates that it takes as argument a closure (of type `F`) that it will run exactly once. This closure is limited to capturing `Send`-able data from its environment (that is, data which is deeply owned). 
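The `Send` restriction on the spawned closure described just above can be illustrated with a minimal sketch using today's `std::thread` API (modern names, not the 2015-era `Thread::spawn` quoted in this guide): an `Arc` moves into the child thread cleanly, whereas a non-`Send` type such as `Rc` would be rejected at compile time.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // `Arc<Vec<i32>>` is `Send`, so the closure capturing it may be moved
    // to another thread.
    let shared = Arc::new(vec![1, 2, 3]);
    let handle = thread::spawn(move || shared.len());
    assert_eq!(handle.join().unwrap(), 3);

    // By contrast, capturing an `Rc` would fail to compile, because `Rc`
    // is not `Send`:
    // let local = std::rc::Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || local.len()); // error: `Rc<...>` cannot be sent between threads
}
```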
Limiting the closure to `Send` ensures that `spawn` can safely move the entire closure and all its associated state into an entirely different thread for execution. ```rust use std::thread::Thread; fn generate_thread_number() -> i32 { 4 } // a very simple generation // Generate some state locally let child_thread_number = generate_thread_number(); Thread::spawn(move || { // Capture it in the remote thread. The `move` keyword indicates // that this closure should move `child_thread_number` into its // environment, rather than capturing a reference into the // enclosing stack frame. println!(\"I am child number {}\", child_thread_number); }); ``` ## Communication Now that we have spawned a new thread, it would be nice if we could communicate with it. For this, we use *channels*. A channel is simply a pair of endpoints: one for sending messages and another for receiving messages. The simplest way to create a channel is to use the `channel` function to create a `(Sender, Receiver)` pair. In Rust parlance, a *sender* is a sending endpoint of a channel, and a *receiver* is the receiving endpoint. Consider the following example of calculating two results concurrently: ```rust use std::thread::Thread; use std::sync::mpsc; let (tx, rx): (mpsc::Sender, mpsc::Receiver) = mpsc::channel(); Thread::spawn(move || { let result = some_expensive_computation(); tx.send(result); }); some_other_expensive_computation(); let result = rx.recv(); fn some_expensive_computation() -> u32 { 42 } // very expensive ;) fn some_other_expensive_computation() {} // even more so ``` Let's examine this example in detail. First, the `let` statement creates a stream for sending and receiving integers (the left-hand side of the `let`, `(tx, rx)`, is an example of a destructuring let: the pattern separates a tuple into its component parts). ```rust # use std::sync::mpsc; let (tx, rx): (mpsc::Sender, mpsc::Receiver) = mpsc::channel(); ``` The child thread will use the sender to send data to the parent thread, which will wait to receive the data on the receiver. The next statement spawns the child thread. ```rust # use std::thread::Thread; # use std::sync::mpsc; # fn some_expensive_computation() -> u32 { 42 } # let (tx, rx) = mpsc::channel(); Thread::spawn(move || { let result = some_expensive_computation(); tx.send(result); }); ``` Notice that the creation of the thread closure transfers `tx` to the child thread implicitly: the closure captures `tx` in its environment. Both `Sender` and `Receiver` are sendable types and may be captured into threads or otherwise transferred between them. In the example, the child thread runs an expensive computation, then sends the result over the captured channel. Finally, the parent continues with some other expensive computation, then waits for the child's result to arrive on the receiver: ```rust # use std::sync::mpsc; # fn some_other_expensive_computation() {} # let (tx, rx) = mpsc::channel::(); # tx.send(0); some_other_expensive_computation(); let result = rx.recv(); ``` The `Sender` and `Receiver` pair created by `channel` enables efficient communication between a single sender and a single receiver, but multiple senders cannot use a single `Sender` value, and multiple receivers cannot use a single `Receiver` value. What if our example needed to compute multiple results across a number of threads? 
The following program is ill-typed: ```{rust,ignore} # use std::sync::mpsc; # fn some_expensive_computation() -> u32 { 42 } let (tx, rx) = mpsc::channel(); spawn(move || { tx.send(some_expensive_computation()); }); // ERROR! The previous spawn statement already owns the sender, // so the compiler will not allow it to be captured again spawn(move || { tx.send(some_expensive_computation()); }); ``` Instead we can clone the `tx`, which allows for multiple senders. ```rust use std::thread::Thread; use std::sync::mpsc; let (tx, rx) = mpsc::channel(); for init_val in 0 .. 3 { // Create a new channel handle to distribute to the child thread let child_tx = tx.clone(); Thread::spawn(move || { child_tx.send(some_expensive_computation(init_val)); }); } let result = rx.recv().unwrap() + rx.recv().unwrap() + rx.recv().unwrap(); # fn some_expensive_computation(_i: i32) -> i32 { 42 } ``` Cloning a `Sender` produces a new handle to the same channel, allowing multiple threads to send data to a single receiver. It upgrades the channel internally in order to allow this functionality, which means that channels that are not cloned can avoid the overhead required to handle multiple senders. But this fact has no bearing on the channel's usage: the upgrade is transparent. Note that the above cloning example is somewhat contrived since you could also simply use three `Sender` pairs, but it serves to illustrate the point. For reference, written with multiple streams, it might look like the example below. ```rust use std::thread::Thread; use std::sync::mpsc; // Create a vector of ports, one for each child thread let rxs = (0 .. 3).map(|&:init_val| { let (tx, rx) = mpsc::channel(); Thread::spawn(move || { tx.send(some_expensive_computation(init_val)); }); rx }).collect::>(); // Wait on each port, accumulating the results let result = rxs.iter().fold(0, |&:accum, rx| accum + rx.recv().unwrap() ); # fn some_expensive_computation(_i: i32) -> i32 { 42 } ``` ## Backgrounding computations: Futures With `sync::Future`, rust has a mechanism for requesting a computation and getting the result later. The basic example below illustrates this. ```{rust,ignore} # #![allow(deprecated)] use std::sync::Future; # fn main() { # fn make_a_sandwich() {}; fn fib(n: u64) -> u64 { // lengthy computation returning an 64 12586269025 } let mut delayed_fib = Future::spawn(move || fib(50)); make_a_sandwich(); println!(\"fib(50) = {}\", delayed_fib.get()) # } ``` The call to `future::spawn` immediately returns a `future` object regardless of how long it takes to run `fib(50)`. You can then make yourself a sandwich while the computation of `fib` is running. The result of the execution of the method is obtained by calling `get` on the future. This call will block until the value is available (*i.e.* the computation is complete). Note that the future needs to be mutable so that it can save the result for next time `get` is called. Here is another example showing how futures allow you to background computations. The workload will be distributed on the available cores. 
```{rust,ignore} # #![allow(deprecated)] # use std::num::Float; # use std::sync::Future; fn partial_sum(start: u64) -> f64 { let mut local_sum = 0f64; for num in range(start*100000, (start+1)*100000) { local_sum += (num as f64 + 1.0).powf(-2.0); } local_sum } fn main() { let mut futures = Vec::from_fn(200, |ind| Future::spawn(move || partial_sum(ind))); let mut final_res = 0f64; for ft in futures.iter_mut() { final_res += ft.get(); } println!(\"\u03c0^2/6 is not far from : {}\", final_res); } ``` ## Sharing without copying: Arc To share data between threads, a first approach would be to only use channel as we have seen previously. A copy of the data to share would then be made for each thread. In some cases, this would add up to a significant amount of wasted memory and would require copying the same data more than necessary. To tackle this issue, one can use an Atomically Reference Counted wrapper (`Arc`) as implemented in the `sync` library of Rust. With an Arc, the data will no longer be copied for each thread. The Arc acts as a reference to the shared data and only this reference is shared and cloned. Here is a small example showing how to use Arcs. We wish to run concurrently several computations on a single large vector of floats. Each thread needs the full vector to perform its duty. ```{rust,ignore} use std::num::Float; use std::rand; use std::sync::Arc; fn pnorm(nums: &[f64], p: u64) -> f64 { nums.iter().fold(0.0, |a, b| a + b.powf(p as f64)).powf(1.0 / (p as f64)) } fn main() { let numbers = Vec::from_fn(1000000, |_| rand::random::()); let numbers_arc = Arc::new(numbers); for num in range(1, 10) { let thread_numbers = numbers_arc.clone(); spawn(move || { println!(\"{}-norm = {}\", num, pnorm(thread_numbers.as_slice(), num)); }); } } ``` The function `pnorm` performs a simple computation on the vector (it computes the sum of its items at the power given as argument and takes the inverse power of this value). The Arc on the vector is created by the line: ```{rust,ignore} # use std::rand; # use std::sync::Arc; # fn main() { # let numbers = Vec::from_fn(1000000, |_| rand::random::()); let numbers_arc = Arc::new(numbers); # } ``` and a clone is captured for each thread via a procedure. This only copies the wrapper and not its contents. Within the thread's procedure, the captured Arc reference can be used as a shared reference to the underlying vector as if it were local. ```{rust,ignore} # use std::rand; # use std::sync::Arc; # fn pnorm(nums: &[f64], p: u64) -> f64 { 4.0 } # fn main() { # let numbers=Vec::from_fn(1000000, |_| rand::random::()); # let numbers_arc = Arc::new(numbers); # let num = 4; let thread_numbers = numbers_arc.clone(); spawn(move || { // Capture thread_numbers and use it as if it was the underlying vector println!(\"{}-norm = {}\", num, pnorm(thread_numbers.as_slice(), num)); }); # } ``` # Handling thread panics Rust has a built-in mechanism for raising exceptions. The `panic!()` macro (which can also be written with an error string as an argument: `panic!( ~reason)`) and the `assert!` construct (which effectively calls `panic!()` if a boolean expression is false) are both ways to raise exceptions. When a thread raises an exception, the thread unwinds its stack\u2014running destructors and freeing memory along the way\u2014and then exits. Unlike exceptions in C++, exceptions in Rust are unrecoverable within a single thread: once a thread panics, there is no way to \"catch\" the exception. 
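A minimal sketch of the same panic-isolation behaviour using the present-day `std::thread` API (names assumed from the current standard library, not from this 2015-era guide): a panic in a child thread surfaces on the parent side as an `Err` from `join()`, and the parent keeps running.

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        panic!("oops!");
    });

    // The panic is confined to the child thread; the parent observes it
    // as an `Err` value from `join()` rather than crashing itself.
    let result = handle.join();
    assert!(result.is_err());
    println!("child thread panicked, parent is still running");
}
```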
While it isn't possible for a thread to recover from panicking, threads may notify each other if they panic. The simplest way of handling a panic is with the `try` function, which is similar to `spawn`, but immediately blocks and waits for the child thread to finish. `try` returns a value of type `Result>`. `Result` is an `enum` type with two variants: `Ok` and `Err`. In this case, because the type arguments to `Result` are `i32` and `()`, callers can pattern-match on a result to check whether it's an `Ok` result with an `i32` field (representing a successful result) or an `Err` result (representing termination with an error). ```{rust,ignore} # use std::thread::Thread; # fn some_condition() -> bool { false } # fn calculate_result() -> i32 { 0 } let result: Result> = Thread::spawn(move || { if some_condition() { calculate_result() } else { panic!(\"oops!\"); } }).join(); assert!(result.is_err()); ``` Unlike `spawn`, the function spawned using `try` may return a value, which `try` will dutifully propagate back to the caller in a [`Result`] enum. If the child thread terminates successfully, `try` will return an `Ok` result; if the child thread panics, `try` will return an `Error` result. [`Result`]: ../std/result/index.html > *Note:* A panicked thread does not currently produce a useful error > value (`try` always returns `Err(())`). In the > future, it may be possible for threads to intercept the value passed to > `panic!()`. But not all panics are created equal. In some cases you might need to abort the entire program (perhaps you're writing an assert which, if it trips, indicates an unrecoverable logic error); in other cases you might want to contain the panic at a certain boundary (perhaps a small piece of input from the outside world, which you happen to be processing in parallel, is malformed such that the processing thread cannot proceed). ", "commid": "rust_pr_21152"}], "negative_passages": []} {"query_id": "q-en-rust-c87d9c904977027d1103d332611514b63b8a72f472645ce10639575c51f6443a", "query": "I think this is fallout of , and it's somewhat unfortunate :( ! cc\nYup. Didn't think of that. So don't set the name of the main thread I suppose?\nYeah that may be the best option after all (sorry for recommending the opposite!)\nOkay, no problem, I'll send PR...\nThanks!", "positive_passages": [{"docid": "doc-en-rust-d9b3d5018dc9e274ddf5c6cd286230fd47f24be6ce9381708888608eb6c8c712", "text": "pub fn set(stack_bounds: (uint, uint), stack_guard: uint, thread: Thread) { THREAD_INFO.with(|c| assert!(c.borrow().is_none())); match thread.name() { Some(name) => unsafe { ::sys::thread::set_name(name); }, None => {} } THREAD_INFO.with(move |c| *c.borrow_mut() = Some(ThreadInfo{ stack_bounds: stack_bounds, stack_guard: stack_guard,", "commid": "rust_pr_21920"}], "negative_passages": []} {"query_id": "q-en-rust-c87d9c904977027d1103d332611514b63b8a72f472645ce10639575c51f6443a", "query": "I think this is fallout of , and it's somewhat unfortunate :( ! cc\nYup. Didn't think of that. 
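For context on the thread-naming exchange in this record, here is a minimal sketch with today's `std::thread::Builder` API (names assumed from the current standard library, not from the patch quoted below): the name is attached only to the spawned thread, leaving the main thread to whatever the runtime assigns.

```rust
use std::thread;

fn main() {
    // The spawned thread carries the name we give it...
    let handle = thread::Builder::new()
        .name("worker".to_string())
        .spawn(|| thread::current().name().map(str::to_owned))
        .unwrap();
    assert_eq!(handle.join().unwrap().as_deref(), Some("worker"));

    // ...while the main thread's name is whatever the runtime assigned.
    println!("main thread name: {:?}", thread::current().name());
}
```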
So don't set the name of the main thread I suppose?\nYeah that may be the best option after all (sorry for recommending the opposite!)\nOkay, no problem, I'll send PR...\nThanks!", "positive_passages": [{"docid": "doc-en-rust-960e2ed291ce1fcd7b18513439b141762d32371d93aadfbfbf77e902f2753ac3", "text": "use option::Option::{self, Some, None}; use result::Result::{Err, Ok}; use sync::{Mutex, Condvar, Arc}; use str::Str; use string::String; use rt::{self, unwind}; use old_io::{Writer, stdio};", "commid": "rust_pr_21920"}], "negative_passages": []} {"query_id": "q-en-rust-c87d9c904977027d1103d332611514b63b8a72f472645ce10639575c51f6443a", "query": "I think this is fallout of , and it's somewhat unfortunate :( ! cc\nYup. Didn't think of that. So don't set the name of the main thread I suppose?\nYeah that may be the best option after all (sorry for recommending the opposite!)\nOkay, no problem, I'll send PR...\nThanks!", "positive_passages": [{"docid": "doc-en-rust-f97bd8bc28e68710d1dfd40778b8aa1351debe88c607b1f5e2e9a255775b6700", "text": "unsafe { stack::record_os_managed_stack_bounds(my_stack_bottom, my_stack_top); } match their_thread.name() { Some(name) => unsafe { imp::set_name(name.as_slice()); }, None => {} } thread_info::set( (my_stack_bottom, my_stack_top), unsafe { imp::guard::current() },", "commid": "rust_pr_21920"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 
26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-eff4ab1745ca288c2c894f72ee23285603eacdca8a466ba28c17e3deb2e9b93e", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. trait Trait { type A; type B; } fn foo>() { } //~^ ERROR: unsupported cyclic reference between types/traits detected fn main() { } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-8d80eaed92d4f217b572add154eb183b72141a975d5fd7445481057ee55fee06", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn foo(t: U) { let y = t(); //~^ ERROR: expected function, found `U` } struct Bar; pub fn some_func() { let f = Bar(); //~^ ERROR: expected function, found `Bar` } fn main() { foo(|| { 1 }); } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-883d00add2809de358a9955f82c62f518615a99960bc1d700e56a1e957874f3a", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
trait A { type Output; fn a(&self) -> ::X; //~^ ERROR: use of undeclared associated type `A::X` } impl A for u32 { type Output = u32; fn a(&self) -> u32 { 0 } } fn main() { let a: u32 = 0; let b: u32 = a.a(); } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-872af9d980804f24c534df803d1c9d945fbe39f5bfe69660caaabc7db4ba75ff", "text": " -include ../tools.mk # Test output to be four # The original error only occurred when printing, not when comparing using assert! all: $(RUSTC) foo.rs -O [ `$(call RUN,foo)` = \"4\" ] ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-145fc9dae61fc0ea27c2555c6e5a88444324c5e445745a9a2caf6cf7cae6bd95", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn identity(a: &u32) -> &u32 { a } fn print_foo(f: &fn(&u32) -> &u32, x: &u32) { print!(\"{}\", (*f)(x)); } fn main() { let x = &4; let f: fn(&u32) -> &u32 = identity; // Didn't print 4 on optimized builds print_foo(&f, x); } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-437a88cc7bdddfd8ba8add1844d77dbc4e3feaa9f3cbec1f03691f8d80863c44", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(core)] extern crate core; use core::marker::Sync; static SARRAY: [i32; 1] = [11]; struct MyStruct { pub arr: *const [i32], } unsafe impl Sync for MyStruct {} static mystruct: MyStruct = MyStruct { arr: &SARRAY }; fn main() {} ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-4dfd27ea5764a27b6a5574bdadd8dda97d5d7f8fe961e3b6d0578a76a25276b1", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::ops::Add; fn f(a: T, b: T) -> ::Output { a + b } fn main() { println!(\"a + b is {}\", f::(100f32, 200f32)); } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-e489f6ff6d0f0d8da0702ba41e3e03e4536b5376fc632f887e9232202e7245a8", "query": "Rust Version: rustc 1.0.0-nightly ( 2015-02-12 00:38:24 +0000) Description: When compiling the following code with rustc the output is: ERROR:rbml::reader: failed to find block with tag 7 error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'explicit panic', C:botslavenightly-dist-rustc-win-64buildsrclibrbml stack backtrace: 1: 0x69bfbbeb - sys::backtrace::write::h57b621d79849130fitB 2: 0x69c13974 - rt::unwind::register::h686925f8fa692ab9GSJ 3: 0x69b836ef - rt::unwind::beginunwindinner::hc2a974b1bdaa4639YPJ 4: 0x302da8 - reader::maybegetdoc::h2547f886a15a227erHa 5: 0x301d2f - reader::getdoc::hc198b8918d0de179nLa 6: 0x1138635 - metadata::decoder::itemtype::h7f6e008c4a79af18mck 7: 0x11480d9 - metadata::decoder::gettype::h9620389d7a7d23c0Xok 8: 0x10fb7a8 - metadata::csearch::gettype::h3af5a6a2534b2d8bCkn 9: 0x6d605059 - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 10: 0x6d5f49ab - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 11: 0x6d5f64ef - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 12: 0x6d5978bd - astconv::asttytoty::h1dadfccc9ddb5e90VHv 13: 0x6d5ef523 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 14: 0x6d5fa734 - rscope::ShiftedRscope<'r.RegionScope::anonregions::h0a5ed4a2d46d3aeeqlu 15: 0x6d61330f - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::tcx::hf449f596d422460aAFw 16: 0x6d60a79a - collect::CollectCtxt<'a, 'tcx.AstConv<'tcx::getitemtypescheme::he174d5be00f2627cMFw 17: 0x6d643867 - checkcrate::h72731a5b4298723fkrB 18: 0x6d64131c - checkcrate::h72731a5b4298723fkrB 19: 0x70b1eb58 - driver::phase3runanalysispasses::hc863f28a1bd50b11SGa 20: 0x70b02618 - driver::compileinput::hd580d8c011d24ba3Eba 21: 0x70bcfc4c - runcompiler::hb48a33a057cced0e5bc 22: 0x70bcd85b - run::h3df2025cf41d43b3Ibc 23: 0x70bcc56a - run::h3df2025cf41d43b3Ibc 24: 0x69c4180c - rusttry 25: 0x69c417e9 - rusttry 26: 0x70bccbdc - run::h3df2025cf41d43b3Ibc 27: 0x69c03ae2 - sys::tcp::TcpListener::bind::h2319ec78a0f49dc2faF 28: 0x770f59ed - BaseThreadInitThunk\nSame error as .", "positive_passages": [{"docid": "doc-en-rust-4b5e11890484a5a1148e3fe260e08a9b3c379d2d7bf04219d56e3ab3bad338b9", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. macro_rules! items { () => { type A = (); fn a() {} } } trait Foo { type A; fn a(); } impl Foo for () { items!(); } fn main() { } ", "commid": "rust_pr_25161"}], "negative_passages": []} {"query_id": "q-en-rust-0a396996c5e71e5549afbc67ed4bc7561878e9d205281062d3053330284332cc", "query": "Super easy to implement for anyone interested. The implementation in the RFC is sound, although the new code should be abstracted out into a function that the macro calls for inline and macro-sanity reasons. The function needs to be marked as public/stable, but should otherwise be marked as #[doc(hidden)], as it is not intended to be called directly -- only by the macro.\nWilling to mentor on details.\nI'd like to give it a shot :)\nBy the way: The seems very sparse. If it's okay, I'd expand that a bit while I'm on it, or should that go into a seperate PR?\nYeah, that's totally reasonable to do at the same time!\nGreat! What would be the preferred place where to put the new function? 
I've it to as for now (currently compiling it)\nSince it's supposed to only be used by the macro (and will be doc(hidden)), it makes sense to me to put it in Putting it in makes sense, too, though.\nhas a section labeled \"Internal methods and functions\" that seemed to fit well.", "positive_passages": [{"docid": "doc-en-rust-c846f7be92dd23d872dce556725d6d6250338b62a6e74e8353a5430c19c629fc", "text": "// except according to those terms. /// Creates a `Vec` containing the arguments. /// /// `vec!` allows `Vec`s to be defined with the same syntax as array expressions. /// There are two forms of this macro: /// /// - Create a `Vec` containing a given list of elements: /// /// ``` /// let v = vec![1, 2, 3]; /// assert_eq!(v[0], 1); /// assert_eq!(v[1], 2); /// assert_eq!(v[2], 3); /// ``` /// /// - Create a `Vec` from a given element and size: /// /// ``` /// let v = vec![1; 3]; /// assert_eq!(v, vec![1, 1, 1]); /// ``` /// /// Note that unlike array expressions this syntax supports all elements /// which implement `Clone` and the number of elements doesn't have to be /// a constant. #[macro_export] #[stable(feature = \"rust1\", since = \"1.0.0\")] macro_rules! vec { ($x:expr; $y:expr) => ( <[_] as $crate::slice::SliceExt>::into_vec( $crate::boxed::Box::new([$x; $y])) ($elem:expr; $n:expr) => ( $crate::vec::from_elem($elem, $n) ); ($($x:expr),*) => ( <[_] as $crate::slice::SliceExt>::into_vec(", "commid": "rust_pr_22455"}], "negative_passages": []} {"query_id": "q-en-rust-0a396996c5e71e5549afbc67ed4bc7561878e9d205281062d3053330284332cc", "query": "Super easy to implement for anyone interested. The implementation in the RFC is sound, although the new code should be abstracted out into a function that the macro calls for inline and macro-sanity reasons. The function needs to be marked as public/stable, but should otherwise be marked as #[doc(hidden)], as it is not intended to be called directly -- only by the macro.\nWilling to mentor on details.\nI'd like to give it a shot :)\nBy the way: The seems very sparse. If it's okay, I'd expand that a bit while I'm on it, or should that go into a seperate PR?\nYeah, that's totally reasonable to do at the same time!\nGreat! What would be the preferred place where to put the new function? 
I've it to as for now (currently compiling it)\nSince it's supposed to only be used by the macro (and will be doc(hidden)), it makes sense to me to put it in Putting it in makes sense, too, though.\nhas a section labeled \"Internal methods and functions\" that seemed to fit well.", "positive_passages": [{"docid": "doc-en-rust-a1f6ae7722aba151ed6c38dc4da927a6481a1bda96277ae3230685ddb69c4653", "text": "} } #[doc(hidden)] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn from_elem(elem: T, n: usize) -> Vec { unsafe { let mut v = Vec::with_capacity(n); let mut ptr = v.as_mut_ptr(); // Write all elements except the last one for i in 1..n { ptr::write(ptr, Clone::clone(&elem)); ptr = ptr.offset(1); v.set_len(i); // Increment the length in every step in case Clone::clone() panics } if n > 0 { // We can write the last element directly without cloning needlessly ptr::write(ptr, elem); v.set_len(n); } v } } //////////////////////////////////////////////////////////////////////////////// // Common trait implementations for Vec ////////////////////////////////////////////////////////////////////////////////", "commid": "rust_pr_22455"}], "negative_passages": []} {"query_id": "q-en-rust-0a396996c5e71e5549afbc67ed4bc7561878e9d205281062d3053330284332cc", "query": "Super easy to implement for anyone interested. The implementation in the RFC is sound, although the new code should be abstracted out into a function that the macro calls for inline and macro-sanity reasons. The function needs to be marked as public/stable, but should otherwise be marked as #[doc(hidden)], as it is not intended to be called directly -- only by the macro.\nWilling to mentor on details.\nI'd like to give it a shot :)\nBy the way: The seems very sparse. If it's okay, I'd expand that a bit while I'm on it, or should that go into a seperate PR?\nYeah, that's totally reasonable to do at the same time!\nGreat! What would be the preferred place where to put the new function? I've it to as for now (currently compiling it)\nSince it's supposed to only be used by the macro (and will be doc(hidden)), it makes sense to me to put it in Putting it in makes sense, too, though.\nhas a section labeled \"Internal methods and functions\" that seemed to fit well.", "positive_passages": [{"docid": "doc-en-rust-092e4cb5fdaa34b9d0e9d374311f0cc4b40bfcd9aafe8749a3704875db8510bb", "text": "assert_eq!(vec![1; 2], vec![1, 1]); assert_eq!(vec![1; 1], vec![1]); assert_eq!(vec![1; 0], vec![]); // from_elem syntax (see RFC 832) let el = Box::new(1); let n = 3; assert_eq!(vec![el; n], vec![Box::new(1), Box::new(1), Box::new(1)]); }", "commid": "rust_pr_22455"}], "negative_passages": []} {"query_id": "q-en-rust-a73f497e850169e927f9eafb7930dc3eda6323c95f3d9e02acc1a22a4f872c50", "query": "Could someone change the bad title? I am not good for english.\nSeems like more of a diagnostics issue. is a private constructor, but we're erroring out on type-checking before we realize this. The fix for this specific code is to replace with\nTriage: this now has a different, yet still confusing, error:\nThat error is correct; but either way, the other error you get today is this, which while not ideal, is possibly the best we can do. I guess we could look for a method with a single parameter? But that seems hacky.\nIt is, but it is a common enough pattern and such a big enhancement in end user experience that I feel it'd be worth it to add it, specially when the ctor is not public. 
Then, the output should be: Similar output, but not quite the same cause in .\nIn order to implement this there's a bit of roadblock at the moment. The verification if a struct has a given method is done in , while this error is being emitted in . I believe the error is being bubbled up, but it would have to be delayed on its entirety to the type check step in order to verify wether exists. :-/ You can look for traits that have a given method (), but when I tried it with it returned an empty . We could provide with a generic suggestion about trying out , but I would much rather prefer if we could have some certainty that the method exists instead of potentially misleading the user.\nCurrent output:\nThe error message seems pretty good... let's close this?\nI still think that . Definitely low priority now given the other improvements on the error.\n: We still need to look for and suggest it.", "positive_passages": [{"docid": "doc-en-rust-361c42c51421526fea9fc1c5134c88cb666bdd8d9bfd5483047ba39e61987e20", "text": "err.set_primary_message( \"cannot initialize a tuple struct which contains private fields\", ); if !def_id.is_local() && self .r .tcx .inherent_impls(def_id) .iter() .flat_map(|impl_def_id| { self.r.tcx.provided_trait_methods(*impl_def_id) }) .any(|assoc| !assoc.fn_has_self_parameter && assoc.name == sym::new) { // FIXME: look for associated functions with Self return type, // instead of relying only on the name and lack of self receiver. err.span_suggestion_verbose( span.shrink_to_hi(), \"you might have meant to use the `new` associated function\", \"::new\".to_string(), Applicability::MaybeIncorrect, ); } // Use spans of the tuple struct definition. self.r.field_def_ids(def_id).map(|field_ids| { field_ids", "commid": "rust_pr_116858"}], "negative_passages": []} {"query_id": "q-en-rust-a73f497e850169e927f9eafb7930dc3eda6323c95f3d9e02acc1a22a4f872c50", "query": "Could someone change the bad title? I am not good for english.\nSeems like more of a diagnostics issue. is a private constructor, but we're erroring out on type-checking before we realize this. The fix for this specific code is to replace with\nTriage: this now has a different, yet still confusing, error:\nThat error is correct; but either way, the other error you get today is this, which while not ideal, is possibly the best we can do. I guess we could look for a method with a single parameter? But that seems hacky.\nIt is, but it is a common enough pattern and such a big enhancement in end user experience that I feel it'd be worth it to add it, specially when the ctor is not public. Then, the output should be: Similar output, but not quite the same cause in .\nIn order to implement this there's a bit of roadblock at the moment. The verification if a struct has a given method is done in , while this error is being emitted in . I believe the error is being bubbled up, but it would have to be delayed on its entirety to the type check step in order to verify wether exists. :-/ You can look for traits that have a given method (), but when I tried it with it returned an empty . We could provide with a generic suggestion about trying out , but I would much rather prefer if we could have some certainty that the method exists instead of potentially misleading the user.\nCurrent output:\nThe error message seems pretty good... let's close this?\nI still think that . 
Definitely low priority now given the other improvements on the error.\n: We still need to look for and suggest it.", "positive_passages": [{"docid": "doc-en-rust-b10973cba529967a311942b7fa3dcae9ab76746f476a48735eb5068d4461f9f9", "text": " // run-rustfix #![allow(dead_code)] struct U { wtf: Option>>, x: T, } fn main() { U { wtf: Some(Box::new(U { //~ ERROR cannot initialize a tuple struct which contains private fields wtf: None, x: (), })), x: () }; } ", "commid": "rust_pr_116858"}], "negative_passages": []} {"query_id": "q-en-rust-a73f497e850169e927f9eafb7930dc3eda6323c95f3d9e02acc1a22a4f872c50", "query": "Could someone change the bad title? I am not good for english.\nSeems like more of a diagnostics issue. is a private constructor, but we're erroring out on type-checking before we realize this. The fix for this specific code is to replace with\nTriage: this now has a different, yet still confusing, error:\nThat error is correct; but either way, the other error you get today is this, which while not ideal, is possibly the best we can do. I guess we could look for a method with a single parameter? But that seems hacky.\nIt is, but it is a common enough pattern and such a big enhancement in end user experience that I feel it'd be worth it to add it, specially when the ctor is not public. Then, the output should be: Similar output, but not quite the same cause in .\nIn order to implement this there's a bit of roadblock at the moment. The verification if a struct has a given method is done in , while this error is being emitted in . I believe the error is being bubbled up, but it would have to be delayed on its entirety to the type check step in order to verify wether exists. :-/ You can look for traits that have a given method (), but when I tried it with it returned an empty . We could provide with a generic suggestion about trying out , but I would much rather prefer if we could have some certainty that the method exists instead of potentially misleading the user.\nCurrent output:\nThe error message seems pretty good... let's close this?\nI still think that . Definitely low priority now given the other improvements on the error.\n: We still need to look for and suggest it.", "positive_passages": [{"docid": "doc-en-rust-480784e19324533b281f2f5f94ec9acd74843136e28f65aa12ef30ece8925688", "text": " // run-rustfix #![allow(dead_code)] struct U { wtf: Option>>, x: T, } fn main() { U { wtf: Some(Box(U { //~ ERROR cannot initialize a tuple struct which contains private fields wtf: None, x: (), })), x: () }; } ", "commid": "rust_pr_116858"}], "negative_passages": []} {"query_id": "q-en-rust-a73f497e850169e927f9eafb7930dc3eda6323c95f3d9e02acc1a22a4f872c50", "query": "Could someone change the bad title? I am not good for english.\nSeems like more of a diagnostics issue. is a private constructor, but we're erroring out on type-checking before we realize this. The fix for this specific code is to replace with\nTriage: this now has a different, yet still confusing, error:\nThat error is correct; but either way, the other error you get today is this, which while not ideal, is possibly the best we can do. I guess we could look for a method with a single parameter? But that seems hacky.\nIt is, but it is a common enough pattern and such a big enhancement in end user experience that I feel it'd be worth it to add it, specially when the ctor is not public. Then, the output should be: Similar output, but not quite the same cause in .\nIn order to implement this there's a bit of roadblock at the moment. 
The verification if a struct has a given method is done in , while this error is being emitted in . I believe the error is being bubbled up, but it would have to be delayed on its entirety to the type check step in order to verify wether exists. :-/ You can look for traits that have a given method (), but when I tried it with it returned an empty . We could provide with a generic suggestion about trying out , but I would much rather prefer if we could have some certainty that the method exists instead of potentially misleading the user.\nCurrent output:\nThe error message seems pretty good... let's close this?\nI still think that . Definitely low priority now given the other improvements on the error.\n: We still need to look for and suggest it.", "positive_passages": [{"docid": "doc-en-rust-5fd3ece0b09d2d1b6cbf911345f4cc6b13765b696fa4281353aca1ca9599f1f7", "text": " error[E0423]: cannot initialize a tuple struct which contains private fields --> $DIR/suggest-box-new.rs:9:19 | LL | wtf: Some(Box(U { | ^^^ | note: constructor is not visible here due to private fields --> $SRC_DIR/alloc/src/boxed.rs:LL:COL | = note: private field | = note: private field help: you might have meant to use the `new` associated function | LL | wtf: Some(Box::new(U { | +++++ error: aborting due to previous error For more information about this error, try `rustc --explain E0423`. ", "commid": "rust_pr_116858"}], "negative_passages": []} {"query_id": "q-en-rust-f7845a2c58b08fdb0b11fb881953c779613566f2e538333900030084a0f0c902", "query": "is useless () so skip it.\nNeither of the macro chapters have sections for \"common macros\". Should we add one? Presumably there would be quite a few of them.\nThere is for the macros in std, plus some of the built-in syntax extensions. It's not that easy to find; there used to be a module in the rustdoc output but no longer :( I'm not sure exactly what should be documented where. The built-in syntax extensions like might deserve special mention, as they give you capabilities within that you otherwise would not have. cc\nI think , , and also deserve mentions.\nI would like to take on the task of documenting all the common macros. I've started them and appreciate some input on how big the final list should be, and if every macro needs an example (or three). What about the most common one, prinln! ? Should that be in there too?\nIt's not clear to me what doing this issue actually means. Moving some docs between some places?\nThe original issue was to add several common macro definitions with examples in TRPL, which I agreed with. I'm not sure what's up with the edits.", "positive_passages": [{"docid": "doc-en-rust-7ae05d49d538852d2c81389c5234934104105cb69382c90b8f5a3765d934716c", "text": "Exercise: use macros to reduce duplication in the above definition of the `bct!` macro. # Common macros Here are some common macros you\u2019ll see in Rust code. ## panic! This macro causes the current thread to panic. You can give it a message to panic with: ```rust,no_run panic!(\"oh no!\"); ``` ## vec! The `vec!` macro is used throughout the book, so you\u2019ve probably seen it already. It creates `Vec`s with ease: ```rust let v = vec![1, 2, 3, 4, 5]; ``` It also lets you make vectors with repeating values. For example, a hundred zeroes: ```rust let v = vec![0; 100]; ``` ## assert! and assert_eq! These two macros are used in tests. `assert!` takes a boolean, and `assert_eq!` takes two values and compares them. Truth passes, success `panic!`s. Like this: ```rust,no_run // A-ok! 
assert!(true); assert_eq!(5, 3 + 2); // nope :( assert!(5 < 3); assert_eq!(5, 3); ``` ## try! `try!` is used for error handling. It takes something that can return a `Result`, and gives `T` if it\u2019s a `Ok`, and `return`s with the `Err(E)` if it\u2019s that. Like this: ```rust,no_run use std::fs::File; fn foo() -> std::io::Result<()> { let f = try!(File::create(\"foo.txt\")); Ok(()) } ``` This is cleaner than doing this: ```rust,no_run use std::fs::File; fn foo() -> std::io::Result<()> { let f = File::create(\"foo.txt\"); let f = match f { Ok(t) => t, Err(e) => return Err(e), }; Ok(()) } ``` ## unreachable! This macro is used when you think some code should never execute: ```rust if false { unreachable!(); } ``` Sometimes, the compiler may make you have a different branch that you know will never, ever run. In these cases, use this macro, so that if you end up wrong, you\u2019ll get a `panic!` about it. ```rust let x: Option = None; match x { Some(_) => unreachable!(), None => println!(\"I know x is None!\"), } ``` ## unimplemented! The `unimplemented!` macro can be used when you\u2019re trying to get your functions to typecheck, and don\u2019t want to worry about writing out the body of the function. One example of this situation is implementing a trait with multiple required methods, where you want to tackle one at a time. Define the others as `unimplemented!` until you\u2019re ready to write them. # Procedural macros If Rust's macro system can't do what you need, you may want to write a", "commid": "rust_pr_24516"}], "negative_passages": []} {"query_id": "q-en-rust-39f335b277f4d76486956e01914948b05750ea06d64b9b7407b6e9fecd997105", "query": "On rustc 1.0.0-nightly ( 2015-02-21) (built 2015-02-21) the follow causes an ICE: Full backtrace: error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'assertion failed: != ast::LOCALCRATE', /Users/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc- stack backtrace: 1: 0x10b7aebe3 - sys::backtrace::write::hc8e3cee73e646c590nC 2: 0x10b7dcbe5 - panicking::onpanic::h00b47941f5bc8a02HOL 3: 0x10b706418 - rt::unwind::beginunwindinner::h539538ef7f909326UvL 4: 0x108472895 - rt::unwind::beginunwind::h7610573592537740396 5: 0x1087eb7dc - middle::ty::lookuptraitdef::h00103e323589b59cNla 6: 0x108812687 - middle::ty::predicatesfortraitref::h59dd4f8104908ae1Vna 7: 0x10880c125 - middle::traits::util::Elaborator<'cx, 'tcx.Iterator::next::h9dd8ee47ae7ea8d3QkV 8: 0x108811fd1 - middle::traits::util::Supertraits<'cx, 'tcx.Iterator::next::h2583ba3461accfb9rnV 9: 0x108109246 - astconv::asttytoty::closure. 10: 0x1080a36cb - astconv::asttytoty::hb64a583bfbbc7b64Ncw 11: 0x1080ff44d - astconv::asttyargtoty::h51a53c78e1218ba4sbw 12: 0x1080feff0 - vec::Vec // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
trait Expr : PartialEq { //~^ ERROR: unsupported cyclic reference between types/traits detected type Item = Expr; } fn main() {} ", "commid": "rust_pr_24829"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. [x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. 
Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. 
I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-d31493c45816240e4bcef29eff24b0eb3663c8c29973ba291ab5ae87fa5427c9", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that `quote`-related macro are gated by `quote` feature gate. // (To sanity-check the code, uncomment this.) // #![feature(quote)] // FIXME the error message that is current emitted seems pretty bad. #![feature(rustc_private)] #![allow(dead_code, unused_imports, unused_variables)] #[macro_use] extern crate syntax; use syntax::ast; use syntax::codemap::Span; use syntax::parse; struct ParseSess; impl ParseSess { fn cfg(&self) -> ast::CrateConfig { loop { } } fn parse_sess<'a>(&'a self) -> &'a parse::ParseSess { loop { } } fn call_site(&self) -> Span { loop { } } fn ident_of(&self, st: &str) -> ast::Ident { loop { } } fn name_of(&self, st: &str) -> ast::Name { loop { } } } pub fn main() { let ecx = &ParseSess; let x = quote_tokens!(ecx, 3); //~ ERROR macro undefined: 'quote_tokens!' let x = quote_expr!(ecx, 3); //~ ERROR macro undefined: 'quote_expr!' let x = quote_ty!(ecx, 3); //~ ERROR macro undefined: 'quote_ty!' let x = quote_method!(ecx, 3); //~ ERROR macro undefined: 'quote_method!' let x = quote_item!(ecx, 3); //~ ERROR macro undefined: 'quote_item!' let x = quote_pat!(ecx, 3); //~ ERROR macro undefined: 'quote_pat!' let x = quote_arm!(ecx, 3); //~ ERROR macro undefined: 'quote_arm!' let x = quote_stmt!(ecx, 3); //~ ERROR macro undefined: 'quote_stmt!' let x = quote_matcher!(ecx, 3); //~ ERROR macro undefined: 'quote_matcher!' let x = quote_attr!(ecx, 3); //~ ERROR macro undefined: 'quote_attr!' 
} ", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. [x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... 
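The gated tests collected in the surrounding records (the `quote_*` macros above, `link_args` and `link_llvm_intrinsics` below) all share one compile-fail shape: use the gated item in a crate that deliberately omits the `#![feature(...)]` attribute, then assert on the gate error and its HELP note with compiletest's `//~` annotations. A minimal sketch of that shape follows; the feature name, attribute, and error wording are placeholders for illustration, not taken from any specific gate.

```rust
// Sketch only: `some_gate` / `gated_attribute` and the error text are hypothetical.
// The crate intentionally has no `#![feature(some_gate)]`, so the gate must fire.

// (To sanity-check such a test, one would uncomment the attribute below.)
// #![feature(some_gate)]

#[gated_attribute] //~ ERROR use of the `gated_attribute` attribute is experimental
//~| HELP add #![feature(some_gate)] to the crate attributes to enable
fn uses_gated_item() {}

fn main() {}
```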
Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. 
it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-e29063216d9571f579e98388cea979d26d42e33f7cc9c9f3407ee8a216618ab4", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that `#[link_args]` attribute is gated by `link_args` // feature gate. #[link_args = \"aFdEfSeVEEE\"] extern {} //~^ ERROR the `link_args` attribute is not portable across platforms //~| HELP add #![feature(link_args)] to the crate attributes to enable fn main() { } ", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-9c3bfe185f25344abecb65dcd699bfbd6b2df68b42925229c0415bef6c2ba5a1", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. extern { #[link_name = \"llvm.sqrt.f32\"] fn sqrt(x: f32) -> f32; //~^ ERROR linking to LLVM intrinsics is experimental //~| HELP add #![feature(link_llvm_intrinsics)] to the crate attributes } fn main(){ } ", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-a6db0b96f36c71752304dc0a0d6b0bfe5b4e68e907c4228c9a5a699bc61ceb6b", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. // Test that `#[plugin_registrar]` attribute is gated by `plugin_registrar` // feature gate. // the registration function isn't typechecked yet #[plugin_registrar] pub fn registrar() {} //~ ERROR compiler plugins are experimental pub fn registrar() {} //~^ ERROR compiler plugins are experimental //~| HELP add #![feature(plugin_registrar)] to the crate attributes to enable fn main() {}", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-fea19db7908c1ea6bb6da5449e9947346ee1cbc496b6e0917b68c197e598e238", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that `#[thread_local]` attribute is gated by `thread_local` // feature gate. // // (Note that the `thread_local!` macro is explicitly *not* gated; it // is given permission to expand into this unstable attribute even // when the surrounding context does not have permission to use it.) #[thread_local] //~ ERROR `#[thread_local]` is an experimental feature static FOO: i32 = 3; pub fn main() { FOO.with(|x| { println!(\"x: {}\", x); }); } ", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-aae6c574fcfc9f67407dd092d3493dc1d299bcf4a19e17179714d3613411b9c6", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that `#[unsafe_destructor]` attribute is gated by `unsafe_destructor` // feature gate. struct D<'a>(&'a u32); #[unsafe_destructor] impl<'a> Drop for D<'a> { //~^ ERROR `#[unsafe_destructor]` allows too many unsafe patterns fn drop(&mut self) { } } //~^ HELP: add #![feature(unsafe_destructor)] to the crate attributes to enable pub fn main() { } ", "commid": "rust_pr_23226"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-dd7cac15d7bdada0d257bd23450c7701b0836b090228c3d43de2eb5d20e5199e", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that macro reexports item are gated by `macro_reexport` feature gate. // aux-build:macro_reexport_1.rs // ignore-stage1 #![crate_type = \"dylib\"] #[macro_reexport(reexported)] #[macro_use] #[no_link] extern crate macro_reexport_1; //~^ ERROR macros reexports are experimental and possibly buggy //~| HELP add #![feature(macro_reexport)] to the crate attributes to enable ", "commid": "rust_pr_23578"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-adf7055a76fa6d500a2e0433e82ab39072f166eb2c18e9461e7fdcab07053f0b", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that patterns including the box syntax are gated by `box_patterns` feature gate. fn main() { let x = Box::new(1); match x { box 1 => (), //~^ box pattern syntax is experimental //~| add #![feature(box_patterns)] to the crate attributes to enable _ => () }; } ", "commid": "rust_pr_23578"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visibleprivatetypes, i found src/test/run-pass/visible-private-types-feature- Not sure if item this should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason. 
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-1e20bd17d08dbf7dc5e37d4e6082650fd63ecd2f22d146357f67ed33bcd1e629", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that the use of the box syntax is gated by `box_syntax` feature gate. fn main() { let x = box 3; //~^ ERROR box expression syntax is experimental; you can call `Box::new` instead. //~| HELP add #![feature(box_syntax)] to the crate attributes to enable } ", "commid": "rust_pr_23578"}], "negative_passages": []} {"query_id": "q-en-rust-747708f88c34e8fa9ee4d0a29901c559c06a0a01e31d5bd7e4fb42efc8c9fd1d", "query": "I was sloppy when I implemented and did not include a test. So, surprise, a follow-on commit a few days later broke it. As self-punishment for neglecting to include a regression test, I am assigning myself the task of reviewing our test suite to make sure that every feature-gate has a test. Here is a transcribed list of feature gates, based on , that I have annotated according to whether the feature is Accepted/Removed (and thus needs no tests), Active/Deprecated but has tests already (marked with an , or Active/Deprecated but has no tests that I saw via a cursory grep/skim. - , - , - , [x] - (\"asm\", \"1.0.0\", Active), - , [x] - (\"nonasciiidents\", \"1.0.0\", Active), [x] - (\"threadlocal\", \"1.0.0\", Active), [x] - (\"linkargs\", \"1.0.0\", Active), - , [x] - (\"pluginregistrar\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate. From the error message it seems like the gate is firing. 
[x] - (\"logsyntax\", \"1.0.0\", Active), [x] - (\"tracemacros\", \"1.0.0\", Active), [x] - (\"concatidents\", \"1.0.0\", Active), [x] - (\"unsafedestructor\", \"1.0.0\", Active), [x] - (\"intrinsics\", \"1.0.0\", Active), [x] - (\"langitems\", \"1.0.0\", Active), [x] - (\"simd\", \"1.0.0\", Active), - , [x] - (\"quote\", \"1.0.0\", Active), [x] - (\"linkllvmintrinsics\", \"1.0.0\", Active), [x] - (\"linkage\", \"1.0.0\", Active), - , - , [ ] - (\"rustcdiagnosticmacros\", \"1.0.0\", Active), [ ] - (\"unboxedclosures\", \"1.0.0\", Active), - , [ ] - (\"advancedslicepatterns\", \"1.0.0\", Active), - , - , [ ] - (\"visibleprivatetypes\", \"1.0.0\", Active), - , [x] - (\"boxsyntax\", \"1.0.0\", Active), gated-box- [ ] - (\"onunimplemented\", \"1.0.0\", Active), [x] - (\"simdffi\", \"1.0.0\", Active), gated-simd- - , - , [ ] - (\"plugin\", \"1.0.0\", Active), but it is not clear whether this is testing the gate or the gate [ ] - (\"start\", \"1.0.0\", Active), [ ] - (\"main\", \"1.0.0\", Active), - , - , [ ] - (\"oldorphancheck\", \"1.0.0\", Deprecated), [ ] - (\"oldimplcheck\", \"1.0.0\", Deprecated), [ ] - (\"optinbuiltintraits\", \"1.0.0\", Active), [x] - (\"intuint\", \"1.0.0\", Active), [x] - (\"macroreexport\", \"1.0.0\", Active), gated-macro- - , - , [x] - (\"stagedapi\", \"1.0.0\", Active), [ ] - (\"unmarkedapi\", \"1.0.0\", Active), [x] - (\"nostd\", \"1.0.0\", Active), [x] - (\"boxpatterns\", \"1.0.0\", Active), gated-box- [x] - (\"unsafenodropflag\", \"1.0.0\", Active), [x] - (\"customattribute\", \"1.0.0\", Active), [x] - (\"customderive\", \"1.0.0\", Active), single-derive- [ ] - (\"rustcattrs\", \"1.0.0\", Active), [x] - (\"staticassert\", \"1.0.0\", Active), [x] - (\"allowinternal_unstable\", \"1.0.0\", Active),\n(but hey, if other people want to add such missing tests in the meantime, while I am asleep, I will not mind. :)\nIn case its not clear: If you want to help, just find a case above that does not have a check-mark and also does not have a pull request number attached beneath it. (And I guess write your name so that other people do not waste time adding a test for the same feature gate).\nThe test for is located in\nI am not surprised that I \"overlooked\" that ... the expected error message prefix does not say anything about a feature gate. Let me just confirm that the whole message does ... Update: Okay, the help output for that test () does mention the feature gate: So I have now checked off that box, though I wonder if I should add this help message to the expected test output there.\nI was wondering the same thing about the help message. It is not very clear what that test is doing, the only hint was the filename. From reading some help messages were removed, would there be any concern for adding help messages to the expected test output in this case? It could be worth adding a comment in there similar to the other gated tests: In addition, would it be worth going as far as to rename to (and doing the same for any other gated tests that do not have a prefix in their filename)?\nIn relation to the feature gate list above I'm going to start off by looking into the following: update: removed items that had tests, and , - - -\nah yes, is (correctly) why the help mag is missing .... hmm .... 
that complicates things\nI've done a pass through of the unchecked feature gates and have found tests for the following: () () () () () () Could these be marked as complete?\nthanks!\nOk, so I've had a look through everything, here's my findings: I have some questions below for the three outstanding feature gates (at the very bottom of this comment) and was wondering if you could help confirm the status of those feature gate tests? My previous listed feature gates that had existing tests. I've since found a few more to add to that list. Here is the full list of feature gates with existing tests. New findings are in bold: : : : : : : : : : The following feature gates were removed after this issue was initially created. Since they are no longer around they can be considered complete or crossed out of the task list above: : see : see The following feature gates appear to have duplicate tests introduced in , the initial tests were in and : - The following are not in the above task list as they have been since the issue was initially created: : see : see The following are remaining feature gates needing tests (excluding the feature gates I had questions for, see below). I'm cooking up a PR for these: - - - The following items are ones that I had trouble completing because I have questions and need some feedback: - The was introduced in , and tests for were replaced with tests for missing stability attributes: . In addition, it seems that there is no way to trigger unless there is a bug in a library or the compiler itself. Are the tests in missing- enough to consider complete? These appear to be the we're looking for relating to , could someone confirm this? This appears to be the feature gate test, but was removed in () Should we reinstate the test in this issue or let it be handled by which is in the works?\nFor visible_private_types, I found src/test/run-pass/visible-private-types-feature- Not sure if this item should be ticked as well.\nThere appear to be a lot of new unstable features since this was touched. It seems hopeless to ever catch up on this issue without an automatic tidy script verifying the tests exist. Do you want this issue to continue? Should we try to augment tidy to check the existence of compile-fail tests mentioning unstable features?\nI agree that it seems like we'll never be able to close this without having some sort of automation (e.g. a tidy script, as you say) to verify that the tests exist. I don't have any immediate idea about the best way to write such a script. I guess it could just be based on some naming convention for the test itself; i.e. it would be just a heuristic check, not a true verification of it.\nI think it could be written like so: after , search everything in for . If this makes sense I can mentor.\nbut requiring the test to explicitly deny the feature defeats the point, doesn't it? That is, the goal here is to catch code that makes use of a feature without explicitly opting into that feature. .. Though I suppose you might be thinking that if there is enough infrastructure present to support some kind of deny(feature) attribute, then that is a sign that the necessary gating is in place, regardless of what the default lint settings actually are?\nGood point. I think yes, I was considering that just confirming that the compiler has code that correctly can deny the feature is good enough, but there is some room for error by making the deny explicit. The attribute also wouldn't actually confirm that the compile failed for the correct reason.
We could also just require tests to annotate in some other way that yes, it is testing that a particular feature is denied. As an extension to my previous suggestion, the test runner could take the test case that is annotated to be a 'deny feature' test, run it once with then run it again with no feature attribute, and compare the results.\nSimilarly to the above, but to avoid creating another custom test runner. The featureck lint could require that and contain pairs of feature tests that are identical, one with and one without.\nWith fixed, maybe this issue can be closed as resolved?\nThanks", "positive_passages": [{"docid": "doc-en-rust-f26dde2dd70ba352c4083756d14f8b95d87c01e2f8d05f32c935e7ea04bbe898", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that the use of smid types in the ffi is gated by `smid_ffi` feature gate. #![feature(simd)] #[repr(C)] #[derive(Copy)] #[simd] pub struct f32x4(f32, f32, f32, f32); #[allow(dead_code)] extern { fn foo(x: f32x4); //~^ ERROR use of SIMD type `f32x4` in FFI is highly experimental and may result in invalid code //~| HELP add #![feature(simd_ffi)] to the crate attributes to enable } fn main() {} ", "commid": "rust_pr_23578"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-eb52f6d7b855c198e6886991ef039f6a62a9de867d1b2e0f27ab801a08bbaa5f", "text": "```rust let captured_var = 10; let closure_no_args = |&:| println!(\"captured_var={}\", captured_var); let closure_no_args = || println!(\"captured_var={}\", captured_var); let closure_args = |&: arg: i32| -> i32 { let closure_args = |arg: i32| -> i32 { println!(\"captured_var={}, arg={}\", captured_var, arg); arg // Note lack of semicolon after 'arg' };", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-730051034c93414da028693cb3e8eb463fae90d7915b8a1d18fc6e96f93d0242", "text": "} } (Ok(const_int(a)), Ok(const_int(b))) => { let is_a_min_value = |&:| { let is_a_min_value = || { let int_ty = match ty::expr_ty_opt(tcx, e).map(|ty| &ty.sty) { Some(&ty::ty_int(int_ty)) => int_ty, _ => return false", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-b55d712815a18283d2dcbe0a0e8d0716db2fa1b2ab6999bdd3a6aabe71fe9eb9", "text": "scope: region::CodeExtent, depth: uint) { let origin = |&:| infer::SubregionOrigin::SafeDestructor(span); let origin = || infer::SubregionOrigin::SafeDestructor(span); let mut walker = ty_root.walk(); let opt_phantom_data_def_id = rcx.tcx().lang_items.phantom_data();", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", 
"query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-0bb69faf6b95e298f3c43a88907e094361939f4ef43f91a87bd1954b95c76a21", "text": "// file descriptor. Otherwise, the first file descriptor opened // up in the child would be numbered as one of the stdio file // descriptors, which is likely to wreak havoc. let setup = |&: src: Option, dst: c_int| { let setup = |src: Option, dst: c_int| { let src = match src { None => { let flags = if dst == libc::STDIN_FILENO {", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-b637ed6a889124da6cc3fcf9b9c090378e96d4aba248c7fb68a0788dbf6cda83", "text": "// Similarly to unix, we don't actually leave holes for the stdio file // descriptors, but rather open up /dev/null equivalents. These // equivalents are drawn from libuv's windows process spawning. let set_fd = |&: fd: &Option, slot: &mut HANDLE, let set_fd = |fd: &Option, slot: &mut HANDLE, is_stdin: bool| { match *fd { None => {", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-5ba22cc6ff5d84b46968de9824c6eef8671a8581b275f63e3bdba5a98ba790d2", "text": "{ self.bump(); self.bump(); return; } else if self.eat(&token::Colon) {", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-f7a77fd8ed56b9c9ab41bb583708860add412113c9955ba722184dd695bb5608", "query": "and are obsolete, but is not.\nI think I spotted the problem, testing...\nFix in", "positive_passages": [{"docid": "doc-en-rust-6230f36e278a3028cb11912e51c13a4c19b13694441ad5032e147a79185506d6", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Test that we generate obsolete syntax errors around usages of closure kinds: `|:|`, `|&:|` and // `|&mut:|`. fn main() { let a = |:| {}; //~ ERROR obsolete syntax: `:`, `&mut:`, or `&:` let a = |&:| {}; //~ ERROR obsolete syntax: `:`, `&mut:`, or `&:` let a = |&mut:| {}; //~ ERROR obsolete syntax: `:`, `&mut:`, or `&:` } ", "commid": "rust_pr_22884"}], "negative_passages": []} {"query_id": "q-en-rust-b7b380b3af8436704444608b83eecf7c98d4b4da04b2367f9e5146bf29004734", "query": "compiles without any error. Notably this isn't a regression from I noticed it because has a public field that has had no stability attribute from far before . However, that PR did pick up some structs with missing stability attributes, I'm unsure of the exact circumstances when fields will get the errors or when they will not.\nAs points out in this isn't a bug; I didn't realise that stability propagation literally just ignores , meaning is due to the on . Closing.", "positive_passages": [{"docid": "doc-en-rust-bf73397461845389dcf6fc88876b35f047d391dd775c45392a4c75e7111fdbe4", "text": "f: F, /// The current internal state to be passed to the closure next. 
#[unstable(feature = \"core\")] pub state: St, }", "commid": "rust_pr_22952"}], "negative_passages": []} {"query_id": "q-en-rust-b7b380b3af8436704444608b83eecf7c98d4b4da04b2367f9e5146bf29004734", "query": "compiles without any error. Notably this isn't a regression from I noticed it because has a public field that has had no stability attribute from far before . However, that PR did pick up some structs with missing stability attributes, I'm unsure of the exact circumstances when fields will get the errors or when they will not.\nAs points out in this isn't a bug; I didn't realise that stability propagation literally just ignores , meaning is due to the on . Closing.", "positive_passages": [{"docid": "doc-en-rust-1b265879b04041cf8fef82054788bc3542ac2ded12cce4ac458a26270b9a0fdd", "text": "pub struct Unfold { f: F, /// Internal state that will be passed to the closure on the next iteration #[unstable(feature = \"core\")] pub state: St, }", "commid": "rust_pr_22952"}], "negative_passages": []} {"query_id": "q-en-rust-2159495f14bd6621932d9ae0ad8b8ccdcee708147df9d42e348cd5afa652b241", "query": "Here's a minimal example which results in a runtime error: I'd expect that format string to work (precision=0, so no digits after the decimal point), but here's what I get with RUSTBACKTRACE=1: Ubuntu 14.10 / 64-bit rustc 1.0.0-nightly ( 2015-03-05) (built 2015-03-06) binary: rustc commit-hash: commit-date: 2015-03-05 build-date: 2015-03-06 host: x8664-unknown-linux-gnu release: 1.0.0-nightly\nIt also happens with 9.5 but not with 9.499. The formatting routine tries to round, but fails to handle the case that rounding extends the digits.\nJust to clarify: this is not actually an ICE because the program is panicking, not the compiler. So, compilation is fine, but the program fails at runtime\nThis issue should be closed as it's now fixed.\nYay!", "positive_passages": [{"docid": "doc-en-rust-054c94343f5b52a85a239c8807841fa61820e504130e3eafc83ffdcf558aeeac", "text": "if i < 0 || buf[i as usize] == b'-' || buf[i as usize] == b'+' { for j in (i as usize + 1..end).rev() { for j in ((i + 1) as usize..end).rev() { buf[j + 1] = buf[j]; } buf[(i + 1) as usize] = value2ascii(1);", "commid": "rust_pr_24269"}], "negative_passages": []} {"query_id": "q-en-rust-2159495f14bd6621932d9ae0ad8b8ccdcee708147df9d42e348cd5afa652b241", "query": "Here's a minimal example which results in a runtime error: I'd expect that format string to work (precision=0, so no digits after the decimal point), but here's what I get with RUSTBACKTRACE=1: Ubuntu 14.10 / 64-bit rustc 1.0.0-nightly ( 2015-03-05) (built 2015-03-06) binary: rustc commit-hash: commit-date: 2015-03-05 build-date: 2015-03-06 host: x8664-unknown-linux-gnu release: 1.0.0-nightly\nIt also happens with 9.5 but not with 9.499. The formatting routine tries to round, but fails to handle the case that rounding extends the digits.\nJust to clarify: this is not actually an ICE because the program is panicking, not the compiler. So, compilation is fine, but the program fails at runtime\nThis issue should be closed as it's now fixed.\nYay!", "positive_passages": [{"docid": "doc-en-rust-76171e322521b4246b66ed6bf542b5dba2617ff49bf90545c0db4643fe492cfd", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. #[test] fn test_format_float() { assert!(\"1\" == format!(\"{:.0}\", 1.0f64)); assert!(\"9\" == format!(\"{:.0}\", 9.4f64)); assert!(\"10\" == format!(\"{:.0}\", 9.9f64)); assert!(\"9.9\" == format!(\"{:.1}\", 9.85f64)); } ", "commid": "rust_pr_24269"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-586a1636d394df37fd0661138bd1a8c04a66644988fe29d113847368b542170e", "text": "// empty. } unsafe impl Send for .. { } impl !Send for *const T { } impl !Send for *mut T { } impl !Send for Managed { }", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-49d62629c6dcde77b9ff137409ebd55c5390a31120febb4cf9fb826c7ee20709", "text": "// Empty } unsafe impl Sync for .. { } impl !Sync for *const T { } impl !Sync for *mut T { } impl !Sync for Managed { }", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-d4ecd17484efe6445398260a64b8a1d258a2dd73a4881e4d3da5a985f4d35d1c", "text": "pub fn trait_has_default_impl(tcx: &ctxt, trait_def_id: DefId) -> bool { populate_implementations_for_trait_if_necessary(tcx, trait_def_id); match tcx.lang_items.to_builtin_kind(trait_def_id) { Some(BoundSend) | Some(BoundSync) => true, _ => tcx.traits_with_default_impls.borrow().contains_key(&trait_def_id), } tcx.traits_with_default_impls.borrow().contains_key(&trait_def_id) } /// Records a trait-to-implementation mapping.", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-f6fdd56a14d6a157a423e10a48032a20b49deae9a04717d72c470e96428e1a23", "text": "tcx: &'cx ty::ctxt<'tcx> } impl<'cx, 'tcx,'v> visit::Visitor<'v> for UnsafetyChecker<'cx, 'tcx> { fn visit_item(&mut self, item: &'v ast::Item) { match item.node { ast::ItemImpl(unsafety, polarity, _, _, _, _) => { match ty::impl_trait_ref(self.tcx, ast_util::local_def(item.id)) { None => { // Inherent impl. 
match unsafety { ast::Unsafety::Normal => { /* OK */ } ast::Unsafety::Unsafe => { span_err!(self.tcx.sess, item.span, E0197, \"inherent impls cannot be declared as unsafe\"); } } impl<'cx, 'tcx, 'v> UnsafetyChecker<'cx, 'tcx> { fn check_unsafety_coherence(&mut self, item: &'v ast::Item, unsafety: ast::Unsafety, polarity: ast::ImplPolarity) { match ty::impl_trait_ref(self.tcx, ast_util::local_def(item.id)) { None => { // Inherent impl. match unsafety { ast::Unsafety::Normal => { /* OK */ } ast::Unsafety::Unsafe => { span_err!(self.tcx.sess, item.span, E0197, \"inherent impls cannot be declared as unsafe\"); } } } Some(trait_ref) => { let trait_def = ty::lookup_trait_def(self.tcx, trait_ref.def_id); match (trait_def.unsafety, unsafety, polarity) { (ast::Unsafety::Unsafe, ast::Unsafety::Unsafe, ast::ImplPolarity::Negative) => { span_err!(self.tcx.sess, item.span, E0198, \"negative implementations are not unsafe\"); } Some(trait_ref) => { let trait_def = ty::lookup_trait_def(self.tcx, trait_ref.def_id); match (trait_def.unsafety, unsafety, polarity) { (ast::Unsafety::Unsafe, ast::Unsafety::Unsafe, ast::ImplPolarity::Negative) => { span_err!(self.tcx.sess, item.span, E0198, \"negative implementations are not unsafe\"); } (ast::Unsafety::Normal, ast::Unsafety::Unsafe, _) => { span_err!(self.tcx.sess, item.span, E0199, \"implementing the trait `{}` is not unsafe\", trait_ref.user_string(self.tcx)); } (ast::Unsafety::Normal, ast::Unsafety::Unsafe, _) => { span_err!(self.tcx.sess, item.span, E0199, \"implementing the trait `{}` is not unsafe\", trait_ref.user_string(self.tcx)); } (ast::Unsafety::Unsafe, ast::Unsafety::Normal, ast::ImplPolarity::Positive) => { span_err!(self.tcx.sess, item.span, E0200, \"the trait `{}` requires an `unsafe impl` declaration\", trait_ref.user_string(self.tcx)); } (ast::Unsafety::Unsafe, ast::Unsafety::Normal, ast::ImplPolarity::Positive) => { span_err!(self.tcx.sess, item.span, E0200, \"the trait `{}` requires an `unsafe impl` declaration\", trait_ref.user_string(self.tcx)); } (ast::Unsafety::Unsafe, ast::Unsafety::Normal, ast::ImplPolarity::Negative) | (ast::Unsafety::Unsafe, ast::Unsafety::Unsafe, ast::ImplPolarity::Positive) | (ast::Unsafety::Normal, ast::Unsafety::Normal, _) => { /* OK */ } } (ast::Unsafety::Unsafe, ast::Unsafety::Normal, ast::ImplPolarity::Negative) | (ast::Unsafety::Unsafe, ast::Unsafety::Unsafe, ast::ImplPolarity::Positive) | (ast::Unsafety::Normal, ast::Unsafety::Normal, _) => { /* OK */ } } } } } } impl<'cx, 'tcx,'v> visit::Visitor<'v> for UnsafetyChecker<'cx, 'tcx> { fn visit_item(&mut self, item: &'v ast::Item) { match item.node { ast::ItemDefaultImpl(unsafety, _) => { self.check_unsafety_coherence(item, unsafety, ast::ImplPolarity::Positive); } ast::ItemImpl(unsafety, polarity, _, _, _, _) => { self.check_unsafety_coherence(item, unsafety, polarity); } _ => { } }", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-e41bdaa6ab10b2ee0f3a88451c117ce1d585eb5601f101f1eb47cb0b6dc36b3e", "text": "} } ast::ItemDefaultImpl(..) 
=> { self.gate_feature(\"optin_builtin_traits\", i.span, \"default trait implementations are experimental and possibly buggy\"); } ast::ItemImpl(_, polarity, _, _, _, _) => { match polarity { ast::ImplPolarity::Negative => {", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-2984e516067f32d1d0ded9c71f6ebaf5eb83cc3adf8d8c1185384f8bd5cfe1de", "text": "impl MyTrait for .. {} //~^ ERROR conflicting implementations for trait `MyTrait` trait MySafeTrait: MarkerTrait {} unsafe impl MySafeTrait for .. {} //~^ ERROR implementing the trait `MySafeTrait` is not unsafe unsafe trait MyUnsafeTrait: MarkerTrait {} impl MyUnsafeTrait for .. {} //~^ ERROR the trait `MyUnsafeTrait` requires an `unsafe impl` declaration fn main() {}", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-6d61f7d13c31d6883114ba26a2543bd8d9b13ce4cca088b0888a70f2b461ee63", "query": "I suspect that we may want more experience with this syntax before freezing it, but I may also be misremembering! cc cc\ntriage: I-nominated\nI believe you're right. I'll work on a fix for this. Here's a test case:\n0 beta, P-backcompat-lang.", "positive_passages": [{"docid": "doc-en-rust-790914d60fc0202a9ef37e372fbf0f6643d5ec7e4b6b381d108a8df57c295f32", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(optin_builtin_traits)] pub mod bar { use std::marker;", "commid": "rust_pr_23211"}], "negative_passages": []} {"query_id": "q-en-rust-c75aa7188859137083d672d0f02418ea4dd848eaa515631dbd4d866d5fc0d853", "query": "The references states that is a logical right shift, however it seems that it behaves like an arithmetic right shift for signed integers. For instance the following snippet prints : A logical right shift should produce (assuming 2's complement representation which I think rust guarantees?). In the same spirit the reference does not say if shifting outside the range of an integer is defined. From my quick test it doesn't appear to be: This prints garbage on my computer and on the playpen, so I assume it's UB or at least UV (it doesn't trigger the overflow check in debug builds though, maybe it should?) On a similar topic the reference also states that the following behaviours are safe: Unsigned integer overflow (well-defined as wrapping) Signed integer overflow (well-defined as two's complement representation wrapping) I believe that's not true anymore.\nI see that checking for bitshift overflows is going to be implemented in so it seems to confirm that it's UB (UV?) for now.\nsee PR for the checks. We can go either way on the issue of whether overflow is UB or UV or neither. (That is, it is easy to adapt the PR accordingly; just a matter of deciding, and perhaps measuring the impact on code size, maybe.)\nwhat's the process by which we decide that? I'd like to knock this issue out.\nheh, I already made an \"executive decision\" (though admittedly it was one that was already in favor of), of making overflow panic when checks are on, and when the checks are off, we forcibly mask the RHS to ensure that we do not encounter weirdness such as that documented in and . 
(Though I still need to put in tests for that fallback behavior.)\nThat sounds very reasonable, thanks. Right shifts of signed values will remain arithmetic though, right? It's probably what most people will expect since that's how it's done in C/C++ and we'd have to simple way of doing arithmetic shifting otherwise.\nright-shift of signed values are still arithmetic shifts, but that is independent of the overflow checking (which is based solely on the value of the right-hand-side compared to the bitwidth of the type of the left-hand-side)\nOkay, that makes sense. I wanted to make sure that was the intended behaviour since the reference seems to say otherwise. Thank you!\noh I see; yes that part of the reference has probably been incorrect for a long time.", "positive_passages": [{"docid": "doc-en-rust-bb44090c801037fe00c51799eda7ea9ba0fbc3c5909a5b3ff374a62f89a7d224", "text": "* Sending signals * Accessing/modifying the file system * Unsigned integer overflow (well-defined as wrapping) * Signed integer overflow (well-defined as two's complement representation * Signed integer overflow (well-defined as two\u2019s complement representation wrapping) #### Diverging functions", "commid": "rust_pr_23662"}], "negative_passages": []} {"query_id": "q-en-rust-c75aa7188859137083d672d0f02418ea4dd848eaa515631dbd4d866d5fc0d853", "query": "The references states that is a logical right shift, however it seems that it behaves like an arithmetic right shift for signed integers. For instance the following snippet prints : A logical right shift should produce (assuming 2's complement representation which I think rust guarantees?). In the same spirit the reference does not say if shifting outside the range of an integer is defined. From my quick test it doesn't appear to be: This prints garbage on my computer and on the playpen, so I assume it's UB or at least UV (it doesn't trigger the overflow check in debug builds though, maybe it should?) On a similar topic the reference also states that the following behaviours are safe: Unsigned integer overflow (well-defined as wrapping) Signed integer overflow (well-defined as two's complement representation wrapping) I believe that's not true anymore.\nI see that checking for bitshift overflows is going to be implemented in so it seems to confirm that it's UB (UV?) for now.\nsee PR for the checks. We can go either way on the issue of whether overflow is UB or UV or neither. (That is, it is easy to adapt the PR accordingly; just a matter of deciding, and perhaps measuring the impact on code size, maybe.)\nwhat's the process by which we decide that? I'd like to knock this issue out.\nheh, I already made an \"executive decision\" (though admittedly it was one that was already in favor of), of making overflow panic when checks are on, and when the checks are off, we forcibly mask the RHS to ensure that we do not encounter weirdness such as that documented in and . (Though I still need to put in tests for that fallback behavior.)\nThat sounds very reasonable, thanks. Right shifts of signed values will remain arithmetic though, right? It's probably what most people will expect since that's how it's done in C/C++ and we'd have to simple way of doing arithmetic shifting otherwise.\nright-shift of signed values are still arithmetic shifts, but that is independent of the overflow checking (which is based solely on the value of the right-hand-side compared to the bitwidth of the type of the left-hand-side)\nOkay, that makes sense. 
I wanted to make sure that was the intended behaviour since the reference seems to say otherwise. Thank you!\noh I see; yes that part of the reference has probably been incorrect for a long time.", "positive_passages": [{"docid": "doc-en-rust-a40511bef2467e3f0def4b25e24916b6a3303b9572438f1849a08524256b8bf5", "text": ": Exclusive or. Calls the `bitxor` method of the `std::ops::BitXor` trait. * `<<` : Logical left shift. : Left shift. Calls the `shl` method of the `std::ops::Shl` trait. * `>>` : Logical right shift. : Right shift. Calls the `shr` method of the `std::ops::Shr` trait. #### Lazy boolean operators", "commid": "rust_pr_23662"}], "negative_passages": []} {"query_id": "q-en-rust-3a18f934546070b2ca40f472739a943382c8f3b64d81650a47d307777fd60c77", "query": "The following panics: The following works: Apologies in advance if this is intended behavior.\nI have found that it also works when using:\nThis will be fixed by", "positive_passages": [{"docid": "doc-en-rust-c9f64eaf4ac61510417c6247aabeae7f721d6d2a51c65296408c24ce9998fa7d", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn create_dir_all(path: &P) -> io::Result<()> { let path = path.as_path(); if path.is_dir() { return Ok(()) } if path == Path::new(\"\") || path.is_dir() { return Ok(()) } if let Some(p) = path.parent() { try!(create_dir_all(p)) } create_dir(path) }", "commid": "rust_pr_23383"}], "negative_passages": []} {"query_id": "q-en-rust-3a18f934546070b2ca40f472739a943382c8f3b64d81650a47d307777fd60c77", "query": "The following panics: The following works: Apologies in advance if this is intended behavior.\nI have found that it also works when using:\nThis will be fixed by", "positive_passages": [{"docid": "doc-en-rust-5b1d073da4ac68aeed71af3b5fa4652bd40aa4db9e251f2354758b6d5cb24ba1", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use std::env; use std::fs::{self, TempDir}; fn main() { let td = TempDir::new(\"create-dir-all-bare\").unwrap(); env::set_current_dir(td.path()).unwrap(); fs::create_dir_all(\"create-dir-all-bare\").unwrap(); } ", "commid": "rust_pr_23383"}], "negative_passages": []} {"query_id": "q-en-rust-cddaa217d569a7fc3a76395aab6e857f0c94fa46cf6328ba162cc9a418bc35c8", "query": "For clarity, the second call to ends up using the impl on . cc I had concerns about this kind of thing when these impls were being introduced but I thought I convinced myself that this kind of thing couldn't happen. Did something change in the way we designed these impls since then?\nPossible solutions are to either remove the impl for or update to prevent it from invoking . Seems to me like the second is the better approach.\nI believe the reason for the was because it was considered a good thing to require callers to explicitly pass references. I think we've actually gone back and forth on that a few times. Are we committed now to allowing callers to just say ?", "positive_passages": [{"docid": "doc-en-rust-79cac1470f473535a3844d271c6392c4b74bebd34375a2d1bd56d8e43b15c2bf", "text": "/// ``` #[macro_export] macro_rules! 
write { ($dst:expr, $($arg:tt)*) => ((&mut *$dst).write_fmt(format_args!($($arg)*))) ($dst:expr, $($arg:tt)*) => ($dst.write_fmt(format_args!($($arg)*))) } /// Equivalent to the `write!` macro, except that a newline is appended after", "commid": "rust_pr_23934"}], "negative_passages": []} {"query_id": "q-en-rust-908d2ae79636ccc24cade23ddf5ba0a30a2d69713706b125a7213b6901b88b7e", "query": "See as an example, goes nowhere\nAnother example: The type is public in this case, but it's explicitly hidden.\nYeah, there's a lot of these.\nFor , it could be related to re-exported signatures. The same method in doesn't have this issue: Also, is it a bug that you can have private items in a public signature's ? Or private items in an impl for a public type? cc\nI wonder if rustdoc should be resolving qualified paths that it encounters.\nIs there still some unresolved issue here?\nStill a thing; for example, tries to link to types in rand, but the doc link is broken. (That link goes to 1.19.0, but it's still there in today's nightly docs.) If i were to guess, it probably has to do with rustdoc seeing types that are in dependencies of std, but aren't being documented alongside it.\nWrote a fix for it. Will make the PR tomorrow or after tomorrow.\nThis issue was actually fixed by The issue i actually referenced in that PR was apparently a duplicate of this one.", "positive_passages": [{"docid": "doc-en-rust-4722ef90b5a35d7a9491125aa9d783baffde0367c04d6dc0dc2bee34127cbccf", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![crate_name = \"foo\"] #[doc(hidden)] pub trait Foo {} trait Dark {} pub trait Bam {} pub struct Bar; struct Hidden; // @!has foo/struct.Bar.html '//*[@id=\"impl-Foo\"]' 'impl Foo for Bar' impl Foo for Bar {} // @!has foo/struct.Bar.html '//*[@id=\"impl-Dark\"]' 'impl Dark for Bar' impl Dark for Bar {} // @has foo/struct.Bar.html '//*[@id=\"impl-Bam\"]' 'impl Bam for Bar' // @has foo/trait.Bam.html '//*[@id=\"implementors-list\"]' 'impl Bam for Bar' impl Bam for Bar {} // @!has foo/trait.Bam.html '//*[@id=\"implementors-list\"]' 'impl Bam for Hidden' impl Bam for Hidden {} ", "commid": "rust_pr_46359"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. The .msi installer correctly defaults to \"C:Program FilesRust - name: install InnoSetup run: src/ci/scripts/install-innosetup.sh if: success() && !env.SKIP_JOB - name: ensure the build happens on a partition with enough space run: src/ci/scripts/symlink-build-dir.sh if: success() && !env.SKIP_JOB", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. 
The .msi installer correctly defaults to \"C:Program FilesRust builder.install(&xform(&etc.join(\"exe/rust.iss\")), &exe, 0o644); builder.install(&etc.join(\"exe/modpath.iss\"), &exe, 0o644); builder.install(&etc.join(\"exe/upgrade.iss\"), &exe, 0o644); builder.install(&etc.join(\"gfx/rust-logo.ico\"), &exe, 0o644); builder.create(&exe.join(\"LICENSE.txt\"), &license); // Generate exe installer builder.info(\"building `exe` installer with `iscc`\"); let mut cmd = Command::new(\"iscc\"); cmd.arg(\"rust.iss\").arg(\"/Q\").current_dir(&exe); if target.contains(\"windows-gnu\") { cmd.arg(\"/dMINGW\"); } add_env(builder, &mut cmd, target); let time = timeit(builder); builder.run(&mut cmd); drop(time); builder.install( &exe.join(format!(\"{}-{}.exe\", pkgname(builder, \"rust\"), target)), &distdir(builder), 0o755, ); // Generate msi installer let wix = PathBuf::from(env::var_os(\"WIX\").unwrap());", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. The .msi installer correctly defaults to \"C:Program FilesRust - bash: src/ci/scripts/install-innosetup.sh displayName: Install InnoSetup condition: and(succeeded(), not(variables.SKIP_JOB)) - bash: src/ci/scripts/symlink-build-dir.sh displayName: Ensure the build happens on a partition with enough space condition: and(succeeded(), not(variables.SKIP_JOB))", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. The .msi installer correctly defaults to \"C:Program FilesRust - name: install InnoSetup run: src/ci/scripts/install-innosetup.sh <<: *step - name: ensure the build happens on a partition with enough space run: src/ci/scripts/symlink-build-dir.sh <<: *step", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. The .msi installer correctly defaults to \"C:Program FilesRust #!/bin/bash # We use InnoSetup and its `iscc` program to also create combined installers. # Honestly at this point WIX above and `iscc` are just holdovers from # oh-so-long-ago and are required for creating installers on Windows. I think # one is MSI installers and one is EXE, but they're not used so frequently at # this point anyway so perhaps it's a wash! set -euo pipefail IFS=$'nt' source \"$(cd \"$(dirname \"$0\")\" && pwd)/../shared.sh\" if isWindows; then curl.exe -o is-install.exe \"${MIRRORS_BASE}/2017-08-22-is.exe\" cmd.exe //c \"is-install.exe /VERYSILENT /SUPPRESSMSGBOXES /NORESTART /SP-\" ciCommandAddPath \"C:Program Files (x86)Inno Setup 5\" fi ", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. 
It then fails to create the destination directory due to a permission error. The .msi installer correctly defaults to \"C:Program FilesRust // ---------------------------------------------------------------------------- // // Inno Setup Ver:\t5.4.2 // Script Version:\t1.4.1 // Author:\t\t\tJared Breland // Homepage:\t\thttp://www.legroom.net/software // License:\t\t\tGNU Lesser General Public License (LGPL), version 3 //\t\t\t\t\t\thttp://www.gnu.org/licenses/lgpl.html // // Script Function: //\tAllow modification of environmental path directly from Inno Setup installers // // Instructions: //\tCopy modpath.iss to the same directory as your setup script // //\tAdd this statement to your [Setup] section //\t\tChangesEnvironment=true // //\tAdd this statement to your [Tasks] section //\tYou can change the Description or Flags //\tYou can change the Name, but it must match the ModPathName setting below //\t\tName: modifypath; Description: &Add application directory to your environmental path; Flags: unchecked // //\tAdd the following to the end of your [Code] section //\tModPathName defines the name of the task defined above //\tModPathType defines whether the 'user' or 'system' path will be modified; //\t\tthis will default to user if anything other than system is set //\tsetArrayLength must specify the total number of dirs to be added //\tResult[0] contains first directory, Result[1] contains second, etc. //\t\tconst //\t\t\tModPathName = 'modifypath'; //\t\t\tModPathType = 'user'; // //\t\tfunction ModPathDir(): TArrayOfString; //\t\tbegin //\t\t\tsetArrayLength(Result, 1); //\t\t\tResult[0] := ExpandConstant('{app}'); //\t\tend; //\t\t#include \"modpath.iss\" // ---------------------------------------------------------------------------- procedure ModPath(); var oldpath:\tString; newpath:\tString; updatepath:\tBoolean; pathArr:\tTArrayOfString; aExecFile:\tString; aExecArr:\tTArrayOfString; i, d:\t\tInteger; pathdir:\tTArrayOfString; regroot:\tInteger; regpath:\tString; begin // Get constants from main script and adjust behavior accordingly // ModPathType MUST be 'system' or 'user'; force 'user' if invalid if ModPathType = 'system' then begin regroot := HKEY_LOCAL_MACHINE; regpath := 'SYSTEMCurrentControlSetControlSession ManagerEnvironment'; end else begin regroot := HKEY_CURRENT_USER; regpath := 'Environment'; end; // Get array of new directories and act on each individually pathdir := ModPathDir(); for d := 0 to GetArrayLength(pathdir)-1 do begin updatepath := true; // Modify WinNT path if UsingWinNT() = true then begin // Get current path, split into an array RegQueryStringValue(regroot, regpath, 'Path', oldpath); oldpath := oldpath + ';'; i := 0; while (Pos(';', oldpath) > 0) do begin SetArrayLength(pathArr, i+1); pathArr[i] := Copy(oldpath, 0, Pos(';', oldpath)-1); oldpath := Copy(oldpath, Pos(';', oldpath)+1, Length(oldpath)); i := i + 1; // Check if current directory matches app dir if pathdir[d] = pathArr[i-1] then begin // if uninstalling, remove dir from path if IsUninstaller() = true then begin continue; // if installing, flag that dir already exists in path end else begin updatepath := false; end; end; // Add current directory to new path if i = 1 then begin newpath := pathArr[i-1]; end else begin newpath := newpath + ';' + pathArr[i-1]; end; end; // Append app dir to path if not already included if (IsUninstaller() = false) AND (updatepath = true) then newpath := newpath + ';' + pathdir[d]; // Write new path RegWriteStringValue(regroot, regpath, 'Path', newpath); // Modify 
Win9x path end else begin // Convert to shortened dirname pathdir[d] := GetShortName(pathdir[d]); // If autoexec.bat exists, check if app dir already exists in path aExecFile := 'C:AUTOEXEC.BAT'; if FileExists(aExecFile) then begin LoadStringsFromFile(aExecFile, aExecArr); for i := 0 to GetArrayLength(aExecArr)-1 do begin if IsUninstaller() = false then begin // If app dir already exists while installing, skip add if (Pos(pathdir[d], aExecArr[i]) > 0) then updatepath := false; break; end else begin // If app dir exists and = what we originally set, then delete at uninstall if aExecArr[i] = 'SET PATH=%PATH%;' + pathdir[d] then aExecArr[i] := ''; end; end; end; // If app dir not found, or autoexec.bat didn't exist, then (create and) append to current path if (IsUninstaller() = false) AND (updatepath = true) then begin SaveStringToFile(aExecFile, #13#10 + 'SET PATH=%PATH%;' + pathdir[d], True); // If uninstalling, write the full autoexec out end else begin SaveStringsToFile(aExecFile, aExecArr, False); end; end; end; end; // Split a string into an array using passed delimiter procedure Explode(var Dest: TArrayOfString; Text: String; Separator: String); var i: Integer; begin i := 0; repeat SetArrayLength(Dest, i+1); if Pos(Separator,Text) > 0 then\tbegin Dest[i] := Copy(Text, 1, Pos(Separator, Text)-1); Text := Copy(Text, Pos(Separator,Text) + Length(Separator), Length(Text)); i := i + 1; end else begin Dest[i] := Text; Text := ''; end; until Length(Text)=0; end; procedure ModPathCurStepChanged(CurStep: TSetupStep); var taskname:\tString; begin taskname := ModPathName; if CurStep = ssPostInstall then if IsTaskSelected(taskname) then ModPath(); end; procedure CurUninstallStepChanged(CurUninstallStep: TUninstallStep); var aSelectedTasks:\tTArrayOfString; i:\t\t\t\tInteger; taskname:\t\tString; regpath:\t\tString; regstring:\t\tString; appid:\t\t\tString; begin // only run during actual uninstall if CurUninstallStep = usUninstall then begin // get list of selected tasks saved in registry at install time appid := '{#emit SetupSetting(\"AppId\")}'; if appid = '' then appid := '{#emit SetupSetting(\"AppName\")}'; regpath := ExpandConstant('SoftwareMicrosoftWindowsCurrentVersionUninstall'+appid+'_is1'); RegQueryStringValue(HKLM, regpath, 'Inno Setup: Selected Tasks', regstring); if regstring = '' then RegQueryStringValue(HKCU, regpath, 'Inno Setup: Selected Tasks', regstring); // check each task; if matches modpath taskname, trigger patch removal if regstring <> '' then begin taskname := ModPathName; Explode(aSelectedTasks, regstring, ','); if GetArrayLength(aSelectedTasks) > 0 then begin for i := 0 to GetArrayLength(aSelectedTasks)-1 do begin if comparetext(aSelectedTasks[i], taskname) = 0 then ModPath(); end; end; end; end; end; function NeedRestart(): Boolean; var taskname:\tString; begin taskname := ModPathName; if IsTaskSelected(taskname) and not UsingWinNT() then begin Result := True; end else begin Result := False; end; end; ", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. 
The .msi installer correctly defaults to \"C:Program FilesRust #define CFG_RELEASE_NUM GetEnv(\"CFG_RELEASE_NUM\") #define CFG_RELEASE GetEnv(\"CFG_RELEASE\") #define CFG_PACKAGE_NAME GetEnv(\"CFG_PACKAGE_NAME\") #define CFG_BUILD GetEnv(\"CFG_BUILD\") [Setup] SetupIconFile=rust-logo.ico AppName=Rust AppVersion={#CFG_RELEASE} AppCopyright=Copyright (C) 2006-2014 Mozilla Foundation, MIT license AppPublisher=Mozilla Foundation AppPublisherURL=http://www.rust-lang.org VersionInfoVersion={#CFG_RELEASE_NUM} LicenseFile=LICENSE.txt PrivilegesRequired=lowest DisableWelcomePage=true DisableProgramGroupPage=true DisableReadyPage=true DisableStartupPrompt=true OutputDir=. SourceDir=. OutputBaseFilename={#CFG_PACKAGE_NAME}-{#CFG_BUILD} DefaultDirName={sd}Rust Compression=lzma2/normal InternalCompressLevel=normal SolidCompression=no ChangesEnvironment=true ChangesAssociations=no AllowUNCPath=false AllowNoIcons=true Uninstallable=yes [Tasks] Name: modifypath; Description: &Add {app}bin to your PATH (recommended) [Components] Name: rust; Description: \"Rust compiler and standard crates\"; Types: full compact custom; Flags: fixed #ifdef MINGW Name: gcc; Description: \"Linker and platform libraries\"; Types: full #endif Name: docs; Description: \"HTML documentation\"; Types: full Name: cargo; Description: \"Cargo, the Rust package manager\"; Types: full Name: std; Description: \"The Rust Standard Library\"; Types: full // tool-rls-start Name: rls; Description: \"RLS, the Rust Language Server\" // tool-rls-end [Files] Source: \"rustc/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: rust #ifdef MINGW Source: \"rust-mingw/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: gcc #endif Source: \"rust-docs/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: docs Source: \"cargo/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: cargo Source: \"rust-std/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: std // tool-rls-start Source: \"rls/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: rls Source: \"rust-analysis/*.*\"; DestDir: \"{app}\"; Flags: ignoreversion recursesubdirs; Components: rls // tool-rls-end [Code] const ModPathName = 'modifypath'; ModPathType = 'user'; function ModPathDir(): TArrayOfString; begin setArrayLength(Result, 1) Result[0] := ExpandConstant('{app}bin'); end; #include \"modpath.iss\" #include \"upgrade.iss\" // Both modpath.iss and upgrade.iss want to overload CurStepChanged. // This version does the overload then delegates to each. procedure CurStepChanged(CurStep: TSetupStep); begin UpgradeCurStepChanged(CurStep); ModPathCurStepChanged(CurStep); end; ", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-5069ad0a45c407986bb92a252cc093be7c8b50490e6c0129c7879f3147884d97", "query": "The .exe installer tries to install to \"C:Program Files (x86)Rust\", even though it's the 64-bit version. It then fails to create the destination directory due to a permission error. 
The .msi installer correctly defaults to \"C:Program FilesRust // The following code taken from https://stackoverflow.com/questions/2000296/innosetup-how-to-automatically-uninstall-previous-installed-version // It performs upgrades by running the uninstaller before the install ///////////////////////////////////////////////////////////////////// function GetUninstallString(): String; var sUnInstPath: String; sUnInstallString: String; begin sUnInstPath := ExpandConstant('SoftwareMicrosoftWindowsCurrentVersionUninstallRust_is1'); sUnInstallString := ''; if not RegQueryStringValue(HKLM, sUnInstPath, 'UninstallString', sUnInstallString) then RegQueryStringValue(HKCU, sUnInstPath, 'UninstallString', sUnInstallString); Result := sUnInstallString; end; ///////////////////////////////////////////////////////////////////// function IsUpgrade(): Boolean; begin Result := (GetUninstallString() <> ''); end; ///////////////////////////////////////////////////////////////////// function UnInstallOldVersion(): Integer; var sUnInstallString: String; iResultCode: Integer; begin // Return Values: // 1 - uninstall string is empty // 2 - error executing the UnInstallString // 3 - successfully executed the UnInstallString // default return value Result := 0; // get the uninstall string of the old app sUnInstallString := GetUninstallString(); if sUnInstallString <> '' then begin sUnInstallString := RemoveQuotes(sUnInstallString); if Exec(sUnInstallString, '/SILENT /NORESTART /SUPPRESSMSGBOXES','', SW_HIDE, ewWaitUntilTerminated, iResultCode) then Result := 3 else Result := 2; end else Result := 1; end; ///////////////////////////////////////////////////////////////////// procedure UpgradeCurStepChanged(CurStep: TSetupStep); begin if (CurStep=ssInstall) then begin if (IsUpgrade()) then begin UnInstallOldVersion(); end; end; end; ", "commid": "rust_pr_72569"}], "negative_passages": []} {"query_id": "q-en-rust-1c66ff7f788c315b361b7cf9045a4b88fc9461f1e6fbe998a904b748912194dc", "query": "cc\nAre you OK with just a PR?\nSure!\nIs it worth writing tests for this? I'm not really sure how it could break, but better safe than sorry I suppose...\nYeah but just some simple tests would be fine to have\nping?\nWorking on it right now. , Steven Allen wrote:\nThanks. I had some free time so I thought I'd check.", "positive_passages": [{"docid": "doc-en-rust-f17609927fc8fc0066f7b7080ee163c49a0a93c6b1cdd1443563a9455d092f1c", "text": "Repeat{element: elt} } /// An iterator that yields nothing. #[unstable(feature=\"iter_empty\", reason = \"new addition\")] pub struct Empty(marker::PhantomData); #[unstable(feature=\"iter_empty\", reason = \"new addition\")] impl Iterator for Empty { type Item = T; fn next(&mut self) -> Option { None } fn size_hint(&self) -> (usize, Option){ (0, Some(0)) } } #[unstable(feature=\"iter_empty\", reason = \"new addition\")] impl DoubleEndedIterator for Empty { fn next_back(&mut self) -> Option { None } } #[unstable(feature=\"iter_empty\", reason = \"new addition\")] impl ExactSizeIterator for Empty { fn len(&self) -> usize { 0 } } // not #[derive] because that adds a Clone bound on T, // which isn't necessary. #[unstable(feature=\"iter_empty\", reason = \"new addition\")] impl Clone for Empty { fn clone(&self) -> Empty { Empty(marker::PhantomData) } } // not #[derive] because that adds a Default bound on T, // which isn't necessary. 
#[unstable(feature=\"iter_empty\", reason = \"new addition\")] impl Default for Empty { fn default() -> Empty { Empty(marker::PhantomData) } } /// Creates an iterator that yields nothing. #[unstable(feature=\"iter_empty\", reason = \"new addition\")] pub fn empty() -> Empty { Empty(marker::PhantomData) } /// An iterator that yields an element exactly once. #[unstable(feature=\"iter_once\", reason = \"new addition\")] pub struct Once { inner: ::option::IntoIter } #[unstable(feature=\"iter_once\", reason = \"new addition\")] impl Iterator for Once { type Item = T; fn next(&mut self) -> Option { self.inner.next() } fn size_hint(&self) -> (usize, Option) { self.inner.size_hint() } } #[unstable(feature=\"iter_once\", reason = \"new addition\")] impl DoubleEndedIterator for Once { fn next_back(&mut self) -> Option { self.inner.next_back() } } #[unstable(feature=\"iter_once\", reason = \"new addition\")] impl ExactSizeIterator for Once { fn len(&self) -> usize { self.inner.len() } } /// Creates an iterator that yields an element exactly once. #[unstable(feature=\"iter_once\", reason = \"new addition\")] pub fn once(value: T) -> Once { Once { inner: Some(value).into_iter() } } /// Functions for lexicographical ordering of sequences. /// /// Lexicographical ordering through `<`, `<=`, `>=`, `>` requires", "commid": "rust_pr_25817"}], "negative_passages": []} {"query_id": "q-en-rust-1c66ff7f788c315b361b7cf9045a4b88fc9461f1e6fbe998a904b748912194dc", "query": "cc\nAre you OK with just a PR?\nSure!\nIs it worth writing tests for this? I'm not really sure how it could break, but better safe than sorry I suppose...\nYeah but just some simple tests would be fine to have\nping?\nWorking on it right now. , Steven Allen wrote:\nThanks. I had some free time so I thought I'd check.", "positive_passages": [{"docid": "doc-en-rust-4d8ba1096065066b061a3665dac76c4d946596623782cd7de62f9f3bf80e1f6d", "text": "// Can't check len now because count consumes. } #[test] fn test_once() { let mut it = once(42); assert_eq!(it.next(), Some(42)); assert_eq!(it.next(), None); } #[test] fn test_empty() { let mut it = empty::(); assert_eq!(it.next(), None); } #[bench] fn bench_rposition(b: &mut Bencher) { let it: Vec = (0..300).collect();", "commid": "rust_pr_25817"}], "negative_passages": []} {"query_id": "q-en-rust-1c66ff7f788c315b361b7cf9045a4b88fc9461f1e6fbe998a904b748912194dc", "query": "cc\nAre you OK with just a PR?\nSure!\nIs it worth writing tests for this? I'm not really sure how it could break, but better safe than sorry I suppose...\nYeah but just some simple tests would be fine to have\nping?\nWorking on it right now. , Steven Allen wrote:\nThanks. I had some free time so I thought I'd check.", "positive_passages": [{"docid": "doc-en-rust-f188e6cdffa42f0fdbaea4e1cf31e78b8d2591a5bd94ada1e8cd76c2e2f91c29", "text": "#![feature(slice_patterns)] #![feature(float_from_str_radix)] #![feature(cell_extras)] #![feature(iter_empty)] #![feature(iter_once)] extern crate core; extern crate test;", "commid": "rust_pr_25817"}], "negative_passages": []} {"query_id": "q-en-rust-1bf93e9ca147e0f74f78fbf9099a85f6395a728142c67421e3703dc116d465a2", "query": "This value has not changed in a long time, but to support sxs installation in a single dir it needs to. It should just be hashed from the value of .\nSo should we pipe the value through e.g. md5sum? That could be awkward on BSDs, where the hash programs have a . I think we can patch over the differences by adding a check to the configure script. 
Something like this: then in the Makefile we have: But if there's a simpler way then I'm all ears.", "positive_passages": [{"docid": "doc-en-rust-4f3d61fe6bfa170844161dcc1793b19fd45d92572cec5122e5704f56b17b5b48", "text": "T=$(command -v $P 2>&1) if [ $? -eq 0 ] then VER0=$($P --version 2>/dev/null | head -1 | sed -e 's/[^0-9]*([vV]?[0-9.]+[^ ]*).*/1/' ) VER0=$($P --version 2>/dev/null | grep -o '[vV]?[0-9][0-9.][a-z0-9.-]*' | head -1 ) if [ $? -eq 0 -a \"x${VER0}\" != \"x\" ] then VER=\"($VER0)\"", "commid": "rust_pr_25208"}], "negative_passages": []} {"query_id": "q-en-rust-1bf93e9ca147e0f74f78fbf9099a85f6395a728142c67421e3703dc116d465a2", "query": "This value has not changed in a long time, but to support sxs installation in a single dir it needs to. It should just be hashed from the value of .\nSo should we pipe the value through e.g. md5sum? That could be awkward on BSDs, where the hash programs have a . I think we can patch over the differences by adding a check to the configure script. Something like this: then in the Makefile we have: But if there's a simpler way then I'm all ears.", "positive_passages": [{"docid": "doc-en-rust-5dd18e77e47aff97a9178011681c8494e06d4bb02db95bcffefe759830d4c7b4", "text": "probe_need CFG_GIT git fi # Use `md5sum` on GNU platforms, or `md5 -q` on BSD probe CFG_MD5 md5 probe CFG_MD5SUM md5sum if [ -n \"$CFG_MD5\" ] then CFG_HASH_COMMAND=\"$CFG_MD5 -q | head -c 8\" elif [ -n \"$CFG_MD5SUM\" ] then CFG_HASH_COMMAND=\"$CFG_MD5SUM | head -c 8\" else err 'could not find one of: md5 md5sum' fi putvar CFG_HASH_COMMAND probe CFG_CLANG clang++ probe CFG_CCACHE ccache probe CFG_GCC gcc", "commid": "rust_pr_25208"}], "negative_passages": []} {"query_id": "q-en-rust-1bf93e9ca147e0f74f78fbf9099a85f6395a728142c67421e3703dc116d465a2", "query": "This value has not changed in a long time, but to support sxs installation in a single dir it needs to. It should just be hashed from the value of .\nSo should we pipe the value through e.g. md5sum? That could be awkward on BSDs, where the hash programs have a . I think we can patch over the differences by adding a check to the configure script. Something like this: then in the Makefile we have: But if there's a simpler way then I'm all ears.", "positive_passages": [{"docid": "doc-en-rust-33a40b934c320512e17b315aa53c13cb07d05fef62770302dbe2b3c0a520d58c", "text": "# versions (section 9) CFG_PRERELEASE_VERSION=.1 CFG_FILENAME_EXTRA=4e7c5e5c # Append a version-dependent hash to each library, so we can install different # versions in the same place CFG_FILENAME_EXTRA=$(shell printf '%s' $(CFG_RELEASE) | $(CFG_HASH_COMMAND)) ifeq ($(CFG_RELEASE_CHANNEL),stable) # This is the normal semver version string, e.g. \"0.12.0\", \"0.12.0-nightly\"", "commid": "rust_pr_25208"}], "negative_passages": []} {"query_id": "q-en-rust-03a996701b56d499fe7c2d33b64d26e7f27808d109e3e1024559fbde7304cedf", "query": "The failure happens in when the output of the command used in the test contains cyrillic letters. Changing the system locale to English (USA) or replacing with something like makes this error go away. cc\nHm interesting! This probably isn't so much connected to locales so much that something in the environment/output isn't valid UTF-8. Could you get a backtrace of this failure? I would be surprised if this was coming from , but the test looks pretty innocuous, so it may very well be coming from std!\nThe problem may actually reside in compiletest, it captures the output of the test and then can interpret it incorrectly. 
If the test is run as a standalone program outside of the test suite then no panic happens and the test works as expected. I'm still investigating. Here's the backtrace, but it's not very helpful:\nAha! It looks like . I suspect it would be fine for to just use", "positive_passages": [{"docid": "doc-en-rust-608b9371792d9c284099bb78f03a55b040cf810f0df84aa62eca3fe5810ad254", "text": "$(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log # As above but don't bother running tidy. check-notidy: cleantmptestlogs cleantestlibs all check-stage2 check-notidy: check-sanitycheck cleantmptestlogs cleantestlibs all check-stage2 $(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log # A slightly smaller set of tests for smoke testing. check-lite: cleantestlibs cleantmptestlogs check-lite: check-sanitycheck cleantestlibs cleantmptestlogs $(foreach crate,$(TEST_TARGET_CRATES),check-stage2-$(crate)) check-stage2-rpass check-stage2-rpass-valgrind check-stage2-rfail check-stage2-cfail check-stage2-pfail check-stage2-rmake $(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log # Only check the 'reference' tests: rpass/cfail/rfail/rmake. check-ref: cleantestlibs cleantmptestlogs check-stage2-rpass check-stage2-rpass-valgrind check-stage2-rfail check-stage2-cfail check-stage2-pfail check-stage2-rmake check-ref: check-sanitycheck cleantestlibs cleantmptestlogs check-stage2-rpass check-stage2-rpass-valgrind check-stage2-rfail check-stage2-cfail check-stage2-pfail check-stage2-rmake $(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log # Only check the docs. check-docs: cleantestlibs cleantmptestlogs check-stage2-docs check-docs: check-sanitycheck cleantestlibs cleantmptestlogs check-stage2-docs $(Q)$(CFG_PYTHON) $(S)src/etc/check-summary.py tmp/*.log # Some less critical tests that are not prone to breakage.", "commid": "rust_pr_25654"}], "negative_passages": []} {"query_id": "q-en-rust-03a996701b56d499fe7c2d33b64d26e7f27808d109e3e1024559fbde7304cedf", "query": "The failure happens in when the output of the command used in the test contains cyrillic letters. Changing the system locale to English (USA) or replacing with something like makes this error go away. cc\nHm interesting! This probably isn't so much connected to locales so much that something in the environment/output isn't valid UTF-8. Could you get a backtrace of this failure? I would be surprised if this was coming from , but the test looks pretty innocuous, so it may very well be coming from std!\nThe problem may actually reside in compiletest, it captures the output of the test and then can interpret it incorrectly. If the test is run as a standalone program outside of the test suite then no panic happens and the test works as expected. I'm still investigating. Here's the backtrace, but it's not very helpful:\nAha! It looks like . I suspect it would be fine for to just use", "positive_passages": [{"docid": "doc-en-rust-10b0b1aaa87a6df07850963c92573cc451c98eaa67e69935c9138f3c714da08f", "text": "# except according to those terms. import os import subprocess import sys import functools STATUS = 0 def error_unless_permitted(env_var, message): global STATUS if not os.getenv(env_var): sys.stderr.write(message) STATUS = 1 def only_on(platforms): def decorator(func): @functools.wraps(func)", "commid": "rust_pr_25654"}], "negative_passages": []} {"query_id": "q-en-rust-03a996701b56d499fe7c2d33b64d26e7f27808d109e3e1024559fbde7304cedf", "query": "The failure happens in when the output of the command used in the test contains cyrillic letters. 
Changing the system locale to English (USA) or replacing with something like makes this error go away. cc\nHm interesting! This probably isn't so much connected to locales so much that something in the environment/output isn't valid UTF-8. Could you get a backtrace of this failure? I would be surprised if this was coming from , but the test looks pretty innocuous, so it may very well be coming from std!\nThe problem may actually reside in compiletest, it captures the output of the test and then can interpret it incorrectly. If the test is run as a standalone program outside of the test suite then no panic happens and the test works as expected. I'm still investigating. Here's the backtrace, but it's not very helpful:\nAha! It looks like . I suspect it would be fine for to just use", "positive_passages": [{"docid": "doc-en-rust-0ddc47e15e5af66665f55c0a517e81672886c2542fa7eea2ad60001108d02a93", "text": "return inner return decorator @only_on(('linux', 'darwin', 'freebsd', 'openbsd')) @only_on(['linux', 'darwin', 'freebsd', 'openbsd']) def check_rlimit_core(): import resource soft, hard = resource.getrlimit(resource.RLIMIT_CORE)", "commid": "rust_pr_25654"}], "negative_passages": []} {"query_id": "q-en-rust-03a996701b56d499fe7c2d33b64d26e7f27808d109e3e1024559fbde7304cedf", "query": "The failure happens in when the output of the command used in the test contains cyrillic letters. Changing the system locale to English (USA) or replacing with something like makes this error go away. cc\nHm interesting! This probably isn't so much connected to locales so much that something in the environment/output isn't valid UTF-8. Could you get a backtrace of this failure? I would be surprised if this was coming from , but the test looks pretty innocuous, so it may very well be coming from std!\nThe problem may actually reside in compiletest, it captures the output of the test and then can interpret it incorrectly. If the test is run as a standalone program outside of the test suite then no panic happens and the test works as expected. I'm still investigating. Here's the backtrace, but it's not very helpful:\nAha! It looks like . I suspect it would be fine for to just use", "positive_passages": [{"docid": "doc-en-rust-35a4e777b0275bac4686946b4774dfa9479ef38b978286a61b7de57c8fe6cccd", "text": "set ALLOW_NONZERO_RLIMIT_CORE to ignore this warning \"\"\" % (soft)) @only_on(['win32']) def check_console_code_page(): if '65001' not in subprocess.check_output(['cmd', '/c', 'chcp']): sys.stderr.write('Warning: the console output code page is not UTF-8, some tests may fail. Use `cmd /c \"chcp 65001\"` to setup UTF-8 code page.n') def main(): check_console_code_page() check_rlimit_core() if __name__ == '__main__':", "commid": "rust_pr_25654"}], "negative_passages": []} {"query_id": "q-en-rust-475a5325a19c86230949a9b22795b1d4323dc876f6c5f7037455395657a6e10b", "query": "Running on the following code results in a stack overflow: With significantly fewer enum variants, it works. If I remove the use of , it works. This is breaking on one of my PRs (bjz/glfw-rs).\nThis seems to be fixed.\nIn my tests, you now need to have around 460+ enum variants in order to get the stack overflow. Can we assume nobody will ever have that many?\nshould a test still be written for this? Panicking is not correct, better would be that the number of variants is limited explicitly and that this is documented. 
Better still if there is no limit, of course.\nIn theory yeah if there's something that overflowed previously and no longer does, we should have a test for that :)\nTriage: not aware of a test yet", "positive_passages": [{"docid": "doc-en-rust-8f1afcfed9240aec8a236fad7d8fcc11089c3849b77c5b1ed7e4c916aca74941", "text": " // Copyright (c) 2015 Anders Kaseorg // Permission is hereby granted, free of charge, to any person obtaining // a copy of this software and associated documentation files (the // \u201cSoftware\u201d), to deal in the Software without restriction, including // without limitation the rights to use, copy, modify, merge, publish, // distribute, sublicense, and/or sell copies of the Software, and to // permit persons to whom the Software is furnished to do so, subject to // the following conditions: // The above copyright notice and this permission notice shall be // included in all copies or substantial portions of the Software. // THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, // EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF // MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. // IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY // CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, // TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE // SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. //! This crate exports a macro `enum_from_primitive!` that wraps an //! `enum` declaration and automatically adds an implementation of //! `num::FromPrimitive` (reexported here), to allow conversion from //! primitive integers to the enum. It therefore provides an //! alternative to the built-in `#[derive(FromPrimitive)]`, which //! requires the unstable `std::num::FromPrimitive` and is disabled in //! Rust 1.0. //! //! # Example //! //! ``` //! #[macro_use] extern crate enum_primitive; //! extern crate num_traits; //! use num_traits::FromPrimitive; //! //! enum_from_primitive! { //! #[derive(Debug, PartialEq)] //! enum FooBar { //! Foo = 17, //! Bar = 42, //! Baz, //! } //! } //! //! fn main() { //! assert_eq!(FooBar::from_i32(17), Some(FooBar::Foo)); //! assert_eq!(FooBar::from_i32(42), Some(FooBar::Bar)); //! assert_eq!(FooBar::from_i32(43), Some(FooBar::Baz)); //! assert_eq!(FooBar::from_i32(91), None); //! } //! ``` pub mod num_traits { pub trait FromPrimitive: Sized { fn from_i64(n: i64) -> Option; fn from_u64(n: u64) -> Option; } } pub use std::option::Option; pub use num_traits::FromPrimitive; /// Helper macro for internal use by `enum_from_primitive!`. #[macro_export] macro_rules! enum_from_primitive_impl_ty { ($meth:ident, $ty:ty, $name:ident, $( $variant:ident )*) => { #[allow(non_upper_case_globals, unused)] fn $meth(n: $ty) -> $crate::Option { $( if n == $name::$variant as $ty { $crate::Option::Some($name::$variant) } else )* { $crate::Option::None } } }; } /// Helper macro for internal use by `enum_from_primitive!`. #[macro_export] #[macro_use(enum_from_primitive_impl_ty)] macro_rules! enum_from_primitive_impl { ($name:ident, $( $variant:ident )*) => { impl $crate::FromPrimitive for $name { enum_from_primitive_impl_ty! { from_i64, i64, $name, $( $variant )* } enum_from_primitive_impl_ty! { from_u64, u64, $name, $( $variant )* } } }; } /// Wrap this macro around an `enum` declaration to get an /// automatically generated implementation of `num::FromPrimitive`. #[macro_export] #[macro_use(enum_from_primitive_impl)] macro_rules! 
enum_from_primitive { ( $( #[$enum_attr:meta] )* enum $name:ident { $( $( #[$variant_attr:meta] )* $variant:ident ),+ $( = $discriminator:expr, $( $( #[$variant_two_attr:meta] )* $variant_two:ident ),+ )* } ) => { $( #[$enum_attr] )* enum $name { $( $( #[$variant_attr] )* $variant ),+ $( = $discriminator, $( $( #[$variant_two_attr] )* $variant_two ),+ )* } enum_from_primitive_impl! { $name, $( $variant )+ $( $( $variant_two )+ )* } }; ( $( #[$enum_attr:meta] )* enum $name:ident { $( $( $( #[$variant_attr:meta] )* $variant:ident ),+ = $discriminator:expr ),* } ) => { $( #[$enum_attr] )* enum $name { $( $( $( #[$variant_attr] )* $variant ),+ = $discriminator ),* } enum_from_primitive_impl! { $name, $( $( $variant )+ )* } }; ( $( #[$enum_attr:meta] )* enum $name:ident { $( $( #[$variant_attr:meta] )* $variant:ident ),+ $( = $discriminator:expr, $( $( #[$variant_two_attr:meta] )* $variant_two:ident ),+ )*, } ) => { $( #[$enum_attr] )* enum $name { $( $( #[$variant_attr] )* $variant ),+ $( = $discriminator, $( $( #[$variant_two_attr] )* $variant_two ),+ )*, } enum_from_primitive_impl! { $name, $( $variant )+ $( $( $variant_two )+ )* } }; ( $( #[$enum_attr:meta] )* enum $name:ident { $( $( $( #[$variant_attr:meta] )* $variant:ident ),+ = $discriminator:expr ),+, } ) => { $( #[$enum_attr] )* enum $name { $( $( $( #[$variant_attr] )* $variant ),+ = $discriminator ),+, } enum_from_primitive_impl! { $name, $( $( $variant )+ )+ } }; ( $( #[$enum_attr:meta] )* pub enum $name:ident { $( $( #[$variant_attr:meta] )* $variant:ident ),+ $( = $discriminator:expr, $( $( #[$variant_two_attr:meta] )* $variant_two:ident ),+ )* } ) => { $( #[$enum_attr] )* pub enum $name { $( $( #[$variant_attr] )* $variant ),+ $( = $discriminator, $( $( #[$variant_two_attr] )* $variant_two ),+ )* } enum_from_primitive_impl! { $name, $( $variant )+ $( $( $variant_two )+ )* } }; ( $( #[$enum_attr:meta] )* pub enum $name:ident { $( $( $( #[$variant_attr:meta] )* $variant:ident ),+ = $discriminator:expr ),* } ) => { $( #[$enum_attr] )* pub enum $name { $( $( $( #[$variant_attr] )* $variant ),+ = $discriminator ),* } enum_from_primitive_impl! { $name, $( $( $variant )+ )* } }; ( $( #[$enum_attr:meta] )* pub enum $name:ident { $( $( #[$variant_attr:meta] )* $variant:ident ),+ $( = $discriminator:expr, $( $( #[$variant_two_attr:meta] )* $variant_two:ident ),+ )*, } ) => { $( #[$enum_attr] )* pub enum $name { $( $( #[$variant_attr] )* $variant ),+ $( = $discriminator, $( $( #[$variant_two_attr] )* $variant_two ),+ )*, } enum_from_primitive_impl! { $name, $( $variant )+ $( $( $variant_two )+ )* } }; ( $( #[$enum_attr:meta] )* pub enum $name:ident { $( $( $( #[$variant_attr:meta] )* $variant:ident ),+ = $discriminator:expr ),+, } ) => { $( #[$enum_attr] )* pub enum $name { $( $( $( #[$variant_attr] )* $variant ),+ = $discriminator ),+, } enum_from_primitive_impl! { $name, $( $( $variant )+ )+ } }; } ", "commid": "rust_pr_55568"}], "negative_passages": []} {"query_id": "q-en-rust-475a5325a19c86230949a9b22795b1d4323dc876f6c5f7037455395657a6e10b", "query": "Running on the following code results in a stack overflow: With significantly fewer enum variants, it works. If I remove the use of , it works. This is breaking on one of my PRs (bjz/glfw-rs).\nThis seems to be fixed.\nIn my tests, you now need to have around 460+ enum variants in order to get the stack overflow. Can we assume nobody will ever have that many?\nshould a test still be written for this? 
Panicking is not correct, better would be that the number of variants is limited explicitly and that this is documented. Better still if there is no limit, of course.\nIn theory yeah if there's something that overflowed previously and no longer does, we should have a test for that :)\nTriage: not aware of a test yet", "positive_passages": [{"docid": "doc-en-rust-dfa4102ff6222a71a7b89173e991bf2fab12373c7968af4202317b53d7b80716", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // ensure this code doesn't stack overflow // aux-build:enum_primitive.rs #[macro_use] extern crate enum_primitive; enum_from_primitive! { pub enum Test { A1,A2,A3,A4,A5,A6, B1,B2,B3,B4,B5,B6, C1,C2,C3,C4,C5,C6, D1,D2,D3,D4,D5,D6, E1,E2,E3,E4,E5,E6, F1,F2,F3,F4,F5,F6, G1,G2,G3,G4,G5,G6, H1,H2,H3,H4,H5,H6, I1,I2,I3,I4,I5,I6, J1,J2,J3,J4,J5,J6, K1,K2,K3,K4,K5,K6, L1,L2,L3,L4,L5,L6, M1,M2,M3,M4,M5,M6, N1,N2,N3,N4,N5,N6, O1,O2,O3,O4,O5,O6, P1,P2,P3,P4,P5,P6, Q1,Q2,Q3,Q4,Q5,Q6, R1,R2,R3,R4,R5,R6, S1,S2,S3,S4,S5,S6, T1,T2,T3,T4,T5,T6, U1,U2,U3,U4,U5,U6, V1,V2,V3,V4,V5,V6, W1,W2,W3,W4,W5,W6, X1,X2,X3,X4,X5,X6, Y1,Y2,Y3,Y4,Y5,Y6, Z1,Z2,Z3,Z4,Z5,Z6, } } ", "commid": "rust_pr_55568"}], "negative_passages": []} {"query_id": "q-en-rust-f4f3208c8141bf3dc83b547ed0e304313cef732d53c9ce0eca432a802f18cca4", "query": "The rust book currently links to, an old beta version of rust, causing some problems using the tutorial. Example: Dining philosophers will not compile.\nWhat's your rust version?\nAccording to the installer: 1.0.0-beta. On windows x64.\nIt compiles on 1.0.0-beta.2 and up. You should update your compiler.\nI believe is right here, so closing, but thanks for the report!", "positive_passages": [{"docid": "doc-en-rust-980d16551535058b252144ed55933a83736a76332482d562b53ba2ec5a35e00d", "text": "If you're on Windows, please download either the [32-bit installer][win32] or the [64-bit installer][win64] and run it. [win32]: https://static.rust-lang.org/dist/rust-1.0.0-beta-i686-pc-windows-gnu.msi [win64]: https://static.rust-lang.org/dist/rust-1.0.0-beta-x86_64-pc-windows-gnu.msi [win32]: https://static.rust-lang.org/dist/rust-1.0.0-i686-pc-windows-gnu.msi [win64]: https://static.rust-lang.org/dist/rust-1.0.0-x86_64-pc-windows-gnu.msi ## Uninstalling", "commid": "rust_pr_25493"}], "negative_passages": []} {"query_id": "q-en-rust-f4f3208c8141bf3dc83b547ed0e304313cef732d53c9ce0eca432a802f18cca4", "query": "The rust book currently links to, an old beta version of rust, causing some problems using the tutorial. Example: Dining philosophers will not compile.\nWhat's your rust version?\nAccording to the installer: 1.0.0-beta. On windows x64.\nIt compiles on 1.0.0-beta.2 and up. You should update your compiler.\nI believe is right here, so closing, but thanks for the report!", "positive_passages": [{"docid": "doc-en-rust-bf030d006d69ecd5b70d294b68229391c62cc6d24e30b2c599fac96eea97ab60", "text": "You should see the version number, commit hash, commit date and build date: ```bash rustc 1.0.0-beta (9854143cb 2015-04-02) (built 2015-04-02) rustc 1.0.0 (a59de37e9 2015-05-13) (built 2015-05-14) ``` If you did, Rust has been installed successfully! 
Congrats!", "commid": "rust_pr_25493"}], "negative_passages": []} {"query_id": "q-en-rust-2f8389d00cc4063d52ef74f0014ee9b03b9629fc81ed8d44a31aec0c635f5341", "query": "Here's the problematic piece: On my machine usually can't delete file with \"access denied\" error. If I insert something like before then the test consistently passes. Supposedly we are trying to delete it too soon when it's still locked after execution.", "positive_passages": [{"docid": "doc-en-rust-fc3e28979841059d20aeac5ff9b8453d8eab8718224f166cb7e9f735fa1fd299", "text": "str::from_utf8(&child_output.stdout).unwrap(), str::from_utf8(&child_output.stderr).unwrap())); fs::remove_dir_all(&child_dir).unwrap(); let res = fs::remove_dir_all(&child_dir); if res.is_err() { // On Windows deleting just executed mytest.exe can fail because it's still locked std::thread::sleep_ms(1000); fs::remove_dir_all(&child_dir).unwrap(); } }", "commid": "rust_pr_25615"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-867a2c298f618b229d8cea56d1238c0cc3aaf7a673d999e9dae2859b30dcdf98", "text": ".TP fBRUST_TEST_THREADSfR The test framework Rust provides executes tests in parallel. This variable sets the maximum number of threads used for this purpose. the maximum number of threads used for this purpose. This setting is overridden by the --test-threads option. .TP fBRUST_TEST_NOCAPTUREfR", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. 
On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-7d46f3f64749588256cc7aaa9b2d1d64668f4b689bc13d3999f40bf102f44871", "text": "pub nocapture: bool, pub color: ColorConfig, pub quiet: bool, pub test_threads: Option, } impl TestOpts {", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-0cda0587325a7c86549a45eed913eb6ab5410083be5f88baef35f29a325442b7", "text": "nocapture: false, color: AutoColor, quiet: false, test_threads: None, } } }", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. 
On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-05f89f5104e0b34409524693a9a97ae36e32e045e5d470bb7134719c6b6e97a0", "text": "of stdout\", \"PATH\"), getopts::optflag(\"\", \"nocapture\", \"don't capture stdout/stderr of each task, allow printing directly\"), getopts::optopt(\"\", \"test-threads\", \"Number of threads used for running tests in parallel\", \"n_threads\"), getopts::optflag(\"q\", \"quiet\", \"Display one character per test instead of one line\"), getopts::optopt(\"\", \"color\", \"Configure coloring of output: auto = colorize if stdout is a tty and tests are run on serially (default);", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-52f05871826fb43180317b06ced45dece41da74bf3de4f75359b2a6f9e191b7c", "text": "tests whose names contain the filter are run. By default, all tests are run in parallel. This can be altered with the RUST_TEST_THREADS environment variable when running tests (set it to 1). --test-threads flag or the RUST_TEST_THREADS environment variable when running tests (set it to 1). All tests have their standard output and standard error captured by default. This can be overridden with the --nocapture flag or setting RUST_TEST_NOCAPTURE", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. 
On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-e75ae8e3a5e916f9aeb0fdd54640f3dffc5d88ab5f199543ff4b482adee5c4da", "text": "}; } let test_threads = match matches.opt_str(\"test-threads\") { Some(n_str) => match n_str.parse::() { Ok(n) => Some(n), Err(e) => return Some(Err(format!(\"argument for --test-threads must be a number > 0 (error: {})\", e))) }, None => None, }; let color = match matches.opt_str(\"color\").as_ref().map(|s| &**s) { Some(\"auto\") | None => AutoColor, Some(\"always\") => AlwaysColor,", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-c765050c3098f8c66a3a0dd0d8747dd1166c534f8766d748c0788521841c75af", "text": "nocapture: nocapture, color: color, quiet: quiet, test_threads: test_threads, }; Some(Ok(test_opts))", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-1a223a49905840ab7cb2d7456d1be75ed030f47e9fb76a5048086eef8516f1b7", "text": "} }); // It's tempting to just spawn all the tests at once, but since we have // many tests that run in other processes we would be making a big mess. 
let concurrency = get_concurrency(); let concurrency = match opts.test_threads { Some(n) => n, None => get_concurrency(), }; let mut remaining = filtered_tests; remaining.reverse();", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-e5d6fdabe900d869186bfab8e6e5e5c604f84a1b289afe452d00b0c5ea34b431", "query": "Discussion carried over from: rust-lang/cargo Test binaries created by rustc should have a option instead of (in addition to?) controlling the thread count with . It's not a consistent user experience to have some functionality controlled via environment variables and some via command line options. CLI options should be preferred since they encourage users to keep configuration local to a single execution of a test binary and not export it globally, making it affect other test runs perhaps accidentally. It would also make the output of more uniform and compact.\nAgreed.\nThe ability to specify this in the somehow might be nice. One of my crates I'm working on needs all the tests run like this.\nIt is already set table by an env var, no? Cargo just can't specify them. So still not ideal but might work in the meantime. On Apr 16, 2016, 19:05 -0400, Nathan , wrote:\nOh cool thanks, I'll set that in my project.", "positive_passages": [{"docid": "doc-en-rust-d28bb243a8b0b40620fbb519a5381c7f3fec3b12985cb86f77cba7667a9d43a7", "text": "Err(_) => false }, color: test::AutoColor, test_threads: None, } }", "commid": "rust_pr_35414"}], "negative_passages": []} {"query_id": "q-en-rust-aba2809e5798ce610de3adfd239181cd23ac9200a112f108f4d78f0ba551eec2", "query": "I got the message above,how to fix it?\nUrgh. This should never have gotten through the CI bots. Some rules must be being built by the install target that aren't under the normal build.\nI will try to look into this today while I'm investigating other nightly breakage.\nI assume this is being introduced by the doc comment I wrote here: So the two bugs here are: the above doc comment in dropck. (Presumably just replace the indents with nested bullets.) rules do we need to change to synchronize with the CI bots (to catch this kind of case in the future) -- presumably its happening because attempts to build documentation for rust itself ... which sounds a lot like good old: ; see also\nI posted a PR to stop building compiler docs. It does not fix the bug in the docs though.\nFixed.\nEr, oh yeah, the underlying problem is not fixed, that the docs have broken examples.\nSigh, actually, I'll call this fixed since the title of the bug is about install failure and that works now. 
Broken examples in the rustc crate are nbd to me atm.\nJust downloaded a fresh install from git and ran on a virgin arch linux box running on AWS and getting this error again cfg: using CC=gcc (CFGCC) cfg: disabling valgrind run-pass tests cfg: including prepare rules cfg: including dist rules cfg: including install rules cleaning destination tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/bin prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib/rustlib/etc prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/share/man/man1 rustc: x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-gnu/lib/librustc recipe for target 'x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-' failed make[1]: [x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-] Killed make[1]: Leaving directory '/root/Downloads/rust' recipe for target 'install' failed make: [install] Error 2 any ideas", "positive_passages": [{"docid": "doc-en-rust-7265c61777a4d68a9065440f846a8ae0f069203bc7b56f089c6b5980db02f946", "text": "dist-install-dir-$(1): PREPARE_LIB_CMD=$(DEFAULT_PREPARE_LIB_CMD) dist-install-dir-$(1): PREPARE_MAN_CMD=$(DEFAULT_PREPARE_MAN_CMD) dist-install-dir-$(1): PREPARE_CLEAN=true dist-install-dir-$(1): prepare-base-dir-$(1) docs compiler-docs dist-install-dir-$(1): prepare-base-dir-$(1) docs $$(Q)mkdir -p $$(PREPARE_DEST_DIR)/share/doc/rust $$(Q)$$(PREPARE_MAN_CMD) $$(S)COPYRIGHT $$(PREPARE_DEST_DIR)/share/doc/rust $$(Q)$$(PREPARE_MAN_CMD) $$(S)LICENSE-APACHE $$(PREPARE_DEST_DIR)/share/doc/rust", "commid": "rust_pr_25717"}], "negative_passages": []} {"query_id": "q-en-rust-aba2809e5798ce610de3adfd239181cd23ac9200a112f108f4d78f0ba551eec2", "query": "I got the message above,how to fix it?\nUrgh. This should never have gotten through the CI bots. Some rules must be being built by the install target that aren't under the normal build.\nI will try to look into this today while I'm investigating other nightly breakage.\nI assume this is being introduced by the doc comment I wrote here: So the two bugs here are: the above doc comment in dropck. (Presumably just replace the indents with nested bullets.) rules do we need to change to synchronize with the CI bots (to catch this kind of case in the future) -- presumably its happening because attempts to build documentation for rust itself ... which sounds a lot like good old: ; see also\nI posted a PR to stop building compiler docs. It does not fix the bug in the docs though.\nFixed.\nEr, oh yeah, the underlying problem is not fixed, that the docs have broken examples.\nSigh, actually, I'll call this fixed since the title of the bug is about install failure and that works now. 
Broken examples in the rustc crate are nbd to me atm.\nJust downloaded a fresh install from git and ran on a virgin arch linux box running on AWS and getting this error again cfg: using CC=gcc (CFGCC) cfg: disabling valgrind run-pass tests cfg: including prepare rules cfg: including dist rules cfg: including install rules cleaning destination tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/bin prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib/rustlib/etc prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/share/man/man1 rustc: x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-gnu/lib/librustc recipe for target 'x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-' failed make[1]: [x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-] Killed make[1]: Leaving directory '/root/Downloads/rust' recipe for target 'install' failed make: [install] Error 2 any ideas", "positive_passages": [{"docid": "doc-en-rust-e0d6baa0c9addd5f46c2915f88e643964fae78a24fae9b0045ece7fb89724584", "text": "--legacy-manifest-dirs=rustlib,cargo $$(Q)rm -R tmp/dist/$$(PKG_NAME)-$(1)-image dist-doc-install-dir-$(1): docs compiler-docs dist-doc-install-dir-$(1): docs $$(Q)mkdir -p tmp/dist/$$(DOC_PKG_NAME)-$(1)-image/share/doc/rust $$(Q)cp -r doc tmp/dist/$$(DOC_PKG_NAME)-$(1)-image/share/doc/rust/html", "commid": "rust_pr_25717"}], "negative_passages": []} {"query_id": "q-en-rust-aba2809e5798ce610de3adfd239181cd23ac9200a112f108f4d78f0ba551eec2", "query": "I got the message above,how to fix it?\nUrgh. This should never have gotten through the CI bots. Some rules must be being built by the install target that aren't under the normal build.\nI will try to look into this today while I'm investigating other nightly breakage.\nI assume this is being introduced by the doc comment I wrote here: So the two bugs here are: the above doc comment in dropck. (Presumably just replace the indents with nested bullets.) rules do we need to change to synchronize with the CI bots (to catch this kind of case in the future) -- presumably its happening because attempts to build documentation for rust itself ... which sounds a lot like good old: ; see also\nI posted a PR to stop building compiler docs. It does not fix the bug in the docs though.\nFixed.\nEr, oh yeah, the underlying problem is not fixed, that the docs have broken examples.\nSigh, actually, I'll call this fixed since the title of the bug is about install failure and that works now. 
Broken examples in the rustc crate are nbd to me atm.\nJust downloaded a fresh install from git and ran on a virgin arch linux box running on AWS and getting this error again cfg: using CC=gcc (CFGCC) cfg: disabling valgrind run-pass tests cfg: including prepare rules cfg: including dist rules cfg: including install rules cleaning destination tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/bin prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/lib/rustlib/etc prepare: tmp/dist/rustc-1.5.0-dev-x8664-unknown-linux-gnu-image/share/man/man1 rustc: x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-gnu/lib/librustc recipe for target 'x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-' failed make[1]: [x8664-unknown-linux-gnu/stage0/lib/rustlib/x8664-unknown-linux-] Killed make[1]: Leaving directory '/root/Downloads/rust' recipe for target 'install' failed make: [install] Error 2 any ideas", "positive_passages": [{"docid": "doc-en-rust-c0d4605753e0b6c0dabd7d47abd674b6740317e14b49dfff254d8d7e219086b4", "text": "# Just copy the docs to a folder under dist with the appropriate name # for uploading to S3 dist-docs: docs compiler-docs dist-docs: docs $(Q) rm -Rf dist/doc $(Q) mkdir -p dist/doc/ $(Q) cp -r doc dist/doc/$(CFG_PACKAGE_VERS)", "commid": "rust_pr_25717"}], "negative_passages": []} {"query_id": "q-en-rust-7825406f61b0d363474f1baedea4566ef74e2063a49d7dd4829d1c8ae6d52e4b", "query": "Currently there isn't documentation explaining how the character works\nAre you referring to its usage or implementation. Currently the book provides a very brief description of what _ does, however I could write up something slightly more detailed if desired. (see below):\nHmm, the conversation that starts here might be interesting", "positive_passages": [{"docid": "doc-en-rust-3f6b1c2293b488207f92f263f7339566c369307012f3f51fb0570404f0fa1a60", "text": "[tuples]: primitive-types.html#tuples [enums]: enums.html # Ignoring bindings You can use `_` in a pattern to disregard the value. For example, here\u2019s a `match` against a `Result`: ```rust # let some_value: Result = Err(\"There was an error\"); match some_value { Ok(value) => println!(\"got a value: {}\", value), Err(_) => println!(\"an error occurred\"), } ``` In the first arm, we bind the value inside the `Ok` variant to `value`. But in the `Err` arm, we use `_` to disregard the specific error, and just print a general error message. `_` is valid in any pattern that creates a binding. This can be useful to ignore parts of a larger structure: ```rust fn coordinate() -> (i32, i32, i32) { // generate and return some sort of triple tuple # (1, 2, 3) } let (x, _, z) = coordinate(); ``` Here, we bind the first and last element of the tuple to `x` and `z`, but ignore the middle element. # Mix and Match Whew! That\u2019s a lot of different ways to match things, and they can all be", "commid": "rust_pr_26827"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. 
Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-1b295570eed3794af18f3d83e1b73df5346319413392222509122aab1d3ae319", "text": "use build_helper::output; #[cfg(not(target_os = \"solaris\"))] const SH_CMD: &'static str = \"sh\"; // On Solaris, sh is the historical bourne shell, not a POSIX shell, or bash. #[cfg(target_os = \"solaris\")] const SH_CMD: &'static str = \"bash\"; use {Build, Compiler, Mode}; use util::{cp_r, libdir, is_dylib, cp_filtered, copy};", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-45e30ca4485191aafeabb038a9b47106e1c843316f98705c246de12ad04286e6", "text": "let src = build.out.join(host).join(\"doc\"); cp_r(&src, &dst); let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust-Documentation\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. 
On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-7edc7ea3577dcbd0fb7102a4837f4b29f74ff31f856e0553f2995fd7f983e3b9", "text": ".arg(host); build.run(&mut cmd); let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust-MinGW\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-1efb29c704e18d4c9d8036c4b3edd1571347dd28bf16e02b81fb6cd06eee3d01", "text": "} // Finally, wrap everything up in a nice tarball! let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. 
The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-be39f677f933f657953bccf622ec2c42f60324df4bd021b7a5636cfb278cbd7e", "text": "let src = build.sysroot(compiler).join(\"lib/rustlib\"); cp_r(&src.join(target), &dst); let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-c79a1c03086cbc5084e70c0cf2ce749ab762b65da229b2cd12d0b125322d0e9f", "text": "let image_src = src.join(\"save-analysis\"); let dst = image.join(\"lib/rustlib\").join(target).join(\"analysis\"); t!(fs::create_dir_all(&dst)); println!(\"image_src: {:?}, dst: {:?}\", image_src, dst); cp_r(&image_src, &dst); let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. 
The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-3842a4aabcb125995163f606a8707fce68e57982c2897f418a6d2792d8d42f55", "text": "build.run(&mut cmd); // Create source tarball in rust-installer format let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/gen-installer.sh\"))) .arg(\"--product-name=Rust\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-e6f363d6fe23aa131eac348cae6fb72336ba57c3126bd08edbf2dd916608d54b", "query": "The configure script begins with: This works on machines where bash implements its own version of sh. But fails on Solaris where sh and bash are separated programs. To fix, I changed that first line into: And after that it ran.\ncf. .\nIf the problem here is that we are using bashisms in the configure script, I would rather fix that.\nworks for me today. Was this issue fixed? I remember some other issues...\nworks, but the installer has the same issue. For this reason, doesn't work on Solaris.\nAccording to , as one example, gen- is using bashsims or other shell extensions: But in all honesty, I don't think it's worth it. I would encourage you to either rewrite the script in Python (such that it is compatible with Python 2.x and 3.x) or simply change the scripts to use instead. On Solaris, is the traditional bourne shell, not bash, and is not even POSIX compliant. This is for backwards-compatibility reasons, among others. The posix shell is .", "positive_passages": [{"docid": "doc-en-rust-61e3194d1c023e8f2af453bebf6d80da062ab6e4a0c50f742f3c0bcf2293e6f7", "text": "input_tarballs.push_str(&sanitize_sh(&mingw_installer)); } let mut cmd = Command::new(\"sh\"); let mut cmd = Command::new(SH_CMD); cmd.arg(sanitize_sh(&build.src.join(\"src/rust-installer/combine-installers.sh\"))) .arg(\"--product-name=Rust\") .arg(\"--rel-manifest-dir=rustlib\")", "commid": "rust_pr_39857"}], "negative_passages": []} {"query_id": "q-en-rust-1c991b5c564ed604216ff43be7e7d38357707d0f1cecc2688e941a28e14a8b0d", "query": "In it could be useful to explain the and syntax for getting a substring slice. Looking for a way to get a substring reference one is currently with the str API documentation which is quite a lot to digest and if one manages to find the unstable slice_chars(...) it says \"Use slicing syntax if you want to use byte indices rather than codepoint indices\", but what \"slicing syntax\" means will probably not be clear to someone who is new to Rust and not yet familiar with the Index trait.", "positive_passages": [{"docid": "doc-en-rust-c8ff925b7c45e9222d9052aafac3aa2ab2f64baa305c5a977e8a0dc987b56173", "text": "This emphasizes that we have to go through the whole list of `chars`. ## Slicing You can get a slice of a string with slicing syntax: ```rust let dog = \"hachiko\"; let hachi = &dog[0..5]; ``` But note that these are _byte_ offsets, not _character_ offsets. So this will fail at runtime: ```rust,should_panic let dog = \"\u5fe0\u72ac\u30cf\u30c1\u516c\"; let hachi = &dog[0..2]; ``` with this error: ```text thread '
' panicked at 'index 0 and/or 2 in `\u5fe0\u72ac\u30cf\u30c1\u516c` do not lie on character boundary' ``` ## Concatenation If you have a `String`, you can concatenate a `&str` to the end of it:", "commid": "rust_pr_26145"}], "negative_passages": []} {"query_id": "q-en-rust-89299611c40c6a630029b4e4db8133f5064f990c41b7591912d9760f1a85d5e0", "query": "From , (I assume identical to ) should report nanosecond times as or , not . This reportedly would require fixing in libtest.\nI posted a PR for this yesterday. Well I chose to round & use units, but you can opine either way\nnice! If you'll just tag that as fixing this then bors will take care of the rest.", "positive_passages": [{"docid": "doc-en-rust-4d0314f589e1b2147f1df3961587bd01da7b29821774c3acfba089705ac693aa", "text": "} } // Format a number with thousands separators fn fmt_thousands_sep(mut n: usize, sep: char) -> String { use std::fmt::Write; let mut output = String::new(); let mut first = true; for &pow in &[9, 6, 3, 0] { let base = 10_usize.pow(pow); if pow == 0 || n / base != 0 { if first { output.write_fmt(format_args!(\"{}\", n / base)).unwrap(); } else { output.write_fmt(format_args!(\"{:03}\", n / base)).unwrap(); } if pow != 0 { output.push(sep); } first = false; } n %= base; } output } pub fn fmt_bench_samples(bs: &BenchSamples) -> String { use std::fmt::Write; let mut output = String::new(); let median = bs.ns_iter_summ.median as usize; let deviation = (bs.ns_iter_summ.max - bs.ns_iter_summ.min) as usize; output.write_fmt(format_args!(\"{:>11} ns/iter (+/- {})\", fmt_thousands_sep(median, ','), fmt_thousands_sep(deviation, ','))).unwrap(); if bs.mb_s != 0 { format!(\"{:>9} ns/iter (+/- {}) = {} MB/s\", bs.ns_iter_summ.median as usize, (bs.ns_iter_summ.max - bs.ns_iter_summ.min) as usize, bs.mb_s) } else { format!(\"{:>9} ns/iter (+/- {})\", bs.ns_iter_summ.median as usize, (bs.ns_iter_summ.max - bs.ns_iter_summ.min) as usize) output.write_fmt(format_args!(\" = {} MB/s\", bs.mb_s)).unwrap(); } output } // A simple console test runner", "commid": "rust_pr_26068"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-fc4f84c06c1c68f95044a5e6a3084b8a0e79408bae86f6b7ee66a5e010bd28bf", "text": "branches: - 'master' schedule: - cron: '5 15 * * *' # At 15:05 UTC every day. - cron: '6 6 * * *' # At 6:06 UTC every day. env: CARGO_UNSTABLE_SPARSE_REGISTRY: 'true'", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. 
Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-d9b770191cb5e7a5c1ec3bac5b9b42e175df06aafd7d02f9b30c0b8d043c83d1", "text": "strategy: fail-fast: false matrix: build: [linux64, macos, win32] include: - build: linux64 os: ubuntu-latest - os: ubuntu-latest host_target: x86_64-unknown-linux-gnu - build: macos os: macos-latest - os: macos-latest host_target: x86_64-apple-darwin - build: win32 os: windows-latest - os: windows-latest host_target: i686-pc-windows-msvc steps: - uses: actions/checkout@v3", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-25dee9d104e605e8bddf1ccfbd09218807aee99dcb2ed7270325509b5612c6ef", "text": ";; i686-pc-windows-msvc) MIRI_TEST_TARGET=x86_64-unknown-linux-gnu run_tests MIRI_TEST_TARGET=x86_64-pc-windows-gnu run_tests ;; *) echo \"FATAL: unknown OS\"", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-b28fc8d65873fada6c4fe2352a13caca182208dd1552ba72dd1259a8bb99c67e", "text": "mir, ty::{self, FloatTy, Ty}, }; use rustc_target::abi::Integer; use rustc_target::abi::{Integer, Size}; use crate::*; use atomic::EvalContextExt as _;", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-c0b676710a0a2cfa874aad330713591dc9b5027e608e1ca8010ba064c1c3ab24", "text": "this.write_bytes_ptr(ptr, iter::repeat(val_byte).take(byte_count.bytes_usize()))?; } \"ptr_mask\" => { let [ptr, mask] = check_arg_count(args)?; let ptr = this.read_pointer(ptr)?; let mask = this.read_scalar(mask)?.to_machine_usize(this)?; let masked_addr = Size::from_bytes(ptr.addr().bytes() & mask); this.write_pointer(Pointer::new(ptr.provenance, masked_addr), dest)?; } // Floating-point operations \"fabsf32\" => { let [f] = check_arg_count(args)?;", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. 
Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-ed8d7ac91097fd1579e64cf9c6f50c0e21de6411f72296baaeeaa141f040dce5", "text": "let [name] = this.check_shim(abi, Abi::C { unwind: false }, link_name, args)?; let thread = this.pthread_self()?; let max_len = this.eval_libc(\"MAXTHREADNAMESIZE\")?.to_machine_usize(this)?; this.pthread_setname_np( let res = this.pthread_setname_np( thread, this.read_scalar(name)?, max_len.try_into().unwrap(), )?; // Contrary to the manpage, `pthread_setname_np` on macOS still // returns an integer indicating success. this.write_scalar(res, dest)?; } \"pthread_getname_np\" => { let [thread, name, len] =", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-5bff63534bdcded8675498aba84f3ee62bc3425c0665984bdd60332ae407645f", "text": "//@ignore-target-windows: No libc on Windows #![feature(cstr_from_bytes_until_nul)] use std::ffi::CStr; use std::ffi::{CStr, CString}; use std::thread; fn main() {", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-4b6f60e842f45fedd49595494f6a25367a8fcc112672ab80bcc6532a02611451", "text": ".chain(std::iter::repeat(\" yada\").take(100)) .collect::(); fn set_thread_name(name: &CStr) -> i32 { #[cfg(target_os = \"linux\")] return unsafe { libc::pthread_setname_np(libc::pthread_self(), name.as_ptr().cast()) }; #[cfg(target_os = \"macos\")] return unsafe { libc::pthread_setname_np(name.as_ptr().cast()) }; } let result = thread::Builder::new().name(long_name.clone()).spawn(move || { // Rust remembers the full thread name itself. assert_eq!(thread::current().name(), Some(long_name.as_str()));", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-20b7de66b44428d40124adf75d6e6689e7e28e588d28d49957d8c4a27ba332a7", "text": "// But the system is limited -- make sure we successfully set a truncation. 
let mut buf = vec![0u8; long_name.len() + 1]; unsafe { libc::pthread_getname_np(libc::pthread_self(), buf.as_mut_ptr().cast(), buf.len()); } libc::pthread_getname_np(libc::pthread_self(), buf.as_mut_ptr().cast(), buf.len()) }; let cstr = CStr::from_bytes_until_nul(&buf).unwrap(); assert!(cstr.to_bytes().len() >= 15); // POSIX seems to promise at least 15 chars assert!(long_name.as_bytes().starts_with(cstr.to_bytes())); // Also test directly calling pthread_setname to check its return value. assert_eq!(set_thread_name(&cstr), 0); // But with a too long name it should fail. assert_ne!(set_thread_name(&CString::new(long_name).unwrap()), 0); }); result.unwrap().join().unwrap(); }", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-11385ee46508272812a397bac8d2ecd1a3585e746aa8a6db29a5727a6925105c", "query": "A comment suggests the type should be something richer, perhaps, than just .\nBlocked on or something similar\nDefinitely not backwards incompatible for 1.0 if this depends on Rust 2.0 features. Renominating.\nNominating for \"far-future\" (does that require nomination?)\nFIXME is gone, bug is too vague, closing", "positive_passages": [{"docid": "doc-en-rust-18d3a807f0f67c8e30a2fa67453566c7100ecd999565a0ba85a30b89eff4268d", "text": " #![feature(ptr_mask)] #![feature(strict_provenance)] fn main() { let v: u32 = 0xABCDABCD; let ptr: *const u32 = &v; // u32 is 4 aligned, // so the lower `log2(4) = 2` bits of the address are always 0 assert_eq!(ptr.addr() & 0b11, 0); let tagged_ptr = ptr.map_addr(|a| a | 0b11); let tag = tagged_ptr.addr() & 0b11; let masked_ptr = tagged_ptr.mask(!0b11); assert_eq!(tag, 0b11); assert_eq!(unsafe { *masked_ptr }, 0xABCDABCD); } ", "commid": "rust_pr_103721"}], "negative_passages": []} {"query_id": "q-en-rust-ddb38879f63a8b8a5c9b0491dd1076c33bdf77ce336efa3345a6965596d6fc21", "query": "Atomics are a language feature that the compiler (LLVM) has to have a deep understanding of in its optimization passes. Send and Sync are also opt-in-built-in-traits, which makes them a pseudo-language construct.\nDoes this mean Atomics should be mentioned in the Language Reference? What about Send and Sync? Currently, only Sync is mentioned, and that only in passing.", "positive_passages": [{"docid": "doc-en-rust-c4689ce0662fdf38650c4d7d45a5030afac36bf7bfe7b0f33758af7b099c153e", "text": "concurrent code at compile time. Before we talk about the concurrency features that come with Rust, it's important to understand something: Rust is low-level enough that all of this is provided by the standard library, not by the language. This means that if you don't like some aspect of the way Rust handles concurrency, you can implement an alternative way of doing things. [mio](https://github.com/carllerche/mio) is a real-world example of this principle in action. to understand something: Rust is low-level enough that the vast majority of this is provided by the standard library, not by the language. This means that if you don't like some aspect of the way Rust handles concurrency, you can implement an alternative way of doing things. [mio](https://github.com/carllerche/mio) is a real-world example of this principle in action. 
## Background: `Send` and `Sync`", "commid": "rust_pr_26855"}], "negative_passages": []} {"query_id": "q-en-rust-b70a7a3964e710d383a3e7587404873ff9f1e050e1da77e41ba8ff794c23dbe8", "query": "yields practically no information, and to even figure out what type of error occurred a user has to go run another rust script to use the Display implementation on an error with that os error code. It's not a good UX.\nThe Debug impl is used quite often, especially through unwrap.\nYou can look up the error code via (e.g. ). In fact on Windows the standard way to report an error is via its status.\nI appreciate the tip, but still consider this is a serious UX problem since I really shouldn't need python to conveniently find out what kind of error I'm dealing with.", "positive_passages": [{"docid": "doc-en-rust-1a6fd57adf6ee67e88d9279b4503920b92c8e528bd0fdbb654f3f513d048f8ca", "text": "repr: Repr, } #[derive(Debug)] enum Repr { Os(i32), Custom(Box),", "commid": "rust_pr_26416"}], "negative_passages": []} {"query_id": "q-en-rust-b70a7a3964e710d383a3e7587404873ff9f1e050e1da77e41ba8ff794c23dbe8", "query": "yields practically no information, and to even figure out what type of error occurred a user has to go run another rust script to use the Display implementation on an error with that os error code. It's not a good UX.\nThe Debug impl is used quite often, especially through unwrap.\nYou can look up the error code via (e.g. ). In fact on Windows the standard way to report an error is via its status.\nI appreciate the tip, but still consider this is a serious UX problem since I really shouldn't need python to conveniently find out what kind of error I'm dealing with.", "positive_passages": [{"docid": "doc-en-rust-f40e41f1312a36e1c9776d87d5566d29cc494c28f317dc49c5886ba082e6f2e1", "text": "} } impl fmt::Debug for Repr { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result { match self { &Repr::Os(ref code) => fmt.debug_struct(\"Os\").field(\"code\", code) .field(\"message\", &sys::os::error_string(*code)).finish(), &Repr::Custom(ref c) => fmt.debug_tuple(\"Custom\").field(c).finish(), } } } #[stable(feature = \"rust1\", since = \"1.0.0\")] impl fmt::Display for Error { fn fmt(&self, fmt: &mut fmt::Formatter) -> fmt::Result {", "commid": "rust_pr_26416"}], "negative_passages": []} {"query_id": "q-en-rust-b70a7a3964e710d383a3e7587404873ff9f1e050e1da77e41ba8ff794c23dbe8", "query": "yields practically no information, and to even figure out what type of error occurred a user has to go run another rust script to use the Display implementation on an error with that os error code. It's not a good UX.\nThe Debug impl is used quite often, especially through unwrap.\nYou can look up the error code via (e.g. ). 
In fact on Windows the standard way to report an error is via its status.\nI appreciate the tip, but still consider this is a serious UX problem since I really shouldn't need python to conveniently find out what kind of error I'm dealing with.", "positive_passages": [{"docid": "doc-en-rust-36f2d9b434b282aba7ade3a9a02e991ad8f2c0d61315857129d2ca45f3f9d460", "text": "use error; use error::Error as error_Error; use fmt; use sys::os::error_string; #[test] fn test_debug_error() { let code = 6; let msg = error_string(code); let err = Error { repr: super::Repr::Os(code) }; let expected = format!(\"Error {{ repr: Os {{ code: {:?}, message: {:?} }} }}\", code, msg); assert_eq!(format!(\"{:?}\", err), expected); } #[test] fn test_downcasting() {", "commid": "rust_pr_26416"}], "negative_passages": []} {"query_id": "q-en-rust-7f3cc3b7159225b5622c5373a772313151ff0f3e0dcd12a0c7c441c7eacbeb2d", "query": "!.html states that the macro will return a , where in reality it returns . Example: This same problem exists for as well: Documentation at: !.html.", "positive_passages": [{"docid": "doc-en-rust-4d2c2d14b958542c68f6d6c140fd6847be997f789d91dab67aed3e25b6df1887", "text": "/// A macro which expands to the line number on which it was invoked. /// /// The expanded expression has type `usize`, and the returned line is not /// The expanded expression has type `u32`, and the returned line is not /// the invocation of the `line!()` macro itself, but rather the first macro /// invocation leading up to the invocation of the `line!()` macro. ///", "commid": "rust_pr_26432"}], "negative_passages": []} {"query_id": "q-en-rust-7f3cc3b7159225b5622c5373a772313151ff0f3e0dcd12a0c7c441c7eacbeb2d", "query": "!.html states that the macro will return a , where in reality it returns . Example: This same problem exists for as well: Documentation at: !.html.", "positive_passages": [{"docid": "doc-en-rust-e7aadf6091a005aac5e8af0700197c45fa31d7f27a33a77c9e55d8e104ed73fe", "text": "/// A macro which expands to the column number on which it was invoked. /// /// The expanded expression has type `usize`, and the returned column is not /// The expanded expression has type `u32`, and the returned column is not /// the invocation of the `column!()` macro itself, but rather the first macro /// invocation leading up to the invocation of the `column!()` macro. ///", "commid": "rust_pr_26432"}], "negative_passages": []} {"query_id": "q-en-rust-2e32a49a8e67d16accf4cc01822157ebab2c722ef0472ca73c96fcbbe3b76901", "query": "but The order of the fields is important, and this is something that should be clearly documented, as changing the order of the fields of a struct can change program logic.\ngenerates a lexicographic ordering.", "positive_passages": [{"docid": "doc-en-rust-3bc899acadd1bc9df9063561bc84e6a7b306bb8e8522fffb8d2040117749a386", "text": "/// /// - total and antisymmetric: exactly one of `a < b`, `a == b` or `a > b` is true; and /// - transitive, `a < b` and `b < c` implies `a < c`. The same must hold for both `==` and `>`. /// /// When this trait is `derive`d, it produces a lexicographic ordering. #[stable(feature = \"rust1\", since = \"1.0.0\")] pub trait Ord: Eq + PartialOrd { /// This method returns an `Ordering` between `self` and `other`.", "commid": "rust_pr_26692"}], "negative_passages": []} {"query_id": "q-en-rust-b7a344f56957d5ac0ebc682ba7f6fc47539fbf22112adbb589b5e36185c4cb16", "query": "This causes a ICE: If the trait is defined in the same crate it compiles without error. 
Backtrace:", "positive_passages": [{"docid": "doc-en-rust-139522a372408f2a87984a07744db8bf425107d50d72b7fb488e6ba0a28817c8", "text": "encode_item_sort(rbml_w, 't'); encode_family(rbml_w, 'y'); if let Some(ty) = associated_type.ty { encode_type(ecx, rbml_w, ty); } is_nonstatic_method = false; } }", "commid": "rust_pr_26686"}], "negative_passages": []} {"query_id": "q-en-rust-b7a344f56957d5ac0ebc682ba7f6fc47539fbf22112adbb589b5e36185c4cb16", "query": "This causes a ICE: If the trait is defined in the same crate it compiles without error. Backtrace:", "positive_passages": [{"docid": "doc-en-rust-9dff2bafd64a7bc73325390dc4ebd6862ffd38a3a3aff123b1827f5f8dbbf789", "text": "// ought to be reported by the type checker method // `check_impl_items_against_trait`, so here we // just return TyError. debug!(\"confirm_impl_candidate: no associated type {:?} for {:?}\", assoc_ty.name, trait_ref); return (selcx.tcx().types.err, vec!()); } }", "commid": "rust_pr_26686"}], "negative_passages": []} {"query_id": "q-en-rust-b7a344f56957d5ac0ebc682ba7f6fc47539fbf22112adbb589b5e36185c4cb16", "query": "This causes a ICE: If the trait is defined in the same crate it compiles without error. Backtrace:", "positive_passages": [{"docid": "doc-en-rust-66127c93af5269569b49c9b8eb13300d453d8ce65cd8159292ac581ae312a073", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub trait Foo { type Input = usize; fn bar(&self, _: Self::Input) {} } impl Foo for () {} ", "commid": "rust_pr_26686"}], "negative_passages": []} {"query_id": "q-en-rust-b7a344f56957d5ac0ebc682ba7f6fc47539fbf22112adbb589b5e36185c4cb16", "query": "This causes a ICE: If the trait is defined in the same crate it compiles without error. Backtrace:", "positive_passages": [{"docid": "doc-en-rust-106f62c6fcd8ee6bea1570c00af49aef4bbb2e27bb386a95332ae5e9d1b7f248", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // aux-build:xcrate_associated_type_defaults.rs extern crate xcrate_associated_type_defaults; use xcrate_associated_type_defaults::Foo; fn main() { ().bar(5); } ", "commid": "rust_pr_26686"}], "negative_passages": []} {"query_id": "q-en-rust-51772817af40e995fe9e68ed4eb0c2329399db13fe8706bf6946610c42329519", "query": "Actually what happens here is the following: The tuple you create in is moved, because it is not . When you perform the you move out of , into , leaving uninitialized. Your assignment to is basically to uninitialized memory, and doesn't have an effect. When you perform the assertion you see the old value of , because your assignment to had no effect. The whole thing is confusing because Rust does not warn when you perform useless assignments to moved variables, and it probably should. 
The whole thing becomes obviously when you add a statement that uses in the match arm: Edit: the labels you have assigned to this issue are incorrect.\nDon't we see the new value , even though we're accessing , when performing the assertion, which causes it to fail? Doing this without a match statement passes the same assertion sucessfully:\nOops, you are correct, I am mistaken :( , mitaa wrote:\nheh, I made the same mistake. Typed out most of a response and then realised that I'd mis-read the code.", "positive_passages": [{"docid": "doc-en-rust-72ba153c93a54b7223298ca099b624aa307dbcf1080bc610685455b66359dcb8", "text": "match base_cmt.cat { mc::cat_upvar(mc::Upvar { id: ty::UpvarId { var_id: vid, .. }, .. }) | mc::cat_local(vid) => { self.reassigned |= self.node == vid && Some(field) == self.field self.reassigned |= self.node == vid && (self.field.is_none() || Some(field) == self.field) }, _ => {} }", "commid": "rust_pr_27011"}], "negative_passages": []} {"query_id": "q-en-rust-51772817af40e995fe9e68ed4eb0c2329399db13fe8706bf6946610c42329519", "query": "Actually what happens here is the following: The tuple you create in is moved, because it is not . When you perform the you move out of , into , leaving uninitialized. Your assignment to is basically to uninitialized memory, and doesn't have an effect. When you perform the assertion you see the old value of , because your assignment to had no effect. The whole thing is confusing because Rust does not warn when you perform useless assignments to moved variables, and it probably should. The whole thing becomes obviously when you add a statement that uses in the match arm: Edit: the labels you have assigned to this issue are incorrect.\nDon't we see the new value , even though we're accessing , when performing the assertion, which causes it to fail? Doing this without a match statement passes the same assertion sucessfully:\nOops, you are correct, I am mistaken :( , mitaa wrote:\nheh, I made the same mistake. Typed out most of a response and then realised that I'd mis-read the code.", "positive_passages": [{"docid": "doc-en-rust-3033ba76e479c58b80f460be40e580316762a63765a63214ad159013475cc53f", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
fn main() { let mut c = (1, \"\".to_owned()); match c { c2 => { c.0 = 2; assert_eq!(c2.0, 1); } } } ", "commid": "rust_pr_27011"}], "negative_passages": []} {"query_id": "q-en-rust-4ddb0bfe1eb14bb5cc8101d7c2630469e3fae74128eff0855d3528895d5da948", "query": "This code produces \"error: expected one of , , , or , found \":\nThat code sample probably didn't require linking to a playpen, I've made the description and title more useful.", "positive_passages": [{"docid": "doc-en-rust-58daeb2e9bb2ab602d4ffd86a2cf6462c791e0e165ec9fa9cac497809bd8d2b6", "text": "// FOREIGN STATIC ITEM return Ok(Some(self.parse_item_foreign_static(visibility, lo, attrs)?)); } if self.check_keyword(keywords::Fn) || self.check_keyword(keywords::Unsafe) { if self.check_keyword(keywords::Fn) { // FOREIGN FUNCTION ITEM return Ok(Some(self.parse_item_foreign_fn(visibility, lo, attrs)?)); }", "commid": "rust_pr_33336"}], "negative_passages": []} {"query_id": "q-en-rust-4ddb0bfe1eb14bb5cc8101d7c2630469e3fae74128eff0855d3528895d5da948", "query": "This code produces \"error: expected one of , , , or , found \":\nThat code sample probably didn't require linking to a playpen, I've made the description and title more useful.", "positive_passages": [{"docid": "doc-en-rust-c6335d52b927602e9ee91d78a6b8f209cc39e5c2f2fbf857a4ac6a906cc122ca", "text": "// compile-flags: -Z parse-only extern { f(); //~ ERROR expected one of `fn`, `pub`, `static`, `unsafe`, or `}`, found `f` f(); //~ ERROR expected one of `fn`, `pub`, `static`, or `}`, found `f` } fn main() {", "commid": "rust_pr_33336"}], "negative_passages": []} {"query_id": "q-en-rust-4ddb0bfe1eb14bb5cc8101d7c2630469e3fae74128eff0855d3528895d5da948", "query": "This code produces \"error: expected one of , , , or , found \":\nThat code sample probably didn't require linking to a playpen, I've made the description and title more useful.", "positive_passages": [{"docid": "doc-en-rust-ed4334435b4d27429bd0815f695e47b7672cce5f5e3cd0c8156e7cb9fdfd937f", "text": "extern { const i: isize; //~^ ERROR expected one of `fn`, `pub`, `static`, `unsafe`, or `}`, found `const` //~^ ERROR expected one of `fn`, `pub`, `static`, or `}`, found `const` }", "commid": "rust_pr_33336"}], "negative_passages": []} {"query_id": "q-en-rust-1b9f12d782c46985978b7d88e2787dd3ae79bd3363e9ff8bba29fa629263e5fa", "query": "This is a tracking issue for the unstable methods in the standard library. cc perhaps you can fill this in some more? The specific methods in question are: -\nThis is basically a lame version of cursors, which is what we should use instead imo. It's not 1:1 though -- this api lets you have all the elements yielded at once. Classic iterator tradeoff.\nTriage: these are still unstable.\nThere are some ideas for a more complete API in\nseems misnamed. Doesn't it do exactly what we call elsewhere?\nI'm going to echo thoughts: An iterator conceptually represents a (double-ended) range of elements. The only operation that it supports is popping an element off the end of that range. A cursor on the other hand represents a position in a collection, and can be moved backwards or forwards in the collection. The cursor can be used to insert or remove an element at any given position in the collection. I strongly oppose adding insertion/deleting methods to since they do not fit at all with existing iterator usage patterns ( loops and functional-style methods). 
As I mentioned in the linked thread, a design based on the types in could be used.\nSince we now have cursors, I think these methods should be removed.\nReopening since we only deprecated the methods instead of removing them. I'd say give it about 2 weeks then remove them.", "positive_passages": [{"docid": "doc-en-rust-41e51ec993b342847b81928262ef8f5dbc0eaed80f9390f278a455d4f262c9a6", "text": "/// Inserts the given element just after the element most recently returned by `.next()`. /// The inserted element does not appear in the iteration. /// /// # Examples /// /// ``` /// #![feature(linked_list_extras)] /// /// use std::collections::LinkedList; /// /// let mut list: LinkedList<_> = vec![1, 3, 4].into_iter().collect(); /// /// { /// let mut it = list.iter_mut(); /// assert_eq!(it.next().unwrap(), &1); /// // insert `2` after `1` /// it.insert_next(2); /// } /// { /// let vec: Vec<_> = list.into_iter().collect(); /// assert_eq!(vec, [1, 2, 3, 4]); /// } /// ``` /// This method will be removed soon. #[inline] #[unstable( feature = \"linked_list_extras\", reason = \"this is probably better handled by a cursor type -- we'll see\", issue = \"27794\" )] #[rustc_deprecated( reason = \"Deprecated in favor of CursorMut methods. This method will be removed soon.\", since = \"1.47.0\" )] pub fn insert_next(&mut self, element: T) { match self.head { // `push_back` is okay with aliasing `element` references", "commid": "rust_pr_74644"}], "negative_passages": []} {"query_id": "q-en-rust-1b9f12d782c46985978b7d88e2787dd3ae79bd3363e9ff8bba29fa629263e5fa", "query": "This is a tracking issue for the unstable methods in the standard library. cc perhaps you can fill this in some more? The specific methods in question are: -\nThis is basically a lame version of cursors, which is what we should use instead imo. It's not 1:1 though -- this api lets you have all the elements yielded at once. Classic iterator tradeoff.\nTriage: these are still unstable.\nThere are some ideas for a more complete API in\nseems misnamed. Doesn't it do exactly what we call elsewhere?\nI'm going to echo thoughts: An iterator conceptually represents a (double-ended) range of elements. The only operation that it supports is popping an element off the end of that range. A cursor on the other hand represents a position in a collection, and can be moved backwards or forwards in the collection. The cursor can be used to insert or remove an element at any given position in the collection. I strongly oppose adding insertion/deleting methods to since they do not fit at all with existing iterator usage patterns ( loops and functional-style methods). As I mentioned in the linked thread, a design based on the types in could be used.\nSince we now have cursors, I think these methods should be removed.\nReopening since we only deprecated the methods instead of removing them. I'd say give it about 2 weeks then remove them.", "positive_passages": [{"docid": "doc-en-rust-0acfa697b7470d22994f7850941e450f0bc7c373dd4e1de2acbd81351ccdb754", "text": "/// Provides a reference to the next element, without changing the iterator. /// /// # Examples /// /// ``` /// #![feature(linked_list_extras)] /// /// use std::collections::LinkedList; /// /// let mut list: LinkedList<_> = vec![1, 2, 3].into_iter().collect(); /// /// let mut it = list.iter_mut(); /// assert_eq!(it.next().unwrap(), &1); /// assert_eq!(it.peek_next().unwrap(), &2); /// // We just peeked at 2, so it was not consumed from the iterator. 
/// assert_eq!(it.next().unwrap(), &2); /// ``` /// This method will be removed soon. #[inline] #[unstable( feature = \"linked_list_extras\", reason = \"this is probably better handled by a cursor type -- we'll see\", issue = \"27794\" )] #[rustc_deprecated( reason = \"Deprecated in favor of CursorMut methods. This method will be removed soon.\", since = \"1.47.0\" )] pub fn peek_next(&mut self) -> Option<&mut T> { if self.len == 0 { None", "commid": "rust_pr_74644"}], "negative_passages": []} {"query_id": "q-en-rust-1b9f12d782c46985978b7d88e2787dd3ae79bd3363e9ff8bba29fa629263e5fa", "query": "This is a tracking issue for the unstable methods in the standard library. cc perhaps you can fill this in some more? The specific methods in question are: -\nThis is basically a lame version of cursors, which is what we should use instead imo. It's not 1:1 though -- this api lets you have all the elements yielded at once. Classic iterator tradeoff.\nTriage: these are still unstable.\nThere are some ideas for a more complete API in\nseems misnamed. Doesn't it do exactly what we call elsewhere?\nI'm going to echo thoughts: An iterator conceptually represents a (double-ended) range of elements. The only operation that it supports is popping an element off the end of that range. A cursor on the other hand represents a position in a collection, and can be moved backwards or forwards in the collection. The cursor can be used to insert or remove an element at any given position in the collection. I strongly oppose adding insertion/deleting methods to since they do not fit at all with existing iterator usage patterns ( loops and functional-style methods). As I mentioned in the linked thread, a design based on the types in could be used.\nSince we now have cursors, I think these methods should be removed.\nReopening since we only deprecated the methods instead of removing them. I'd say give it about 2 weeks then remove them.", "positive_passages": [{"docid": "doc-en-rust-db47a3a81fb25712004f07412af2e49e8aa8dd061a89f8baa550f791542ae69e", "text": "} #[test] fn test_insert_prev() { let mut m = list_from(&[0, 2, 4, 6, 8]); let len = m.len(); { let mut it = m.iter_mut(); it.insert_next(-2); loop { match it.next() { None => break, Some(elt) => { it.insert_next(*elt + 1); match it.peek_next() { Some(x) => assert_eq!(*x, *elt + 2), None => assert_eq!(8, *elt), } } } } it.insert_next(0); it.insert_next(1); } check_links(&m); assert_eq!(m.len(), 3 + len * 2); assert_eq!(m.into_iter().collect::>(), [-2, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1]); } #[test] #[cfg_attr(target_os = \"emscripten\", ignore)] fn test_send() { let n = list_from(&[1, 2, 3]);", "commid": "rust_pr_74644"}], "negative_passages": []} {"query_id": "q-en-rust-1b9f12d782c46985978b7d88e2787dd3ae79bd3363e9ff8bba29fa629263e5fa", "query": "This is a tracking issue for the unstable methods in the standard library. cc perhaps you can fill this in some more? The specific methods in question are: -\nThis is basically a lame version of cursors, which is what we should use instead imo. It's not 1:1 though -- this api lets you have all the elements yielded at once. Classic iterator tradeoff.\nTriage: these are still unstable.\nThere are some ideas for a more complete API in\nseems misnamed. Doesn't it do exactly what we call elsewhere?\nI'm going to echo thoughts: An iterator conceptually represents a (double-ended) range of elements. The only operation that it supports is popping an element off the end of that range. 
A cursor on the other hand represents a position in a collection, and can be moved backwards or forwards in the collection. The cursor can be used to insert or remove an element at any given position in the collection. I strongly oppose adding insertion/deleting methods to since they do not fit at all with existing iterator usage patterns ( loops and functional-style methods). As I mentioned in the linked thread, a design based on the types in could be used.\nSince we now have cursors, I think these methods should be removed.\nReopening since we only deprecated the methods instead of removing them. I'd say give it about 2 weeks then remove them.", "positive_passages": [{"docid": "doc-en-rust-2a7e188186ea1fcca321b98b9937bb2944d22065d007293f0c3fe68eec65bd59", "text": "#[stable(feature = \"fused\", since = \"1.26.0\")] impl FusedIterator for IterMut<'_, T> {} impl IterMut<'_, T> { /// Inserts the given element just after the element most recently returned by `.next()`. /// The inserted element does not appear in the iteration. /// /// This method will be removed soon. #[inline] #[unstable( feature = \"linked_list_extras\", reason = \"this is probably better handled by a cursor type -- we'll see\", issue = \"27794\" )] #[rustc_deprecated( reason = \"Deprecated in favor of CursorMut methods. This method will be removed soon.\", since = \"1.47.0\" )] pub fn insert_next(&mut self, element: T) { match self.head { // `push_back` is okay with aliasing `element` references None => self.list.push_back(element), Some(head) => unsafe { let prev = match head.as_ref().prev { // `push_front` is okay with aliasing nodes None => return self.list.push_front(element), Some(prev) => prev, }; let node = Some( Box::leak(box Node { next: Some(head), prev: Some(prev), element }).into(), ); // Not creating references to entire nodes to not invalidate the // reference to `element` we handed to the user. (*prev.as_ptr()).next = node; (*head.as_ptr()).prev = node; self.list.len += 1; }, } } /// Provides a reference to the next element, without changing the iterator. /// /// This method will be removed soon. #[inline] #[unstable( feature = \"linked_list_extras\", reason = \"this is probably better handled by a cursor type -- we'll see\", issue = \"27794\" )] #[rustc_deprecated( reason = \"Deprecated in favor of CursorMut methods. This method will be removed soon.\", since = \"1.47.0\" )] pub fn peek_next(&mut self) -> Option<&mut T> { if self.len == 0 { None } else { unsafe { self.head.as_mut().map(|node| &mut node.as_mut().element) } } } } /// A cursor over a `LinkedList`. /// /// A `Cursor` is like an iterator, except that it can freely seek back-and-forth.", "commid": "rust_pr_79834"}], "negative_passages": []} {"query_id": "q-en-rust-2abc21d5f6e614a5ec0d32a43873daedd87599b96f13d7a2e9932dfe2c2414c9", "query": "Willing to mentor this. Fix is simple, just need to check for . We should also add a run-pass test to ensure that the above code continues to compile. 
Feel free to ask me if you have further questions!\nI've been willing for a long time to start getting involved in some contributions to Rust, so if it's ok for you I would like to give this one a go and make it my first contribution.\nGo ahead!", "positive_passages": [{"docid": "doc-en-rust-94bd36f9b37df26ce54f18f2392fd72d126065c7a835b004c76323f0b31bab49", "text": "for arg in &fd.inputs { match arg.pat.node { hir::PatIdent(hir::BindByValue(hir::MutImmutable), _, None) => {} hir::PatWild(_) => {} _ => { span_err!(self.tcx.sess, arg.pat.span, E0022, \"arguments of constant functions can only ", "commid": "rust_pr_28938"}], "negative_passages": []} {"query_id": "q-en-rust-2abc21d5f6e614a5ec0d32a43873daedd87599b96f13d7a2e9932dfe2c2414c9", "query": "Willing to mentor this. Fix is simple, just need to check for . We should also add a run-pass test to ensure that the above code continues to compile. Feel free to ask me if you have further questions!\nI've been willing for a long time to start getting involved in some contributions to Rust, so if it's ok for you I would like to give this one a go and make it my first contribution.\nGo ahead!", "positive_passages": [{"docid": "doc-en-rust-a0cd8b74124e43ad5b25d6de3df099987bc40146131e973762ff46852ba1cce2", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(const_fn)] fn main() {} const fn size_ofs(_: usize) {} const fn size_ofs2(_foo: usize) {} ", "commid": "rust_pr_28938"}], "negative_passages": []} {"query_id": "q-en-rust-7ecd7754fe700f47e9583e5206bfc9b5a53afaf7d16ca47d15f95e83cc281b29", "query": "The linkable icon is difficult to . I'm playing with this currently. I'm toying with trying to change the icon from unicode into a svg because unicode has a weird box border shape that doesn't allow the icon to scale to the full box size. I'll put up a PR if I get it working. !", "positive_passages": [{"docid": "doc-en-rust-aa48d710fd4614d47d0c6322ed8e7689d01ccc63afdb8696e7d46bbc1c8e5494", "text": ".rusttest { display: none; } pre.rust { position: relative; } .test-arrow { a.test-arrow { display: inline-block; position: absolute; top: 0; right: 10px; font-size: 150%; -webkit-transform: scaleX(-1); transform: scaleX(-1); background-color: #4e8bca; color: #f5f5f5; padding: 5px 10px 5px 10px; border-radius: 5px; font-size: 130%; top: 5px; right: 5px; } .methods .section-header {", "commid": "rust_pr_28963"}], "negative_passages": []} {"query_id": "q-en-rust-7ecd7754fe700f47e9583e5206bfc9b5a53afaf7d16ca47d15f95e83cc281b29", "query": "The linkable icon is difficult to . I'm playing with this currently. I'm toying with trying to change the icon from unicode into a svg because unicode has a weird box border shape that doesn't allow the icon to scale to the full box size. I'll put up a PR if I get it working. 
!", "positive_passages": [{"docid": "doc-en-rust-eaf235ad7691162fab4eb923fc26a92884f960b0d44ec131f2fa616affa0727b", "text": "} var a = document.createElement('a'); a.textContent = '\u21f1'; a.setAttribute('class', 'test-arrow'); a.textContent = 'Run'; var code = el.previousElementSibling.textContent;", "commid": "rust_pr_28963"}], "negative_passages": []} {"query_id": "q-en-rust-c57c360fdd2d674ab6159f2a225423dd523d169bf25f0483595bad0aec8b98b0", "query": "I tried this code: I expected to see this happen: \"Error: invalid float literal\" Instead, this happened: OK: 0 It seems the \"leading +\" and \"leading -\" is broken. Further, with i32 and i64 is emits an error but IMO an incorrect one: \"cannot parse integer from empty string\" I would expect: \"Invalid integer literal\" : rustc 1.5.0-nightly ( 2015-10-11) binary: rustc commit-hash: commit-date: 2015-10-11 host: x86_64-unknown-linux-gnu release: 1.5.0-nightly\nOh my, the float part is embarrassing. Preparing a fix right now.\nThanks for taking a look Looks like this is working as expected on stable and has leaked into beta/nightly, so tagging as such.\nYes, this is a bug (not checking for empty strings in the right places) of the dec2flt code. The bug is also in beta, so the PR (which I'll file very soon) should probably be backported to beta.", "positive_passages": [{"docid": "doc-en-rust-afa300880b832834f232dee5a90839a31544b2b08909117fba6f63288bf73a45", "text": "let s = s.as_bytes(); let (integral, s) = eat_digits(s); match s.first() { None => Valid(Decimal::new(integral, b\"\", 0)), None => { if integral.is_empty() { return Invalid; // No digits at all } Valid(Decimal::new(integral, b\"\", 0)) } Some(&b'e') | Some(&b'E') => { if integral.is_empty() { return Invalid; // No digits before 'e'", "commid": "rust_pr_29050"}], "negative_passages": []} {"query_id": "q-en-rust-c57c360fdd2d674ab6159f2a225423dd523d169bf25f0483595bad0aec8b98b0", "query": "I tried this code: I expected to see this happen: \"Error: invalid float literal\" Instead, this happened: OK: 0 It seems the \"leading +\" and \"leading -\" is broken. Further, with i32 and i64 is emits an error but IMO an incorrect one: \"cannot parse integer from empty string\" I would expect: \"Invalid integer literal\" : rustc 1.5.0-nightly ( 2015-10-11) binary: rustc commit-hash: commit-date: 2015-10-11 host: x86_64-unknown-linux-gnu release: 1.5.0-nightly\nOh my, the float part is embarrassing. Preparing a fix right now.\nThanks for taking a look Looks like this is working as expected on stable and has leaked into beta/nightly, so tagging as such.\nYes, this is a bug (not checking for empty strings in the right places) of the dec2flt code. 
The bug is also in beta, so the PR (which I'll file very soon) should probably be backported to beta.", "positive_passages": [{"docid": "doc-en-rust-762a9cdc49420cf56d5d572226f88cd661905499f2bae5d71d039673b67b701e", "text": "} #[test] fn lonely_sign() { assert!(\"+\".parse::().is_err()); assert!(\"-\".parse::().is_err()); } #[test] fn whitespace() { assert!(\" 1.0\".parse::().is_err()); assert!(\"1.0 \".parse::().is_err()); } #[test] fn nan() { assert!(\"NaN\".parse::().unwrap().is_nan()); assert!(\"NaN\".parse::().unwrap().is_nan());", "commid": "rust_pr_29050"}], "negative_passages": []} {"query_id": "q-en-rust-5d98bb8b4f86aeda14eeb11e9dc856e53b439140952cb58589797bc0ae012401", "query": "Playpen: Code like that puts a unicode character in a byte literal gets a confusing error message: A lot of people read this as if the part after the colon is in reference to \"use a xHH escape\", so it should show an example of that, but it really is just displaying the char that is in error, though somewhat obfuscated by using the codepoint instead of the actual char.\nI agree. As someone unfamiliar with the language the error message lead me to fix the playpen code like this: fn main() { b'u{2192}' } Which leads to: error: unicode escape sequences cannot be used as a byte or in a byte string", "positive_passages": [{"docid": "doc-en-rust-f592d0721851853b7abd20faac7ab48d926f9de9f10d056e3b7a6883e4ea3cbb", "text": "_ => { if ascii_only && first_source_char > 'x7F' { let last_pos = self.last_pos; self.err_span_char(start, last_pos, \"byte constant must be ASCII. Use a xHH escape for a non-ASCII byte\", first_source_char); self.err_span_(start, last_pos, \"byte constant must be ASCII. Use a xHH escape for a non-ASCII byte\"); return false; } }", "commid": "rust_pr_33334"}], "negative_passages": []} {"query_id": "q-en-rust-c58f7b7660477a36d4f4e0b97257ff6dd013746ba8d66ba6cc4a6a220b7b64ed", "query": "The message for E0139 suggests replacing with . Actually performing such a replacement will result in double drops, since the version consumes but the version does not. (It presumably needs a or something.)", "positive_passages": [{"docid": "doc-en-rust-f4d6ec07215a1a32663b89912b6987b6ce425def3d306872ad55a6423dedac7f", "text": "``` ptr::read(&v as *const _ as *const SomeType) // `v` transmuted to `SomeType` ``` Note that this does not move `v` (unlike `transmute`), and may need a call to `mem::forget(v)` in case you want to avoid destructors being called. 
\"##, E0152: r##\"", "commid": "rust_pr_29980"}], "negative_passages": []} {"query_id": "q-en-rust-eb115c156a9d4cf72e03272bd5b906c7b257ad691c3a65269f358ed95fcbec60", "query": "The first example\nThanks, these are hard to keep in sync, as they don't get automated testing :/", "positive_passages": [{"docid": "doc-en-rust-4bca4712685d8ebdec7462a6e737271ee6f18e1db07c23814454eb7db6625961", "text": "fn expand_rn(cx: &mut ExtCtxt, sp: Span, args: &[TokenTree]) -> Box { static NUMERALS: &'static [(&'static str, u32)] = &[ static NUMERALS: &'static [(&'static str, usize)] = &[ (\"M\", 1000), (\"CM\", 900), (\"D\", 500), (\"CD\", 400), (\"C\", 100), (\"XC\", 90), (\"L\", 50), (\"XL\", 40), (\"X\", 10), (\"IX\", 9), (\"V\", 5), (\"IV\", 4), (\"I\", 1)]; let text = match args { [TokenTree::Token(_, token::Ident(s, _))] => s.to_string(), if args.len() != 1 { cx.span_err( sp, &format!(\"argument should be a single identifier, but got {} arguments\", args.len())); return DummyResult::any(sp); } let text = match args[0] { TokenTree::Token(_, token::Ident(s, _)) => s.to_string(), _ => { cx.span_err(sp, \"argument should be a single identifier\"); return DummyResult::any(sp);", "commid": "rust_pr_29951"}], "negative_passages": []} {"query_id": "q-en-rust-eb115c156a9d4cf72e03272bd5b906c7b257ad691c3a65269f358ed95fcbec60", "query": "The first example\nThanks, these are hard to keep in sync, as they don't get automated testing :/", "positive_passages": [{"docid": "doc-en-rust-bcd99e38f67b3c4e38de0169ac4e0915fd1406f43a8f353309a6adce1bc1c6aa", "text": "} } MacEager::expr(cx.expr_u32(sp, total)) MacEager::expr(cx.expr_usize(sp, total)) } #[plugin_registrar]", "commid": "rust_pr_29951"}], "negative_passages": []} {"query_id": "q-en-rust-eb115c156a9d4cf72e03272bd5b906c7b257ad691c3a65269f358ed95fcbec60", "query": "The first example\nThanks, these are hard to keep in sync, as they don't get automated testing :/", "positive_passages": [{"docid": "doc-en-rust-4fb46e073835c3240f4df71cea502d3d504a77b575dd5481c67e7157896f55be", "text": "(\"X\", 10), (\"IX\", 9), (\"V\", 5), (\"IV\", 4), (\"I\", 1)]; let text = match args { [TokenTree::Token(_, token::Ident(s, _))] => s.to_string(), if args.len() != 1 { cx.span_err( sp, &format!(\"argument should be a single identifier, but got {} arguments\", args.len())); return DummyResult::any(sp); } let text = match args[0] { TokenTree::Token(_, token::Ident(s, _)) => s.to_string(), _ => { cx.span_err(sp, \"argument should be a single identifier\"); return DummyResult::any(sp);", "commid": "rust_pr_29951"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. 
The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-8c5649561364358f77e6b19e81f4181b3c3b9ecc862fe52b27005669f901e437", "text": "let path = match def.full_def() { def::DefStruct(def_id) => def_to_path(tcx, def_id), def::DefVariant(_, variant_did, _) => def_to_path(tcx, variant_did), def::DefFn(..) => return P(hir::Pat { id: expr.id, node: hir::PatLit(P(expr.clone())), span: span, }), _ => unreachable!() }; let pats = args.iter().map(|expr| const_expr_to_pat(tcx, &**expr, span)).collect();", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-90bf803f3b133e166a9ee9f8eb33c631a61cb28e38019e40e1420a678a113660", "text": "_ => signal!(e, NonConstPath), }, Some(ast_map::NodeTraitItem(..)) => signal!(e, NonConstPath), Some(_) => unimplemented!(), Some(_) => signal!(e, UnimplementedConstVal(\"calling struct, tuple or variant\")), } }", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-5f06788a0239870d55552ed1e08fd4965929de505299161ea72dc5eddb359c0e", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![feature(const_fn)] enum Cake { BlackForest, Marmor, } use Cake::*; const BOO: (Cake, Cake) = (Marmor, BlackForest); //~^ ERROR: constant evaluation error: non-constant path in constant expression [E0471] const FOO: Cake = BOO.1; const fn foo() -> Cake { Marmor //~ ERROR: constant evaluation error: non-constant path in constant expression [E0471] //~^ ERROR: non-constant path in constant expression } const WORKS: Cake = Marmor; const GOO: Cake = foo(); fn main() { match BlackForest { FOO => println!(\"hi\"), //~ NOTE: in pattern here GOO => println!(\"meh\"), //~ NOTE: in pattern here WORKS => println!(\"m\u00f6p\"), _ => println!(\"bye\"), } } ", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-c82cbc185771c2f85ad75e4852f6bd36cae724046a3d511f627bea71be05dfb3", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. //", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-f3ef24a317545055a270f3d86422fbcaf8b2bf605966d2ebc2961b6d77f70d17", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(const_fn)] const FOO: isize = 10; const BAR: isize = 3; const fn foo() -> isize { 4 } const BOO: isize = foo(); pub fn main() { let x: isize = 3; let y = match x { FOO => 1, BAR => 2, BOO => 4, _ => 3 }; assert_eq!(y, 2);", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. 
The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-780a0125de280190c8bf0e55b5b2dd13ea94ca29a3a82c518e1ee1baee94e0ac", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(const_fn)] const fn f() -> usize { 5 } struct A { field: usize, } fn main() { let _ = [0; f()]; } ", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-4f8fa19da47584dcdd6742ebe385ccb30abee11abd2a1b1ca97a18b7c72588b7", "query": "at the current version of error: internal compiler error: unexpected panic note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'internal error: entered unreachable code', playpen: application terminated with error code 101\nThe reason this is happening is because the code thinks is an enum variant or a tuple struct. The code around this is somewhat redundant and would probably better be solved by making enum variants const evaluable and adding a function that translates s to", "positive_passages": [{"docid": "doc-en-rust-7ee2425eef9a97e18ab55a314d7b8dd85f5428397fc9d3bf86c078b20e6cb8ee", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(const_fn)] struct A { field: usize, } const fn f() -> usize { 5 } fn main() { let _ = [0; f()]; } ", "commid": "rust_pr_30141"}], "negative_passages": []} {"query_id": "q-en-rust-67284bbe8a6bfaee9d36697823c512a17ff7ae74fdb3c03ccf17663a4c041a35", "query": "The macros chapter says: However the second sentence isn't true. is either nothing or a path with one and one identifier. That is, this pair of macros doesn't work, either when called from within the crate or from another one:", "positive_passages": [{"docid": "doc-en-rust-54eb03b5f22362ecfcfa2807f0be11c7b2112d5ada200ee605ae48bea5bf566f", "text": "function name will expand to either `::increment` or `::mylib::increment`. To keep this system simple and correct, `#[macro_use] extern crate ...` may only appear at the root of your crate, not inside `mod`. This ensures that `$crate` is a single identifier. only appear at the root of your crate, not inside `mod`. 
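An aside on the constant-evaluation record above: the `Cake` test exercises matching on a constant whose value is an enum variant. On current compilers that pattern works, but only if the enum derives `PartialEq` (and `Eq`) so the constant is usable as a pattern. A minimal sketch, assuming a recent stable toolchain (not the 2015-era test itself):

```rust
// Matching on a constant of enum type; the derives give structural equality,
// which is what lets the constant appear in a match pattern.
#[allow(dead_code)]
#[derive(PartialEq, Eq)]
enum Cake {
    BlackForest,
    Marmor,
}

const FAVOURITE: Cake = Cake::Marmor;

fn main() {
    match Cake::Marmor {
        FAVOURITE => println!("matched the constant"),
        _ => println!("something else"),
    }
}
```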
# The deep end", "commid": "rust_pr_30405"}], "negative_passages": []} {"query_id": "q-en-rust-ce53f5f969ebf5ed98f498f326e4b0cad36eefb0a9666841dbe971a63b13bf11", "query": "broke this code:\nClone is not implemented for anymore, because it now requires (previously ), which doesn't hold for arrays bigger than 32.\nI'm assuming that trying to call on an would previously ICE the compiler?\nIn certain contexts, such as it would / will ICE yes, issue\nAh, the bound was before.\nIf only we had type level integers so we could just properly implement Clone for all fixed size array sizes without massive bloat.\nIt doesn't even requires actual numbers, most of impls act on \"size-erased\" arrays. Something like would be enough.\nhow would that work? Don't you need to loop N times to do the actual clone?\nSomething like: So is not used explicitly in the implementation, although it is still needed by internally. Whether it is easier to implement than type level integers or not is a separate question.", "positive_passages": [{"docid": "doc-en-rust-e5e7499cbe44e2669987ced0191bfce62d38b8227b8f81ce2b40ccbe37778c32", "text": "use fmt; use hash::{Hash, self}; use iter::IntoIterator; use marker::{Sized, Unsize}; use marker::{Copy, Sized, Unsize}; use option::Option; use slice::{Iter, IterMut, SliceExt};", "commid": "rust_pr_30247"}], "negative_passages": []} {"query_id": "q-en-rust-ce53f5f969ebf5ed98f498f326e4b0cad36eefb0a9666841dbe971a63b13bf11", "query": "broke this code:\nClone is not implemented for anymore, because it now requires (previously ), which doesn't hold for arrays bigger than 32.\nI'm assuming that trying to call on an would previously ICE the compiler?\nIn certain contexts, such as it would / will ICE yes, issue\nAh, the bound was before.\nIf only we had type level integers so we could just properly implement Clone for all fixed size array sizes without massive bloat.\nIt doesn't even requires actual numbers, most of impls act on \"size-erased\" arrays. Something like would be enough.\nhow would that work? Don't you need to loop N times to do the actual clone?\nSomething like: So is not used explicitly in the implementation, although it is still needed by internally. Whether it is easier to implement than type level integers or not is a separate question.", "positive_passages": [{"docid": "doc-en-rust-af39811f6c82552558de56b6553d0fb85c45c7988a93397483c9ca3f8e042abb", "text": "} #[stable(feature = \"rust1\", since = \"1.0.0\")] impl Clone for [T; $N] { fn clone(&self) -> [T; $N] { *self } } #[stable(feature = \"rust1\", since = \"1.0.0\")] impl Hash for [T; $N] { fn hash(&self, state: &mut H) { Hash::hash(&self[..], state)", "commid": "rust_pr_30247"}], "negative_passages": []} {"query_id": "q-en-rust-ce53f5f969ebf5ed98f498f326e4b0cad36eefb0a9666841dbe971a63b13bf11", "query": "broke this code:\nClone is not implemented for anymore, because it now requires (previously ), which doesn't hold for arrays bigger than 32.\nI'm assuming that trying to call on an would previously ICE the compiler?\nIn certain contexts, such as it would / will ICE yes, issue\nAh, the bound was before.\nIf only we had type level integers so we could just properly implement Clone for all fixed size array sizes without massive bloat.\nIt doesn't even requires actual numbers, most of impls act on \"size-erased\" arrays. Something like would be enough.\nhow would that work? 
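On the `$crate` convention discussed in the quoted book excerpt: a minimal sketch of how it keeps an exported macro working both inside the defining crate and from a downstream crate (the `inc`/`increment` names here are illustrative, not taken from the excerpt):

```rust
// $crate expands to the defining crate's root path, so the call resolves
// correctly no matter where the macro is invoked from.
#[macro_export]
macro_rules! inc {
    ($x:expr) => {
        $crate::increment($x)
    };
}

pub fn increment(x: u32) -> u32 {
    x + 1
}

fn main() {
    assert_eq!(inc!(41), 42);
}
```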
Don't you need to loop N times to do the actual clone?\nSomething like: So is not used explicitly in the implementation, although it is still needed by internally. Whether it is easier to implement than type level integers or not is a separate question.", "positive_passages": [{"docid": "doc-en-rust-f10fbe1bace21f3c98411080581f323828237f52c6c38311646f29c2f8cb0828", "text": "} array_impl_default!{32, T T T T T T T T T T T T T T T T T T T T T T T T T T T T T T T T} macro_rules! array_impl_clone { {$n:expr, $i:expr, $($idx:expr,)*} => { #[stable(feature = \"rust1\", since = \"1.0.0\")] impl Clone for [T; $n] { fn clone(&self) -> [T; $n] { [self[$i-$i].clone(), $(self[$i-$idx].clone()),*] } } array_impl_clone!{$i, $($idx,)*} }; {$n:expr,} => { #[stable(feature = \"rust1\", since = \"1.0.0\")] impl Clone for [T; 0] { fn clone(&self) -> [T; 0] { [] } } }; } array_impl_clone! { 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, } ", "commid": "rust_pr_30247"}], "negative_passages": []} {"query_id": "q-en-rust-ce53f5f969ebf5ed98f498f326e4b0cad36eefb0a9666841dbe971a63b13bf11", "query": "broke this code:\nClone is not implemented for anymore, because it now requires (previously ), which doesn't hold for arrays bigger than 32.\nI'm assuming that trying to call on an would previously ICE the compiler?\nIn certain contexts, such as it would / will ICE yes, issue\nAh, the bound was before.\nIf only we had type level integers so we could just properly implement Clone for all fixed size array sizes without massive bloat.\nIt doesn't even requires actual numbers, most of impls act on \"size-erased\" arrays. Something like would be enough.\nhow would that work? Don't you need to loop N times to do the actual clone?\nSomething like: So is not used explicitly in the implementation, although it is still needed by internally. Whether it is easier to implement than type level integers or not is a separate question.", "positive_passages": [{"docid": "doc-en-rust-d6e6435e4839cccbb28ae1b7acfcade5f29aa205bef5911310d184823fadf400", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // test for issue #30244 #[derive(Copy, Clone)] struct Array { arr: [[u8; 256]; 4] } pub fn main() {} ", "commid": "rust_pr_30247"}], "negative_passages": []} {"query_id": "q-en-rust-f13484e454bdd5ca28fcde5b37e81f9509e71be98d60357660e56abc0c1cbd5a", "query": "This code isn't valid Rust: We have an okay, but not great, diagnostic here: It might be nice to add a when the keyword found is that mentions that is not an expression, and so cannot be used in this way.\nwant to work on this? It's a similar task to the other doc comment one, but much simpler since it's a very specific situation.\nI'd be happy to help someone :)\nSo, the parser already has its ways to differentiate keywords from identifiers. All we have to do is find the right place where that error is thrown, check whether it's a and add a note along with the error. The parser has all sorts of code, and adding a note's also around there somewhere :) Code:\nI would like to try this. :)\nAwesome! Feel free to ask waffles or me for help (you can find us both on IRC on #rust-internals), or ask questions here. 
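An aside on the array-`Clone` record above: the macro in the patch stops at 32 elements, which is exactly the limitation the discussion wants type-level integers for. On later toolchains const generics remove it entirely; a small sketch, assuming a compiler with const generics stabilized:

```rust
// With const generics, [T; N]: Clone holds for every N as long as T: Clone,
// so no per-length macro expansion (0..=32) is needed any more.
#[derive(Clone)]
struct Buffer<T, const N: usize> {
    data: [T; N],
}

fn main() {
    let big = Buffer { data: [0u8; 256] }; // well past the old 32-element limit
    let copy = big.clone();
    assert_eq!(copy.data.len(), 256);
}
```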
(self-assigning since I can't assign to you -- just to mark it as \"being taken\")", "positive_passages": [{"docid": "doc-en-rust-925930760c1a0c4bd17e89d0fd57ce56140bb3ba1bba7061aa247777e934ad20", "text": "use self::Destination::*; use codemap::{self, COMMAND_LINE_SP, COMMAND_LINE_EXPN, Pos, Span}; use codemap::{self, COMMAND_LINE_SP, COMMAND_LINE_EXPN, DUMMY_SP, Pos, Span}; use diagnostics; use errors::{Level, RenderSpan, DiagnosticBuilder};", "commid": "rust_pr_31211"}], "negative_passages": []} {"query_id": "q-en-rust-f13484e454bdd5ca28fcde5b37e81f9509e71be98d60357660e56abc0c1cbd5a", "query": "This code isn't valid Rust: We have an okay, but not great, diagnostic here: It might be nice to add a when the keyword found is that mentions that is not an expression, and so cannot be used in this way.\nwant to work on this? It's a similar task to the other doc comment one, but much simpler since it's a very specific situation.\nI'd be happy to help someone :)\nSo, the parser already has its ways to differentiate keywords from identifiers. All we have to do is find the right place where that error is thrown, check whether it's a and add a note along with the error. The parser has all sorts of code, and adding a note's also around there somewhere :) Code:\nI would like to try this. :)\nAwesome! Feel free to ask waffles or me for help (you can find us both on IRC on #rust-internals), or ask questions here. (self-assigning since I can't assign to you -- just to mark it as \"being taken\")", "positive_passages": [{"docid": "doc-en-rust-d3711ff1d468e3453efab83403e27e0c9757586f039a9531ba43e6def0653e82", "text": "lvl: Level) { let error = match sp { Some(COMMAND_LINE_SP) => self.emit_(FileLine(COMMAND_LINE_SP), msg, code, lvl), Some(DUMMY_SP) | None => print_diagnostic(&mut self.dst, \"\", lvl, msg, code), Some(sp) => self.emit_(FullSpan(sp), msg, code, lvl), None => print_diagnostic(&mut self.dst, \"\", lvl, msg, code), }; if let Err(e) = error {", "commid": "rust_pr_31211"}], "negative_passages": []} {"query_id": "q-en-rust-f13484e454bdd5ca28fcde5b37e81f9509e71be98d60357660e56abc0c1cbd5a", "query": "This code isn't valid Rust: We have an okay, but not great, diagnostic here: It might be nice to add a when the keyword found is that mentions that is not an expression, and so cannot be used in this way.\nwant to work on this? It's a similar task to the other doc comment one, but much simpler since it's a very specific situation.\nI'd be happy to help someone :)\nSo, the parser already has its ways to differentiate keywords from identifiers. All we have to do is find the right place where that error is thrown, check whether it's a and add a note along with the error. The parser has all sorts of code, and adding a note's also around there somewhere :) Code:\nI would like to try this. :)\nAwesome! Feel free to ask waffles or me for help (you can find us both on IRC on #rust-internals), or ask questions here. 
(self-assigning since I can't assign to you -- just to mark it as \"being taken\")", "positive_passages": [{"docid": "doc-en-rust-21970372081cd4ca6e62269552918bea2e1f99df514dec4386659c1aa0fbdc5f", "text": "ex = ExprBreak(None); } hi = self.last_span.hi; } else if self.token.is_keyword(keywords::Let) { // Catch this syntax error here, instead of in `check_strict_keywords`, so // that we can explicitly mention that let is not to be used as an expression let mut db = self.fatal(\"expected expression, found statement (`let`)\"); db.note(\"variable declaration using `let` is a statement\"); return Err(db); } else if self.check(&token::ModSep) || self.token.is_ident() && !self.check_keyword(keywords::True) &&", "commid": "rust_pr_31211"}], "negative_passages": []} {"query_id": "q-en-rust-f13484e454bdd5ca28fcde5b37e81f9509e71be98d60357660e56abc0c1cbd5a", "query": "This code isn't valid Rust: We have an okay, but not great, diagnostic here: It might be nice to add a when the keyword found is that mentions that is not an expression, and so cannot be used in this way.\nwant to work on this? It's a similar task to the other doc comment one, but much simpler since it's a very specific situation.\nI'd be happy to help someone :)\nSo, the parser already has its ways to differentiate keywords from identifiers. All we have to do is find the right place where that error is thrown, check whether it's a and add a note along with the error. The parser has all sorts of code, and adding a note's also around there somewhere :) Code:\nI would like to try this. :)\nAwesome! Feel free to ask waffles or me for help (you can find us both on IRC on #rust-internals), or ask questions here. (self-assigning since I can't assign to you -- just to mark it as \"being taken\")", "positive_passages": [{"docid": "doc-en-rust-25585000a526ce72a6295267d520c97dcaef1936575169746ca45a593d5a6558", "text": "// ignore-cross-compile // error-pattern:expected identifier, found keyword `let` // error-pattern:expected expression, found statement (`let`) #![feature(quote, rustc_private)]", "commid": "rust_pr_31211"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-6e59e27e0adeab04a756c070f223dc2f8fa7d98f80db4e1f9ea4b32ce7e5c0e5", "text": "} } #[stable(feature = \"mpsc_debug\", since = \"1.7.0\")] impl fmt::Debug for Sender { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"Sender {{ .. }}\") } } //////////////////////////////////////////////////////////////////////////////// // SyncSender ////////////////////////////////////////////////////////////////////////////////", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. 
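A hedged illustration of the situation the `let` diagnostic record above is about (the exact snippet from the issue is not reproduced in the record): `let` is a statement, so it cannot appear where an expression is expected, and the usual fix is to wrap the bindings in a block, which is an expression:

```rust
fn main() {
    // let x = (let y = 5); // not allowed: `let` is a statement, not an expression

    // A block is an expression, so the binding can live inside it:
    let x = {
        let y = 5;
        y + 1
    };
    assert_eq!(x, 6);
}
```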
Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-88e024a9c8258c2a3b467e0180c2c972a14eea0ff1a059a86bfb5d77ca293799", "text": "} } #[stable(feature = \"mpsc_debug\", since = \"1.7.0\")] impl fmt::Debug for SyncSender { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"SyncSender {{ .. }}\") } } //////////////////////////////////////////////////////////////////////////////// // Receiver ////////////////////////////////////////////////////////////////////////////////", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-9555cc0dd489dcafe43922cc32187d1391f81331dfa6fa17c0196f54f2e9d9d6", "text": "} } #[stable(feature = \"mpsc_debug\", since = \"1.7.0\")] impl fmt::Debug for Receiver { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"Receiver {{ .. }}\") } } #[stable(feature = \"rust1\", since = \"1.0.0\")] impl fmt::Debug for SendError { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-31fbbf6d199e1920c12bce5433eab36d5d5c409bf9ceb3cabb01bbe119743568", "text": "repro() } } #[test] fn fmt_debug_sender() { let (tx, _) = channel::(); assert_eq!(format!(\"{:?}\", tx), \"Sender { .. }\"); } #[test] fn fmt_debug_recv() { let (_, rx) = channel::(); assert_eq!(format!(\"{:?}\", rx), \"Receiver { .. }\"); } #[test] fn fmt_debug_sync_sender() { let (tx, _) = sync_channel::(1); assert_eq!(format!(\"{:?}\", tx), \"SyncSender { .. 
}\"); } }", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-0492a841d8c288c61667816aa066b7cf6c518277dfc682b2bba8b2e3d44a0407", "text": "issue = \"27800\")] use fmt; use core::cell::{Cell, UnsafeCell}; use core::marker; use core::ptr;", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-7662ca2df535eaa7c360c35cceb7a5a19ddc5be7e806d21e6d7b1743051de3bf", "text": "} } #[stable(feature = \"mpsc_debug\", since = \"1.7.0\")] impl fmt::Debug for Select { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"Select {{ .. }}\") } } #[stable(feature = \"mpsc_debug\", since = \"1.7.0\")] impl<'rx, T:Send+'rx> fmt::Debug for Handle<'rx, T> { fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { write!(f, \"Handle {{ .. }}\") } } #[cfg(test)] #[allow(unused_imports)] mod tests {", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-bdeb360e2351b292865b5ab5b3d8c0087a6cd49477896d6b8ca69354de6ec179", "query": "All public types should implement Debug. Per conversation on #rust-internals, there might be some useful information that could be printed for these types -- but even just \"Sender { .. }\" etc. would be a start.\nWas this\nprobably. It has been suggested that way more useful info could actually be printed instead of the opaque placeholders -- but that would be more of a wishlist item than a bug... The patch is sufficient to fix the actual problem I was running into: not being able to #[derive(Debug)] on any structure containing mpsc Senders/Receivers.\nOk, closing.", "positive_passages": [{"docid": "doc-en-rust-861d23dc0d6f0718722015004c7704f3090d7081ecf6ff084f23c75357b8aca4", "text": "} } } #[test] fn fmt_debug_select() { let sel = Select::new(); assert_eq!(format!(\"{:?}\", sel), \"Select { .. }\"); } #[test] fn fmt_debug_handle() { let (_, rx) = channel::(); let sel = Select::new(); let mut handle = sel.handle(&rx); assert_eq!(format!(\"{:?}\", handle), \"Handle { .. 
}\"); } }", "commid": "rust_pr_30894"}], "negative_passages": []} {"query_id": "q-en-rust-88e44b0e76a4e5daad0fafa310bfec8d4b5ca019ac81f853d3d3fe1401f1a3e3", "query": "Code like the following one generates some unnecessary LLVM IR. Namely: We should skip zero-sized fields in .", "positive_passages": [{"docid": "doc-en-rust-2447340192c6ffc7a1713199e0c40f19a6b931c22b0c3a960d8c33b68a335d3f", "text": "operand: OperandRef<'tcx>) { debug!(\"store_operand: operand={}\", operand.repr(bcx)); // Avoid generating stores of zero-sized values, because the only way to have a zero-sized // value is through `undef`, and store itself is useless. if common::type_is_zero_size(bcx.ccx(), operand.ty) { return; } match operand.val { OperandValue::Ref(r) => base::memcpy_ty(bcx, lldest, r, operand.ty), OperandValue::Immediate(s) => base::store_ty(bcx, s, lldest, operand.ty),", "commid": "rust_pr_30848"}], "negative_passages": []} {"query_id": "q-en-rust-88e44b0e76a4e5daad0fafa310bfec8d4b5ca019ac81f853d3d3fe1401f1a3e3", "query": "Code like the following one generates some unnecessary LLVM IR. Namely: We should skip zero-sized fields in .", "positive_passages": [{"docid": "doc-en-rust-2a810bfebf5070028ac32a47de42f8f66e87ec6cd118d37d5aed25d970a9e7d0", "text": "}, _ => { for (i, operand) in operands.iter().enumerate() { // Note: perhaps this should be StructGep, but // note that in some cases the values here will // not be structs but arrays. let lldest_i = build::GEPi(bcx, dest.llval, &[0, i]); self.trans_operand_into(bcx, lldest_i, operand); let op = self.trans_operand(bcx, operand); // Do not generate stores and GEPis for zero-sized fields. if !common::type_is_zero_size(bcx.ccx(), op.ty) { // Note: perhaps this should be StructGep, but // note that in some cases the values here will // not be structs but arrays. let dest = build::GEPi(bcx, dest.llval, &[0, i]); self.store_operand(bcx, dest, op); } } } }", "commid": "rust_pr_30848"}], "negative_passages": []} {"query_id": "q-en-rust-88e44b0e76a4e5daad0fafa310bfec8d4b5ca019ac81f853d3d3fe1401f1a3e3", "query": "Code like the following one generates some unnecessary LLVM IR. Namely: We should skip zero-sized fields in .", "positive_passages": [{"docid": "doc-en-rust-5c09b9de3f491ec65a0014babe09ea65d481f36dae48fab6b5f01a023b0b1b69", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // compile-flags: -C no-prepopulate-passes #![feature(rustc_attrs)] #![crate_type = \"lib\"] use std::marker::PhantomData; struct Zst { phantom: PhantomData } // CHECK-LABEL: @mir #[no_mangle] #[rustc_mir] fn mir(){ // CHECK-NOT: getelementptr // CHECK-NOT: store{{.*}}undef let x = Zst { phantom: PhantomData }; } ", "commid": "rust_pr_30848"}], "negative_passages": []} {"query_id": "q-en-rust-dbfd688bfaa4a192db912bc316f6ec1c0b1b2c51cfe57e8944a4afb5e32dd791", "query": "As , if one of the stdio handles for a Windows process is unset (e.g. isn't present) then you won't actually be able to spawn a process by default. All handles are set, by default, to \"inherit\", but duplicate the existing handle, which in turn means that if the stdio handle isn't present this operation will fail. 
Rust currently by always opening up handles even if is specified (just pointing them to a blank stream), but Rust should also defend against this situation when it was spawned from elsewhere.\ncc", "positive_passages": [{"docid": "doc-en-rust-4c8ea6e976b94b40d20a3a4523ef811a6fd8c61ff14366c01cf1689686e9420c", "text": "impl io::Read for Maybe { fn read(&mut self, buf: &mut [u8]) -> io::Result { match *self { Maybe::Real(ref mut r) => handle_ebadf(r.read(buf), buf.len()), Maybe::Real(ref mut r) => handle_ebadf(r.read(buf), 0), Maybe::Fake => Ok(0) } }", "commid": "rust_pr_31177"}], "negative_passages": []} {"query_id": "q-en-rust-dbfd688bfaa4a192db912bc316f6ec1c0b1b2c51cfe57e8944a4afb5e32dd791", "query": "As , if one of the stdio handles for a Windows process is unset (e.g. isn't present) then you won't actually be able to spawn a process by default. All handles are set, by default, to \"inherit\", but duplicate the existing handle, which in turn means that if the stdio handle isn't present this operation will fail. Rust currently by always opening up handles even if is specified (just pointing them to a blank stream), but Rust should also defend against this situation when it was spawned from elsewhere.\ncc", "positive_passages": [{"docid": "doc-en-rust-6c0d2fe607ad05de9b888a8f7156c41fd69324d720962869e2dd26e3a2f5b954", "text": "impl Stdio { fn to_handle(&self, stdio_id: c::DWORD) -> io::Result { match *self { // If no stdio handle is available, then inherit means that it // should still be unavailable so propagate the // INVALID_HANDLE_VALUE. Stdio::Inherit => { stdio::get(stdio_id).and_then(|io| { io.handle().duplicate(0, true, c::DUPLICATE_SAME_ACCESS) }) match stdio::get(stdio_id) { Ok(io) => io.handle().duplicate(0, true, c::DUPLICATE_SAME_ACCESS), Err(..) => Ok(Handle::new(c::INVALID_HANDLE_VALUE)), } } Stdio::Raw(handle) => { RawHandle::new(handle).duplicate(0, true, c::DUPLICATE_SAME_ACCESS)", "commid": "rust_pr_31177"}], "negative_passages": []} {"query_id": "q-en-rust-dbfd688bfaa4a192db912bc316f6ec1c0b1b2c51cfe57e8944a4afb5e32dd791", "query": "As , if one of the stdio handles for a Windows process is unset (e.g. isn't present) then you won't actually be able to spawn a process by default. All handles are set, by default, to \"inherit\", but duplicate the existing handle, which in turn means that if the stdio handle isn't present this operation will fail. Rust currently by always opening up handles even if is specified (just pointing them to a blank stream), but Rust should also defend against this situation when it was spawned from elsewhere.\ncc", "positive_passages": [{"docid": "doc-en-rust-028ea6fb6aca8f4353a3dc3e3a69bdd09a99367333dfe86fd776789c543a376f", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
#![feature(libc)] extern crate libc; use std::process::{Command, Stdio}; use std::env; use std::io::{self, Read, Write}; #[cfg(unix)] unsafe fn without_stdio R>(f: F) -> R { let doit = |a| { let r = libc::dup(a); assert!(r >= 0); return r }; let a = doit(0); let b = doit(1); let c = doit(2); assert!(libc::close(0) >= 0); assert!(libc::close(1) >= 0); assert!(libc::close(2) >= 0); let r = f(); assert!(libc::dup2(a, 0) >= 0); assert!(libc::dup2(b, 1) >= 0); assert!(libc::dup2(c, 2) >= 0); return r } #[cfg(windows)] unsafe fn without_stdio R>(f: F) -> R { type DWORD = u32; type HANDLE = *mut u8; type BOOL = i32; const STD_INPUT_HANDLE: DWORD = -10i32 as DWORD; const STD_OUTPUT_HANDLE: DWORD = -11i32 as DWORD; const STD_ERROR_HANDLE: DWORD = -12i32 as DWORD; const INVALID_HANDLE_VALUE: HANDLE = !0 as HANDLE; extern \"system\" { fn GetStdHandle(which: DWORD) -> HANDLE; fn SetStdHandle(which: DWORD, handle: HANDLE) -> BOOL; } let doit = |id| { let handle = GetStdHandle(id); assert!(handle != INVALID_HANDLE_VALUE); assert!(SetStdHandle(id, INVALID_HANDLE_VALUE) != 0); return handle }; let a = doit(STD_INPUT_HANDLE); let b = doit(STD_OUTPUT_HANDLE); let c = doit(STD_ERROR_HANDLE); let r = f(); let doit = |id, handle| { assert!(SetStdHandle(id, handle) != 0); }; doit(STD_INPUT_HANDLE, a); doit(STD_OUTPUT_HANDLE, b); doit(STD_ERROR_HANDLE, c); return r } fn main() { if env::args().len() > 1 { println!(\"test\"); assert!(io::stdout().write(b\"testn\").is_ok()); assert!(io::stderr().write(b\"testn\").is_ok()); assert_eq!(io::stdin().read(&mut [0; 10]).unwrap(), 0); return } // First, make sure reads/writes without stdio work if stdio itself is // missing. let (a, b, c) = unsafe { without_stdio(|| { let a = io::stdout().write(b\"testn\"); let b = io::stderr().write(b\"testn\"); let c = io::stdin().read(&mut [0; 10]); (a, b, c) }) }; assert_eq!(a.unwrap(), 5); assert_eq!(b.unwrap(), 5); assert_eq!(c.unwrap(), 0); // Second, spawn a child and do some work with \"null\" descriptors to make // sure it's ok let me = env::current_exe().unwrap(); let status = Command::new(&me) .arg(\"next\") .stdin(Stdio::null()) .stdout(Stdio::null()) .stderr(Stdio::null()) .status().unwrap(); assert!(status.success(), \"{:?} isn't a success\", status); // Finally, close everything then spawn a child to make sure everything is // *still* ok. let status = unsafe { without_stdio(|| Command::new(&me).arg(\"next\").status()) }.unwrap(); assert!(status.success(), \"{:?} isn't a success\", status); } ", "commid": "rust_pr_31177"}], "negative_passages": []} {"query_id": "q-en-rust-aaa5a3ee333423b667a9c40eee5d23ef3ae78609a2bc656233d5d4760a024d1a", "query": "I am following the Book in the Getting Started chapter, try my first rustc and get the error: The manual should at least mention the need for a C compiler (or at least a linker) as a dependency if you are using to install the binaries.", "positive_passages": [{"docid": "doc-en-rust-26de3772cd678f1e2b3c357b27f7e0ae0f6a5510ea4fff1e05ef4de01a82ee61", "text": "repair, or remove installation\" page and ensure \"Add to PATH\" is installed on the local hard drive. Rust does not do its own linking, and so you\u2019ll need to have a linker installed. Doing so will depend on your specific system, consult its documentation for more details. If not, there are a number of places where we can get help. The easiest is [the #rust IRC channel on irc.mozilla.org][irc], which we can access through [Mibbit][mibbit]. 
Click that link, and we'll be chatting with other Rustaceans", "commid": "rust_pr_31199"}], "negative_passages": []} {"query_id": "q-en-rust-c0a381e93cd755a3a738305000b35b277d65be6c0b605dccc6f9cfe94853c728", "query": "Brand new to Rust so I apologize if this is not the right place for something like this. I just finished \"The Guessing Game\" and noticed a small typo: We can do that with three more lines. Here\u2019s our new program: The new three lines: It appears the book was updated at some point to only contain two lines. Similarly. in the following section. this could be a typo or possibly just confusing: If we leave off calling these two methods, our program will compile, but we\u2019ll get a warning: The two methods could refer to .expect and panic!.\nNope, this is a typo. There used to be calls to in there, and they were removed, but the English wasn't.", "positive_passages": [{"docid": "doc-en-rust-2aee2c541d7ce81ddb1afe129b06914cb6acbfea4f2859b705f5e73ce896e467", "text": "[expect]: ../std/option/enum.Option.html#method.expect [panic]: error-handling.html If we leave off calling these two methods, our program will compile, but If we leave off calling this method, our program will compile, but we\u2019ll get a warning: ```bash", "commid": "rust_pr_31294"}], "negative_passages": []} {"query_id": "q-en-rust-c0a381e93cd755a3a738305000b35b277d65be6c0b605dccc6f9cfe94853c728", "query": "Brand new to Rust so I apologize if this is not the right place for something like this. I just finished \"The Guessing Game\" and noticed a small typo: We can do that with three more lines. Here\u2019s our new program: The new three lines: It appears the book was updated at some point to only contain two lines. Similarly. in the following section. this could be a typo or possibly just confusing: If we leave off calling these two methods, our program will compile, but we\u2019ll get a warning: The two methods could refer to .expect and panic!.\nNope, this is a typo. There used to be calls to in there, and they were removed, but the English wasn't.", "positive_passages": [{"docid": "doc-en-rust-f582056ed1cde8e9e1c235851695f5513cb02ceb1c2dcc47996c3e1fe2c1d133", "text": "} ``` The new three lines: The new two lines: ```rust,ignore let guess: u32 = guess.trim().parse()", "commid": "rust_pr_31294"}], "negative_passages": []} {"query_id": "q-en-rust-79680a4c1ae9382bb09dca8326bedf7ca1a0a246ed108b49225d13536e6334d8", "query": "The method on iterators consumes the first falsy value, instead of leaving it in the iterator. This behavior seems unintuitive to me; and I could not find any documentation of this functionality. Due to this (I think) being a breaking change, it's probably not worth changing the functionality, but documenting this is probably a good idea. I tried code: I expected to see this output: Instead, this I saw this output: : I believe this also occurs on stable (tested in ).\nNot only would it be a breaking change, but it is impossible for a general iterator: the only way to tell if needs to stop or not is to get the element out of the iterator with , and there's no way to uncall /push an element back on to the end. (E.g. 
consider something like : once the message has been received and yielded, it is gone.)\nFor people reaching here and seeking a way to do this currently, provides a that will leave the last element in the iterator, but at the cost of cloning the entire iterator every iteration.\nWell, maybe we can put the first falsy value to inside and add an accessor to it...\nThe cost of cloning an iterator is totally depedent on which iterator it is. A slice iterator is implemented using two pointers, so the cloning amounts to storing two pointers during the iteration. When the whole iteration is inlined, and llvm breaking the structs down to their members, it should be able to remove the store of the pointer that doesn't change (the end), so the overhead is down to storing and restoring one pointer.\nis a nice hack, but it should be complemented by either adaptors on Peekable and/or itertools' PutBack.", "positive_passages": [{"docid": "doc-en-rust-a482422ed8a19cdf4278f5dd15bedaf45ad172c9ca59decb3fdc835eeece68f4", "text": "/// // got a false, take_while() isn't used any more /// assert_eq!(iter.next(), None); /// ``` /// /// Because `take_while()` needs to look at the value in order to see if it /// should be included or not, consuming iterators will see that it is /// removed: /// /// ``` /// let a = [1, 2, 3, 4]; /// let mut iter = a.into_iter(); /// /// let result: Vec = iter.by_ref() /// .take_while(|n| **n != 3) /// .cloned() /// .collect(); /// /// assert_eq!(result, &[1, 2]); /// /// let result: Vec = iter.cloned().collect(); /// /// assert_eq!(result, &[4]); /// ``` /// /// The `3` is no longer there, because it was consumed in order to see if /// the iteration should stop, but wasn't placed back into the iterator or /// some similar thing. #[inline] #[stable(feature = \"rust1\", since = \"1.0.0\")] fn take_while


(self, predicate: P) -> TakeWhile where", "commid": "rust_pr_31351"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-50ce1d9db8285766b8ba24a9d8f5e3936f442126ac0d100b9b59c1aa30aa483f", "text": "run_lints!(cx, check_item, late_passes, it); cx.visit_ids(|v| v.visit_item(it)); hir_visit::walk_item(cx, it); run_lints!(cx, check_item_post, late_passes, it); }) }", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-f54e9d6e2df47a9881b8318482ffcc780064def6d71bafd0519b33bbb9bf8126", "text": "fn visit_block(&mut self, b: &hir::Block) { run_lints!(self, check_block, late_passes, b); hir_visit::walk_block(self, b); run_lints!(self, check_block_post, late_passes, b); } fn visit_arm(&mut self, a: &hir::Arm) {", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. 
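The `take_while` record above documents that the first non-matching element is consumed; the `Peekable`-based workaround mentioned in the discussion can be sketched like this (a plain illustration, not the itertools adaptor itself):

```rust
fn main() {
    let a = [1, 2, 3, 4];
    let mut iter = a.iter().peekable();

    // Only consume an element after peeking confirms it matches,
    // so the first failing element stays in the iterator.
    let mut taken = Vec::new();
    while let Some(&&n) = iter.peek() {
        if n == 3 {
            break;
        }
        taken.push(n);
        iter.next();
    }

    assert_eq!(taken, [1, 2]);
    assert_eq!(iter.next(), Some(&3)); // unlike take_while, the 3 is still here
}
```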
This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-5ca167bf9327db5cef394580410f3b57c95521b53c1462fef32c2811c6487bfd", "text": "run_lints!(cx, check_item, early_passes, it); cx.visit_ids(|v| v.visit_item(it)); ast_visit::walk_item(cx, it); run_lints!(cx, check_item_post, early_passes, it); }) }", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-137932b32ff0c6f0d2d11734538d82203259d9890a9c323fa50270beec1c15b2", "text": "fn visit_block(&mut self, b: &ast::Block) { run_lints!(self, check_block, early_passes, b); ast_visit::walk_block(self, b); run_lints!(self, check_block_post, early_passes, b); } fn visit_arm(&mut self, a: &ast::Arm) {", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-3c12f952c9d55e9e36dd347e88a36a205bb28413d7a703a6e2100685dff3bc4f", "text": "run_lints!(cx, check_crate, late_passes, krate); hir_visit::walk_crate(cx, krate); run_lints!(cx, check_crate_post, late_passes, krate); }); // If we missed any lints added to the session, then there's a bug somewhere", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. 
However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-754a55dbe08bc4a174e942e3218ef7ea1a028a19cdc7d46551790394755aa3cd", "text": "run_lints!(cx, check_crate, early_passes, krate); ast_visit::walk_crate(cx, krate); run_lints!(cx, check_crate_post, early_passes, krate); }); // Put the lint store back in the session.", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-f50b4c07fac902455c158062996d1f20208ed3cd6764920ead25e3bcb149537e", "text": "pub trait LateLintPass: LintPass { fn check_name(&mut self, _: &LateContext, _: Span, _: ast::Name) { } fn check_crate(&mut self, _: &LateContext, _: &hir::Crate) { } fn check_crate_post(&mut self, _: &LateContext, _: &hir::Crate) { } fn check_mod(&mut self, _: &LateContext, _: &hir::Mod, _: Span, _: ast::NodeId) { } fn check_foreign_item(&mut self, _: &LateContext, _: &hir::ForeignItem) { } fn check_item(&mut self, _: &LateContext, _: &hir::Item) { } fn check_item_post(&mut self, _: &LateContext, _: &hir::Item) { } fn check_local(&mut self, _: &LateContext, _: &hir::Local) { } fn check_block(&mut self, _: &LateContext, _: &hir::Block) { } fn check_block_post(&mut self, _: &LateContext, _: &hir::Block) { } fn check_stmt(&mut self, _: &LateContext, _: &hir::Stmt) { } fn check_arm(&mut self, _: &LateContext, _: &hir::Arm) { } fn check_pat(&mut self, _: &LateContext, _: &hir::Pat) { }", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-96394ede21775e1a48832b44e9b32d001727e191c0535cceb7504f93fbe95d67", "query": "For clippy's lint, I made use of a so I could stop clippy from linting the same macro multiple times while being reasonably fast. However, this unfortunately means the lint ignores .. attributes below the crate level. 
To solve this and other related problems, I propose any of allow a lint to check anything from anywhere in the analysis by moving up the parent chain to check the level allow a lint to opt out of calls for the duration of a specific node including its child nodes add methods to the {Early, Late}LintPass trait, so lints can decide themselves The third option is probably the easiest to implement, yet the most flexible. This would bring the trait closer to other stream parser implementations like SAX.\nSeeing that we already have a few, _post methods, I think adding some more will be an acceptable solution here. I'll have a PR shortly.", "positive_passages": [{"docid": "doc-en-rust-7a23e3f0611bbf4011dccc24544be993c4a3a8a117d4365d04bedbe454d496ac", "text": "pub trait EarlyLintPass: LintPass { fn check_ident(&mut self, _: &EarlyContext, _: Span, _: ast::Ident) { } fn check_crate(&mut self, _: &EarlyContext, _: &ast::Crate) { } fn check_crate_post(&mut self, _: &EarlyContext, _: &ast::Crate) { } fn check_mod(&mut self, _: &EarlyContext, _: &ast::Mod, _: Span, _: ast::NodeId) { } fn check_foreign_item(&mut self, _: &EarlyContext, _: &ast::ForeignItem) { } fn check_item(&mut self, _: &EarlyContext, _: &ast::Item) { } fn check_item_post(&mut self, _: &EarlyContext, _: &ast::Item) { } fn check_local(&mut self, _: &EarlyContext, _: &ast::Local) { } fn check_block(&mut self, _: &EarlyContext, _: &ast::Block) { } fn check_block_post(&mut self, _: &EarlyContext, _: &ast::Block) { } fn check_stmt(&mut self, _: &EarlyContext, _: &ast::Stmt) { } fn check_arm(&mut self, _: &EarlyContext, _: &ast::Arm) { } fn check_pat(&mut self, _: &EarlyContext, _: &ast::Pat) { }", "commid": "rust_pr_31562"}], "negative_passages": []} {"query_id": "q-en-rust-9adf74faa9faab24deeff452c50c4d5fe17c4318695a97bd82e74c3b9bf8ff45", "query": "The documentation of the unstable function makes a reference to , but this function does not exist (anymore?). Expressing the semantics of in terms of something that does not exist is unhelpful. The problem seems to be located . I imagine the fix would involve traveling back in time until before was removed and copying its documentation, after correcting for the differences between it and the function.\nFilling drop is a temporary \"feature\" of rust, and in the long run this function will not be needed once the in-struct drop flags are gone. In anticipation of this improvement, it would be best to not write any code that relies on readanddrop. Edit: Well maybe that's exactly what to have in the docs..\nI'm not using that function, I was merely browsing the docs and noticed the documentation was wrong. I would expect the documentation of functions to be correct even if they are unstable and scheduled for removal, unless the removal is scheduled for the immediate future.", "positive_passages": [{"docid": "doc-en-rust-05794d08531f79a61c28863dd98520c274ceb6e974f5b783c286d6975ea35b5f", "text": "tmp } /// Variant of read_and_zero that writes the specific drop-flag byte /// (which may be more appropriate than zero). #[allow(missing_docs)] #[inline(always)] #[unstable(feature = \"filling_drop\", reason = \"may play a larger role in std::ptr future extensions\",", "commid": "rust_pr_31607"}], "negative_passages": []} {"query_id": "q-en-rust-c4498c0b389846dca9c8e5000a990d69493dd69319ad11f66ddcfe9afe072ecf", "query": "In a few places in there are casts from to or for Windows APIs which causes breakage on 64-bit. 
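Not rustc's actual `LintPass` API — just a plain-Rust sketch of why the paired `check_*` / `check_*_post` hooks added in the lint record above are useful: a pass can push per-node state on entry and pop it on the matching post call:

```rust
struct AllowStack {
    // One entry per nested item currently being visited; `true` means the
    // item carries an attribute that suppresses the lint.
    stack: Vec<bool>,
}

impl AllowStack {
    fn check_item(&mut self, item_allows_lint: bool) {
        self.stack.push(item_allows_lint);
    }
    fn check_item_post(&mut self) {
        self.stack.pop();
    }
    fn lint_enabled(&self) -> bool {
        !self.stack.iter().any(|&allowed| allowed)
    }
}

fn main() {
    let mut pass = AllowStack { stack: Vec::new() };
    assert!(pass.lint_enabled());
    pass.check_item(true);          // entering an item with #[allow(...)]
    assert!(!pass.lint_enabled());  // suppressed while inside it
    pass.check_item_post();         // leaving the item again
    assert!(pass.lint_enabled());
}
```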
Some examples I have found so far: () () (rust-lang-nursery/rand, ) There may be more.\nWell I didn't find any more so I'll close this.", "positive_passages": [{"docid": "doc-en-rust-a824613b3215dd37aeb9fa0bffcf38d7c334edd2418da6cc3b8688d153934b0c", "text": "use prelude::v1::*; use cmp; use ffi::{CStr, CString}; use fmt; use io::{self, Error, ErrorKind};", "commid": "rust_pr_31858"}], "negative_passages": []} {"query_id": "q-en-rust-c4498c0b389846dca9c8e5000a990d69493dd69319ad11f66ddcfe9afe072ecf", "query": "In a few places in there are casts from to or for Windows APIs which causes breakage on 64-bit. Some examples I have found so far: () () (rust-lang-nursery/rand, ) There may be more.\nWell I didn't find any more so I'll close this.", "positive_passages": [{"docid": "doc-en-rust-2d0578296c8337e421d4804be3940bf95a62c077e2947b778838014e06c4f77d", "text": "} pub fn write(&self, buf: &[u8]) -> io::Result { let len = cmp::min(buf.len(), ::max_value() as usize) as wrlen_t; let ret = try!(cvt(unsafe { c::send(*self.inner.as_inner(), buf.as_ptr() as *const c_void, buf.len() as wrlen_t, len, 0) })); Ok(ret as usize)", "commid": "rust_pr_31858"}], "negative_passages": []} {"query_id": "q-en-rust-c4498c0b389846dca9c8e5000a990d69493dd69319ad11f66ddcfe9afe072ecf", "query": "In a few places in there are casts from to or for Windows APIs which causes breakage on 64-bit. Some examples I have found so far: () () (rust-lang-nursery/rand, ) There may be more.\nWell I didn't find any more so I'll close this.", "positive_passages": [{"docid": "doc-en-rust-33ece65ce1bde9563358e4565b76dd836ad5a4e5b39666fd878e0116daaafae1", "text": "pub fn recv_from(&self, buf: &mut [u8]) -> io::Result<(usize, SocketAddr)> { let mut storage: c::sockaddr_storage = unsafe { mem::zeroed() }; let mut addrlen = mem::size_of_val(&storage) as c::socklen_t; let len = cmp::min(buf.len(), ::max_value() as usize) as wrlen_t; let n = try!(cvt(unsafe { c::recvfrom(*self.inner.as_inner(), buf.as_mut_ptr() as *mut c_void, buf.len() as wrlen_t, 0, len, 0, &mut storage as *mut _ as *mut _, &mut addrlen) })); Ok((n as usize, try!(sockaddr_to_addr(&storage, addrlen as usize)))) } pub fn send_to(&self, buf: &[u8], dst: &SocketAddr) -> io::Result { let len = cmp::min(buf.len(), ::max_value() as usize) as wrlen_t; let (dstp, dstlen) = dst.into_inner(); let ret = try!(cvt(unsafe { c::sendto(*self.inner.as_inner(), buf.as_ptr() as *const c_void, buf.len() as wrlen_t, buf.as_ptr() as *const c_void, len, 0, dstp, dstlen) })); Ok(ret as usize)", "commid": "rust_pr_31858"}], "negative_passages": []} {"query_id": "q-en-rust-c4498c0b389846dca9c8e5000a990d69493dd69319ad11f66ddcfe9afe072ecf", "query": "In a few places in there are casts from to or for Windows APIs which causes breakage on 64-bit. Some examples I have found so far: () () (rust-lang-nursery/rand, ) There may be more.\nWell I didn't find any more so I'll close this.", "positive_passages": [{"docid": "doc-en-rust-9202ff78fd59f34017409006f251b923474d9490a4c1fb916e4d50c77c91823a", "text": "// option. This file may not be copied, modified, or distributed // except according to those terms. use cmp; use io; use libc::{c_int, c_void}; use mem;", "commid": "rust_pr_31858"}], "negative_passages": []} {"query_id": "q-en-rust-c4498c0b389846dca9c8e5000a990d69493dd69319ad11f66ddcfe9afe072ecf", "query": "In a few places in there are casts from to or for Windows APIs which causes breakage on 64-bit. 
Some examples I have found so far: () () (rust-lang-nursery/rand, ) There may be more.\nWell I didn't find any more so I'll close this.", "positive_passages": [{"docid": "doc-en-rust-e7d814148def1361119c101aaa22d0657b69ee7acbba2aebc180727c49fd24e8", "text": "pub fn read(&self, buf: &mut [u8]) -> io::Result { // On unix when a socket is shut down all further reads return 0, so we // do the same on windows to map a shut down socket to returning EOF. let len = cmp::min(buf.len(), i32::max_value() as usize) as i32; unsafe { match c::recv(self.0, buf.as_mut_ptr() as *mut c_void, buf.len() as i32, 0) { match c::recv(self.0, buf.as_mut_ptr() as *mut c_void, len, 0) { -1 if c::WSAGetLastError() == c::WSAESHUTDOWN => Ok(0), -1 => Err(last_error()), n => Ok(n as usize)", "commid": "rust_pr_31858"}], "negative_passages": []} {"query_id": "q-en-rust-e0c5dc5bea107a7e436913a7cca5b4738e650fa018ed8a4de301827b26a2b97e", "query": "Travis-CI build failed, my local builds with a slightly older rustc nightly works just fine. This is the repository in question and the Travis-CI build log: Repo: Travis-CI Log: rustc only seems to fail when compiling my crate: Could be related to , a way older and closed issue. (rustc 0.13.0-dev) Backtrace:\ncc\nRelated: arcnmx/cargo-clippy/issues/19", "positive_passages": [{"docid": "doc-en-rust-2ddcdc59ae146e34d046c59a162d075221009031d9afc657784d4803d3777244", "text": "// Define the name or return the existing binding if there is a collision. pub fn try_define_child(&self, name: Name, ns: Namespace, binding: NameBinding<'a>) -> Result<(), &'a NameBinding<'a>> { if self.resolutions.borrow_state() != ::std::cell::BorrowState::Unused { return Ok(()); } self.update_resolution(name, ns, |resolution| { resolution.try_define(self.arenas.alloc_name_binding(binding)) })", "commid": "rust_pr_32814"}], "negative_passages": []} {"query_id": "q-en-rust-e0c5dc5bea107a7e436913a7cca5b4738e650fa018ed8a4de301827b26a2b97e", "query": "Travis-CI build failed, my local builds with a slightly older rustc nightly works just fine. This is the repository in question and the Travis-CI build log: Repo: Travis-CI Log: rustc only seems to fail when compiling my crate: Could be related to , a way older and closed issue. (rustc 0.13.0-dev) Backtrace:\ncc\nRelated: arcnmx/cargo-clippy/issues/19", "positive_passages": [{"docid": "doc-en-rust-2ec7962c1fb5fcd0f965c11528f818cb5b51f4e82493667570acf2e3becdfa5d", "text": "fn update_resolution(&self, name: Name, ns: Namespace, update: F) -> T where F: FnOnce(&mut NameResolution<'a>) -> T { let mut resolution = &mut *self.resolution(name, ns).borrow_mut(); let was_known = resolution.binding().is_some(); let t = update(resolution); if !was_known { if let Some(binding) = resolution.binding() { self.define_in_glob_importers(name, ns, binding); // Ensure that `resolution` isn't borrowed during `define_in_glob_importers`, // where it might end up getting re-defined via a glob cycle. let (new_binding, t) = { let mut resolution = &mut *self.resolution(name, ns).borrow_mut(); let was_known = resolution.binding().is_some(); let t = update(resolution); if was_known { return t; } match resolution.binding() { Some(binding) => (binding, t), None => return t, } } }; self.define_in_glob_importers(name, ns, new_binding); t }", "commid": "rust_pr_32814"}], "negative_passages": []} {"query_id": "q-en-rust-e0c5dc5bea107a7e436913a7cca5b4738e650fa018ed8a4de301827b26a2b97e", "query": "Travis-CI build failed, my local builds with a slightly older rustc nightly works just fine. 
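The fixes in the Windows-casts record above clamp buffer lengths before casting; a small illustration of why the plain cast is a problem on 64-bit targets (this sketch assumes a 64-bit build):

```rust
use std::cmp;

// Mirrors the pattern used in the patch: clamp to i32::MAX before casting
// instead of letting `as i32` silently truncate a large usize.
fn clamp_len(len: usize) -> i32 {
    cmp::min(len, i32::MAX as usize) as i32
}

fn main() {
    let huge: usize = 5_000_000_000; // only representable on 64-bit targets
    assert_eq!(huge as i32, 705_032_704); // plain cast wraps to a nonsense length
    assert_eq!(clamp_len(huge), i32::MAX); // clamped value is still meaningful
}
```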
This is the repository in question and the Travis-CI build log: Repo: Travis-CI Log: rustc only seems to fail when compiling my crate: Could be related to , a way older and closed issue. (rustc 0.13.0-dev) Backtrace:\ncc\nRelated: arcnmx/cargo-clippy/issues/19", "positive_passages": [{"docid": "doc-en-rust-0581ef0de325b64cc5d84b02a21e4f64f43c2722a396800fa693a4a58110eda3", "text": "// Add to target_module's glob_importers target_module.glob_importers.borrow_mut().push((module_, directive)); for (&(name, ns), resolution) in target_module.resolutions.borrow().iter() { if let Some(binding) = resolution.borrow().binding() { if binding.defined_with(DefModifiers::IMPORTABLE | DefModifiers::PUBLIC) { let _ = module_.try_define_child(name, ns, directive.import(binding, None)); } // Ensure that `resolutions` isn't borrowed during `try_define_child`, // since it might get updated via a glob cycle. let bindings = target_module.resolutions.borrow().iter().filter_map(|(name, resolution)| { resolution.borrow().binding().map(|binding| (*name, binding)) }).collect::>(); for ((name, ns), binding) in bindings { if binding.defined_with(DefModifiers::IMPORTABLE | DefModifiers::PUBLIC) { let _ = module_.try_define_child(name, ns, directive.import(binding, None)); } }", "commid": "rust_pr_32814"}], "negative_passages": []} {"query_id": "q-en-rust-e0c5dc5bea107a7e436913a7cca5b4738e650fa018ed8a4de301827b26a2b97e", "query": "Travis-CI build failed, my local builds with a slightly older rustc nightly works just fine. This is the repository in question and the Travis-CI build log: Repo: Travis-CI Log: rustc only seems to fail when compiling my crate: Could be related to , a way older and closed issue. (rustc 0.13.0-dev) Backtrace:\ncc\nRelated: arcnmx/cargo-clippy/issues/19", "positive_passages": [{"docid": "doc-en-rust-02a5f3e412be980f717be01321602bd695a96139fac82ad489c6773aedc0f0bb", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. mod foo { pub use bar::*; pub use main as f; //~ ERROR has already been imported } mod bar { pub use foo::*; } pub use foo::*; pub use baz::*; //~ ERROR has already been imported mod baz { pub use super::*; } pub fn main() {} ", "commid": "rust_pr_32814"}], "negative_passages": []} {"query_id": "q-en-rust-e0c5dc5bea107a7e436913a7cca5b4738e650fa018ed8a4de301827b26a2b97e", "query": "Travis-CI build failed, my local builds with a slightly older rustc nightly works just fine. This is the repository in question and the Travis-CI build log: Repo: Travis-CI Log: rustc only seems to fail when compiling my crate: Could be related to , a way older and closed issue. (rustc 0.13.0-dev) Backtrace:\ncc\nRelated: arcnmx/cargo-clippy/issues/19", "positive_passages": [{"docid": "doc-en-rust-36091a81d861407b2f7843e0cf34c2c010159f0504dad8d8876c2d52d4a3f9c2", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
pub use bar::*; mod bar { pub use super::*; } pub use baz::*; //~ ERROR already been imported mod baz { pub use main as f; } pub fn main() {} ", "commid": "rust_pr_32814"}], "negative_passages": []} {"query_id": "q-en-rust-7cf7d005d83b750e534e9022ac631655a32f323f17e76fe295db847608d731a8", "query": ":\nTriage: still an issue.\nI just hit this, though the error is a bit different now: \"resolving bounds after type-checking\" rather than \"fulfilling during trans\". error: internal compiler error: Encountered errors resolving bounds after type-checking note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box // Test to ensure that trait bounds are propertly // checked on specializable associated types #![allow(incomplete_features)] #![feature(specialization)] trait UncheckedCopy: Sized { type Output: From + Copy + Into; } impl UncheckedCopy for T { default type Output = Self; //~^ ERROR: the trait bound `T: Copy` is not satisfied } fn unchecked_copy(other: &T::Output) -> T { (*other).into() } fn bug(origin: String) { // Turn the String into it's Output type... // Which we can just do by `.into()`, the assoc type states `From`. let origin_output = origin.into(); // Make a copy of String::Output, which is a String... let mut copy: String = unchecked_copy::(&origin_output); // Turn the Output type into a String again, // Which we can just do by `.into()`, the assoc type states `Into`. let mut origin: String = origin_output.into(); // assert both Strings use the same buffer. assert_eq!(copy.as_ptr(), origin.as_ptr()); // Any use of the copy we made becomes invalid, drop(origin); // OH NO! UB UB UB UB! copy.push_str(\" world!\"); println!(\"{}\", copy); } fn main() { bug(String::from(\"hello\")); } ", "commid": "rust_pr_84496"}], "negative_passages": []} {"query_id": "q-en-rust-7cf7d005d83b750e534e9022ac631655a32f323f17e76fe295db847608d731a8", "query": ":\nTriage: still an issue.\nI just hit this, though the error is a bit different now: \"resolving bounds after type-checking\" rather than \"fulfilling during trans\". error: internal compiler error: Encountered errors resolving bounds after type-checking note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box error[E0277]: the trait bound `T: Copy` is not satisfied --> $DIR/issue-33017.rs:12:5 | LL | type Output: From + Copy + Into; | ---- required by this bound in `UncheckedCopy::Output` ... LL | default type Output = Self; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `Copy` is not implemented for `T` | help: consider restricting type parameter `T` | LL | impl UncheckedCopy for T { | ^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error For more information about this error, try `rustc --explain E0277`. ", "commid": "rust_pr_84496"}], "negative_passages": []} {"query_id": "q-en-rust-7cf7d005d83b750e534e9022ac631655a32f323f17e76fe295db847608d731a8", "query": ":\nTriage: still an issue.\nI just hit this, though the error is a bit different now: \"resolving bounds after type-checking\" rather than \"fulfilling during trans\". error: internal compiler error: Encountered errors resolving bounds after type-checking note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box #![allow(incomplete_features)] #![feature(const_generics)] #![feature(const_evaluatable_checked)] #![feature(specialization)] pub trait Trait { type Type; } impl Trait for T { default type Type = [u8; 1]; } impl Trait for *const T { type Type = [u8; std::mem::size_of::<::Type>()]; //~^ ERROR: unconstrained generic constant } fn main() {} ", "commid": "rust_pr_84496"}], "negative_passages": []} {"query_id": "q-en-rust-7cf7d005d83b750e534e9022ac631655a32f323f17e76fe295db847608d731a8", "query": ":\nTriage: still an issue.\nI just hit this, though the error is a bit different now: \"resolving bounds after type-checking\" rather than \"fulfilling during trans\". error: internal compiler error: Encountered errors resolving bounds after type-checking note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box error: unconstrained generic constant --> $DIR/issue-51892.rs:15:5 | LL | type Type = [u8; std::mem::size_of::<::Type>()]; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = help: try adding a `where` bound using this expression: `where [(); std::mem::size_of::<::Type>()]:` error: aborting due to previous error ", "commid": "rust_pr_84496"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-1e85c953e0b7aba2aa42befaafa6eb67adfc022a22da6c3b042da52093bbbaf9", "text": "Ok(()) } fn document_short(w: &mut fmt::Formatter, item: &clean::Item, link: AssocItemLink) -> fmt::Result { if let Some(s) = item.doc_value() { let markdown = if s.contains('n') { format!(\"{} [Read more]({})\", &plain_summary_line(Some(s)), naive_assoc_href(item, link)) } else { format!(\"{}\", &plain_summary_line(Some(s))) }; write!(w, \"

{}
\", Markdown(&markdown))?; } Ok(()) } fn item_module(w: &mut fmt::Formatter, cx: &Context, item: &clean::Item, items: &[clean::Item]) -> fmt::Result { document(w, cx, item)?;", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-55ae39679947601bec10e3fb4aac508854903ac6fab018ff1628943ba435e8fa", "text": "} fn doctraititem(w: &mut fmt::Formatter, cx: &Context, item: &clean::Item, link: AssocItemLink, render_static: bool, is_default_item: bool, outer_version: Option<&str>) -> fmt::Result { link: AssocItemLink, render_static: bool, is_default_item: bool, outer_version: Option<&str>, trait_: Option<&clean::Trait>) -> fmt::Result { let shortty = shortty(item); let name = item.name.as_ref().unwrap();", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-788245b218cbccc42e40ba7c2142204adbf9f4ec0da107aabdd87ac6aab86aa8", "text": "_ => panic!(\"can't make docs for trait item with name {:?}\", item.name) } if !is_default_item && (!is_static || render_static) { document(w, cx, item) } else { Ok(()) if !is_static || render_static { if !is_default_item { if item.doc_value().is_some() { document(w, cx, item)?; } else { // In case the item isn't documented, // provide short documentation from the trait if let Some(t) = trait_ { if let Some(it) = t.items.iter() .find(|i| i.name == item.name) { document_short(w, it, link)?; } } } } else { document_short(w, item, link)?; } } Ok(()) } let traits = &cache().traits; let trait_ = i.trait_did().and_then(|did| traits.get(&did)); write!(w, \"
\")?; for trait_item in &i.inner_impl().items { doctraititem(w, cx, trait_item, link, render_header, false, outer_version)?; doctraititem(w, cx, trait_item, link, render_header, false, outer_version, trait_)?; } fn render_default_items(w: &mut fmt::Formatter,", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-22fd2e41322e0382ed825f04e6a8cfdceb0de95d7253059f6eefdb632239a870", "text": "let assoc_link = AssocItemLink::GotoSource(did, &i.provided_trait_methods); doctraititem(w, cx, trait_item, assoc_link, render_static, true, outer_version)?; outer_version, None)?; } Ok(()) } // If we've implemented a trait, then also emit documentation for all // default items which weren't overridden in the implementation block. if let Some(did) = i.trait_did() { if let Some(t) = cache().traits.get(&did) { render_default_items(w, cx, t, &i.inner_impl(), render_header, outer_version)?; } if let Some(t) = trait_ { render_default_items(w, cx, t, &i.inner_impl(), render_header, outer_version)?; } write!(w, \"
\")?; Ok(())", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-81d651e1dc5b87a64189b437443e094860a340eb64d0c1c090b1e19451300400", "text": "fn b_method(&self) -> usize { self.a_method() } /// Docs associated with the trait c_method definition. /// /// There is another line fn c_method(&self) -> usize { self.a_method() } } // @has manual_impl/struct.S1.html '//*[@class=\"trait\"]' 'T' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S1 trait implementation.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S1 trait a_method implementation.' // @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait a_method definition.' // @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait b_method definition.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the trait b_method definition.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the trait b_method definition.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the trait c_method definition.' // @!has - '//*[@class=\"docblock\"]' 'There is another line' // @has - '//*[@class=\"docblock\"]' 'Read more' pub struct S1(usize); /// Docs associated with the S1 trait implementation.", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-5518394ab2a5be64af9da3a95da7385fda01eb525e926b13595f93eff8e494a5", "text": "// @has manual_impl/struct.S2.html '//*[@class=\"trait\"]' 'T' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S2 trait implementation.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S2 trait a_method implementation.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S2 trait b_method implementation.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S2 trait c_method implementation.' // @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait a_method definition.' 
// @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait b_method definition.' // @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait c_method definition.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the trait b_method definition.' // @!has - '//*[@class=\"docblock\"]' 'Read more' pub struct S2(usize); /// Docs associated with the S2 trait implementation.", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-20a95cf5f3ec40863dab199c2720b55f92ea67336161ab59f3242bfc97a64655", "text": "self.0 } /// Docs associated with the S2 trait b_method implementation. fn b_method(&self) -> usize { /// Docs associated with the S2 trait c_method implementation. fn c_method(&self) -> usize { 5 } }", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-1849971e45ae2fb5dc89442d0e0a315cfc29d48b2532f257517ee51eb9355ce3", "query": "We've always been unsure how much information to show about implementations, and right now, we only show signatures. someone suggested that including the summary line would be helpful, so that you can understand what these things actually do, if you haven't used that particular trait before.\nPerhaps we should somehow link to the full description? With a [read more]-like link\nReducing the amount of link chasing necessary to understand what a function does sounds like a good idea, and I agree with that a \"read more\" link should be included. The question then is how much of the description to include. First sentence? Some arbitrary number of characters?", "positive_passages": [{"docid": "doc-en-rust-eee5a7d43ac2bf72fbc04552ba84ac5e9ed8a7153671f222cb0d244b3c043b99", "text": "// @has manual_impl/struct.S3.html '//*[@class=\"trait\"]' 'T' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S3 trait implementation.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the S3 trait b_method implementation.' // @!has - '//*[@class=\"docblock\"]' 'Docs associated with the trait a_method definition.' // @has - '//*[@class=\"docblock\"]' 'Docs associated with the trait a_method definition.' pub struct S3(usize); /// Docs associated with the S3 trait implementation.", "commid": "rust_pr_33679"}], "negative_passages": []} {"query_id": "q-en-rust-949e662d50fbe444d5457780e2a6aea46c364a1c593dc264e1797d10455ac1a8", "query": "The following snippet: produces the following error: How is anybody supposed to extract from that one forgot the in the closure? The explanation E0369 doesn't help here. 
Maybe a better thing for rustc would be to \"try\" and see if a combination of and would have solved the issue and offer a suggestion of the form \"Did you meant to make a reference here?\".", "positive_passages": [{"docid": "doc-en-rust-2eeee177e5d37fe98622e8bc3273748265e1272d8d8172e66fdc2562674a40d7", "text": "use super::FnCtxt; use hir::def_id::DefId; use rustc::ty::{Ty, TypeFoldable, PreferMutLvalue}; use rustc::ty::{Ty, TypeFoldable, PreferMutLvalue, TypeVariants}; use rustc::infer::type_variable::TypeVariableOrigin; use syntax::ast; use syntax::symbol::Symbol;", "commid": "rust_pr_38617"}], "negative_passages": []} {"query_id": "q-en-rust-949e662d50fbe444d5457780e2a6aea46c364a1c593dc264e1797d10455ac1a8", "query": "The following snippet: produces the following error: How is anybody supposed to extract from that one forgot the in the closure? The explanation E0369 doesn't help here. Maybe a better thing for rustc would be to \"try\" and see if a combination of and would have solved the issue and offer a suggestion of the form \"Did you meant to make a reference here?\".", "positive_passages": [{"docid": "doc-en-rust-40bcda27da7e54a55a21ca21afc9a721ab0078dd7f3de69e08f789b7b667c67c", "text": "\"binary operation `{}` cannot be applied to type `{}`\", op.node.as_str(), lhs_ty); if let TypeVariants::TyRef(_, ref ty_mut) = lhs_ty.sty { if !self.infcx.type_moves_by_default(ty_mut.ty, lhs_expr.span) && self.lookup_op_method(expr, ty_mut.ty, vec![rhs_ty_var], Symbol::intern(name), trait_def_id, lhs_expr).is_ok() { err.span_note( lhs_expr.span, &format!( \"this is a reference of type that `{}` can be applied to, you need to dereference this variable once for this operation to work\", op.node.as_str())); } } let missing_trait = match op.node { hir::BiAdd => Some(\"std::ops::Add\"), hir::BiSub => Some(\"std::ops::Sub\"),", "commid": "rust_pr_38617"}], "negative_passages": []} {"query_id": "q-en-rust-949e662d50fbe444d5457780e2a6aea46c364a1c593dc264e1797d10455ac1a8", "query": "The following snippet: produces the following error: How is anybody supposed to extract from that one forgot the in the closure? The explanation E0369 doesn't help here. Maybe a better thing for rustc would be to \"try\" and see if a combination of and would have solved the issue and offer a suggestion of the form \"Did you meant to make a reference here?\".", "positive_passages": [{"docid": "doc-en-rust-911da902aa7268ef7596cf03eb3f9954e1d65f906b41b8106f12062a6482858a", "text": " // Copyright 2012-2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { let v = vec![1, 2, 3, 4, 5, 6, 7, 8, 9]; let vr = v.iter().filter(|x| { x % 2 == 0 //~^ ERROR binary operation `%` cannot be applied to type `&&{integer}` //~| NOTE this is a reference of type that `%` can be applied to //~| NOTE an implementation of `std::ops::Rem` might be missing for `&&{integer}` }); println!(\"{:?}\", vr); } ", "commid": "rust_pr_38617"}], "negative_passages": []} {"query_id": "q-en-rust-949e662d50fbe444d5457780e2a6aea46c364a1c593dc264e1797d10455ac1a8", "query": "The following snippet: produces the following error: How is anybody supposed to extract from that one forgot the in the closure? The explanation E0369 doesn't help here. 
Maybe a better thing for rustc would be to \"try\" and see if a combination of and would have solved the issue and offer a suggestion of the form \"Did you meant to make a reference here?\".", "positive_passages": [{"docid": "doc-en-rust-b7feb2dd1a986cbe3f9ce7d1be3c2f11d40db6b556aa51cc8ca00fae295f1b0b", "text": " // Copyright 2012-2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { let a: &String = &\"1\".to_owned(); let b: &str = &\"2\"; let c = a + b; //~^ ERROR binary operation `+` cannot be applied to type `&std::string::String` //~| NOTE an implementation of `std::ops::Add` might be missing for `&std::string::String` println!(\"{:?}\", c); } ", "commid": "rust_pr_38617"}], "negative_passages": []} {"query_id": "q-en-rust-76be00e8b186c067dffce851f57076d7eecbf5364cdf367195a4d6afeaffbe45", "query": "Check out in mode : This succeeds (when compiled with optimizations) despite having both a and a . is safe from this because at it believes it's mutably borrowed and panics. There's a line in which would catch this with debug assertions enabled, I believe it should be an ( is unstable and likely not performance critical anyway). cc", "positive_passages": [{"docid": "doc-en-rust-3c751521c069dbf8203c1333e608c36d992d83c87fe0eb3bf2d67ebade2b0dde", "text": "// Since this Ref exists, we know the borrow flag // is not set to WRITING. let borrow = self.borrow.get(); debug_assert!(borrow != WRITING && borrow != UNUSED); debug_assert!(borrow != UNUSED); // Prevent the borrow counter from overflowing. assert!(borrow != WRITING); self.borrow.set(borrow + 1); BorrowRef { borrow: self.borrow } }", "commid": "rust_pr_33960"}], "negative_passages": []} {"query_id": "q-en-rust-03765fc0fbfd2735ef7422ac81400aa5527ac0dafd263d2c11750968d69b0e9f", "query": ". It's been awhile since it's been updated. Here's two changes that haven't been merged into this document yet: Anything else?", "positive_passages": [{"docid": "doc-en-rust-833563be1e27ba43ea1edf389e3c88451cb9b1ddcaabbca995c1cd611a6dca41", "text": "the compiler. They are accessible via the `--explain` flag. Each explanation comes with an example of how to trigger it and advice on how to fix it. Please read [RFC 1567](https://github.com/rust-lang/rfcs/blob/master/text/1567-long-error-codes-explanation-normalization.md) for details on how to format and write long error codes. 
* All of them are accessible [online](http://doc.rust-lang.org/error-index.html), which are auto-generated from rustc source code in different places: [librustc](https://github.com/rust-lang/rust/blob/master/src/librustc/diagnostics.rs), [libsyntax](https://github.com/rust-lang/rust/blob/master/src/libsyntax/diagnostics.rs), [librustc_borrowck](https://github.com/rust-lang/rust/blob/master/src/librustc_borrowck/diagnostics.rs), [librustc_const_eval](https://github.com/rust-lang/rust/blob/master/src/librustc_const_eval/diagnostics.rs), [librustc_lint](https://github.com/rust-lang/rust/blob/master/src/librustc_lint/types.rs), [librustc_metadata](https://github.com/rust-lang/rust/blob/master/src/librustc_metadata/diagnostics.rs), [librustc_mir](https://github.com/rust-lang/rust/blob/master/src/librustc_mir/diagnostics.rs), [librustc_passes](https://github.com/rust-lang/rust/blob/master/src/librustc_passes/diagnostics.rs), [librustc_privacy](https://github.com/rust-lang/rust/blob/master/src/librustc_privacy/diagnostics.rs), [librustc_resolve](https://github.com/rust-lang/rust/blob/master/src/librustc_resolve/diagnostics.rs), [librustc_trans](https://github.com/rust-lang/rust/blob/master/src/librustc_trans/diagnostics.rs), [librustc_plugin](https://github.com/rust-lang/rust/blob/master/src/librustc_plugin/diagnostics.rs), [librustc_typeck](https://github.com/rust-lang/rust/blob/master/src/librustc_typeck/diagnostics.rs). * Explanations have full markdown support. Use it, especially to highlight code with backticks.", "commid": "rust_pr_41791"}], "negative_passages": []} {"query_id": "q-en-rust-03765fc0fbfd2735ef7422ac81400aa5527ac0dafd263d2c11750968d69b0e9f", "query": ". It's been awhile since it's been updated. Here's two changes that haven't been merged into this document yet: Anything else?", "positive_passages": [{"docid": "doc-en-rust-9cb6d5a91afb4ea45ae9d6bb1c96aab96c1cc336b1ac77372d42d24bb2e2e435", "text": "* Flags should be orthogonal to each other. For example, if we'd have a json-emitting variant of multiple actions `foo` and `bar`, an additional --json flag is better than adding `--foo-json` and `--bar-json`. * Always give options a long descriptive name, if only for better * Always give options a long descriptive name, if only for more understandable compiler scripts. * The `--verbose` flag is for adding verbose information to `rustc` output when not compiling a program. For example, using it with the `--version` flag", "commid": "rust_pr_41791"}], "negative_passages": []} {"query_id": "q-en-rust-22a295fa6d5d180e7034c15824f0a9604a0fda1c64691abcd6313fa7c5ac4bbe", "query": "Today I learned that eight to the power of eight is zero. Who'da thunk? Incorrect: :arrowright: :x: :arrowright: :x: :arrowright: :x: Correct: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: Oddly enough, this one breaks the pattern: :arrowright: overflow :whitecheck_mark: Haven't bothered with further cases; at this point it's probably best to just look at the implementation and see what's going on.\nwas supposed to fix this; looks like it missed the unsigned version?\nsays that is on nightly.\nI meant that fixed this for pow() on signed integers, but not on unsigned integers; it looks like the implementation is copy-pasted for some reason.\nCould I take a stab at this?\nSure; feel free to ask if you have questions.\nIs overflow checking off in the libcore tests? 
It'd be nice to be able to write some standard tests for overflow, but they seem to not panic even on cases that should right now.\nOverflow checking in unit tests () reflects whether the whole compiler was built in debug or release mode. If you need to specify the overflow mode, you can use a separate test which explicitly specifies , like .\nThis has since been fixed, in , which it looks like also a test, so closing.", "positive_passages": [{"docid": "doc-en-rust-40f25d5d4ec605faffb691cbc734f0f1fcd713089cae12f59f6e6dce30adb304", "text": "let mut base = self; let mut acc = 1; let mut prev_base = self; let mut base_oflo = false; while exp > 0 { while exp > 1 { if (exp & 1) == 1 { if base_oflo { // ensure overflow occurs in the same manner it // would have otherwise (i.e. signal any exception // it would have otherwise). acc = acc * (prev_base * prev_base); } else { acc = acc * base; } acc = acc * base; } prev_base = base; let (new_base, new_base_oflo) = base.overflowing_mul(base); base = new_base; base_oflo = new_base_oflo; exp /= 2; base = base * base; } // Deal with the final bit of the exponent separately, since // squaring the base afterwards is not necessary and may cause a // needless overflow. if exp == 1 { acc = acc * base; } acc }", "commid": "rust_pr_34942"}], "negative_passages": []} {"query_id": "q-en-rust-22a295fa6d5d180e7034c15824f0a9604a0fda1c64691abcd6313fa7c5ac4bbe", "query": "Today I learned that eight to the power of eight is zero. Who'da thunk? Incorrect: :arrowright: :x: :arrowright: :x: :arrowright: :x: Correct: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: Oddly enough, this one breaks the pattern: :arrowright: overflow :whitecheck_mark: Haven't bothered with further cases; at this point it's probably best to just look at the implementation and see what's going on.\nwas supposed to fix this; looks like it missed the unsigned version?\nsays that is on nightly.\nI meant that fixed this for pow() on signed integers, but not on unsigned integers; it looks like the implementation is copy-pasted for some reason.\nCould I take a stab at this?\nSure; feel free to ask if you have questions.\nIs overflow checking off in the libcore tests? It'd be nice to be able to write some standard tests for overflow, but they seem to not panic even on cases that should right now.\nOverflow checking in unit tests () reflects whether the whole compiler was built in debug or release mode. If you need to specify the overflow mode, you can use a separate test which explicitly specifies , like .\nThis has since been fixed, in , which it looks like also a test, so closing.", "positive_passages": [{"docid": "doc-en-rust-25de414261b95ecc468b45e0652f27cedf2748002401d733d5b5f1e140032107", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:thread 'main' panicked at 'attempt to multiply with overflow' // compile-flags: -C debug-assertions fn main() { let _x = 2i32.pow(1024); } ", "commid": "rust_pr_34942"}], "negative_passages": []} {"query_id": "q-en-rust-22a295fa6d5d180e7034c15824f0a9604a0fda1c64691abcd6313fa7c5ac4bbe", "query": "Today I learned that eight to the power of eight is zero. Who'da thunk? 
Incorrect: :arrowright: :x: :arrowright: :x: :arrowright: :x: Correct: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: Oddly enough, this one breaks the pattern: :arrowright: overflow :whitecheck_mark: Haven't bothered with further cases; at this point it's probably best to just look at the implementation and see what's going on.\nwas supposed to fix this; looks like it missed the unsigned version?\nsays that is on nightly.\nI meant that fixed this for pow() on signed integers, but not on unsigned integers; it looks like the implementation is copy-pasted for some reason.\nCould I take a stab at this?\nSure; feel free to ask if you have questions.\nIs overflow checking off in the libcore tests? It'd be nice to be able to write some standard tests for overflow, but they seem to not panic even on cases that should right now.\nOverflow checking in unit tests () reflects whether the whole compiler was built in debug or release mode. If you need to specify the overflow mode, you can use a separate test which explicitly specifies , like .\nThis has since been fixed, in , which it looks like also a test, so closing.", "positive_passages": [{"docid": "doc-en-rust-c0e5b51e07d0268b2b409a33a39184423c4cdcb72795fd2bc8c566bb33428a7b", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:thread 'main' panicked at 'attempt to multiply with overflow' // compile-flags: -C debug-assertions fn main() { let _x = 2u32.pow(1024); } ", "commid": "rust_pr_34942"}], "negative_passages": []} {"query_id": "q-en-rust-22a295fa6d5d180e7034c15824f0a9604a0fda1c64691abcd6313fa7c5ac4bbe", "query": "Today I learned that eight to the power of eight is zero. Who'da thunk? Incorrect: :arrowright: :x: :arrowright: :x: :arrowright: :x: Correct: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: :arrowright: overflow :whitecheckmark: Oddly enough, this one breaks the pattern: :arrowright: overflow :whitecheck_mark: Haven't bothered with further cases; at this point it's probably best to just look at the implementation and see what's going on.\nwas supposed to fix this; looks like it missed the unsigned version?\nsays that is on nightly.\nI meant that fixed this for pow() on signed integers, but not on unsigned integers; it looks like the implementation is copy-pasted for some reason.\nCould I take a stab at this?\nSure; feel free to ask if you have questions.\nIs overflow checking off in the libcore tests? It'd be nice to be able to write some standard tests for overflow, but they seem to not panic even on cases that should right now.\nOverflow checking in unit tests () reflects whether the whole compiler was built in debug or release mode. If you need to specify the overflow mode, you can use a separate test which explicitly specifies , like .\nThis has since been fixed, in , which it looks like also a test, so closing.", "positive_passages": [{"docid": "doc-en-rust-ab3835b087aaff909b6f2ea083744e5b914961e7c71d67f448ca3f6e81774506", "text": " // Copyright 2015 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // error-pattern:thread 'main' panicked at 'attempt to multiply with overflow' // compile-flags: -C debug-assertions fn main() { let _x = 2i32.pow(1024); } ", "commid": "rust_pr_34942"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-c0c7f9729c8044c344734bce9a2237ca864a8c5c8c4e027fa86fad434a0ba0d2", "text": "// each trait in the system to its implementations. use metadata::csearch::{each_impl, get_impl_trait}; use metadata::csearch::{each_impl, get_impl_trait, each_implementation_for_trait}; use metadata::csearch; use middle::ty::get; use middle::ty::{ImplContainer, lookup_item_type, subst};", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-2edbee47bca8cc8274d00e6e68c2ffd8589a27f3c9a9769c7f86699faeb195f3", "text": "pub fn check_implementation_coherence_of(&self, trait_def_id: DefId) { // Unify pairs of polytypes. self.iter_impls_of_trait(trait_def_id, |a| { self.iter_impls_of_trait_local(trait_def_id, |a| { let implementation_a = a; let polytype_a = self.get_self_type_for_implementation(implementation_a);", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. 
Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-d652db6d0e36e6786eca54639948a6de9efa6dc7553fb2f3f3dc80cf58d89dd4", "text": "if self.polytypes_unify(polytype_a.clone(), polytype_b) { let session = self.crate_context.tcx.sess; session.span_err( self.span_of_impl(implementation_b), self.span_of_impl(implementation_a), format!(\"conflicting implementations for trait `{}`\", ty::item_path_str(self.crate_context.tcx, trait_def_id))); session.span_note(self.span_of_impl(implementation_a), \"note conflicting implementation here\"); if implementation_b.did.crate == LOCAL_CRATE { session.span_note(self.span_of_impl(implementation_b), \"note conflicting implementation here\"); } else { let crate_store = self.crate_context.tcx.sess.cstore; let cdata = crate_store.get_crate_data(implementation_b.did.crate); session.note( \"conflicting implementation in crate `\" + cdata.name + \"`\"); } } } })", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-57766293cad0d0e83358f5599b3f6cd3b14de53543601dc42eaef4058e725f7e", "text": "} pub fn iter_impls_of_trait(&self, trait_def_id: DefId, f: |@Impl|) { self.iter_impls_of_trait_local(trait_def_id, |x| f(x)); if trait_def_id.crate == LOCAL_CRATE { return; } let crate_store = self.crate_context.tcx.sess.cstore; csearch::each_implementation_for_trait(crate_store, trait_def_id, |impl_def_id| { let implementation = @csearch::get_impl(self.crate_context.tcx, impl_def_id); let _ = lookup_item_type(self.crate_context.tcx, implementation.did); f(implementation); }); } pub fn iter_impls_of_trait_local(&self, trait_def_id: DefId, f: |@Impl|) { let trait_impls = self.crate_context.tcx.trait_impls.borrow(); match trait_impls.get().find(&trait_def_id) { Some(impls) => {", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. 
Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-22266463a75cef8c2aedf4962b19664d1bc028a22ae7f4e31c844ae895e310aa", "text": " // Copyright 2012 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub trait Foo { } impl Foo for int { } ", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-25ea9bbfbc56c8e28464d67aa22a8de268146778b21624a03749ce8a1b64f61e", "query": "Some days ago, I wrote some code involving char as HashMap key. At that time, char didn't implement the IterBytes trait, so I implemented it locally myself. After updating rust, my code failed to compile with a somewhat less than obvious error. Example: Produces the following errors: Note that there is no way to know that the problem is the duplicate implementation of IterBytes.\nReproduced as of\nBumping to 0.7, though.\nNot critical for 0.7. Nominating for milestone 5, production-ready.\naccepted for production-ready milestone\nStill reproduces: It has a bunch of other non-bug errors too, but it still exhibits this bug.\nAccepted for P-high.", "positive_passages": [{"docid": "doc-en-rust-f0f1e83ed8fc0e2f7fa99ace56e1dc8c12ec0c283af51cfe8f075fb8311dadd2", "text": " // Copyright 2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Regression test for #3512 - conflicting trait impls in different crates should give a // 'conflicting implementations' error message. // aux-build:trait_impl_conflict.rs extern mod trait_impl_conflict; use trait_impl_conflict::Foo; impl
Foo for A { //~^ ERROR conflicting implementations for trait `trait_impl_conflict::Foo` //~^^ ERROR cannot provide an extension implementation where both trait and type are not defined in this crate } fn main() { } ", "commid": "rust_pr_12023.0"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-4c63558cc9d3a7d49de2099dde6fed65293235d399582099d4ea5d910ad53337", "text": "let &(ref first_arm_pats, _) = &arms[0]; let first_pat = &first_arm_pats[0]; let span = first_pat.span; span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\"); struct_span_err!(cx.tcx.sess, span, E0165, \"irrefutable while-let pattern\") .span_label(span, &format!(\"irrefutable pattern\")) .emit(); }, hir::MatchSource::ForLoopDesugar => {", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-8b08f0d27415a104fa48b934f0765a0355d43ca6fc67afd713dc36a5a957c834", "text": "tcx.sess.add_lint(lint::builtin::MATCH_OF_UNIT_VARIANT_VIA_PAREN_DOTDOT, pat.id, pat.span, msg); } else { span_err!(tcx.sess, pat.span, E0164, \"{}\", msg); struct_span_err!(tcx.sess, pat.span, E0164, \"{}\", msg) .span_label(pat.span, &format!(\"not a tuple variant or struct\")).emit(); on_error(); } };", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-ff8270d0215ddbada1cc300e82fad9e762a3ece3b4bbee04e4bcb0dce691dc6b", "text": ".emit(); } Err(CopyImplementationError::HasDestructor) => { span_err!(tcx.sess, span, E0184, struct_span_err!(tcx.sess, span, E0184, \"the trait `Copy` may not be implemented for this type; the type has a destructor\"); the type has a destructor\") .span_label(span, &format!(\"Copy not allowed on types with destructors\")) .emit(); } } });", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-89b81bc2ced1e642ad37d83220c291f95b6679f99542f755cb9e538c06b880f2", "text": "fn bar(foo: Foo) -> u32 { match foo { Foo::B(i) => i, //~ ERROR E0164 //~| NOTE not a tuple variant or struct } }", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-5ac9244c3a6c15fbdae0e13a10f6fd45a8109f654a92720dd619ed7e268a3979", "text": "fn main() { let irr = Irrefutable(0); while let Irrefutable(x) = irr { //~ ERROR E0165 //~| irrefutable pattern // ... 
} }", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-c934b994315d8163d5a92a1aa1507b62349a4e006228bb7ae3b13534262fc256", "query": "From: src/test/compile- Error E0164 needs a span_label, updating it from: To:", "positive_passages": [{"docid": "doc-en-rust-2c8aa37c87b67ffc2fb3043a235cfa0e3690b6a6bb14675adbdd13e85870289f", "text": "// except according to those terms. #[derive(Copy)] //~ ERROR E0184 //~| NOTE Copy not allowed on types with destructors //~| NOTE in this expansion of #[derive(Copy)] struct Foo; impl Drop for Foo {", "commid": "rust_pr_36125"}], "negative_passages": []} {"query_id": "q-en-rust-61a2541bfe423eb32c08a11a0d4091752de83c7053319ea07f82f1b4d1c05f4e", "query": "Here is a minimum example of the problem. Compilation fails with following error. Which means that the fragment qualifier is not taken into account to choose which variant of the macro is to be used.\nYeah, macros don't do backtracking. You'll need to adjust the matchers to provide some other way to disambiguate.\nYes, I obviously know that I could just put brackets or whatever around one of the patterns to make it distinct. But the whole point of having fragment qualifiers is precisely to distinguish between two otherwise identical patterns. If macros cannot do it although they have the perfect tool for it, then they are stupid and ought to be improved.\nIt wasn't obvious to me that you knew the workaround -- sorry for assuming! Also see .\nThanks for pointing the PR. It is blocked for now, but I suppose there is still hope. Sorry for being harsh. I have been raging for months over how annoyingly limited and stupid Rust macros are, and I finally found a case general and obvious enough to justify my complaining\u2026\ntriage: Macros still don't backtrack. I can't find an issue for that, though I do suspect that one exists in some form. Labeling as A-macros, though.\nhere is (closed).\nClosing in favor of -- macro rules do not backtrack.", "positive_passages": [{"docid": "doc-en-rust-592884341fb30d497dc5b58af9f9e620642125b395062590d792c2da15cb5bbe", "text": "return Error(span, \"missing fragment specifier\".to_string()); } } TokenTree::MetaVarDecl(..) => { TokenTree::MetaVarDecl(_, _, id) => { // Built-in nonterminals never start with these tokens, // so we can eliminate them from consideration. match *token { token::CloseDelim(_) => {}, _ => bb_eis.push(ei), if may_begin_with(&*id.name.as_str(), token) { bb_eis.push(ei); } } seq @ TokenTree::Delimited(..) | seq @ TokenTree::Token(_, DocComment(..)) => {", "commid": "rust_pr_42913"}], "negative_passages": []} {"query_id": "q-en-rust-61a2541bfe423eb32c08a11a0d4091752de83c7053319ea07f82f1b4d1c05f4e", "query": "Here is a minimum example of the problem. Compilation fails with following error. Which means that the fragment qualifier is not taken into account to choose which variant of the macro is to be used.\nYeah, macros don't do backtracking. You'll need to adjust the matchers to provide some other way to disambiguate.\nYes, I obviously know that I could just put brackets or whatever around one of the patterns to make it distinct. But the whole point of having fragment qualifiers is precisely to distinguish between two otherwise identical patterns. If macros cannot do it although they have the perfect tool for it, then they are stupid and ought to be improved.\nIt wasn't obvious to me that you knew the workaround -- sorry for assuming! Also see .\nThanks for pointing the PR. It is blocked for now, but I suppose there is still hope. 
Sorry for being harsh. I have been raging for months over how annoyingly limited and stupid Rust macros are, and I finally found a case general and obvious enough to justify my complaining\u2026\ntriage: Macros still don't backtrack. I can't find an issue for that, though I do suspect that one exists in some form. Labeling as A-macros, though.\nhere is (closed).\nClosing in favor of -- macro rules do not backtrack.", "positive_passages": [{"docid": "doc-en-rust-bfec328db0155d500fac5884264b1c54a94fcb3b40409b1aea2abe7833398e6e", "text": "} } /// Checks whether a non-terminal may begin with a particular token. /// /// Returning `false` is a *stability guarantee* that such a matcher will *never* begin with that /// token. Be conservative (return true) if not sure. fn may_begin_with(name: &str, token: &Token) -> bool { /// Checks whether the non-terminal may contain a single (non-keyword) identifier. fn may_be_ident(nt: &token::Nonterminal) -> bool { match *nt { token::NtItem(_) | token::NtBlock(_) | token::NtVis(_) => false, _ => true, } } match name { \"expr\" => token.can_begin_expr(), \"ty\" => token.can_begin_type(), \"ident\" => token.is_ident(), \"vis\" => match *token { // The follow-set of :vis + \"priv\" keyword + interpolated Token::Comma | Token::Ident(_) | Token::Interpolated(_) => true, _ => token.can_begin_type(), }, \"block\" => match *token { Token::OpenDelim(token::Brace) => true, Token::Interpolated(ref nt) => match nt.0 { token::NtItem(_) | token::NtPat(_) | token::NtTy(_) | token::NtIdent(_) | token::NtMeta(_) | token::NtPath(_) | token::NtVis(_) => false, // none of these may start with '{'. _ => true, }, _ => false, }, \"path\" | \"meta\" => match *token { Token::ModSep | Token::Ident(_) => true, Token::Interpolated(ref nt) => match nt.0 { token::NtPath(_) | token::NtMeta(_) => true, _ => may_be_ident(&nt.0), }, _ => false, }, \"pat\" => match *token { Token::Ident(_) | // box, ref, mut, and other identifiers (can stricten) Token::OpenDelim(token::Paren) | // tuple pattern Token::OpenDelim(token::Bracket) | // slice pattern Token::BinOp(token::And) | // reference Token::BinOp(token::Minus) | // negative literal Token::AndAnd | // double reference Token::Literal(..) | // literal Token::DotDot | // range pattern (future compat) Token::DotDotDot | // range pattern (future compat) Token::ModSep | // path Token::Lt | // path (UFCS constant) Token::BinOp(token::Shl) | // path (double UFCS) Token::Underscore => true, // placeholder Token::Interpolated(ref nt) => may_be_ident(&nt.0), _ => false, }, _ => match *token { token::CloseDelim(_) => false, _ => true, }, } } fn parse_nt<'a>(p: &mut Parser<'a>, sp: Span, name: &str) -> Nonterminal { if name == \"tt\" { return token::NtTT(p.parse_token_tree());", "commid": "rust_pr_42913"}], "negative_passages": []} {"query_id": "q-en-rust-61a2541bfe423eb32c08a11a0d4091752de83c7053319ea07f82f1b4d1c05f4e", "query": "Here is a minimum example of the problem. Compilation fails with following error. Which means that the fragment qualifier is not taken into account to choose which variant of the macro is to be used.\nYeah, macros don't do backtracking. You'll need to adjust the matchers to provide some other way to disambiguate.\nYes, I obviously know that I could just put brackets or whatever around one of the patterns to make it distinct. But the whole point of having fragment qualifiers is precisely to distinguish between two otherwise identical patterns. 
If macros cannot do it although they have the perfect tool for it, then they are stupid and ought to be improved.\nIt wasn't obvious to me that you knew the workaround -- sorry for assuming! Also see .\nThanks for pointing to the PR. It is blocked for now, but I suppose there is still hope. Sorry for being harsh. I have been raging for months over how annoyingly limited and stupid Rust macros are, and I finally found a case general and obvious enough to justify my complaining\u2026\ntriage: Macros still don't backtrack. I can't find an issue for that, though I do suspect that one exists in some form. Labeling as A-macros, though.\nhere is (closed).\nClosing in favor of -- macro rules do not backtrack.", "positive_passages": [{"docid": "doc-en-rust-e130950d89df032312409a5bff0bb0d8d0cc74299da955820234065ca2c344d9", "text": "// except according to those terms. fn main() { panic!(@); //~ ERROR expected expression, found `@` panic!(@); //~ ERROR no rules expected the token `@` }", "commid": "rust_pr_42913"}], "negative_passages": []} {"query_id": "q-en-rust-61a2541bfe423eb32c08a11a0d4091752de83c7053319ea07f82f1b4d1c05f4e", "query": "Here is a minimum example of the problem. Compilation fails with the following error. Which means that the fragment qualifier is not taken into account to choose which variant of the macro is to be used.\nYeah, macros don't do backtracking. You'll need to adjust the matchers to provide some other way to disambiguate.\nYes, I obviously know that I could just put brackets or whatever around one of the patterns to make it distinct. But the whole point of having fragment qualifiers is precisely to distinguish between two otherwise identical patterns.
If macros cannot do it although they have the perfect tool for it, then they are stupid and ought to be improved.\nIt wasn't obvious to me that you knew the workaround -- sorry for assuming! Also see .\nThanks for pointing the PR. It is blocked for now, but I suppose there is still hope. Sorry for being harsh. I have been raging for months over how annoyingly limited and stupid Rust macros are, and I finally found a case general and obvious enough to justify my complaining\u2026\ntriage: Macros still don't backtrack. I can't find an issue for that, though I do suspect that one exists in some form. Labeling as A-macros, though.\nhere is (closed).\nClosing in favor of -- macro rules do not backtrack.", "positive_passages": [{"docid": "doc-en-rust-f4fe0fd75586faba8219094f2f96f7d9ff8177dd3da2e5769daee3cd009a49ff", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(macro_vis_matcher)] //{{{ issue 40569 ============================================================== macro_rules! my_struct { ($(#[$meta:meta])* $ident:ident) => { $(#[$meta])* struct $ident; } } my_struct!(#[derive(Debug, PartialEq)] Foo40569); fn test_40569() { assert_eq!(Foo40569, Foo40569); } //}}} //{{{ issue 26444 ============================================================== macro_rules! foo_26444 { ($($beginning:ident),*; $middle:ident; $($end:ident),*) => { stringify!($($beginning,)* $middle $(,$end)*) } } fn test_26444() { assert_eq!(\"a , b , c , d , e\", foo_26444!(a, b; c; d, e)); assert_eq!(\"f\", foo_26444!(; f ;)); } macro_rules! pat_26444 { ($fname:ident $($arg:pat)* =) => {} } pat_26444!(foo 1 2 5...7 =); pat_26444!(bar Some(ref x) Ok(ref mut y) &(w, z) =); //}}} //{{{ issue 40984 ============================================================== macro_rules! thread_local_40984 { () => {}; ($(#[$attr:meta])* $vis:vis static $name:ident: $t:ty = $init:expr; $($rest:tt)*) => { thread_local_40984!($($rest)*); }; ($(#[$attr:meta])* $vis:vis static $name:ident: $t:ty = $init:expr) => {}; } thread_local_40984! { // no docs #[allow(unused)] static FOO: i32 = 42; /// docs pub static BAR: String = String::from(\"bar\"); // look at these restrictions!! pub(crate) static BAZ: usize = 0; pub(in foo) static QUUX: usize = 0; } //}}} //{{{ issue 35650 ============================================================== macro_rules! size { ($ty:ty) => { std::mem::size_of::<$ty>() }; ($size:tt) => { $size }; } fn test_35650() { assert_eq!(size!(u64), 8); assert_eq!(size!(5), 5); } //}}} //{{{ issue 27832 ============================================================== macro_rules! m { ( $i:ident ) => (); ( $t:tt $j:tt ) => (); } m!(c); m!(t 9); m!(0 9); m!(struct); m!(struct Foo); macro_rules! m2 { ( $b:expr ) => (); ( $t:tt $u:tt ) => (); } m2!(3); m2!(1 2); m2!(_ 1); m2!(enum Foo); //}}} //{{{ issue 39964 ============================================================== macro_rules! foo_39964 { ($a:ident) => {}; (_) => {}; } foo_39964!(_); //}}} //{{{ issue 34030 ============================================================== macro_rules! foo_34030 { ($($t:ident),* /) => {}; } foo_34030!(a, b/); foo_34030!(a/); foo_34030!(/); //}}} //{{{ issue 24189 ============================================================== macro_rules! 
foo_24189 { ( pub enum $name:ident { $( #[$attr:meta] )* $var:ident } ) => { pub enum $name { $( #[$attr] )* $var } }; } foo_24189! { pub enum Foo24189 { #[doc = \"Bar\"] Baz } } macro_rules! serializable { ( $(#[$struct_meta:meta])* pub struct $name:ident { $( $(#[$field_meta:meta])* $field:ident: $type_:ty ),* , } ) => { $(#[$struct_meta])* pub struct $name { $( $(#[$field_meta])* $field: $type_ ),* , } } } serializable! { #[allow(dead_code)] /// This is a test pub struct Tester { #[allow(dead_code)] name: String, } } macro_rules! foo_24189_c { ( $( > )* $x:ident ) => { }; } foo_24189_c!( > a ); fn test_24189() { let _ = Foo24189::Baz; let _ = Tester { name: \"\".to_owned() }; } //}}} //{{{ some more tests ========================================================== macro_rules! test_block { (< $($b:block)* >) => {} } test_block!(<>); test_block!(<{}>); test_block!(<{1}{2}>); macro_rules! test_ty { ($($t:ty),* $(,)*) => {} } test_ty!(); test_ty!(,); test_ty!(u8); test_ty!(u8,); macro_rules! test_path { ($($t:path),* $(,)*) => {} } test_path!(); test_path!(,); test_path!(::std); test_path!(std::u8,); test_path!(any, super, super::super::self::path, X::Z<'a, T=U>); macro_rules! test_meta_block { ($($m:meta)* $b:block) => {}; } test_meta_block!(windows {}); //}}} fn main() { test_26444(); test_40569(); test_35650(); test_24189(); } ", "commid": "rust_pr_42913"}], "negative_passages": []} {"query_id": "q-en-rust-a8705e4aea1d9478ec376b5d89b92212902e42d5fbcd288b4ef900948eb2da4a", "query": "Can you provide more assistance in debugging as well? For example what was at line 896 of What configuration did you provide? etc.\nRelevant line is a part of At that point anything pertaining i686 in my were Replacing with does still result in the same error. For now, I just specified for 32 bit version of LLVM I have, which works.\nAh ok, thanks for the info! Right now this should be a better error (of course) but the cause is how rustbuild treats the and arrays in the configuration. No C++ compilers are found for the array, so you can't request any host-related targets (e.g. LLVM) when a triple is only mentioned in the array. You can fix this by moving the target to the array.", "positive_passages": [{"docid": "doc-en-rust-2ab8659868cb4d432b4d06877128b7d3aedf46504844771077cffaf6e829e612", "text": "/// Returns the path to the C++ compiler for the target specified, may panic /// if no C++ compiler was configured for the target. fn cxx(&self, target: &str) -> &Path { self.cxx[target].path() match self.cxx.get(target) { Some(p) => p.path(), None => panic!(\"nntarget `{}` is not configured as a host, only as a targetnn\", target), } } /// Returns flags to pass to the compiler to generate code for `target`.", "commid": "rust_pr_36442"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? 
I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n suite(\"check-ui\", \"src/test/ui\", \"ui\", \"ui\"); } if build.config.build.contains(\"msvc\") {", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n suite(\"check-ui\", \"src/test/ui\", \"ui\", \"ui\"); suite(\"check-rpass-full\", \"src/test/run-pass-fulldeps\", \"run-pass\", \"run-pass-fulldeps\"); suite(\"check-rfail-full\", \"src/test/run-fail-fulldeps\",", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n assert!(plan.iter().any(|s| s.name.contains(\"cfail\"))); assert!(plan.iter().any(|s| s.name.contains(\"cfail-full\"))); assert!(plan.iter().any(|s| s.name.contains(\"codegen-units\"))); assert!(plan.iter().any(|s| s.name.contains(\"debuginfo\")));", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n assert!(plan.iter().any(|s| s.name.contains(\"-ui\"))); assert!(plan.iter().any(|s| s.name.contains(\"cfail\"))); assert!(!plan.iter().any(|s| s.name.contains(\"-ui\"))); assert!(plan.iter().any(|s| s.name.contains(\"cfail\"))); assert!(!plan.iter().any(|s| s.name.contains(\"cfail-full\"))); assert!(plan.iter().any(|s| s.name.contains(\"codegen-units\")));", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // no-prefer-dynamic #![crate_type = \"proc-macro\"] #![feature(proc_macro, proc_macro_lib)] extern crate proc_macro; use proc_macro::TokenStream; #[proc_macro_derive(Foo)] pub fn derive_foo(input: TokenStream) -> TokenStream { input } #[proc_macro_derive(Bar)] pub fn derive_bar(input: TokenStream) -> TokenStream { panic!(\"lolnope\"); } ", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // aux-build:plugin.rs #![feature(proc_macro)] #[macro_use] extern crate plugin; #[derive(Foo, Bar)] struct Baz { a: i32, b: i32, } fn main() {} ", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-6db7bee2129b6b17807b6034331f8a7c9c6c43926271e0d899d4faf4b4a88659", "query": "Inlined from Another issue. If a struct has two different custom derives on it and the second one panics, the error span will point to the first one, not the one which panicked. Reproduction Script\nwas \"Reproduction Script\" supposed to be a link? I'm spinning this off from as I don't think it's going to block stabilization and/or closing that issue.\nReproduction script there is a summary tag\n error: custom derive attribute panicked --> $DIR/issue-36935.rs:17:15 | 17 | #[derive(Foo, Bar)] | ^^^ | = help: message: lolnope ", "commid": "rust_pr_38607"}], "negative_passages": []} {"query_id": "q-en-rust-4b81634400b3a9c18cc7ee4d6fda8119c6f5ca30ec0f1522991f65f402f41b68", "query": "Today when you compile for a target that does not exist rustc gives the standard message for failing to find a crate, for . We can and should be more explicit about what this means. Actually what is almost certainly happening is that you don't have that target installed: It could instead say: A reasonable heuristic for printing this might be if the crate is named or and the sysroot is not overridden.\nWhen I run on I get the error:\nI would like to try my hand at this if no one else has picked it up already. 
I am fairly new to rust, so I might need some assistance getting the pull request into a something resembling a reasonable state.", "positive_passages": [{"docid": "doc-en-rust-06ce30c508fa23ce752b2b0fc794a87bfc53e0d6beff79a0544d3d2de63013c9", "text": "use schema::{METADATA_HEADER, rustc_version}; use rustc::hir::svh::Svh; use rustc::session::Session; use rustc::session::{config, Session}; use rustc::session::filesearch::{FileSearch, FileMatches, FileDoesntMatch}; use rustc::session::search_paths::PathKind; use rustc::util::common;", "commid": "rust_pr_37424"}], "negative_passages": []} {"query_id": "q-en-rust-4b81634400b3a9c18cc7ee4d6fda8119c6f5ca30ec0f1522991f65f402f41b68", "query": "Today when you compile for a target that does not exist rustc gives the standard message for failing to find a crate, for . We can and should be more explicit about what this means. Actually what is almost certainly happening is that you don't have that target installed: It could instead say: A reasonable heuristic for printing this might be if the crate is named or and the sysroot is not overridden.\nWhen I run on I get the error:\nI would like to try my hand at this if no one else has picked it up already. I am fairly new to rust, so I might need some assistance getting the pull request into a something resembling a reasonable state.", "positive_passages": [{"docid": "doc-en-rust-d7deb48ae998a85c50aad49d9c4f0138a61da29f8e702c5533d5de98ca0de447", "text": "\"can't find crate for `{}`{}\", self.ident, add); if (self.ident == \"std\" || self.ident == \"core\") && self.triple != config::host_triple() { err.note(&format!(\"the `{}` target may not be installed\", self.triple)); } err.span_label(self.span, &format!(\"can't find crate\")); err };", "commid": "rust_pr_37424"}], "negative_passages": []} {"query_id": "q-en-rust-4b81634400b3a9c18cc7ee4d6fda8119c6f5ca30ec0f1522991f65f402f41b68", "query": "Today when you compile for a target that does not exist rustc gives the standard message for failing to find a crate, for . We can and should be more explicit about what this means. Actually what is almost certainly happening is that you don't have that target installed: It could instead say: A reasonable heuristic for printing this might be if the crate is named or and the sysroot is not overridden.\nWhen I run on I get the error:\nI would like to try my hand at this if no one else has picked it up already. I am fairly new to rust, so I might need some assistance getting the pull request into a something resembling a reasonable state.", "positive_passages": [{"docid": "doc-en-rust-c545b3ece583d9e7825358abd42ce55a2a179bbdd0ac3aca7f2f499f3794894a", "text": " // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Tests that compiling for a target which is not installed will result in a helpful // error message. // compile-flags: --target=s390x-unknown-linux-gnu // ignore s390x // error-pattern:target may not be installed fn main() { } ", "commid": "rust_pr_37424"}], "negative_passages": []} {"query_id": "q-en-rust-4b81634400b3a9c18cc7ee4d6fda8119c6f5ca30ec0f1522991f65f402f41b68", "query": "Today when you compile for a target that does not exist rustc gives the standard message for failing to find a crate, for . 
We can and should be more explicit about what this means. Actually what is almost certainly happening is that you don't have that target installed: It could instead say: A reasonable heuristic for printing this might be if the crate is named or and the sysroot is not overridden.\nWhen I run on I get the error:\nI would like to try my hand at this if no one else has picked it up already. I am fairly new to rust, so I might need some assistance getting the pull request into a something resembling a reasonable state.", "positive_passages": [{"docid": "doc-en-rust-235a476efe21034649b04984074d0eecc06c53585f40a63fbfd8f99dc311d335", "text": "// FIXME (#9639): This needs to handle non-utf8 paths let mut args = vec![input_file.to_str().unwrap().to_owned(), \"-L\".to_owned(), self.config.build_base.to_str().unwrap().to_owned(), format!(\"--target={}\", target)]; self.config.build_base.to_str().unwrap().to_owned()]; // Optionally prevent default --target if specified in test compile-flags. let custom_target = self.props.compile_flags .iter() .fold(false, |acc, ref x| acc || x.starts_with(\"--target\")); if !custom_target { args.extend(vec![ format!(\"--target={}\", target), ]); } if let Some(revision) = self.revision { args.extend(vec![", "commid": "rust_pr_37424"}], "negative_passages": []} {"query_id": "q-en-rust-8c01e5ce4f5c4ac93a1188b80a3bd2537000dd3d5896e675b382df864154db7a", "query": "So I tried to compile core and I get this errors from assembler: the code on line 9468 looks like this: I tried both (4.6.3) and (5.3.0) and both give the same errors. Then I've renamed symbol to in assembly and it compiled successfully. I think msp430 assembler just does not like character in symbol names. So I see 2 options: MSP430 target in LLVM so it will rename symbols with to not emit on MSP430", "positive_passages": [{"docid": "doc-en-rust-6f29eab5c4eba99c0eea6574c4adb3e0c8863a164e2dc8bb4fa6d69812af0d76", "text": "use std::str; pub const MAX_BASE: u64 = 64; pub const ALPHANUMERIC_ONLY: u64 = 62; const BASE_64: &'static [u8; MAX_BASE as usize] = b\"0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ@$\";", "commid": "rust_pr_38286"}], "negative_passages": []} {"query_id": "q-en-rust-8c01e5ce4f5c4ac93a1188b80a3bd2537000dd3d5896e675b382df864154db7a", "query": "So I tried to compile core and I get this errors from assembler: the code on line 9468 looks like this: I tried both (4.6.3) and (5.3.0) and both give the same errors. Then I've renamed symbol to in assembly and it compiled successfully. I think msp430 assembler just does not like character in symbol names. So I see 2 options: MSP430 target in LLVM so it will rename symbols with to not emit on MSP430", "positive_passages": [{"docid": "doc-en-rust-f194cd439418506fe44f4d24e56933fc5c2b39c4abbef91ebe5083661b679411", "text": "let mut name = String::with_capacity(prefix.len() + 6); name.push_str(prefix); name.push_str(\".\"); base_n::push_str(idx as u64, base_n::MAX_BASE, &mut name); base_n::push_str(idx as u64, base_n::ALPHANUMERIC_ONLY, &mut name); name } }", "commid": "rust_pr_38286"}], "negative_passages": []} {"query_id": "q-en-rust-f54b62d24067581ab830283dcd93cf3d91df2a375ae817edba9b28f628bd156d", "query": "Vec::withcapacity(u64) panics. Error message: rustc -v error: internal compiler error: ../src/librustctypeck unexpected definition: PrimTy(TyUint(u64)) note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: thread 'rustc' panicked at 'Box #[derive(Copy, Clone)] #[derive(Copy, Clone, PartialEq)] enum PathScope { Global, Lexical,", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-f54b62d24067581ab830283dcd93cf3d91df2a375ae817edba9b28f628bd156d", "query": "Vec::withcapacity(u64) panics. Error message: rustc -v error: internal compiler error: ../src/librustctypeck unexpected definition: PrimTy(TyUint(u64)) note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: thread 'rustc' panicked at 'Box _ if self.primitive_type_table.primitive_types.contains_key(&path[0].name) => { PathResult::Module(..) | PathResult::Failed(..) if scope == PathScope::Lexical && (ns == TypeNS || path.len() > 1) && self.primitive_type_table.primitive_types.contains_key(&path[0].name) => { PathResolution { base_def: Def::PrimTy(self.primitive_type_table.primitive_types[&path[0].name]), depth: segments.len() - 1,", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-f54b62d24067581ab830283dcd93cf3d91df2a375ae817edba9b28f628bd156d", "query": "Vec::withcapacity(u64) panics. Error message: rustc -v error: internal compiler error: ../src/librustctypeck unexpected definition: PrimTy(TyUint(u64)) note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: thread 'rustc' panicked at 'Box // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { // Make sure primitive type fallback doesn't work in value namespace std::mem::size_of(u16); //~^ ERROR unresolved name `u16` //~| ERROR this function takes 0 parameters but 1 parameter was supplied // Make sure primitive type fallback doesn't work with global paths let _: ::u8; //~^ ERROR type name `u8` is undefined or not in scope } ", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-11597af7475ca9e82209f9928939686328c1e9dad83c443c01ff0016f2bb1501", "query": "Compiling this code: Gives me the following error The error is not limited to just the i32 type. : rustc 1.15.0-nightly ( 2016-12-05) binary: rustc commit-hash: commit-date: 2016-12-05 host: x8664-unknown-linux-gnu release: 1.15.0-nightly LLVM version: 3.9 Backtrace:\nDuplicate of , fixed in\nClosing as duplicate, thanks!", "positive_passages": [{"docid": "doc-en-rust-0c60f965108a0d513566a80eb3eeb6dc69f6df4804f79dd73b7f9ace7de5c10f", "text": "} } #[derive(Copy, Clone)] #[derive(Copy, Clone, PartialEq)] enum PathScope { Global, Lexical,", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-11597af7475ca9e82209f9928939686328c1e9dad83c443c01ff0016f2bb1501", "query": "Compiling this code: Gives me the following error The error is not limited to just the i32 type. : rustc 1.15.0-nightly ( 2016-12-05) binary: rustc commit-hash: commit-date: 2016-12-05 host: x8664-unknown-linux-gnu release: 1.15.0-nightly LLVM version: 3.9 Backtrace:\nDuplicate of , fixed in\nClosing as duplicate, thanks!", "positive_passages": [{"docid": "doc-en-rust-baa66de69a1108c2f80d978361c77b8154dab8bda147864dc400b08f0f688592", "text": "// // Such behavior is required for backward compatibility. 
// The same fallback is used when `a` resolves to nothing. _ if self.primitive_type_table.primitive_types.contains_key(&path[0].name) => { PathResult::Module(..) | PathResult::Failed(..) if scope == PathScope::Lexical && (ns == TypeNS || path.len() > 1) && self.primitive_type_table.primitive_types.contains_key(&path[0].name) => { PathResolution { base_def: Def::PrimTy(self.primitive_type_table.primitive_types[&path[0].name]), depth: segments.len() - 1,", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-11597af7475ca9e82209f9928939686328c1e9dad83c443c01ff0016f2bb1501", "query": "Compiling this code: Gives me the following error The error is not limited to just the i32 type. : rustc 1.15.0-nightly ( 2016-12-05) binary: rustc commit-hash: commit-date: 2016-12-05 host: x8664-unknown-linux-gnu release: 1.15.0-nightly LLVM version: 3.9 Backtrace:\nDuplicate of , fixed in\nClosing as duplicate, thanks!", "positive_passages": [{"docid": "doc-en-rust-bcd24b48389024dcf5f4052aad3e529bf9c4848c63289df7cc8388eb3aff3765", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn main() { // Make sure primitive type fallback doesn't work in value namespace std::mem::size_of(u16); //~^ ERROR unresolved name `u16` //~| ERROR this function takes 0 parameters but 1 parameter was supplied // Make sure primitive type fallback doesn't work with global paths let _: ::u8; //~^ ERROR type name `u8` is undefined or not in scope } ", "commid": "rust_pr_38375"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box Resolver<'a> { fn resolution(&self, module: Module<'a>, ident: Ident, ns: Namespace) -> &'a RefCell> { let ident = ident.unhygienize(); *module.resolutions.borrow_mut().entry((ident, ns)) .or_insert_with(|| self.arenas.alloc_name_resolution()) }", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box) -> Result<&'a NameBinding<'a>, Determinacy> { let ident = ident.unhygienize(); self.populate_module_if_necessary(module); let resolution = self.resolution(module, ident, ns)", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box) -> Result<(), &'a NameBinding<'a>> { let ident = ident.unhygienize(); self.update_resolution(module, ident, ns, |this, resolution| { if let Some(old_binding) = resolution.binding { if binding.is_glob_import() {", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box { ViewPathSimple(ident, fld.fold_path(path)) ViewPathSimple(fld.fold_ident(ident), fld.fold_path(path)) } ViewPathGlob(path) => { ViewPathGlob(fld.fold_path(path)) } ViewPathList(path, path_list_idents) => { ViewPathList(fld.fold_path(path), path_list_idents.move_map(|path_list_ident| { Spanned { node: PathListItem_ { id: fld.new_id(path_list_ident.node.id), rename: path_list_ident.node.rename, name: path_list_ident.node.name, }, span: fld.new_span(path_list_ident.span) } })) let path = fld.fold_path(path); let path_list_idents = path_list_idents.move_map(|path_list_ident| Spanned { node: PathListItem_ { id: fld.new_id(path_list_ident.node.id), rename: path_list_ident.node.rename.map(|ident| fld.fold_ident(ident)), name: fld.fold_ident(path_list_ident.node.name), }, span: fld.new_span(path_list_ident.span) }); ViewPathList(path, path_list_idents) } }, span: fld.new_span(span)", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. 
note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box(b: TypeBinding, fld: &mut T) -> TypeBinding { TypeBinding { id: fld.new_id(b.id), ident: b.ident, ident: fld.fold_ident(b.ident), ty: fld.fold_ty(b.ty), span: fld.new_span(b.span), }", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box>() .into(), id: fld.new_id(id), ident: ident, ident: fld.fold_ident(ident), bounds: fld.fold_bounds(bounds), default: default.map(|x| fld.fold_ty(x)), span: span", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box ident: f.node.ident, ident: folder.fold_ident(f.node.ident), pat: folder.fold_pat(f.node.pat), is_shorthand: f.node.is_shorthand, }}", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-45e6fa9d767e5c36a1e1e49e60a7e49f2b02d088222a119eb57fd552ee33e5d8", "query": "Test case: Compiles successfully on , causes an ICE on . Diff contains only , CC Found while trying to minimize , but filed separately since the error message is different and this one has a small test case.\nDo you have a backtrace?\nYes of course. Sorry I forgot to include it: rust error: internal compiler error: attempted .defid() on invalid def: Err note: the compiler unexpectedly panicked. this is a bug. note: we would appreciate a bug report: note: run with for a backtrace thread 'rustc' panicked at 'Box // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub struct Foo; macro_rules! reexport { () => { use Foo as Bar; } } reexport!(); fn main() { use Bar; fn f(_: Bar) {} } ", "commid": "rust_pr_38566"}], "negative_passages": []} {"query_id": "q-en-rust-30065096429b855672e589981a4fcda2f959184cd14b7912030e3594750bd8fb", "query": "While digging around for something else I have just discovered that setting the UID for a subprocess with also causes a call to in the child. This should be documented. (Related to )\nre-tagging as ; would you like this to appear in the docs, or not? 
If so, we're happy to do it, but want to make sure to not over-specify.\nThis seems like something we should document, yes. It's an unusual special-case that people should be aware of. This also interacts with .\nOk, this is ready for a PR. I've labeled it accordingly.\nhi! It may be a long time now but may I ask if this issue is still needed? Because I can't find the call anywhere. I looked it as follows: Did I lookup the wrong function and structs?", "positive_passages": [{"docid": "doc-en-rust-eae0f0a641671ef3ce07a382911040394ceb946700e208b0807db78142d13562", "text": "/// Sets the child process's user ID. This translates to a /// `setuid` call in the child process. Failure in the `setuid` /// call will cause the spawn to fail. /// /// # Notes /// /// This will also trigger a call to `setgroups(0, NULL)` in the child /// process if no groups have been specified. /// This removes supplementary groups that might have given the child /// unwanted permissions. #[stable(feature = \"rust1\", since = \"1.0.0\")] fn uid(&mut self, id: UserId) -> &mut process::Command;", "commid": "rust_pr_121650"}], "negative_passages": []} {"query_id": "q-en-rust-30065096429b855672e589981a4fcda2f959184cd14b7912030e3594750bd8fb", "query": "While digging around for something else I have just discovered that setting the UID for a subprocess with also causes a call to in the child. This should be documented. (Related to )\nre-tagging as ; would you like this to appear in the docs, or not? If so, we're happy to do it, but want to make sure to not over-specify.\nThis seems like something we should document, yes. It's an unusual special-case that people should be aware of. This also interacts with .\nOk, this is ready for a PR. I've labeled it accordingly.\nhi! It may be a long time now but may I ask if this issue is still needed? Because I can't find the call anywhere. I looked it as follows: Did I lookup the wrong function and structs?", "positive_passages": [{"docid": "doc-en-rust-eadb004e2279ab0fcd860ca5c8672b18b5a674b7e2854b7cbe5372c99c7ba1fe", "text": "if let Some(u) = self.get_uid() { // When dropping privileges from root, the `setgroups` call // will remove any extraneous groups. We only drop groups // if the current uid is 0 and we weren't given an explicit // if we have CAP_SETGID and we weren't given an explicit // set of groups. If we don't call this, then even though our // uid has dropped, we may still have groups that enable us to // do super-user things. //FIXME: Redox kernel does not support setgroups yet #[cfg(not(target_os = \"redox\"))] if libc::getuid() == 0 && self.get_groups().is_none() { cvt(libc::setgroups(0, crate::ptr::null()))?; if self.get_groups().is_none() { let res = cvt(libc::setgroups(0, crate::ptr::null())); if let Err(e) = res { // Here we ignore the case of not having CAP_SETGID. // An alternative would be to require CAP_SETGID (in // addition to CAP_SETUID) for setting the UID. if e.raw_os_error() != Some(libc::EPERM) { return Err(e.into()); } } } cvt(libc::setuid(u as uid_t))?; }", "commid": "rust_pr_121650"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. 
If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-423e24357dd4617901b861c193b4b432e999b4477dbb664c55e9341034877f06", "text": "## `Cell` [`Cell`][cell] is a type that provides zero-cost interior mutability, but only for `Copy` types. [`Cell`][cell] is a type that provides zero-cost interior mutability by moving data in and out of the cell. Since the compiler knows that all the data owned by the contained value is on the stack, there's no worry of leaking any data behind references (or worse!) by simply replacing the data.", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. 
Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-dcce4e289ef7af3610064a47e3561fe3b9b258f5f06846d96d682055d1bc2645", "text": "unnecessary. However, this also relaxes the guarantees that the restriction provides; so if your invariants depend on data stored within `Cell`, you should be careful. This is useful for mutating primitives and other `Copy` types when there is no easy way of This is useful for mutating primitives and other types when there is no easy way of doing it in line with the static rules of `&` and `&mut`. `Cell` does not let you obtain interior references to the data, which makes it safe to freely", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. 
If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-0e774ba7005d8705090e3cc31aac0fab8a7041cf7d25ad5bdf2fd1e31354d6f1", "text": "#### Cost There is no runtime cost to using `Cell`, however if you are using it to wrap larger (`Copy`) There is no runtime cost to using `Cell`, however if you are using it to wrap larger structs, it might be worthwhile to instead wrap individual fields in `Cell` since each write is otherwise a full copy of the struct. ## `RefCell` [`RefCell`][refcell] also provides interior mutability, but isn't restricted to `Copy` types. [`RefCell`][refcell] also provides interior mutability, but doesn't move data in and out of the cell. Instead, it has a runtime cost. `RefCell` enforces the read-write lock pattern at runtime (it's However, it has a runtime cost. `RefCell` enforces the read-write lock pattern at runtime (it's like a single-threaded mutex), unlike `&T`/`&mut T` which do so at compile time. This is done by the `borrow()` and `borrow_mut()` functions, which modify an internal reference count and return smart pointers which can be dereferenced immutably and mutably respectively. The refcount is restored when", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! 
See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-62a28b15b234281d7a074aa1810d4a7b84416be72206d1940ee851ae19ab899b", "text": "//! references. We say that `Cell` and `RefCell` provide 'interior mutability', in contrast //! with typical Rust types that exhibit 'inherited mutability'. //! //! Cell types come in two flavors: `Cell` and `RefCell`. `Cell` provides `get` and `set` //! methods that change the interior value with a single method call. `Cell` though is only //! compatible with types that implement `Copy`. For other types, one must use the `RefCell` //! type, acquiring a write lock before mutating. //! Cell types come in two flavors: `Cell` and `RefCell`. `Cell` implements interior //! mutability by moving values in and out of the `Cell`. To use references instead of values, //! one must use the `RefCell` type, acquiring a write lock before mutating. `Cell` provides //! methods to retrieve and change the current interior value: //! //! - For types that implement `Copy`, the `get` method retrieves the current interior value. //! - For types that implement `Default`, the `take` method replaces the current interior value //! with `Default::default()` and returns the replaced value. //! - For all types, the `replace` method replaces the current interior value and returns the //! replaced value and the `into_inner` method consumes the `Cell` and returns the interior //! value. Additionally, the `set` method replaces the interior value, dropping the replaced //! value. //! //! `RefCell` uses Rust's lifetimes to implement 'dynamic borrowing', a process whereby one can //! claim temporary, exclusive, mutable access to the inner value. Borrows for `RefCell`s are", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. 
I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-4ffe89fedc37c27e64db8da49f3b6d89efaf6b5ee259fb1cd62ea6d2bee0b28f", "text": "use cmp::Ordering; use fmt::{self, Debug, Display}; use marker::Unsize; use mem; use ops::{Deref, DerefMut, CoerceUnsized}; /// A mutable memory location that admits only `Copy` data. /// A mutable memory location. /// /// See the [module-level documentation](index.html) for more. #[stable(feature = \"rust1\", since = \"1.0.0\")]", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? 
Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-7bbcc52639ecd5243b9a99873e44dba8dee4b2b973d5e0dca404a741557e578a", "text": "} impl Cell { /// Creates a new `Cell` containing the given value. /// /// # Examples /// /// ``` /// use std::cell::Cell; /// /// let c = Cell::new(5); /// ``` #[stable(feature = \"rust1\", since = \"1.0.0\")] #[inline] pub const fn new(value: T) -> Cell { Cell { value: UnsafeCell::new(value), } } /// Returns a copy of the contained value. /// /// # Examples", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-0e019692733b798a8a5461eff4f0c41bf3e590a19034f06c7a51164776fdedf5", "text": "unsafe{ *self.value.get() } } /// Sets the contained value. /// /// # Examples /// /// ``` /// use std::cell::Cell; /// /// let c = Cell::new(5); /// /// c.set(10); /// ``` #[inline] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn set(&self, value: T) { unsafe { *self.value.get() = value; } } /// Returns a reference to the underlying `UnsafeCell`. 
/// /// # Examples", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-1085d160d056a088f9f1759dbda9d61670353caa3eda5fd3d71cdc8919c52739", "text": "} } impl Cell { /// Creates a new `Cell` containing the given value. /// /// # Examples /// /// ``` /// use std::cell::Cell; /// /// let c = Cell::new(5); /// ``` #[stable(feature = \"rust1\", since = \"1.0.0\")] #[inline] pub const fn new(value: T) -> Cell { Cell { value: UnsafeCell::new(value), } } /// Sets the contained value. /// /// # Examples /// /// ``` /// use std::cell::Cell; /// /// let c = Cell::new(5); /// /// c.set(10); /// ``` #[inline] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn set(&self, val: T) { let old = self.replace(val); drop(old); } /// Replaces the contained value. /// /// # Examples /// /// ``` /// #![feature(move_cell)] /// use std::cell::Cell; /// /// let c = Cell::new(5); /// let old = c.replace(10); /// /// assert_eq!(5, old); /// ``` #[unstable(feature = \"move_cell\", issue = \"39264\")] pub fn replace(&self, val: T) -> T { mem::replace(unsafe { &mut *self.value.get() }, val) } /// Unwraps the value. 
/// /// # Examples /// /// ``` /// #![feature(move_cell)] /// use std::cell::Cell; /// /// let c = Cell::new(5); /// let five = c.into_inner(); /// /// assert_eq!(five, 5); /// ``` #[unstable(feature = \"move_cell\", issue = \"39264\")] pub fn into_inner(self) -> T { unsafe { self.value.into_inner() } } } impl Cell { /// Takes the value of the cell, leaving `Default::default()` in its place. /// /// # Examples /// /// ``` /// #![feature(move_cell)] /// use std::cell::Cell; /// /// let c = Cell::new(5); /// let five = c.take(); /// /// assert_eq!(five, 5); /// assert_eq!(c.into_inner(), 0); /// ``` #[unstable(feature = \"move_cell\", issue = \"39264\")] pub fn take(&self) -> T { self.replace(Default::default()) } } #[unstable(feature = \"coerce_unsized\", issue = \"27732\")] impl, U> CoerceUnsized> for Cell {}", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? 
Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-4bc4f6d14b619b23a0fba302e13a0c3882b82f7cd146b845182e30863a826f9c", "text": "} #[test] fn cell_set() { let cell = Cell::new(10); cell.set(20); assert_eq!(20, cell.get()); let cell = Cell::new(\"Hello\".to_owned()); cell.set(\"World\".to_owned()); assert_eq!(\"World\".to_owned(), cell.into_inner()); } #[test] fn cell_replace() { let cell = Cell::new(10); assert_eq!(10, cell.replace(20)); assert_eq!(20, cell.get()); let cell = Cell::new(\"Hello\".to_owned()); assert_eq!(\"Hello\".to_owned(), cell.replace(\"World\".to_owned())); assert_eq!(\"World\".to_owned(), cell.into_inner()); } #[test] fn cell_into_inner() { let cell = Cell::new(10); assert_eq!(10, cell.into_inner()); let cell = Cell::new(\"Hello world\".to_owned()); assert_eq!(\"Hello world\".to_owned(), cell.into_inner()); } #[test] fn refcell_default() { let cell: RefCell = Default::default(); assert_eq!(0, *cell.borrow());", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-06f999e27cb1ab4d200ef4e22abc642e4fe87b84adeacfd08f9335dd8756b7a7", "query": "broke a library of mine where I had code like this: calls that used to call this trait method now call the new inherent method, but since that\u2019s unstable the crate didn\u2019t build until I . After that I could remove the trait since the new method has the same behavior. Anyway, this is just FYI. If we don\u2019t want this kind of breakage we can\u2019t add any new method to existing types ever, so that doesn\u2019t sound viable. If the new method had had different behavior or if I wanted to keep supporting older compilers, it wouldn\u2019t be too hard to rename my trait\u2019s method or use UFCS.\nYeah in general we reserve the right to break downstream code with differences in method resolution because the downstream code can always be written in an \"elaborated\" form which works on all compilers (e.g. using ufcs). If a change to the standard library breaks an excessive amount of code in the ecosystem, though, then we'd need to take the steps much more purposefully.\nSounds good. Again, I don\u2019t expect anything to change here and only made the previous comment to let people know this happened. I made this particular crate as an experiment, I don\u2019t know of anyone actually using it.\nGiven that the core changes to the bounds on existing methods were instantly stable, it doesn't make a lot of sense to keep and as unstable. I propose we immediately stabilize these APIs. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nping (checkbox)\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.\nWhat\u2019s the next step here? 
Batch stabilization PR near the end of the release cycle?\nYep, I'm preparing one right now.", "positive_passages": [{"docid": "doc-en-rust-3c10e267167c923022826c5ee75f239813b713a0af2312480e38900dd1b80848", "text": "#![feature(ordering_chaining)] #![feature(result_unwrap_or_default)] #![feature(ptr_unaligned)] #![feature(move_cell)] extern crate core; extern crate test;", "commid": "rust_pr_39287"}], "negative_passages": []} {"query_id": "q-en-rust-2d503a9f7b090b291bdc6a19177f0e3bc2931d974a630b44a0c211ee1906b7cd", "query": "Hi, at this point, rust nightly pretty much builds out of the box (assuming you have an already built beta rustc compiler) on Solaris for x86_64. However, it doesn't (yet) build for sparcv9. I will provide a pull request shortly that adds the final, missing piece to support sparcv9 / Solaris.", "positive_passages": [{"docid": "doc-en-rust-0128ed7a4961719321cedf8f3893f54b337caa08522b3c3b1a1f15b7c46b2e3e", "text": "ostype += 'abi64' elif cputype in {'powerpc', 'ppc', 'ppc64'}: cputype = 'powerpc' elif cputype == 'sparcv9': pass elif cputype in {'amd64', 'x86_64', 'x86-64', 'x64'}: cputype = 'x86_64' else:", "commid": "rust_pr_39903"}], "negative_passages": []} {"query_id": "q-en-rust-2d503a9f7b090b291bdc6a19177f0e3bc2931d974a630b44a0c211ee1906b7cd", "query": "Hi, at this point, rust nightly pretty much builds out of the box (assuming you have an already built beta rustc compiler) on Solaris for x86_64. However, it doesn't (yet) build for sparcv9. I will provide a pull request shortly that adds the final, missing piece to support sparcv9 / Solaris.", "positive_passages": [{"docid": "doc-en-rust-64782bbb3cb0b1a00b9aacbe325fede7e81013f8af4dc2de21eab030f1c5c83a", "text": "// compiler-rt's build system already cfg.flag(\"-fno-builtin\"); cfg.flag(\"-fvisibility=hidden\"); cfg.flag(\"-fomit-frame-pointer\"); // Accepted practice on Solaris is to never omit frame pointer so that // system observability tools work as expected. In addition, at least // on Solaris, -fomit-frame-pointer on sparcv9 appears to generate // references to data outside of the current stack frame. A search of // the gcc bug database provides a variety of issues surrounding // -fomit-frame-pointer on non-x86 platforms. if !target.contains(\"solaris\") && !target.contains(\"sparc\") { cfg.flag(\"-fomit-frame-pointer\"); } cfg.flag(\"-ffreestanding\"); cfg.define(\"VISIBILITY_HIDDEN\", None); }", "commid": "rust_pr_39903"}], "negative_passages": []} {"query_id": "q-en-rust-2d503a9f7b090b291bdc6a19177f0e3bc2931d974a630b44a0c211ee1906b7cd", "query": "Hi, at this point, rust nightly pretty much builds out of the box (assuming you have an already built beta rustc compiler) on Solaris for x86_64. However, it doesn't (yet) build for sparcv9. I will provide a pull request shortly that adds the final, missing piece to support sparcv9 / Solaris.", "positive_passages": [{"docid": "doc-en-rust-9537459d551adc68d17923e42928b51a7aaa6ec635ec84759bdd0ffe3347357d", "text": "(\"armv7s-apple-ios\", armv7s_apple_ios), (\"x86_64-sun-solaris\", x86_64_sun_solaris), (\"sparcv9-sun-solaris\", sparcv9_sun_solaris), (\"x86_64-pc-windows-gnu\", x86_64_pc_windows_gnu), (\"i686-pc-windows-gnu\", i686_pc_windows_gnu),", "commid": "rust_pr_39903"}], "negative_passages": []} {"query_id": "q-en-rust-2d503a9f7b090b291bdc6a19177f0e3bc2931d974a630b44a0c211ee1906b7cd", "query": "Hi, at this point, rust nightly pretty much builds out of the box (assuming you have an already built beta rustc compiler) on Solaris for x86_64. 
However, it doesn't (yet) build for sparcv9. I will provide a pull request shortly that adds the final, missing piece to support sparcv9 / Solaris.", "positive_passages": [{"docid": "doc-en-rust-ae781d732f4bed30bf1dfba933c6d4459646b70ff350423a5876a8be1c2245fe", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. use target::{Target, TargetResult}; pub fn target() -> TargetResult { let mut base = super::solaris_base::opts(); base.pre_link_args.push(\"-m64\".to_string()); // llvm calls this \"v9\" base.cpu = \"v9\".to_string(); base.max_atomic_width = Some(64); Ok(Target { llvm_target: \"sparcv9-sun-solaris\".to_string(), target_endian: \"big\".to_string(), target_pointer_width: \"64\".to_string(), data_layout: \"E-m:e-i64:64-n32:64-S128\".to_string(), // Use \"sparc64\" instead of \"sparcv9\" here, since the former is already // used widely in the source base. If we ever needed ABI // differentiation from the sparc64, we could, but that would probably // just be confusing. arch: \"sparc64\".to_string(), target_os: \"solaris\".to_string(), target_env: \"\".to_string(), target_vendor: \"sun\".to_string(), options: base, }) } ", "commid": "rust_pr_39903"}], "negative_passages": []} {"query_id": "q-en-rust-5317954fc8251fadc3b34016a070f6d45607ab82aac6f171c93212d0f1bee7c2", "query": "is this code: yielding:\n. I don't know why we do this, the \"else\" part of \"if\" or \"if-let\" is an arbitrary expression: This seems simpler, less error prone and it won't hit arbitrary limitations specific to guards.\nThe RFC states: This doesn't feel like a compelling reason, particularly since the desugaring is not otherwise user visible. I would call this a bug.\ntriage: P-medium\nThis is a great mentoring bug! I'm going to write up some very quick pointers for now. Please reach out for more details. In short, the code that needs to be changed is in the \"HIR lowering\", which converts from the AST (what the user wrote) to the compiler's internal HIR (also an abstract syntax tree, but somewhat desugared). The code in question is in , specifically right . There is also some more logic. You'll want to change how it is desugaring along the lines that suggested .\nI'm interested in taking this bug.\nHello I'm new to open source and Rust project. I wanted to contribute to the project. May I work on this bug?\nSure thing! Let us know how far you get with We're also on IRC.\nI'll yield to you -- good luck!\nIf IfLet expression is desugared to the one shown in desugar_test() then it works as suggested by I have also reproduced the bug after desugaring the expression.", "positive_passages": [{"docid": "doc-en-rust-93042a5e03d39eb7515def0b728006dd0a31f0ada1fd69df302d15fce2ada081", "text": "// // match { // => , // [_ if => ,] // _ => [ | ()] // }", "commid": "rust_pr_41316"}], "negative_passages": []} {"query_id": "q-en-rust-5317954fc8251fadc3b34016a070f6d45607ab82aac6f171c93212d0f1bee7c2", "query": "is this code: yielding:\n. 
I don't know why we do this, the \"else\" part of \"if\" or \"if-let\" is an arbitrary expression: This seems simpler, less error prone and it won't hit arbitrary limitations specific to guards.\nThe RFC states: This doesn't feel like a compelling reason, particularly since the desugaring is not otherwise user visible. I would call this a bug.\ntriage: P-medium\nThis is a great mentoring bug! I'm going to write up some very quick pointers for now. Please reach out for more details. In short, the code that needs to be changed is in the \"HIR lowering\", which converts from the AST (what the user wrote) to the compiler's internal HIR (also an abstract syntax tree, but somewhat desugared). The code in question is in , specifically right . There is also some more logic. You'll want to change how it is desugaring along the lines that suggested .\nI'm interested in taking this bug.\nHello I'm new to open source and Rust project. I wanted to contribute to the project. May I work on this bug?\nSure thing! Let us know how far you get with We're also on IRC.\nI'll yield to you -- good luck!\nIf IfLet expression is desugared to the one shown in desugar_test() then it works as suggested by I have also reproduced the bug after desugaring the expression.", "positive_passages": [{"docid": "doc-en-rust-9529c454ac9cf286826b6d3f3bec2bbc3dbf71c109033875c04decfac8c5c64d", "text": "arms.push(self.arm(hir_vec![pat], body_expr)); } // `[_ if => ,]` // `_ => [ | ()]` // _ => [|()] { let mut current: Option<&Expr> = else_opt.as_ref().map(|p| &**p); let mut else_exprs: Vec> = vec![current]; // First, we traverse the AST and recursively collect all // `else` branches into else_exprs, e.g.: // // if let Some(_) = x { // ... // } else if ... { // Expr1 // ... // } else if ... { // Expr2 // ... // } else { // Expr3 // ... // } // // ... results in else_exprs = [Some(&Expr1), // Some(&Expr2), // Some(&Expr3)] // // Because there also the case there is no `else`, these // entries can also be `None`, as in: // // if let Some(_) = x { // ... // } else if ... { // Expr1 // ... // } else if ... { // Expr2 // ... // } // // ... results in else_exprs = [Some(&Expr1), // Some(&Expr2), // None] // // The last entry in this list is always translated into // the final \"unguard\" wildcard arm of the `match`. In the // case of a `None`, it becomes `_ => ()`. loop { if let Some(e) = current { // There is an else branch at this level if let ExprKind::If(_, _, ref else_opt) = e.node { // The else branch is again an if-expr current = else_opt.as_ref().map(|p| &**p); else_exprs.push(current); } else { // The last item in the list is not an if-expr, // stop here break } } else { // We have no more else branch break } } // Now translate the list of nested else-branches into the // arms of the match statement. 
for else_expr in else_exprs { if let Some(else_expr) = else_expr { let (guard, body) = if let ExprKind::If(ref cond, ref then, _) = else_expr.node { let then = self.lower_block(then, false); (Some(cond), self.expr_block(then, ThinVec::new())) } else { (None, self.lower_expr(else_expr)) }; arms.push(hir::Arm { attrs: hir_vec![], pats: hir_vec![self.pat_wild(e.span)], guard: guard.map(|e| P(self.lower_expr(e))), body: P(body), }); } else { // There was no else-branch, push a noop let pat_under = self.pat_wild(e.span); let unit = self.expr_tuple(e.span, hir_vec![]); arms.push(self.arm(hir_vec![pat_under], unit)); } } let wildcard_arm: Option<&Expr> = else_opt.as_ref().map(|p| &**p); let wildcard_pattern = self.pat_wild(e.span); let body = if let Some(else_expr) = wildcard_arm { P(self.lower_expr(else_expr)) } else { self.expr_tuple(e.span, hir_vec![]) }; arms.push(self.arm(hir_vec![wildcard_pattern], body)); } let contains_else_clause = else_opt.is_some();", "commid": "rust_pr_41316"}], "negative_passages": []} {"query_id": "q-en-rust-5317954fc8251fadc3b34016a070f6d45607ab82aac6f171c93212d0f1bee7c2", "query": "is this code: yielding:\n. I don't know why we do this, the \"else\" part of \"if\" or \"if-let\" is an arbitrary expression: This seems simpler, less error prone and it won't hit arbitrary limitations specific to guards.\nThe RFC states: This doesn't feel like a compelling reason, particularly since the desugaring is not otherwise user visible. I would call this a bug.\ntriage: P-medium\nThis is a great mentoring bug! I'm going to write up some very quick pointers for now. Please reach out for more details. In short, the code that needs to be changed is in the \"HIR lowering\", which converts from the AST (what the user wrote) to the compiler's internal HIR (also an abstract syntax tree, but somewhat desugared). The code in question is in , specifically right . There is also some more logic. You'll want to change how it is desugaring along the lines that suggested .\nI'm interested in taking this bug.\nHello I'm new to open source and Rust project. I wanted to contribute to the project. May I work on this bug?\nSure thing! Let us know how far you get with We're also on IRC.\nI'll yield to you -- good luck!\nIf IfLet expression is desugared to the one shown in desugar_test() then it works as suggested by I have also reproduced the bug after desugaring the expression.", "positive_passages": [{"docid": "doc-en-rust-9f98d6fa17e03e2c81f14d992f6fbb4598c82fccaef212cc3374dfbf20d026f0", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. struct Foo; impl Foo { fn bar(&mut self) -> bool { true } } fn error(foo: &mut Foo) { if let Some(_) = Some(true) { } else if foo.bar() {} } fn ok(foo: &mut Foo) { if let Some(_) = Some(true) { } else { if foo.bar() {} } } fn main() {} ", "commid": "rust_pr_41316"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. 
Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-d18fd7e278b9abf554a18bb410b5712cf61605e93a68203705164daa028d8646", "text": "# 32/64-bit MinGW builds. # # The MinGW builds unfortunately have to both download a custom toolchain and # avoid the one installed by AppVeyor by default. Interestingly, though, for # different reasons! # We are using MinGW with posix threads since LLVM does not compile with # the win32 threads version due to missing support for C++'s std::thread. # # For 32-bit the installed gcc toolchain on AppVeyor uses the pthread # threading model. This is unfortunately not what we want, and if we compile # with it then there's lots of link errors in the standard library (undefined # references to pthread symbols). # # For 64-bit the installed gcc toolchain is currently 5.3.0 which # unfortunately segfaults on Windows with --enable-llvm-assertions (segfaults # in LLVM). See rust-lang/rust#28445 for more information, but to work around # this we go back in time to 4.9.2 specifically. # Instead of relying on the MinGW version installed on appveryor we download # and install one ourselves so we won't be surprised by changes to appveyor's # build image. # # Finally, note that the downloads below are all in the `rust-lang-ci` S3 # bucket, but they cleraly didn't originate there! The downloads originally # came from the mingw-w64 SourceForge download site. Unfortunately # SourceForge is notoriously flaky, so we mirror it on our own infrastructure. # # And as a final point of note, the 32-bit MinGW build using the makefiles do # *not* use debug assertions and llvm assertions. This is because they take # too long on appveyor and this is tested by rustbuild below. - MSYS_BITS: 32 RUST_CONFIGURE_ARGS: --build=i686-pc-windows-gnu --enable-ninja SCRIPT: python x.py test MINGW_URL: https://s3.amazonaws.com/rust-lang-ci/rust-ci-mirror MINGW_ARCHIVE: i686-6.2.0-release-win32-dwarf-rt_v5-rev1.7z MINGW_ARCHIVE: i686-6.2.0-release-posix-dwarf-rt_v5-rev1.7z MINGW_DIR: mingw32 - MSYS_BITS: 64 SCRIPT: python x.py test RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-gnu --enable-ninja MINGW_URL: https://s3.amazonaws.com/rust-lang-ci/rust-ci-mirror MINGW_ARCHIVE: x86_64-6.2.0-release-win32-seh-rt_v5-rev1.7z MINGW_ARCHIVE: x86_64-6.2.0-release-posix-seh-rt_v5-rev1.7z MINGW_DIR: mingw64 # 32/64 bit MSVC and GNU deployment", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. 
Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-86c9ada332e896c4b394c1dc84129b46150cb8b4cad0e432f434f65e092a631e", "text": "RUST_CONFIGURE_ARGS: --build=i686-pc-windows-gnu --enable-extended --enable-ninja SCRIPT: python x.py dist MINGW_URL: https://s3.amazonaws.com/rust-lang-ci/rust-ci-mirror MINGW_ARCHIVE: i686-6.2.0-release-win32-dwarf-rt_v5-rev1.7z MINGW_ARCHIVE: i686-6.2.0-release-posix-dwarf-rt_v5-rev1.7z MINGW_DIR: mingw32 DEPLOY: 1 - MSYS_BITS: 64 SCRIPT: python x.py dist RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-gnu --enable-extended --enable-ninja MINGW_URL: https://s3.amazonaws.com/rust-lang-ci/rust-ci-mirror MINGW_ARCHIVE: x86_64-6.2.0-release-win32-seh-rt_v5-rev1.7z MINGW_ARCHIVE: x86_64-6.2.0-release-posix-seh-rt_v5-rev1.7z MINGW_DIR: mingw64 DEPLOY: 1", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-238898a81af0a37e967a4f51c683700e990139c60885063d123bcc0382fead25", "text": "COPY build-emscripten.sh /tmp/ RUN ./build-emscripten.sh ENV PATH=$PATH:/tmp/emsdk_portable ENV PATH=$PATH:/tmp/emsdk_portable/clang/tag-e1.37.1/build_tag-e1.37.1_32/bin ENV PATH=$PATH:/tmp/emsdk_portable/clang/tag-e1.37.10/build_tag-e1.37.10_32/bin ENV PATH=$PATH:/tmp/emsdk_portable/node/4.1.1_32bit/bin ENV PATH=$PATH:/tmp/emsdk_portable/emscripten/tag-1.37.1 ENV EMSCRIPTEN=/tmp/emsdk_portable/emscripten/tag-1.37.1 ENV PATH=$PATH:/tmp/emsdk_portable/emscripten/tag-1.37.10 ENV EMSCRIPTEN=/tmp/emsdk_portable/emscripten/tag-1.37.10 ENV RUST_CONFIGURE_ARGS --target=asmjs-unknown-emscripten", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-8fe75fbb11c04942f7d65cded5756fd3387a3556d61e0a1e5acb5930fcd41ab9", "text": "source emsdk_portable/emsdk_env.sh hide_output emsdk update hide_output emsdk install --build=Release sdk-tag-1.37.1-32bit hide_output emsdk activate --build=Release sdk-tag-1.37.1-32bit hide_output emsdk install --build=Release sdk-tag-1.37.10-32bit hide_output emsdk activate --build=Release sdk-tag-1.37.10-32bit ", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. 
Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-f1b0fae89bfd07c38c7b05307750bdd1d168d4d8c7858077dd60f3b0297dd2aa", "text": " Subproject commit d30da544a8afc5d78391dee270bdf40e74a215d3 Subproject commit c8a8767c56ad3d3f4eb45c87b95026936fb9aa35 ", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-6378e4243879e5441538c532ac808304722e5ce5710e3f279813a36e589c892d", "text": "} if target.contains(\"arm\") && !target.contains(\"ios\") { // (At least) udivsi3.S is broken for Thumb 1 which our gcc uses by // default, we don't want Thumb 2 since it isn't supported on some // devices, so disable thumb entirely. // Upstream bug: https://bugs.llvm.org/show_bug.cgi?id=32492 cfg.define(\"__ARM_ARCH_ISA_THUMB\", Some(\"0\")); sources.extend(&[\"arm/aeabi_cdcmp.S\", \"arm/aeabi_cdcmpeq_check_nan.c\", \"arm/aeabi_cfcmp.S\",", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-89e22bae5332f73ea20ef3121dae357c44e7260e05c457ddc5f3319f097fddc9", "text": " Subproject commit 2e951c3ae354bcbd2e50b30798e232949a926b75 Subproject commit a884d21cc5f0b23a1693d1e872fd8998a4fdd17f ", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-7afe46694c3b1903fd54d0e3d88360b36d31b67d328e15109129e4c7d2a0c1de", "text": "// write_volatile causes an LLVM assert with composite types // ignore-emscripten See #41299: probably a bad optimization #![feature(volatile)] use std::ptr::{read_volatile, write_volatile};", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. 
Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-1c7c8d034747add611cd3f38fc9ab26ac979126aabff85727b4f8574b4a1b427", "text": "exit_success_if_unwind::bar(do_panic); } } let s = Command::new(env::args_os().next().unwrap()).arg(\"foo\").status(); let mut cmd = Command::new(env::args_os().next().unwrap()); cmd.arg(\"foo\"); // ARMv6 hanges while printing the backtrace, see #41004 if cfg!(target_arch = \"arm\") && cfg!(target_env = \"gnu\") { cmd.env(\"RUST_BACKTRACE\", \"0\"); } let s = cmd.status(); assert!(s.unwrap().code() != Some(0)); }", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-12cc40d469c2eb459423e70058521caac5f3de10084edc258a436978026c3cdc", "query": "fails on emscripten with the following error message: Thus the test is currently ignored on emscripten. Note: it may be that the failure occurs only on when the test is compiled with optimizations. Note: this happens when using LLVM 4.0 (this issue was opened now to get an issue number to reference).", "positive_passages": [{"docid": "doc-en-rust-ed315265e3ba100fe5c8b0cb7cc1ee128edff47bd774ddaf806ba9b7eb66f620", "text": "panic!(\"try to catch me\"); } } let s = Command::new(env::args_os().next().unwrap()).arg(\"foo\").status(); let mut cmd = Command::new(env::args_os().next().unwrap()); cmd.arg(\"foo\"); // ARMv6 hanges while printing the backtrace, see #41004 if cfg!(target_arch = \"arm\") && cfg!(target_env = \"gnu\") { cmd.env(\"RUST_BACKTRACE\", \"0\"); } let s = cmd.status(); assert!(s.unwrap().code() != Some(0)); }", "commid": "rust_pr_40123"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-bf28373ac496849ad71b1bdd2d7ebb40eb548bc1da298be84db2d97ea595d5ff", "text": "// So peel off one-level, turning the &T into T. match base_ty.builtin_deref(false, ty::NoPreference) { Some(t) => t.ty, None => { return Err(()); } None => { debug!(\"By-ref binding of non-derefable type {:?}\", base_ty); return Err(()); } } } _ => base_ty,", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? 
I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-68f5989759e20f246b33a99a09808858561a5586fa828db1008f6269f4fa6099", "text": "match base_cmt.ty.builtin_index() { Some(ty) => (ty, ElementKind::VecElement), None => { debug!(\"Explicit index of non-indexable type {:?}\", base_cmt); return Err(()); } }", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-a54b9bf1555b63c76a9ac625bde17582b0b54ff5ba302b506d385cad7d5b21dc", "text": "PatKind::TupleStruct(hir::QPath::Resolved(_, ref path), ..) | PatKind::Struct(hir::QPath::Resolved(_, ref path), ..) => { match path.def { Def::Err => return Err(()), Def::Err => { debug!(\"access to unresolvable pattern {:?}\", pat); return Err(()) } Def::Variant(variant_did) | Def::VariantCtor(variant_did, ..) => { // univariant enums do not need downcasts", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-535c38de01cf9385a0b205c0499bc734450ed88db00f97223e360a8b2338e798", "text": "panic!(ExplicitBug); } pub fn delay_span_bug>(&self, sp: S, msg: &str) { if self.treat_err_as_bug { self.span_bug(sp, msg); } let mut delayed = self.delayed_span_bug.borrow_mut(); *delayed = Some((sp.into(), msg.to_string())); }", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? 
I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-08504fe507f0224aadf28ae859a561f514a8de7438a39bd907b4db09bf909429", "text": "}; let index_expr_ty = self.node_ty(index_expr.id); let adjusted_base_ty = self.resolve_type_vars_if_possible(&adjusted_base_ty); let index_expr_ty = self.resolve_type_vars_if_possible(&index_expr_ty); let result = self.try_index_step(ty::MethodCall::expr(expr.id), expr,", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-37e41b49c8abacf8b657d5f56578dfc4672726f1787fbf7ff253be8bc2e0545e", "text": "let expr_ty = self.node_ty(expr.id); self.demand_suptype(expr.span, expr_ty, return_ty); } else { // We could not perform a mutable index. Re-apply the // immutable index adjustments - borrowck will detect // this as an error. if let Some(adjustment) = adjustment { self.apply_adjustment(expr.id, adjustment); } self.tcx.sess.delay_span_bug( expr.span, \"convert_lvalue_derefs_to_mutable failed\"); } } hir::ExprUnary(hir::UnDeref, ref base_expr) => {", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-0393e6e4f555f7353f874ab1684fb8c580f22c0cfcfb09b3bd52589513a11610", "query": "hi, my code used to compile last week. I updated the compiler today and now it crashes note that you can get my code here ()\nnote: going back to nightly-2017-04-01; everything work again. 'nightly-2017-04-17 is not ok. so troubles started somewhere in between.\ncc\nnot too sure what to do with this message. can I help you with something ? I can do a small dichotomy to figure out the last working nightly if this can help.\nMinified:\nthere is a pending fix (), no action needed on your part\ntriage: P-high", "positive_passages": [{"docid": "doc-en-rust-8a6bf928d2775a48e8eb2edda5c614fed0cf1ef76703ea8248e8c42db9ab9d52", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // regression test for issue #41498. struct S; impl S { fn mutate(&mut self) {} } fn call_and_ref T>(x: &mut Option, f: F) -> &mut T { *x = Some(f()); x.as_mut().unwrap() } fn main() { let mut n = None; call_and_ref(&mut n, || [S])[0].mutate(); } ", "commid": "rust_pr_41578"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. 
let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-df0968c1deabc9e1f6afb9559c03e5b0481b8ed34cdf957cfe1722e3efd6a988", "text": "use rustc::mir::*; use rustc::mir::transform::{MirSuite, MirPassIndex, MirSource}; use rustc::ty::TyCtxt; use rustc::ty::item_path; use rustc_data_structures::fx::FxHashMap; use rustc_data_structures::indexed_vec::{Idx}; use std::fmt::Display;", "commid": "rust_pr_41777"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-13e7c8c2090d396a0f8bf9f6a6214a283404dca3e5874951f16d213eeaaefb98", "text": "return; } let node_path = tcx.item_path_str(tcx.hir.local_def_id(source.item_id())); let node_path = item_path::with_forced_impl_filename_line(|| { // see notes on #41697 below tcx.item_path_str(tcx.hir.local_def_id(source.item_id())) }); dump_matched_mir_node(tcx, pass_num, pass_name, &node_path, disambiguator, source, mir); for (index, promoted_mir) in mir.promoted.iter_enumerated() {", "commid": "rust_pr_41777"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. 
I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-54e131ad2f75c250515da966cd46b197f3787ffc426175bda9f99fea268c88be", "text": "Some(ref filters) => filters, }; let node_id = source.item_id(); let node_path = tcx.item_path_str(tcx.hir.local_def_id(node_id)); let node_path = item_path::with_forced_impl_filename_line(|| { // see notes on #41697 below tcx.item_path_str(tcx.hir.local_def_id(node_id)) }); filters.split(\"&\") .any(|filter| { filter == \"all\" ||", "commid": "rust_pr_41777"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-a6191f0a9e196cca26f6a3c58b2fda7c4476e8ae9499e7b1646f2d72a968d2be", "text": "}) } // #41697 -- we use `with_forced_impl_filename_line()` because // `item_path_str()` would otherwise trigger `type_of`, and this can // run while we are already attempting to evaluate `type_of`. fn dump_matched_mir_node<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, pass_num: Option<(MirSuite, MirPassIndex)>, pass_name: &str,", "commid": "rust_pr_41777"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-1dfe82c268a9352910ae1cfdf938896ceacb078657dd8fb1064831238f8a5186", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. // compile-flags:-Zdump-mir=NEVER_MATCHED // Regression test for #41697. Using dump-mir was triggering // artificial cycles: during type-checking, we had to get the MIR for // the constant expressions in `[u8; 2]`, which in turn would trigger // an attempt to get the item-path, which in turn would request the // types of the impl, which would trigger a cycle. We supressed this // cycle now by forcing mir-dump to avoid asking for types of an impl. #![feature(rustc_attrs)] use std::sync::Arc; trait Foo { fn get(&self) -> [u8; 2]; } impl Foo for [u8; 2] { fn get(&self) -> [u8; 2] { *self } } struct Bar(T); fn unsize_fat_ptr<'a>(x: &'a Bar) -> &'a Bar { x } fn unsize_nested_fat_ptr(x: Arc) -> Arc { x } fn main() { let x: Box> = Box::new(Bar([1,2])); assert_eq!(unsize_fat_ptr(&*x).0.get(), [1, 2]); let x: Arc = Arc::new([3, 4]); assert_eq!(unsize_nested_fat_ptr(x).get(), [3, 4]); } ", "commid": "rust_pr_41777"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-62191fcdfe8176218393d6258454e8b3f7bae7984705ab4ebdd0694238d10877", "text": "output: $output:tt) => { define_map_struct! { tcx: $tcx, ready: ([pub] $attrs $name), ready: ([] $attrs $name), input: ($($input)*), output: $output }", "commid": "rust_pr_42017"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-ae8a6350fb7d40d20ba2102bcad3c43084637510446a9593a45e85136c65c9b0", "text": "MirSource::Promoted(_, i) => write!(w, \"{:?} in\", i)? 
} write!(w, \" {}\", tcx.node_path_str(src.item_id()))?; item_path::with_forced_impl_filename_line(|| { // see notes on #41697 elsewhere write!(w, \" {}\", tcx.node_path_str(src.item_id())) })?; if let MirSource::Fn(_) = src { write!(w, \"(\")?;", "commid": "rust_pr_42017"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-65a28427c24e447d35fd97c0fe257b0e8c2f88abfd4782bbc2a49fad6430b787", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Regression test for #41697. Using dump-mir was triggering // artificial cycles: during type-checking, we had to get the MIR for // the constant expressions in `[u8; 2]`, which in turn would trigger // an attempt to get the item-path, which in turn would request the // types of the impl, which would trigger a cycle. We supressed this // cycle now by forcing mir-dump to avoid asking for types of an impl. #![feature(rustc_attrs)] use std::sync::Arc; trait Foo { fn get(&self) -> [u8; 2]; } impl Foo for [u8; 2] { fn get(&self) -> [u8; 2] { *self } } struct Bar(T); fn unsize_fat_ptr<'a>(x: &'a Bar) -> &'a Bar { x } fn unsize_nested_fat_ptr(x: Arc) -> Arc { x } fn main() { let x: Box> = Box::new(Bar([1,2])); assert_eq!(unsize_fat_ptr(&*x).0.get(), [1, 2]); let x: Arc = Arc::new([3, 4]); assert_eq!(unsize_nested_fat_ptr(x).get(), [3, 4]); } ", "commid": "rust_pr_42017"}], "negative_passages": []} {"query_id": "q-en-rust-bd3f0fd61f73a1df73c09b6c314371176bfa88cbe7331aa4c6aa59a30ce24534", "query": "running yields Happening on . Did not happen on the latest nightly available through rustup on 27.04.2017.\nLast working nightly: First broken nightly: .\nmaybe it was ? cc\ncould be. let me try it on my big MIR refactoring branch =)\nI confirm that the error occurs at but not at So it looks to me like is indeed the culprit.\nYes. I can reproduce. Will investigate.\nI see the problem. PR coming soon.\nFix at\nI'm still seeing this issue, on the latest nightly and on master.\ncan you give more details of how you are reproducing the problem? (I a test for the precise scenario reported, so I'm reasonably sure it's not that case anymore.)\nHmm, perhaps my regression test wasn't sufficient. 
I see I used ; I remember that the test did fail, but I guess that it passing doesn't prove all issues are resolved. =)\nThe earliest commit where I can reproduce this problem is", "positive_passages": [{"docid": "doc-en-rust-88024cf21ca031fc6570096588250a9d1caf0c318be091e3ef203cfb468b444f", "text": " // Copyright 2016 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // compile-flags:-Zdump-mir=NEVER_MATCHED // Regression test for #41697. Using dump-mir was triggering // artificial cycles: during type-checking, we had to get the MIR for // the constant expressions in `[u8; 2]`, which in turn would trigger // an attempt to get the item-path, which in turn would request the // types of the impl, which would trigger a cycle. We supressed this // cycle now by forcing mir-dump to avoid asking for types of an impl. #![feature(rustc_attrs)] use std::sync::Arc; trait Foo { fn get(&self) -> [u8; 2]; } impl Foo for [u8; 2] { fn get(&self) -> [u8; 2] { *self } } struct Bar(T); fn unsize_fat_ptr<'a>(x: &'a Bar) -> &'a Bar { x } fn unsize_nested_fat_ptr(x: Arc) -> Arc { x } fn main() { let x: Box> = Box::new(Bar([1,2])); assert_eq!(unsize_fat_ptr(&*x).0.get(), [1, 2]); let x: Arc = Arc::new([3, 4]); assert_eq!(unsize_nested_fat_ptr(x).get(), [3, 4]); } ", "commid": "rust_pr_42017"}], "negative_passages": []} {"query_id": "q-en-rust-ffd6a39896df8d7efe66d64023dd6317148ce594e87642b82edcf3abc9cbb3dd", "query": "cc Maybe related to\nAlso affects eclectica-0.0.9 cc\nAlso igo-0.2.1 cc\nAlso isbfc-0.0.1 cc\nAlso cargo-update-0.8.0 () cc\nAlso cc\nRust commit: (looks like bug isn't here) Cargo commit: , which seems correct. is the stable/nightly issue, is the PR updating cargo submodule to fix this, which probably needs to be backported to beta. I'm uncertain, though.\ncc sorry to tag you in again, but if you could help take a look at this that'd be great! If not no worries!\nHi this definitely looks related to my changes in cargo. I've investigated and this crate defines two bins ( and ). I suppose they were intended to be aliases and this was possible because of a original \"bug\" in cargo (falling back to ). I have mixed feelings about this. I would say that cargo should fail here instead of falling back to , but I suppose we cannot do that because of backwards compatibility. Right? fails because it defines bin targets inside directly, instead of . Seems easy and I'll try to fix this today in the evening. Could you please provide me with the cargo's output for the other crates (especially )? That would be great.\nAh yeah for now our priority is to stem the breakage, and afterwards we can evaluate warnings cycles/etc to push further towards the desired behavior here. Thanks for investigating this\nI think I have fixed this, just need one more day to do some more testing and I should open a pull request tomorrow.\nAdded a proposed fix.\nAh we should actually leave this open to track the inclusion into beta, I'll close when all the backports are done.\nOk I've , updated our and updated our , this'll get closed when those start merging. 
Thanks again for fixing this\nOk, I think everything's landed here", "positive_passages": [{"docid": "doc-en-rust-501c5a0a1d52b1809b66e2598c70424412da5cb9f90478b670798641a84366b4", "text": "let path = @{span: span, global: false, idents: ~[nm], rp: None, types: ~[]}; @{id: self.next_id(), node: ast::pat_ident(ast::bind_by_implicit_ref, node: ast::pat_ident(ast::bind_by_ref(ast::m_imm), path, None), span: span}", "commid": "rust_pr_4046"}], "negative_passages": []} {"query_id": "q-en-rust-ffd6a39896df8d7efe66d64023dd6317148ce594e87642b82edcf3abc9cbb3dd", "query": "cc Maybe related to\nAlso affects eclectica-0.0.9 cc\nAlso igo-0.2.1 cc\nAlso isbfc-0.0.1 cc\nAlso cargo-update-0.8.0 () cc\nAlso cc\nRust commit: (looks like bug isn't here) Cargo commit: , which seems correct. is the stable/nightly issue, is the PR updating cargo submodule to fix this, which probably needs to be backported to beta. I'm uncertain, though.\ncc sorry to tag you in again, but if you could help take a look at this that'd be great! If not no worries!\nHi this definitely looks related to my changes in cargo. I've investigated and this crate defines two bins ( and ). I suppose they were intended to be aliases and this was possible because of a original \"bug\" in cargo (falling back to ). I have mixed feelings about this. I would say that cargo should fail here instead of falling back to , but I suppose we cannot do that because of backwards compatibility. Right? fails because it defines bin targets inside directly, instead of . Seems easy and I'll try to fix this today in the evening. Could you please provide me with the cargo's output for the other crates (especially )? That would be great.\nAh yeah for now our priority is to stem the breakage, and afterwards we can evaluate warnings cycles/etc to push further towards the desired behavior here. Thanks for investigating this\nI think I have fixed this, just need one more day to do some more testing and I should open a pull request tomorrow.\nAdded a proposed fix.\nAh we should actually leave this open to track the inclusion into beta, I'll close when all the backports are done.\nOk I've , updated our and updated our , this'll get closed when those start merging. Thanks again for fixing this\nOk, I think everything's landed here", "positive_passages": [{"docid": "doc-en-rust-3ae1028ce24836dc6476a0641471b3a360051aabf914c5746407c602a9ebd91e", "text": "let pat_node = if pats.is_empty() { ast::pat_ident( ast::bind_by_implicit_ref, ast::bind_by_ref(ast::m_imm), cx.path(span, ~[v_name]), None )", "commid": "rust_pr_4046"}], "negative_passages": []} {"query_id": "q-en-rust-ffd6a39896df8d7efe66d64023dd6317148ce594e87642b82edcf3abc9cbb3dd", "query": "cc Maybe related to\nAlso affects eclectica-0.0.9 cc\nAlso igo-0.2.1 cc\nAlso isbfc-0.0.1 cc\nAlso cargo-update-0.8.0 () cc\nAlso cc\nRust commit: (looks like bug isn't here) Cargo commit: , which seems correct. is the stable/nightly issue, is the PR updating cargo submodule to fix this, which probably needs to be backported to beta. I'm uncertain, though.\ncc sorry to tag you in again, but if you could help take a look at this that'd be great! If not no worries!\nHi this definitely looks related to my changes in cargo. I've investigated and this crate defines two bins ( and ). I suppose they were intended to be aliases and this was possible because of a original \"bug\" in cargo (falling back to ). I have mixed feelings about this. 
I would say that cargo should fail here instead of falling back to , but I suppose we cannot do that because of backwards compatibility. Right? fails because it defines bin targets inside directly, instead of . Seems easy and I'll try to fix this today in the evening. Could you please provide me with the cargo's output for the other crates (especially )? That would be great.\nAh yeah for now our priority is to stem the breakage, and afterwards we can evaluate warnings cycles/etc to push further towards the desired behavior here. Thanks for investigating this\nI think I have fixed this, just need one more day to do some more testing and I should open a pull request tomorrow.\nAdded a proposed fix.\nAh we should actually leave this open to track the inclusion into beta, I'll close when all the backports are done.\nOk I've , updated our and updated our , this'll get closed when those start merging. Thanks again for fixing this\nOk, I think everything's landed here", "positive_passages": [{"docid": "doc-en-rust-e3a9e1b99f81cd84518372fef248726775279991127b30bb177a195d0d5fb5ed", "text": " #[forbid(deprecated_pattern)]; extern mod std; // These tests used to be separate files, but I wanted to refactor all", "commid": "rust_pr_4046"}], "negative_passages": []} {"query_id": "q-en-rust-ffd6a39896df8d7efe66d64023dd6317148ce594e87642b82edcf3abc9cbb3dd", "query": "cc Maybe related to\nAlso affects eclectica-0.0.9 cc\nAlso igo-0.2.1 cc\nAlso isbfc-0.0.1 cc\nAlso cargo-update-0.8.0 () cc\nAlso cc\nRust commit: (looks like bug isn't here) Cargo commit: , which seems correct. is the stable/nightly issue, is the PR updating cargo submodule to fix this, which probably needs to be backported to beta. I'm uncertain, though.\ncc sorry to tag you in again, but if you could help take a look at this that'd be great! If not no worries!\nHi this definitely looks related to my changes in cargo. I've investigated and this crate defines two bins ( and ). I suppose they were intended to be aliases and this was possible because of a original \"bug\" in cargo (falling back to ). I have mixed feelings about this. I would say that cargo should fail here instead of falling back to , but I suppose we cannot do that because of backwards compatibility. Right? fails because it defines bin targets inside directly, instead of . Seems easy and I'll try to fix this today in the evening. Could you please provide me with the cargo's output for the other crates (especially )? That would be great.\nAh yeah for now our priority is to stem the breakage, and afterwards we can evaluate warnings cycles/etc to push further towards the desired behavior here. Thanks for investigating this\nI think I have fixed this, just need one more day to do some more testing and I should open a pull request tomorrow.\nAdded a proposed fix.\nAh we should actually leave this open to track the inclusion into beta, I'll close when all the backports are done.\nOk I've , updated our and updated our , this'll get closed when those start merging. 
Thanks again for fixing this\nOk, I think everything's landed here", "positive_passages": [{"docid": "doc-en-rust-d89fb24f9742aade4920839ff4bc0d2f31c9607459a61d8b2b89602c80bd6894", "text": "let ebml_w = &EBWriter::Serializer(wr); a1.serialize(ebml_w) }; let d = EBReader::Doc(@bytes); let d = EBReader::Doc(@move bytes); let a2: A = deserialize(&EBReader::Deserializer(d)); assert *a1 == a2; }", "commid": "rust_pr_4046"}], "negative_passages": []} {"query_id": "q-en-rust-da1169cee97b7a09600004b1246e79f8ca349071ef164a0b9d3edd0c7446bb0c", "query": "About to submit a PR for exposing intrinsics::needs_drop under mem. Needs a tracking issue. Here it is!\nThe PR is\nfcp merge Seems like this may be good to stabilize!\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-73ab323806371a581397fa631bbb750f4501e504e57a004924eb0457688741e5", "text": "#![feature(core_intrinsics)] #![feature(dropck_eyepatch)] #![feature(generic_param_attrs)] #![feature(needs_drop)] #![cfg_attr(test, feature(test))] #![allow(deprecated)]", "commid": "rust_pr_44639"}], "negative_passages": []} {"query_id": "q-en-rust-da1169cee97b7a09600004b1246e79f8ca349071ef164a0b9d3edd0c7446bb0c", "query": "About to submit a PR for exposing intrinsics::needs_drop under mem. Needs a tracking issue. Here it is!\nThe PR is\nfcp merge Seems like this may be good to stabilize!\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-ef0f1f6f58d11f91ec094804a6b6ed91fd814107d28b71c4271e66aaed2cf1d1", "text": "/// Here's an example of how a collection might make use of needs_drop: /// /// ``` /// #![feature(needs_drop)] /// use std::{mem, ptr}; /// /// pub struct MyCollection {", "commid": "rust_pr_44639"}], "negative_passages": []} {"query_id": "q-en-rust-da1169cee97b7a09600004b1246e79f8ca349071ef164a0b9d3edd0c7446bb0c", "query": "About to submit a PR for exposing intrinsics::needs_drop under mem. Needs a tracking issue. Here it is!\nThe PR is\nfcp merge Seems like this may be good to stabilize!\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . 
:bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-a8b80d969187c47a560980fd8c57f81132695ae314daf2a2c044f58e2169fd41", "text": "/// } /// ``` #[inline] #[unstable(feature = \"needs_drop\", issue = \"41890\")] #[stable(feature = \"needs_drop\", since = \"1.22.0\")] pub fn needs_drop() -> bool { unsafe { intrinsics::needs_drop::() } }", "commid": "rust_pr_44639"}], "negative_passages": []} {"query_id": "q-en-rust-da1169cee97b7a09600004b1246e79f8ca349071ef164a0b9d3edd0c7446bb0c", "query": "About to submit a PR for exposing intrinsics::needs_drop under mem. Needs a tracking issue. Here it is!\nThe PR is\nfcp merge Seems like this may be good to stabilize!\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] No concerns currently listed. Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-b8dc7ba19862b3165d4699481d1755740e38cc72b9d212454469c4fee7c1ff07", "text": "#![feature(macro_reexport)] #![feature(macro_vis_matcher)] #![feature(needs_panic_runtime)] #![feature(needs_drop)] #![feature(never_type)] #![feature(num_bits_bytes)] #![feature(old_wrapping)]", "commid": "rust_pr_44639"}], "negative_passages": []} {"query_id": "q-en-rust-3f7fd9bae7becfef90bec23eea3939fcc8918920fb201e87554bb554fa276048", "query": "Test case: Compiling this stuck forever in the \"expansion\" pass. cc ( tracking issue). Since can match \"nothing\", the macro should error with \"repetition matches empty token tree\".", "positive_passages": [{"docid": "doc-en-rust-18057383635020510e58ee61e7e7b34f1653718fdc8b51a8e331c3326e4db638", "text": "TokenTree::Sequence(span, ref seq) => { if seq.separator.is_none() && seq.tts.iter().all(|seq_tt| { match *seq_tt { TokenTree::MetaVarDecl(_, _, id) => id.name == \"vis\", TokenTree::Sequence(_, ref sub_seq) => sub_seq.op == quoted::KleeneOp::ZeroOrMore, _ => false,", "commid": "rust_pr_43078"}], "negative_passages": []} {"query_id": "q-en-rust-3f7fd9bae7becfef90bec23eea3939fcc8918920fb201e87554bb554fa276048", "query": "Test case: Compiling this stuck forever in the \"expansion\" pass. cc ( tracking issue). Since can match \"nothing\", the macro should error with \"repetition matches empty token tree\".", "positive_passages": [{"docid": "doc-en-rust-b20d962feb4247da36b60b6183e5147f320085a6e7974510e620d8baa69f51ef", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(macro_vis_matcher)] macro_rules! foo { ($($p:vis)*) => {} //~ ERROR repetition matches empty token tree } foo!(a); ", "commid": "rust_pr_43078"}], "negative_passages": []} {"query_id": "q-en-rust-4deda348646bc50fc20faa9ce6d9e4a0faa277fc6de45b3beefdcdd044c11526", "query": "Trying to natively build powerpc64 as of , I get an ICE in stage 1:\nFWIW beta () builds fine. 
I'll see if I can bisect it.\nI seem to be hitting unrelated issues when bisecting, so it may be regressing in and out lately... It still ICEs on powerpc64 as of . I found that powerpc64le builds fine though, so this could be a general big-endian issue. If I get time I'll try s390x. Also, the reported region sometimes changes, though still the same location. Most often I get: error: internal compiler error: Region parameter out of range when substituting in region 'b (root type=Some((&&'b str,))) (index=1) But a few times I've seen: error: internal compiler error: Region parameter out of range when substituting in region 'a (root type=Some((str::pattern::StrSearcher<'a, 'b,))) (index=0) And the backtraces are still useless...\nCould you post the results of ?\nThat's doing a rebuild from the working dir of my prior attempts, so the only interesting piece is the stage1 libcore that ICEs. If you want the log of a complete build from scratch, I can get that too.\nI'm seeing a different error: Could you compile with debuginfo to get a backtrace? That's the setting in\nFresh build directory, debuginfo enabled: error: internal compiler error: Type parameter (T/0) out of range when substituting (root type=Some((&mut [T], _))) substs=[] Same dir just building again: error: internal compiler error: Region parameter out of range when substituting in region 'a (root type=Some((str::pattern::StrSearcher<'a, 'b,))) (index=0)\nVery odd. rustc compiled on x86-64 (both a rustc I hand-compiled and the official nightlies) works without any problems, but rustc compiled on powerpc64 crashes with all sorts of random errors. I wonder whether there is some sort of LLVM bug - the next beta uses LLVM 4.0.1, so I should check whether that fixes things. Also, there shouldn't be any LLVM IR differences between a rustc compiled on powerpc and a rustc compiled on x86, so maybe I could do a diff\nWe've seen weird differences between cross- and native-compiled rustc before for big-endian machines, which seems confirmed by powerpc64le working fine here. I'm still trying to get s390x access... cc who has found a few of these.\nThe usual problem is that cross-compiled and native-compiled metadata end up being incompatible, so an x86-built ppc64 libcore is incompatible with a ppc-built ppc64 libcore. Through maybe we've ended up in a case where ppc can't decode its own metadata.\nYes, my first suspect would be endianness issue. It could be fairly plausible that the metadata decoder is correct, but the encoder is not, hence the bug showing up on ppc-built stuff only.\nI got an s390x build, and it fared even worse. Tons of move errors on types like this: before it finally ICEd: thread 'rustc' panicked at 'assertion failed: (Qualif::MUTABLEINTERIOR)', I'm guessing that these particular errors and ICEs are not significant in themselves, only as signs of some more general corruption.\nOddly enough, if I use an x86 stage0 to compile a powerpc stage1, or an powerpc stage0 to compile an x86 stage1, everything works fine, but if I use a powerpc stage0 to compile a powerpc stage1, the powerpc stage1 created is broken.\nI can't figure out a good way to debug this without a powerpc computer. So I'm gotta leave this alone unless anyone's going to help me track it down (e.g., ssh access to a fast(!) powerpc computer).\nI don't know if I can provide any machine access -- I'll check -- but I will keep working on this myself. Any tips for how to approach this? 
I tried valgrind, but I got so many errors that I think either I'm using it wrong or that build is just hopelessly broken.\nBisecting took me to 's merge commit . The parent commit builds fine. I didn't drill down into the individual commits for that PR, as it just seems to be iterating different ideas. If you look on master, also moved that code to to use in more places. I don't know what that code would have to do with endianness or cross-compiling weirdness per se, but I do worry about the memory alignment of these: Later the temp block gets cast to , as do the input pointers in the first place, which would seem to imply that alignment won't matter. But if is still having an effect as stated, then I worry that perhaps the extra alignment constraints are just being assumed in the code generated to move between , , and . The input could have a smaller alignment than .\nI replaced with a naive byte-byte swap on master, and it works! Now to figure out the right fix...\nJust removing that gets a working build! Regarding endianness and cross-compiling, could there be a bug in the way the simd type is translated to LLVM? Especially since it's being seen through a pointer. I'm going to do some comparisons of the builds with and without that . if it would help, I could share those builds somewhere for you to grab.\ncc since I'm blaming your PRs...\nAffects both powerpc64le and powerpc64 in Debian now: Correction, the latest upload for ppc64 builds fine:\nThe Debian bug is little-endian not big-endian. This bug is the other way around, and is very old.\nRight, the failure on big-endian was a different one for the previous 1.30 upload to experimental.\nEr for the record, no the previous beta-7 was also problematic for little-endian but OK for big-endian - the big-endian one , otherwise it would be fine (20 test failures). Anyway we have some clues on the other bug, more comments there.\nIsn't this bug fixed by ? - i.e., what is the Rust situation on powerpc?\nThis is likely fixed. Rust 1.32.0 succeeded on ppc64el, see\nThis bug is on big-endian, the link you pasted is little-endian, but if you remove the \"el\" at the end of the URL you get to the big-endian results which are also OK. We do things slightly differently from Fedora though.\ndo you know what the status on Fedora is?\nIt was \"fixed\" by by disabling SIMD in that function -- not really a satisfactory solution IMO. So in that sense, we don't have any issue on Fedora any more. But perhaps we should see what happens to codegen these days if we revert that and try allowing SIMD there again. edit: to be clear -- I will try changing that back myself to see what happens.", "positive_passages": [{"docid": "doc-en-rust-39693079c7326e06379c5944caceb2b202874a67d9b4c979d3cc1f49f7fe85e3", "text": "// #[repr(simd)], even if we don't actually use this struct directly. // // FIXME repr(simd) broken on emscripten and redox // It's also broken on big-endian powerpc64 and s390x. 
#42778 #[cfg_attr(not(any(target_os = \"emscripten\", target_os = \"redox\", target_endian = \"big\")), repr(simd))] #[cfg_attr(not(any(target_os = \"emscripten\", target_os = \"redox\")), repr(simd))] struct Block(u64, u64, u64, u64); struct UnalignedBlock(u64, u64, u64, u64);", "commid": "rust_pr_60588"}], "negative_passages": []} {"query_id": "q-en-rust-e70bfe5e3f24e9d6ef379c54b741fb8734ec9d895add5cc6d37965370319f99b", "query": "The fix is shown in , but I couldn't complete it because the line length linter doesn't like the long lines and didn't have enough time to adjust the tidy checker", "positive_passages": [{"docid": "doc-en-rust-d4b26411305fb613a8efb6d0609865bb7be84b96efc9b6c58cd5dcb2ad54c0a8", "text": "trait_obj.method_two(); ``` You can read more about trait objects in the Trait Object section of the Reference: You can read more about trait objects in the [Trait Objects] section of the Reference. https://doc.rust-lang.org/reference.html#trait-objects [Trait Objects]: https://doc.rust-lang.org/reference/types.html#trait-objects \"##, E0034: r##\"", "commid": "rust_pr_43075"}], "negative_passages": []} {"query_id": "q-en-rust-e70bfe5e3f24e9d6ef379c54b741fb8734ec9d895add5cc6d37965370319f99b", "query": "The fix is shown in , but I couldn't complete it because the line length linter doesn't like the long lines and didn't have enough time to adjust the tidy checker", "positive_passages": [{"docid": "doc-en-rust-0f7818bfc8c1d9173ba75e5817c16350d075f6ad0fe5e5af3b665420edd64bdb", "text": "optional namespacing), a dereference, an indexing expression or a field reference. More details can be found here: https://doc.rust-lang.org/reference.html#lvalues-rvalues-and-temporaries More details can be found in the [Expressions] section of the Reference. [Expressions]: https://doc.rust-lang.org/reference/expressions.html#lvalues-rvalues-and-temporaries Now, we can go further. Here are some erroneous code examples:", "commid": "rust_pr_43075"}], "negative_passages": []} {"query_id": "q-en-rust-e70bfe5e3f24e9d6ef379c54b741fb8734ec9d895add5cc6d37965370319f99b", "query": "The fix is shown in , but I couldn't complete it because the line length linter doesn't like the long lines and didn't have enough time to adjust the tidy checker", "positive_passages": [{"docid": "doc-en-rust-09e4da4bbedab300815d213cdf2468abb0eb6e08c05fa5c6327f508e794ba719", "text": "} ``` PhantomData can also be used to express information about unused type parameters. You can read more about it in the API documentation: [PhantomData] can also be used to express information about unused type parameters. 
https://doc.rust-lang.org/std/marker/struct.PhantomData.html [PhantomData]: https://doc.rust-lang.org/std/marker/struct.PhantomData.html \"##, E0393: r##\"", "commid": "rust_pr_43075"}], "negative_passages": []} {"query_id": "q-en-rust-e70bfe5e3f24e9d6ef379c54b741fb8734ec9d895add5cc6d37965370319f99b", "query": "The fix is shown in , but I couldn't complete it because the line length linter doesn't like the long lines and didn't have enough time to adjust the tidy checker", "positive_passages": [{"docid": "doc-en-rust-5396f16999d800ae76db9ec63165cf07587661e3d585928c59e56c1eed794da0", "text": "println!(\"x: {}, y: {}\", variable.x, variable.y); ``` For more information see The Rust Book: https://doc.rust-lang.org/book/ For more information about primitives and structs, take a look at The Book: https://doc.rust-lang.org/book/first-edition/primitive-types.html https://doc.rust-lang.org/book/first-edition/structs.html \"##, E0611: r##\"", "commid": "rust_pr_43075"}], "negative_passages": []} {"query_id": "q-en-rust-e70bfe5e3f24e9d6ef379c54b741fb8734ec9d895add5cc6d37965370319f99b", "query": "The fix is shown in , but I couldn't complete it because the line length linter doesn't like the long lines and didn't have enough time to adjust the tidy checker", "positive_passages": [{"docid": "doc-en-rust-9d0b35535ff721df9eff1a78b855ff85ae12e30075c92766c44a7c183540880b", "text": "} ``` To fix this error, you need to pass variables corresponding to C types as much as possible. For better explanations, see The Rust Book: https://doc.rust-lang.org/book/ Certain Rust types must be cast before passing them to a variadic function, because of arcane ABI rules dictated by the C standard. To fix the error, cast the value to the type specified by the error message (which you may need to import from `std::os::raw`). \"##, E0618: r##\"", "commid": "rust_pr_43075"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. 
You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. 
\u0930\u094b\u091c\u0940, \"Sunjay Varma\" - [Linking to items by name](linking-to-items-by-name.md) - [Lints](lints.md) - [Passes](passes.md) - [Advanced Features](advanced-features.md)", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? 
Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. \u0930\u094b\u091c\u0940, \"Sunjay Varma\" # Linking to items by name Rustdoc is capable of directly linking to other rustdoc pages in Markdown documentation using the path of item as a link. For example, in the following code all of the links will link to the rustdoc page for `Bar`: ```rust /// This struct is not [Bar] pub struct Foo1; /// This struct is also not [bar](Bar) pub struct Foo2; /// This struct is also not [bar][b] /// /// [b]: Bar pub struct Foo3; /// This struct is also not [`Bar`] pub struct Foo4; pub struct Bar; ``` You can refer to anything in scope, and use paths, including `Self`, `self`, `super`, and `crate`. You may also use `foo()` and `foo!()` to refer to methods/functions and macros respectively. Backticks around the link will be stripped. ```rust,edition2018 use std::sync::mpsc::Receiver; /// This is an version of [`Receiver`], with support for [`std::future`]. /// /// You can obtain a [`std::future::Future`] by calling [`Self::recv()`]. pub struct AsyncReceiver { sender: Receiver } impl AsyncReceiver { pub async fn recv() -> T { unimplemented!() } } ``` You can also link to sections using URL fragment specifiers: ```rust /// This is a special implementation of [positional parameters] /// /// [positional parameters]: std::fmt#formatting-parameters struct MySpecialFormatter; ``` Paths in Rust have three namespaces: type, value, and macro. Items from these namespaces are allowed to overlap. In case of ambiguity, rustdoc will warn about the ambiguity and ask you to disambiguate, which can be done by using a prefix like `struct@`, `enum@`, `type@`, `trait@`, `union@`, `const@`, `static@`, `value@`, `function@`, `mod@`, `fn@`, `module@`, `method@`, `prim@`, `primitive@`, `macro@`, or `derive@`:: ```rust /// See also: [`Foo`](struct@Foo) struct Bar; /// This is different from [`Foo`](fn@Foo) struct Foo {} fn Foo() {} ``` Note: Because of how `macro_rules` macros are scoped in Rust, the intra-doc links of a `macro_rules` macro will be resolved relative to the crate root, as opposed to the module it is defined in. 
", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? 
Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. \u0930\u094b\u091c\u0940, \"Sunjay Varma\" This lint **warns by default** and is **nightly-only**. This lint detects when an intra-doc link fails to get resolved. For example: This lint **warns by default**. This lint detects when an [intra-doc link] fails to get resolved. For example: [intra-doc link]: linking-to-items-by-name.html ```rust /// I want to link to [`Inexistent`] but it doesn't exist! /// I want to link to [`Nonexistent`] but it doesn't exist! pub fn foo() {} ``` You'll get a warning saying: ```text error: `[`Inexistent`]` cannot be resolved, ignoring it... warning: unresolved link to `Nonexistent` --> test.rs:1:24 | 1 | /// I want to link to [`Nonexistent`] but it doesn't exist! 
| ^^^^^^^^^^^^^ no item named `Nonexistent` in `test` ``` It will also warn when there is an ambiguity and suggest how to disambiguate: ```rust /// [`Foo`] pub fn function() {} pub enum Foo {} pub fn Foo(){} ``` ```text warning: `Foo` is both an enum and a function --> test.rs:1:6 | 1 | /// [`Foo`] | ^^^^^ ambiguous link | = note: `#[warn(broken_intra_doc_links)]` on by default help: to link to the enum, prefix with the item type | 1 | /// [`enum@Foo`] | ^^^^^^^^^^ help: to link to the function, add parentheses | 1 | /// [`Foo()`] | ^^^^^^^ ``` ## missing_docs", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. 
Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. \u0930\u094b\u091c\u0940, \"Sunjay Varma\" ### Linking to items by name Rustdoc is capable of directly linking to other rustdoc pages in Markdown documentation using the path of item as a link. For example, in the following code all of the links will link to the rustdoc page for `Bar`: ```rust /// This struct is not [Bar] pub struct Foo1; /// This struct is also not [bar](Bar) pub struct Foo2; /// This struct is also not [bar][b] /// /// [b]: Bar pub struct Foo3; /// This struct is also not [`Bar`] pub struct Foo4; pub struct Bar; ``` You can refer to anything in scope, and use paths, including `Self`. You may also use `foo()` and `foo!()` to refer to methods/functions and macros respectively. ```rust,edition2018 use std::sync::mpsc::Receiver; /// This is an version of [`Receiver`], with support for [`std::future`]. /// /// You can obtain a [`std::future::Future`] by calling [`Self::recv()`]. pub struct AsyncReceiver { sender: Receiver } impl AsyncReceiver { pub async fn recv() -> T { unimplemented!() } } ``` Paths in Rust have three namespaces: type, value, and macro. Items from these namespaces are allowed to overlap. 
In case of ambiguity, rustdoc will warn about the ambiguity and ask you to disambiguate, which can be done by using a prefix like `struct@`, `enum@`, `type@`, `trait@`, `union@`, `const@`, `static@`, `value@`, `function@`, `mod@`, `fn@`, `module@`, `method@`, `prim@`, `primitive@`, `macro@`, or `derive@`: ```rust /// See also: [`Foo`](struct@Foo) struct Bar; /// This is different from [`Foo`](fn@Foo) struct Foo {} fn Foo() {} ``` Note: Because of how `macro_rules` macros are scoped in Rust, the intra-doc links of a `macro_rules` macro will be resolved relative to the crate root, as opposed to the module it is defined in. ## Extensions to the `#[doc]` attribute These features operate by extending the `#[doc]` attribute, and thus can be caught by the compiler", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. 
Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. 
\u0930\u094b\u091c\u0940, \"Sunjay Varma\" use rustc_feature::UnstableFeatures; use rustc_hir as hir; use rustc_hir::def::{ DefKind,", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-c02743ccbe2b17fc38615115bf4287e3b85dd13f6ab118108c305636904063d4", "query": "RFC: RFC PR: Todo: [x] Land [x] Make it work with cross-crate imports (, ) [x] Make it work for proc macros [x] Make it warn more on resolution errors () [x] Make warnings/errors use better spans than the first line of docs [x] first step: attribute () [x] point to actual link in doc comment () [x] Add lint for link resolution failure () [x] Implied Shortcut Reference Links () [x] blocked on and [x] Enum variants and UFCS/associated methods () [x] Primitives () [x] Use the right resolution scopes for inner doc comments on things that aren't modules () [x] Handle fields and variants () [x] Handle fields inside struct variants () [x] Support () [x] Write documentation () [x] Stabilization ( - current impl is gated to only run on nightly, but does not require a feature gate) [x] Handle methods from traits implementations [x] Support anchors () [x] Support missing primitive types (such as slices) () [x] Allow referring to (for on ) [x] Fix current crate linking when no module is provided in the path (for example, won't link to the crate's type for some reason) [x] reexports can break intra-doc links () [x] Support associated items in contexts other than documenting a trait method () Other related issues: [x] Suppress privacy errors with document-private-types (, fix at ) [x] provide option to turn intra-link resolution errors into proper errors ()\nPreliminary implementation notes: Getting just the links from the markdown is relatively straightforward. You'll need to do this twice, to capture the behavior in both the Hoedown and Pulldown markdown renderers. Thankfully, each has a way of capturing links as they are parsed from the document. (While capturing the link you can also look for the \"struct/enum/mod/macro/static/etc\" marker to aid in resolution down the line.) In Hoedown, the function in captures all the events as they are captured. Hoedown uses a system of registering callbacks on the renderer configuration to capture each event separately. The event and its related function pointer signature are already present in the file, so you would need to create and use that callback to capture and modify links as they appear. For Pulldown, the parser is surfaced as an of events. There are already a handful of iterator wrappers present in , so adding another one (and using it in the farther down) shouldn't be a big deal. I'm... honestly not that familiar with how Pulldown represents link definitions in its Event/Tag enums, so more research would be needed to effectively get link text for that. Now, with the links in hand, you need to do two things, neither of which i know offhand since they involve interfacing with the compiler: Attempt to parse the link as a path (if it's not a valid Rust path, just emit the URL as-is and skip the rest of this) Resolve the path in the \"current\" module (emit a warning if resolution fails) Presumably there's a way to make the resolution come up with a DefId? 
Even after cleaning the AST, rustdoc still deals in those, so if that's the case there are ways to map that to the proper relative link.\nI've got a WIP going on in , which works off of Misdreavus' WIP (they did most of the work).\nNote that that branch is now live as , which i'm polishing and finishing up.\nNote for people watching this thread but not We're trying to wind that PR down to get a minimal version landed, so an initial implementation is fairly close.\nOne thing that appears to not work in (and I don't think is ever explicitly addressed in the RFC) is how to link to primitives like or .\nYeah I have to investigate that\nshould we turn the error into a warning ? I don't like the idea of crate docs breaking because someone introduced a name into a glob import\ndoes rustdoc have some idea of \"lint levels\"? What are the current deprecation warnings? Just println!s?\nWe can just instead. rustc can emit warnings not tied to actual warning names. There are no lint levels.\nAlright, let's do that for now. In the end I'd really like to have a way to check the docs for errors but that's a more involved/fragile process I imagine. Manish Goregaokar < am Mi. 24. Jan. 2018 um 09:17:\nShortcut links being implemented at\nIf will work as part of this RFC, will also work for linking to specific sections of that documentation?\nNot currently. This can be implemented. \u096f \u092e\u093e\u0930\u094d\u091a, \u0968\u0966\u0967\u096e \u0968:\u0967\u096a \u092e.\u0909. \u0930\u094b\u091c\u0940, \"Sunjay Varma\" ) -> Crate { if !UnstableFeatures::from_environment().is_nightly_build() { krate } else { let mut coll = LinkCollector::new(cx); coll.fold_crate(krate) } let mut coll = LinkCollector::new(cx); coll.fold_crate(krate) } enum ErrorKind<'a> {", "commid": "rust_pr_74430"}], "negative_passages": []} {"query_id": "q-en-rust-f0b8dcd36abe568e520c06d72c6df6e36bb4e3f9f9cb15702b666af380495058", "query": "After discussing it on the forums I bit I decided to look into what the optimizer spends time on, and inlining seems to be a significant part of it. There seems to be a lot of room for improvement here. Tested by compiling syntex in rustc-benchmarks (though the inlining time seem quite significant in other crates as well.). I counted which functions were inlined the most, and and to a lesser extent , which are when automatically deriving , and which simply wrap two other functions inside fmt::builder show up a lot. Maybe derive(debug) could output the wrapped functions instead, or the equivalent code, so llvm doesn't have to spend time inlining them (unless this causes problems for debug info). is also inlined a lot, and also plays a role in derive(Debug). Maybe something could be done about that one as well, as it's that creates a struct but doesn't do anything else. #[inline] pub fn debug_struct<'b>(&'b mut self, name: &str) -> DebugStruct<'b, 'a> { builders::debug_struct_new(self, name) }", "commid": "rust_pr_43856"}], "negative_passages": []} {"query_id": "q-en-rust-f0b8dcd36abe568e520c06d72c6df6e36bb4e3f9f9cb15702b666af380495058", "query": "After discussing it on the forums I bit I decided to look into what the optimizer spends time on, and inlining seems to be a significant part of it. There seems to be a lot of room for improvement here. Tested by compiling syntex in rustc-benchmarks (though the inlining time seem quite significant in other crates as well.). 
I counted which functions were inlined the most, and and to a lesser extent , which are when automatically deriving , and which simply wrap two other functions inside fmt::builder show up a lot. Maybe derive(debug) could output the wrapped functions instead, or the equivalent code, so llvm doesn't have to spend time inlining them (unless this causes problems for debug info). is also inlined a lot, and also plays a role in derive(Debug). Maybe something could be done about that one as well, as it's that creates a struct but doesn't do anything else. #[inline] pub fn debug_tuple<'b>(&'b mut self, name: &str) -> DebugTuple<'b, 'a> { builders::debug_tuple_new(self, name) }", "commid": "rust_pr_43856"}], "negative_passages": []} {"query_id": "q-en-rust-f0b8dcd36abe568e520c06d72c6df6e36bb4e3f9f9cb15702b666af380495058", "query": "After discussing it on the forums I bit I decided to look into what the optimizer spends time on, and inlining seems to be a significant part of it. There seems to be a lot of room for improvement here. Tested by compiling syntex in rustc-benchmarks (though the inlining time seem quite significant in other crates as well.). I counted which functions were inlined the most, and and to a lesser extent , which are when automatically deriving , and which simply wrap two other functions inside fmt::builder show up a lot. Maybe derive(debug) could output the wrapped functions instead, or the equivalent code, so llvm doesn't have to spend time inlining them (unless this causes problems for debug info). is also inlined a lot, and also plays a role in derive(Debug). Maybe something could be done about that one as well, as it's that creates a struct but doesn't do anything else. #[inline] pub fn debug_list<'b>(&'b mut self) -> DebugList<'b, 'a> { builders::debug_list_new(self) }", "commid": "rust_pr_43856"}], "negative_passages": []} {"query_id": "q-en-rust-f0b8dcd36abe568e520c06d72c6df6e36bb4e3f9f9cb15702b666af380495058", "query": "After discussing it on the forums I bit I decided to look into what the optimizer spends time on, and inlining seems to be a significant part of it. There seems to be a lot of room for improvement here. Tested by compiling syntex in rustc-benchmarks (though the inlining time seem quite significant in other crates as well.). I counted which functions were inlined the most, and and to a lesser extent , which are when automatically deriving , and which simply wrap two other functions inside fmt::builder show up a lot. Maybe derive(debug) could output the wrapped functions instead, or the equivalent code, so llvm doesn't have to spend time inlining them (unless this causes problems for debug info). is also inlined a lot, and also plays a role in derive(Debug). Maybe something could be done about that one as well, as it's that creates a struct but doesn't do anything else. #[inline] pub fn debug_set<'b>(&'b mut self) -> DebugSet<'b, 'a> { builders::debug_set_new(self) }", "commid": "rust_pr_43856"}], "negative_passages": []} {"query_id": "q-en-rust-f0b8dcd36abe568e520c06d72c6df6e36bb4e3f9f9cb15702b666af380495058", "query": "After discussing it on the forums I bit I decided to look into what the optimizer spends time on, and inlining seems to be a significant part of it. There seems to be a lot of room for improvement here. Tested by compiling syntex in rustc-benchmarks (though the inlining time seem quite significant in other crates as well.). 
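Since the discussion above centres on the thin `Formatter::debug_struct`-style wrappers that `#[derive(Debug)]` leans on, here is a hand-written version of roughly what such a derive expands to — a sketch for illustration, not the exact generated code:

```rust
use std::fmt;

struct Point {
    x: i32,
    y: i32,
}

// Roughly the shape of the impl that #[derive(Debug)] generates: one call to
// Formatter::debug_struct plus a field/finish chain on the returned builder.
impl fmt::Debug for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            .finish()
    }
}

fn main() {
    println!("{:?}", Point { x: 1, y: 2 });
}
```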
I counted which functions were inlined the most, and and to a lesser extent , which are when automatically deriving , and which simply wrap two other functions inside fmt::builder show up a lot. Maybe derive(debug) could output the wrapped functions instead, or the equivalent code, so llvm doesn't have to spend time inlining them (unless this causes problems for debug info). is also inlined a lot, and also plays a role in derive(Debug). Maybe something could be done about that one as well, as it's that creates a struct but doesn't do anything else. #[inline] pub fn debug_map<'b>(&'b mut self) -> DebugMap<'b, 'a> { builders::debug_map_new(self) }", "commid": "rust_pr_43856"}], "negative_passages": []} {"query_id": "q-en-rust-b751cb2955cf926a93abe6f66783f8ba02ca6ea4c04088bbcff75267915cbd14", "query": "In the following scenario, the user will have a better experience if we allow them to have an \"unused\" extern crate. If we permit them to write , as recommended by Serde's docs for this reason, the error message is: If they remove the unused , they generally get nastier error messages. If we can't fix the error message to be as good as it was before the unusedexterncrates warning, let's disable unusedexterncrates in cases where the same crate is used later in some nested scope (submodule or code block). cc who has been involved with this warning.\nArguable point. As the statement doesn't impact the binary (linkage), it doesn't harm if we disabled the lint. I'm not sure about the way to implement it though.\nthe above example works now without the requirement of . Is there anything else blocking this other than ?\nClosing this as the error case is the same now irrespective of whether extern crate is included or not.\nThere is still a minor improvement from adding . Less bad compared to though.", "positive_passages": [{"docid": "doc-en-rust-887a3ce2481d125913767ab0a1d337c3f6294683378af72c555ec00cf85dbae3", "text": "declare_lint! { pub UNUSED_EXTERN_CRATES, Warn, Allow, \"extern crates that are never used\" }", "commid": "rust_pr_44825"}], "negative_passages": []} {"query_id": "q-en-rust-2d8db29441fd3de139df7d18a731cc7263283de0404428de8779d93e4d4510ef", "query": "Was looking at the printout of llvm passes when I noticed that the functions that was inlined everywhere, and due to this was dead code, was optimized more after inlining. So as a test I at the line and this reduces the compile time of release build in racer by an average of 23%. I do not know if this is a good place to add this extra pass but maybe someone that knows how llvm works can use this information to speedup the compilation of release builds. was also wondering how this works in MIR I see that there is an inline pass but I can not find any dead code elimination\nthe stage1 compilation of rustc went from 828.89 to 691.75 secs with this fix on my computer but that is only a 15% improvement\n15% is pretty significant.\nWow, great work!\nThat's some nice improvements! If you feel up to it, go ahead an submit a pull request with your suggested changes. That way someone with experience with LLVM is guaranteed to look at the patch and it won't get lost.\nI believe the PR should be targeted at\n:+1: If you want to push through this patch the general process works like this: Submit the patch against the currently active branch of (which is at the moment) Wait for it to get reviewed & merged Submit a pull request against which updates the submodule. 
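The `unused_extern_crates` scenario described above — a 2015-edition crate that pulls in a derive crate only for its macros — looks roughly like the following. The crate names are the usual Serde ones and are shown purely for illustration; this assumes `serde` and `serde_derive` are listed as dependencies and does not build without them:

```rust
// 2015-edition sketch: the extern crate line is only "used" through the
// custom derive, which is what made the unused_extern_crates lint awkward.
#[macro_use]
extern crate serde_derive;

#[derive(Serialize)]
struct Config {
    name: String,
}

fn main() {
    let _ = Config { name: String::from("example") };
}
```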
If you have any questions about the process or want someone else to do this, just mention it here or contact me or someone else on one of the .\ncc\ncc\nIdeally, the process should include discussion with upstream LLVM too.\nUnfortunately it looks like we were a bit too eager to close this in :(\nFrom what I have learned from the llvm maintainers I think that we have received the improvement that can be expected. The to good to be true improvements I measured was probably due to less optimizations was possible and resulted in less optimized code.", "positive_passages": [{"docid": "doc-en-rust-baa2502cdf6d39f365aba4b73251991ec79f1fc87d118d581a2a0a4fc82530c9", "text": " Subproject commit d9e7d2696e41983b6b5a0b4fac604b4e548a84d3 Subproject commit c7a16bd57c2a9c643a52f0cebecdaf0b6a996da1 ", "commid": "rust_pr_45054"}], "negative_passages": []} {"query_id": "q-en-rust-82451d4ef2e2ae14065e10254414f657d76057960bb24f89a20f8c903648cd47", "query": "The standard library provides a way to get the process id of a child process with , but doesn't have any way to get the current processes id (the equivalent of ). It is possible to do this with , but it seems to me that this should be something that is part of the standard library, especially since it already has cross-platform support for process ids of child processes.\nSeems reasonable to add!\non Windows.\nReopening as the tracking issue.\nWhile porting Rust over to , a sandboxed UNIX-like runtime environment, I ran into the issue that there is no way for me to implement this function. As we're aiming to use CloudABI in distributed settings, there is no traditional 16/32-bit process identifier. Instead, processes may be identified by a UUID. This is useful, as integer process identifiers can easily collide when dealing with many instances of a job (due to the birthday paradox). As this function was only introduced recently, would it still be possible to alter it to return an opaque, but displayable data type instead? That said, almost all of the other functions provided by this module are also broken on CloudABI. The traditional model doesn't lend well to sandboxing. Maybe it makes more sense to disable on CloudABI entirely.\nI'm opposed a completely opaque data type, because that limits the usefulness. For example, that makes it difficult to use in C (or dbus) that require a PID. However, I think it would be alright if the it was a semi-opaque type with OS-specific API's to get the underlying representation (in std::os::unix, etc). However, that would break consistency with , which is already stabilized. I'm not sure how you would deal with that either.\nAs has mentioned, the ship has already sailed here due to . More generally, though. I'm not sure it makes sense to avoid supporting getpid of all things because some platform decided not to implement it.\nOne thing that would be low-hanging fruit would be to at least have some kind of type alias for this. On UNIX/Windows, it could be defined as . On CloudABI, it could be a string. That way, existing code that uses continues to build on existing platforms, whereas less conventional platforms can choose to implement this function differently.\nGiven this: and this: I\u2019m inclined to stabilize as-is. (Returning .) fcp merge Hopefully the can eventually help a CloudABI port to \"cleanly\" implement this differently, or not at all.\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. 
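The API being requested above eventually landed as `std::process::id`. A minimal usage sketch next to the long-standing `Child::id`; the `sleep` command is only an illustrative child process and assumes a Unix-like environment:

```rust
use std::process::{self, Command};

fn main() -> std::io::Result<()> {
    // The calling process's own OS-assigned identifier.
    println!("my pid: {}", process::id());

    // Child process ids were already available before this issue was filed.
    let child = Command::new("sleep").arg("1").spawn()?;
    println!("child pid: {}", child.id());
    Ok(())
}
```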
Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-6689207a51413b89f3ac01f4a89275b0e6b69d068cae9e689582ce00d5e1d740", "text": "unsafe { ::sys::abort_internal() }; } /// Returns the OS-assigned process identifier associated with this process. /// /// # Examples /// /// Basic usage: /// /// ```no_run /// #![feature(getpid)] /// use std::process; /// /// println!(\"My pid is {}\", process::id()); /// ``` /// /// #[unstable(feature = \"getpid\", issue = \"44971\", reason = \"recently added\")] pub fn id() -> u32 { ::sys::os::getpid() } #[cfg(all(test, not(target_os = \"emscripten\")))] mod tests { use io::prelude::*;", "commid": "rust_pr_45059"}], "negative_passages": []} {"query_id": "q-en-rust-82451d4ef2e2ae14065e10254414f657d76057960bb24f89a20f8c903648cd47", "query": "The standard library provides a way to get the process id of a child process with , but doesn't have any way to get the current processes id (the equivalent of ). It is possible to do this with , but it seems to me that this should be something that is part of the standard library, especially since it already has cross-platform support for process ids of child processes.\nSeems reasonable to add!\non Windows.\nReopening as the tracking issue.\nWhile porting Rust over to , a sandboxed UNIX-like runtime environment, I ran into the issue that there is no way for me to implement this function. As we're aiming to use CloudABI in distributed settings, there is no traditional 16/32-bit process identifier. Instead, processes may be identified by a UUID. This is useful, as integer process identifiers can easily collide when dealing with many instances of a job (due to the birthday paradox). As this function was only introduced recently, would it still be possible to alter it to return an opaque, but displayable data type instead? That said, almost all of the other functions provided by this module are also broken on CloudABI. The traditional model doesn't lend well to sandboxing. Maybe it makes more sense to disable on CloudABI entirely.\nI'm opposed a completely opaque data type, because that limits the usefulness. For example, that makes it difficult to use in C (or dbus) that require a PID. However, I think it would be alright if the it was a semi-opaque type with OS-specific API's to get the underlying representation (in std::os::unix, etc). However, that would break consistency with , which is already stabilized. I'm not sure how you would deal with that either.\nAs has mentioned, the ship has already sailed here due to . More generally, though. I'm not sure it makes sense to avoid supporting getpid of all things because some platform decided not to implement it.\nOne thing that would be low-hanging fruit would be to at least have some kind of type alias for this. On UNIX/Windows, it could be defined as . On CloudABI, it could be a string. That way, existing code that uses continues to build on existing platforms, whereas less conventional platforms can choose to implement this function differently.\nGiven this: and this: I\u2019m inclined to stabilize as-is. (Returning .) 
fcp merge Hopefully the can eventually help a CloudABI port to \"cleanly\" implement this differently, or not at all.\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-bd4d7b5f9913fcaf374dcc7cf818c4b5ea0b37b0e70e57356717a965aa55ab83", "text": "let _ = syscall::exit(code as usize); unreachable!(); } pub fn getpid() -> u32 { syscall::getpid().unwrap() as u32 } ", "commid": "rust_pr_45059"}], "negative_passages": []} {"query_id": "q-en-rust-82451d4ef2e2ae14065e10254414f657d76057960bb24f89a20f8c903648cd47", "query": "The standard library provides a way to get the process id of a child process with , but doesn't have any way to get the current processes id (the equivalent of ). It is possible to do this with , but it seems to me that this should be something that is part of the standard library, especially since it already has cross-platform support for process ids of child processes.\nSeems reasonable to add!\non Windows.\nReopening as the tracking issue.\nWhile porting Rust over to , a sandboxed UNIX-like runtime environment, I ran into the issue that there is no way for me to implement this function. As we're aiming to use CloudABI in distributed settings, there is no traditional 16/32-bit process identifier. Instead, processes may be identified by a UUID. This is useful, as integer process identifiers can easily collide when dealing with many instances of a job (due to the birthday paradox). As this function was only introduced recently, would it still be possible to alter it to return an opaque, but displayable data type instead? That said, almost all of the other functions provided by this module are also broken on CloudABI. The traditional model doesn't lend well to sandboxing. Maybe it makes more sense to disable on CloudABI entirely.\nI'm opposed a completely opaque data type, because that limits the usefulness. For example, that makes it difficult to use in C (or dbus) that require a PID. However, I think it would be alright if the it was a semi-opaque type with OS-specific API's to get the underlying representation (in std::os::unix, etc). However, that would break consistency with , which is already stabilized. I'm not sure how you would deal with that either.\nAs has mentioned, the ship has already sailed here due to . More generally, though. I'm not sure it makes sense to avoid supporting getpid of all things because some platform decided not to implement it.\nOne thing that would be low-hanging fruit would be to at least have some kind of type alias for this. On UNIX/Windows, it could be defined as . On CloudABI, it could be a string. That way, existing code that uses continues to build on existing platforms, whereas less conventional platforms can choose to implement this function differently.\nGiven this: and this: I\u2019m inclined to stabilize as-is. (Returning .) fcp merge Hopefully the can eventually help a CloudABI port to \"cleanly\" implement this differently, or not at all.\nTeam member has proposed to merge this. 
The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-464fbe097d91df31b9ed18785015e90a866e60fdd66bc13752456982936140b7", "text": "pub fn exit(code: i32) -> ! { unsafe { libc::exit(code as c_int) } } pub fn getpid() -> u32 { unsafe { libc::getpid() as u32 } } ", "commid": "rust_pr_45059"}], "negative_passages": []} {"query_id": "q-en-rust-82451d4ef2e2ae14065e10254414f657d76057960bb24f89a20f8c903648cd47", "query": "The standard library provides a way to get the process id of a child process with , but doesn't have any way to get the current processes id (the equivalent of ). It is possible to do this with , but it seems to me that this should be something that is part of the standard library, especially since it already has cross-platform support for process ids of child processes.\nSeems reasonable to add!\non Windows.\nReopening as the tracking issue.\nWhile porting Rust over to , a sandboxed UNIX-like runtime environment, I ran into the issue that there is no way for me to implement this function. As we're aiming to use CloudABI in distributed settings, there is no traditional 16/32-bit process identifier. Instead, processes may be identified by a UUID. This is useful, as integer process identifiers can easily collide when dealing with many instances of a job (due to the birthday paradox). As this function was only introduced recently, would it still be possible to alter it to return an opaque, but displayable data type instead? That said, almost all of the other functions provided by this module are also broken on CloudABI. The traditional model doesn't lend well to sandboxing. Maybe it makes more sense to disable on CloudABI entirely.\nI'm opposed a completely opaque data type, because that limits the usefulness. For example, that makes it difficult to use in C (or dbus) that require a PID. However, I think it would be alright if the it was a semi-opaque type with OS-specific API's to get the underlying representation (in std::os::unix, etc). However, that would break consistency with , which is already stabilized. I'm not sure how you would deal with that either.\nAs has mentioned, the ship has already sailed here due to . More generally, though. I'm not sure it makes sense to avoid supporting getpid of all things because some platform decided not to implement it.\nOne thing that would be low-hanging fruit would be to at least have some kind of type alias for this. On UNIX/Windows, it could be defined as . On CloudABI, it could be a string. That way, existing code that uses continues to build on existing platforms, whereas less conventional platforms can choose to implement this function differently.\nGiven this: and this: I\u2019m inclined to stabilize as-is. (Returning .) fcp merge Hopefully the can eventually help a CloudABI port to \"cleanly\" implement this differently, or not at all.\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. 
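Before `std::process::id` existed, the workaround mentioned in the report was to call the platform API directly. A sketch of that route, assuming a Unix target with the `libc` crate declared as a dependency — essentially what the `sys::unix` passage above wires up internally:

```rust
// Unix-only sketch; `libc` must be added to Cargo.toml for this to build.
fn current_pid() -> u32 {
    // getpid(2) cannot fail, so the cast is the only thing to be careful with.
    unsafe { libc::getpid() as u32 }
}

fn main() {
    println!("pid: {}", current_pid());
}
```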
Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-4e73b10618b3fa796517a7b92ac93dd74017ea9af61f272783e029ae312885a5", "text": "unsafe { c::ExitProcess(code as c::UINT) } } pub fn getpid() -> u32 { unsafe { c::GetCurrentProcessId() as u32 } } #[cfg(test)] mod tests { use io::Error;", "commid": "rust_pr_45059"}], "negative_passages": []} {"query_id": "q-en-rust-797cbc37bcfe1bbd58dcfd8367b3e093373ebb74c75cce88587d6575d30812b1", "query": "is only parameterized on lifetime. The types of the captured arguments are erased in the contained . Nothing restricts the aggregate from being or , so by OIBIT it's both. Thus this compiles: I'm not sure if there are any realistic ways to accidentally abuse this, but here's a deliberate example. The spawned thread will read the through the arguments, even while the main thread modifies it.", "positive_passages": [{"docid": "doc-en-rust-8dc8aba88e6c7bb7b49d451eaf4a768912d7ba8f194d7ddd95bcbfdb477e8165", "text": "struct Void { _priv: (), /// Erases all oibits, because `Void` erases the type of the object that /// will be used to produce formatted output. Since we do not know what /// oibits the real types have (and they can have any or none), we need to /// take the most conservative approach and forbid all oibits. /// /// It was added after #45197 showed that one could share a `!Sync` /// object across threads by passing it into `format_args!`. _oibit_remover: PhantomData<*mut Fn()>, } /// This struct represents the generic \"argument\" which is taken by the Xprintf", "commid": "rust_pr_45198"}], "negative_passages": []} {"query_id": "q-en-rust-797cbc37bcfe1bbd58dcfd8367b3e093373ebb74c75cce88587d6575d30812b1", "query": "is only parameterized on lifetime. The types of the captured arguments are erased in the contained . Nothing restricts the aggregate from being or , so by OIBIT it's both. Thus this compiles: I'm not sure if there are any realistic ways to accidentally abuse this, but here's a deliberate example. The spawned thread will read the through the arguments, even while the main thread modifies it.", "positive_passages": [{"docid": "doc-en-rust-4ab1218c247aecbed17399fab7b69d7225716a6b382463c6e438cae03482d555", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. fn send(_: T) {} fn sync(_: T) {} fn main() { // `Cell` is not `Sync`, so `&Cell` is neither `Sync` nor `Send`, // `std::fmt::Arguments` used to forget this... let c = std::cell::Cell::new(42); send(format_args!(\"{:?}\", c)); sync(format_args!(\"{:?}\", c)); } ", "commid": "rust_pr_45198"}], "negative_passages": []} {"query_id": "q-en-rust-797cbc37bcfe1bbd58dcfd8367b3e093373ebb74c75cce88587d6575d30812b1", "query": "is only parameterized on lifetime. The types of the captured arguments are erased in the contained . Nothing restricts the aggregate from being or , so by OIBIT it's both. 
Thus this compiles: I'm not sure if there are any realistic ways to accidentally abuse this, but here's a deliberate example. The spawned thread will read the through the arguments, even while the main thread modifies it.", "positive_passages": [{"docid": "doc-en-rust-a5018cb26734374b5d20f661856821930044342a6bd820645c2e5a37832134d5", "text": " error[E0277]: the trait bound `*mut std::ops::Fn() + 'static: std::marker::Sync` is not satisfied in `[std::fmt::ArgumentV1<'_>]` --> $DIR/send-sync.rs:18:5 | 18 | send(format_args!(\"{:?}\", c)); | ^^^^ `*mut std::ops::Fn() + 'static` cannot be shared between threads safely | = help: within `[std::fmt::ArgumentV1<'_>]`, the trait `std::marker::Sync` is not implemented for `*mut std::ops::Fn() + 'static` = note: required because it appears within the type `std::marker::PhantomData<*mut std::ops::Fn() + 'static>` = note: required because it appears within the type `core::fmt::Void` = note: required because it appears within the type `&core::fmt::Void` = note: required because it appears within the type `std::fmt::ArgumentV1<'_>` = note: required because it appears within the type `[std::fmt::ArgumentV1<'_>]` = note: required because of the requirements on the impl of `std::marker::Send` for `&[std::fmt::ArgumentV1<'_>]` = note: required because it appears within the type `std::fmt::Arguments<'_>` = note: required by `send` error[E0277]: the trait bound `*mut std::ops::Fn() + 'static: std::marker::Sync` is not satisfied in `std::fmt::Arguments<'_>` --> $DIR/send-sync.rs:19:5 | 19 | sync(format_args!(\"{:?}\", c)); | ^^^^ `*mut std::ops::Fn() + 'static` cannot be shared between threads safely | = help: within `std::fmt::Arguments<'_>`, the trait `std::marker::Sync` is not implemented for `*mut std::ops::Fn() + 'static` = note: required because it appears within the type `std::marker::PhantomData<*mut std::ops::Fn() + 'static>` = note: required because it appears within the type `core::fmt::Void` = note: required because it appears within the type `&core::fmt::Void` = note: required because it appears within the type `std::fmt::ArgumentV1<'_>` = note: required because it appears within the type `[std::fmt::ArgumentV1<'_>]` = note: required because it appears within the type `&[std::fmt::ArgumentV1<'_>]` = note: required because it appears within the type `std::fmt::Arguments<'_>` = note: required by `sync` error: aborting due to 2 previous errors ", "commid": "rust_pr_45198"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-622573683ffa048603de927952390f1548fdeb6fcf76a9e380e950202a1880f7", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] #[cfg(dox)] macro_rules! format_args { ($fmt:expr, $($args:tt)*) => ({ /* compiler built-in */ }) } macro_rules! 
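The fix shown above opts `fmt::Arguments` out of the auto traits by embedding a `PhantomData` of a raw pointer. The same trick in isolation, as a small standalone sketch:

```rust
use std::marker::PhantomData;

// Raw pointers are neither Send nor Sync, so a PhantomData of one strips
// those auto impls from the containing type without storing any extra data.
struct NotAutoSendSync<'a> {
    payload: &'a str,
    _not_send_sync: PhantomData<*mut ()>,
}

#[allow(dead_code)]
fn assert_send<T: Send>() {}
#[allow(dead_code)]
fn assert_sync<T: Sync>() {}

fn main() {
    let v = NotAutoSendSync { payload: "hi", _not_send_sync: PhantomData };
    println!("{}", v.payload);
    // Uncommenting either line now fails to compile, which is the point:
    // assert_send::<NotAutoSendSync<'_>>();
    // assert_sync::<NotAutoSendSync<'_>>();
}
```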
format_args { ($fmt:expr) => ({ /* compiler built-in */ }); ($fmt:expr, $($args:tt)*) => ({ /* compiler built-in */ }); } /// Inspect an environment variable at compile time. ///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-5942075405f0fed830a33197cb2a3bf73cbb9fc59855920db87021817abb3dd9", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] #[cfg(dox)] macro_rules! env { ($name:expr) => ({ /* compiler built-in */ }) } macro_rules! env { ($name:expr) => ({ /* compiler built-in */ }); ($name:expr,) => ({ /* compiler built-in */ }); } /// Optionally inspect an environment variable at compile time. ///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-4e9486c540b1cb94190446e9a783c27409b4f31ec5379fc8440ae81704492f70", "text": "#[macro_export] #[cfg(dox)] macro_rules! concat_idents { ($($e:ident),*) => ({ /* compiler built-in */ }) ($($e:ident),*) => ({ /* compiler built-in */ }); ($($e:ident,)*) => ({ /* compiler built-in */ }); } /// Concatenates literals into a static string slice.", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-53c477a99aec5d655d4b4c602f949ff7be16a3c4f6c3f66930b1271eaf79f336", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] #[cfg(dox)] macro_rules! concat { ($($e:expr),*) => ({ /* compiler built-in */ }) } macro_rules! concat { ($($e:expr),*) => ({ /* compiler built-in */ }); ($($e:expr,)*) => ({ /* compiler built-in */ }); } /// A macro which expands to the line number on which it was invoked. 
///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-749faed53f7c073ff9e886fb24d1d8c82f31b7906ea6de0745044b1cff9c119d", "text": "/// ``` #[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] macro_rules! format_args { ($fmt:expr, $($args:tt)*) => ({ /* compiler built-in */ }) } macro_rules! format_args { ($fmt:expr) => ({ /* compiler built-in */ }); ($fmt:expr, $($args:tt)*) => ({ /* compiler built-in */ }); } /// Inspect an environment variable at compile time. ///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-16610e2d2bdbf4173a2f8b4c9736bbf426be4a93fb7911378a6de6a82ceba336", "text": "/// ``` #[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] macro_rules! env { ($name:expr) => ({ /* compiler built-in */ }) } macro_rules! env { ($name:expr) => ({ /* compiler built-in */ }); ($name:expr,) => ({ /* compiler built-in */ }); } /// Optionally inspect an environment variable at compile time. ///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-24d8cc121302451a85c53437d038b377a1fa32ea82f1fe503f7f1f2ba1493f9d", "text": "#[unstable(feature = \"concat_idents_macro\", issue = \"29599\")] #[macro_export] macro_rules! 
concat_idents { ($($e:ident),*) => ({ /* compiler built-in */ }) ($($e:ident),*) => ({ /* compiler built-in */ }); ($($e:ident,)*) => ({ /* compiler built-in */ }); } /// Concatenates literals into a static string slice.", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-fe74a1d27f1fac03f4a1fb86de5ee73815eb8a77922d2b9c9d097e2470a0c86e", "query": "Example: The macro is when in reality, the following call is legal (): I believe this is because . and (edit: also ) are in a similar boat (they also support trailing commas in places where the documented pattern suggests they would not). In all fairness this is not a big deal, and even has the conceivable benefit that the documentation is cleaner without such laser precision. I just thought it was worth making an issue for.\nit's all good! These should be correct. Happy to take PRs.\nclosing since fix was merged", "positive_passages": [{"docid": "doc-en-rust-3176ec723acb8e2ad906b2b78d64dd64704083f921c032d4cb3a34b942c8d4c3", "text": "/// ``` #[stable(feature = \"rust1\", since = \"1.0.0\")] #[macro_export] macro_rules! concat { ($($e:expr),*) => ({ /* compiler built-in */ }) } macro_rules! concat { ($($e:expr),*) => ({ /* compiler built-in */ }); ($($e:expr,)*) => ({ /* compiler built-in */ }); } /// A macro which expands to the line number on which it was invoked. ///", "commid": "rust_pr_46260"}], "negative_passages": []} {"query_id": "q-en-rust-d32f19abff5ccf7a5342b6d7208d67025d685a3da3d7b544f4725ad062859215", "query": "We this to OccupiedEntry a while ago, but weirdly not to the map itself.\nShould we also add ?\nLooks good to me to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-80f90d0adb6a2cb0792004b33cece8a498c6c9eb678b501176eed0c396525e2a", "text": "self.search_mut(k).into_occupied_bucket().map(|bucket| pop_internal(bucket).1) } /// Removes a key from the map, returning the stored key and value if the /// key was previously in the map. /// /// The key may be any borrowed form of the map's key type, but /// [`Hash`] and [`Eq`] on the borrowed form *must* match those for /// the key type. /// /// [`Eq`]: ../../std/cmp/trait.Eq.html /// [`Hash`]: ../../std/hash/trait.Hash.html /// /// # Examples /// /// ``` /// #![feature(hash_map_remove_entry)] /// use std::collections::HashMap; /// /// # fn main() { /// let mut map = HashMap::new(); /// map.insert(1, \"a\"); /// assert_eq!(map.remove_entry(&1), Some((1, \"a\"))); /// assert_eq!(map.remove(&1), None); /// # } /// ``` #[unstable(feature = \"hash_map_remove_entry\", issue = \"46344\")] pub fn remove_entry(&mut self, k: &Q) -> Option<(K, V)> where K: Borrow, Q: Hash + Eq { if self.table.size() == 0 { return None; } self.search_mut(k) .into_occupied_bucket() .map(|bucket| { let (k, v, _) = pop_internal(bucket); (k, v) }) } /// Retains only the elements specified by the predicate. 
/// /// In other words, remove all pairs `(k, v)` such that `f(&k,&mut v)` returns `false`.", "commid": "rust_pr_47259"}], "negative_passages": []} {"query_id": "q-en-rust-d32f19abff5ccf7a5342b6d7208d67025d685a3da3d7b544f4725ad062859215", "query": "We this to OccupiedEntry a while ago, but weirdly not to the map itself.\nShould we also add ?\nLooks good to me to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-bf13118b0889f90f379cb96a04f50df00f9b52fcea5b06fd3ce56fdc00ee8b74", "text": "} #[test] fn test_pop() { fn test_remove() { let mut m = HashMap::new(); m.insert(1, 2); assert_eq!(m.remove(&1), Some(2));", "commid": "rust_pr_47259"}], "negative_passages": []} {"query_id": "q-en-rust-d32f19abff5ccf7a5342b6d7208d67025d685a3da3d7b544f4725ad062859215", "query": "We this to OccupiedEntry a while ago, but weirdly not to the map itself.\nShould we also add ?\nLooks good to me to stabilize. fcp merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period is now complete.", "positive_passages": [{"docid": "doc-en-rust-ef62f8d48d655553501497a084ca68078e2417aa5c9ea7860a55f641c86c6346", "text": "} #[test] fn test_remove_entry() { let mut m = HashMap::new(); m.insert(1, 2); assert_eq!(m.remove_entry(&1), Some((1, 2))); assert_eq!(m.remove(&1), None); } #[test] fn test_iterate() { let mut m = HashMap::with_capacity(4); for i in 0..32 {", "commid": "rust_pr_47259"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. 
So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-1b6498d64f2df516330c11510c93a121c360473cb68481dc5074bd6b6fc38f00", "text": "unsafe { let g = get_static(cx, def_id); let v = match ::mir::codegen_static_initializer(cx, def_id) { let (v, alloc) = match ::mir::codegen_static_initializer(cx, def_id) { Ok(v) => v, // Error has already been reported Err(_) => return,", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-58034db7c293b7c76a26bf6916ea560733c6aa5f5366b86b932b8c5986bf8798", "text": "if attr::contains_name(attrs, \"thread_local\") { llvm::set_thread_local_mode(g, cx.tls_model); // Do not allow LLVM to change the alignment of a TLS on macOS. // // By default a global's alignment can be freely increased. // This allows LLVM to generate more performant instructions // e.g. using load-aligned into a SIMD register. // // However, on macOS 10.10 or below, the dynamic linker does not // respect any alignment given on the TLS (radar 24221680). // This will violate the alignment assumption, and causing segfault at runtime. // // This bug is very easy to trigger. In `println!` and `panic!`, // the `LOCAL_STDOUT`/`LOCAL_STDERR` handles are stored in a TLS, // which the values would be `mem::replace`d on initialization. // The implementation of `mem::replace` will use SIMD // whenever the size is 32 bytes or higher. LLVM notices SIMD is used // and tries to align `LOCAL_STDOUT`/`LOCAL_STDERR` to a 32-byte boundary, // which macOS's dyld disregarded and causing crashes // (see issues #51794, #51758, #50867, #48866 and #44056). // // To workaround the bug, we trick LLVM into not increasing // the global's alignment by explicitly assigning a section to it // (equivalent to automatically generating a `#[link_section]` attribute). // See the comment in the `GlobalValue::canIncreaseAlignment()` function // of `lib/IR/Globals.cpp` for why this works. // // When the alignment is not increased, the optimized `mem::replace` // will use load-unaligned instructions instead, and thus avoiding the crash. // // We could remove this hack whenever we decide to drop macOS 10.10 support. 
if cx.tcx.sess.target.target.options.is_like_osx { let sect_name = if alloc.bytes.iter().all(|b| *b == 0) { CStr::from_bytes_with_nul_unchecked(b\"__DATA,__thread_bss0\") } else { CStr::from_bytes_with_nul_unchecked(b\"__DATA,__thread_data0\") }; llvm::LLVMSetSection(g, sect_name.as_ptr()); } } base::set_link_section(cx, g, attrs);", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-35c67e5fe57b9186bb62742fac6700f1804c45f3d708d86104451fe19ecfb640", "text": "pub fn codegen_static_initializer<'a, 'tcx>( cx: &CodegenCx<'a, 'tcx>, def_id: DefId) -> Result>> -> Result<(ValueRef, &'tcx Allocation), Lrc>> { let instance = ty::Instance::mono(cx.tcx, def_id); let cid = GlobalId {", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. 
So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-f8c50fc4fff2dfc4e0d2f788d548d9edc9ee7559afa8745fc145e4b4f9e989d0", "text": "ConstValue::ByRef(alloc, n) if n.bytes() == 0 => alloc, _ => bug!(\"static const eval returned {:#?}\", static_), }; Ok(const_alloc_to_llvm(cx, alloc)) Ok((const_alloc_to_llvm(cx, alloc), alloc)) } impl<'a, 'tcx> FunctionCx<'a, 'tcx> {", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-fa4fd92f0258c5365df578ba3ef0b6ee40ae91c9eb46c9b13ec017ec59d27c9c", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // ignore-tidy-linelength // only-macos // no-system-llvm // min-llvm-version 6.0 // compile-flags: -O #![crate_type = \"rlib\"] #![feature(thread_local)] // CHECK: @STATIC_VAR_1 = internal thread_local unnamed_addr global <{ [32 x i8] }> zeroinitializer, section \"__DATA,__thread_bss\", align 4 #[no_mangle] #[allow(private_no_mangle_statics)] #[thread_local] static mut STATIC_VAR_1: [u32; 8] = [0; 8]; // CHECK: @STATIC_VAR_2 = internal thread_local unnamed_addr global <{ [32 x i8] }> <{{[^>]*}}>, section \"__DATA,__thread_data\", align 4 #[no_mangle] #[allow(private_no_mangle_statics)] #[thread_local] static mut STATIC_VAR_2: [u32; 8] = [4; 8]; #[no_mangle] pub unsafe fn f(x: &mut [u32; 8]) { std::mem::swap(x, &mut STATIC_VAR_1) } #[no_mangle] pub unsafe fn g(x: &mut [u32; 8]) { std::mem::swap(x, &mut STATIC_VAR_2) } ", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-f313454093b8912ec365038042533954d203d0648e185eb0b4c56a8e6ba007a1", "query": "Here's a code sample, narrowed down a bit from my program: When built with and run, it gets SIGSEGV on my Mac. (I poked around a little bit with lldb and it looks like is returning something we can't dereference. I couldn't investigate thoroughly because it wasn't showing source line information for whatever reason.) Without the , it doesn't segfault. Here's an lldb session with some basic information about the crash: I'm on macOS 10.12.6. 
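The workaround in the diff pins the TLS globals to an explicit Mach-O section, and the comment notes this is equivalent to auto-generating a `#[link_section]` attribute. For reference, that attribute looks like the following in user code; the section name here is made up for illustration and assumes an ELF target (Mach-O expects a "segment,section" pair, and newer editions require the `#[unsafe(link_section = ...)]` spelling):

```rust
// Places this static in a custom linker section instead of the default one.
#[used]
#[link_section = ".rustc_example_meta"]
static MARKER: [u8; 4] = *b"mark";

fn main() {
    println!("{:?}", MARKER);
}
```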
target-cpu=native on my machine seems to mean haswell, according to .\nSeems to be macOS-specific, can't reproduce this on Linux.\nI can repro on my Mac. Early 2015 Macbook Pro, running macOS Sierra. It did not crash on my 1.18.0 install; first starts crashing on installed via rustup:\nThis may be which was as an LLVM bug.\ntriage: P-medium Seems likely to be an LLVM bug. If more data or dups turn up, we can re-evaluate.\nI am experiencing something similar to this when compiling native on OSX. OSX [Edit] problem does not exists in nightly v 1.24. So perhaps a regression in LLVM 6.0?", "positive_passages": [{"docid": "doc-en-rust-94142aa0f79a6a7a31a5c4cfa05a89738f1e16b3c035a940a1d86ddd1a8b57a3", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // only-x86_64 // no-prefer-dynamic // compile-flags: -Ctarget-feature=+avx -Clto fn main() {} ", "commid": "rust_pr_51828"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-be8d7951afd48320d53ad23b8bb838a4ad48231ebb1950b9ab740b3ab4053c9c", "text": "self.cx } fn do_not_inline(&mut self, _llret: RValue<'gcc>) { fn apply_attrs_to_cleanup_callsite(&mut self, _llret: RValue<'gcc>) { unimplemented!(); }", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? 
The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-05092cfbf38dde0ef4a8d3ca0517c1a819cd860ee738a3a61089243a9d363d83", "text": "unsafe { llvm::LLVMBuildZExt(self.llbuilder, val, dest_ty, UNNAMED) } } fn do_not_inline(&mut self, llret: &'ll Value) { llvm::Attribute::NoInline.apply_callsite(llvm::AttributePlace::Function, llret); fn apply_attrs_to_cleanup_callsite(&mut self, llret: &'ll Value) { // Cleanup is always the cold path. llvm::Attribute::Cold.apply_callsite(llvm::AttributePlace::Function, llret); // In LLVM versions with deferred inlining (currently, system LLVM < 14), // inlining drop glue can lead to exponential size blowup, see #41696 and #92110. if !llvm_util::is_rust_llvm() && llvm_util::get_version() < (14, 0, 0) { llvm::Attribute::NoInline.apply_callsite(llvm::AttributePlace::Function, llret); } } }", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. 
This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-c318b0b57b40398c3558d6bd1976646172eb9d62fab7464e6c5ee8e01a019b38", "text": "pub fn LLVMRustVersionMinor() -> u32; pub fn LLVMRustVersionPatch() -> u32; pub fn LLVMRustIsRustLLVM() -> bool; pub fn LLVMRustAddModuleFlag(M: &Module, name: *const c_char, value: u32); pub fn LLVMRustMetadataAsValue<'a>(C: &'a Context, MD: &'a Metadata) -> &'a Value;", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-56bb8d355ca0fe63a60a8f6da31993ec2c262253d74a7385973e8e3518a11c19", "text": "} } /// Returns `true` if this LLVM is Rust's bundled LLVM (and not system LLVM). pub fn is_rust_llvm() -> bool { // Can be called without initializing LLVM unsafe { llvm::LLVMRustIsRustLLVM() } } pub fn print_passes() { // Can be called without initializing LLVM unsafe {", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. 
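The issue's original snippet was stripped from the text above; the sketch below, modeled on the `check_no_escape_in_landingpad` codegen test added in rust_pr_92419, shows the shape of the missed optimization under discussion: two distinct heap allocations whose address comparison should fold to `false`, but only if the pointers are not treated as escaping into the unwind cleanup path. The function name and callback parameter are illustrative.

```rust
// Modeled on the `check_no_escape_in_landingpad` test from rust_pr_92419.
pub fn no_escape_in_landingpad(f: fn()) {
    let x = &*Box::new(0);
    let y = &*Box::new(0);
    // Two live allocations can never share an address, so with optimizations
    // this branch should be folded away -- unless LLVM considers `x` and `y`
    // to escape into the cleanup landing pad that drops the boxes when `f`
    // unwinds.
    if x as *const _ == y as *const _ {
        f();
    }
}
```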
In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-05ce6c97143dc4c1d3ee963fcdd27d7180210c746f109acc313c9db3db327650", "text": "let llret = bx.call(fn_ty, fn_ptr, &llargs, self.funclet(fx)); bx.apply_attrs_callsite(&fn_abi, llret); if fx.mir[self.bb].is_cleanup { // Cleanup is always the cold path. Don't inline // drop glue. Also, when there is a deeply-nested // struct, there are \"symmetry\" issues that cause // exponential inlining - see issue #41696. bx.do_not_inline(llret); bx.apply_attrs_to_cleanup_callsite(llret); } if let Some((ret_dest, target)) = destination {", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. 
This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-7744edff9c1fa950f3e15c72095cec46419107f6f2d2fac2890c714d122c8b27", "text": ") -> Self::Value; fn zext(&mut self, val: Self::Value, dest_ty: Self::Type) -> Self::Value; fn do_not_inline(&mut self, llret: Self::Value); fn apply_attrs_to_cleanup_callsite(&mut self, llret: Self::Value); }", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-a269d99deba67795a2ab1583f8defd84947d75c42143bfb1f14ac7a771cc5ea7", "text": "extern \"C\" uint32_t LLVMRustVersionMajor() { return LLVM_VERSION_MAJOR; } extern \"C\" bool LLVMRustIsRustLLVM() { #ifdef LLVM_RUSTLLVM return true; #else return false; #endif } extern \"C\" void LLVMRustAddModuleFlag(LLVMModuleRef M, const char *Name, uint32_t Value) { unwrap(M)->addModuleFlag(Module::Warning, Name, Value);", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. 
In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-c4a7ecc8f3457bb1e4650efb6538f97718d6434779625e434ef1cc21ed5d6164", "text": " // no-system-llvm: needs #92110 // compile-flags: -Cno-prepopulate-passes #![crate_type = \"lib\"] // This test checks that drop calls in unwind landing pads // get the `cold` attribute. // CHECK-LABEL: @check_cold // CHECK: call void {{.+}}drop_in_place{{.+}} [[ATTRIBUTES:#[0-9]+]] // CHECK: attributes [[ATTRIBUTES]] = { cold } #[no_mangle] pub fn check_cold(f: fn(), x: Box) { // this may unwind f(); } ", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-296b1e606ab01f2532c8bc184e50fdc566885b4fcf0c18933f8699bb24888c16", "query": "The following code gets optimized as expected: However, if I let the code call an unknown function instead of return, the optimization disappears: That seems wrong, why should the optimization stop kicking in? The corresponding C++ code does not have any problem: (I briefly thought maybe unwinding is the problem, but C++ should have the same kind of unwinding here.)\nsuggested I try , and indeed, . However, the optimization is sound even with unwinding panics, so it should happen either way.\nAha! I think I got it. In the IR for the Rust program, the unwinding branch calls with the attribute. So LLVM does not look through this call at all, and (I guess) it considers the pointer passed to it escaped. The pointer equality folding only happens if the pointer does not get escaped, so LLVM can guarantee that all comparisons of this pointer are folded consistently. So, to fix this, we'd have to remove this -- at least for where does not itself implement .\nCC\nCould this be preventing other optimisations than pointer equality folding? Cc\ndefinitely. My guess is that alias analysis is being overly conservative here and is declaring pointers to be , rather than , which what they definitely are. 
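The same PR also tests the landing-pad side of the problem. The sketch below is a lightly renamed version of its `check_eliminate_noop_drop` codegen test and shows why keeping the cleanup call un-inlinable hurts: until the drop glue can be inlined into the (cold) landing pad, LLVM cannot see that dropping an empty `String` there is a no-op and cannot remove the pad.

```rust
// Lightly renamed from the `check_eliminate_noop_drop` test in rust_pr_92419.
pub fn call_with_noop_guard(g: fn()) {
    // `_guard` must be dropped even if `g` unwinds, so a cleanup landing pad
    // is emitted; once the drop glue is inlined there, the pad optimizes away.
    let _guard = String::new();
    g();
}
```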
This will defeat many optimisations that check for aliasing (pretty much everything).\nHowever, pointer equality optimizations and aliasing information are (AFAIK) mostly unrelated -- in particular, optimizing equalities based on is incorrect:\nTriage: this optimization still does not happen.\nOh no, ended up triggering LLVM to close this again.", "positive_passages": [{"docid": "doc-en-rust-48a597f693a207bdb0239643e4dd0a55f1f7a61e2ac706f85315713300f1bbe1", "text": " // no-system-llvm: needs #92110 + patch for Rust alloc/dealloc functions // compile-flags: -Copt-level=3 #![crate_type = \"lib\"] // This test checks that we can inline drop_in_place in // unwind landing pads. // Without inlining, the box pointers escape via the call to drop_in_place, // and LLVM will not optimize out the pointer comparison. // With inlining, everything should be optimized out. // See https://github.com/rust-lang/rust/issues/46515 // CHECK-LABEL: @check_no_escape_in_landingpad // CHECK: start: // CHECK-NEXT: ret void #[no_mangle] pub fn check_no_escape_in_landingpad(f: fn()) { let x = &*Box::new(0); let y = &*Box::new(0); if x as *const _ == y as *const _ { f(); } } // Without inlining, the compiler can't tell that // dropping an empty string (in a landing pad) does nothing. // With inlining, the landing pad should be optimized out. // See https://github.com/rust-lang/rust/issues/87055 // CHECK-LABEL: @check_eliminate_noop_drop // CHECK: start: // CHECK-NEXT: call void %g() // CHECK-NEXT: ret void #[no_mangle] pub fn check_eliminate_noop_drop(g: fn()) { let _var = String::new(); g(); } ", "commid": "rust_pr_92419"}], "negative_passages": []} {"query_id": "q-en-rust-6c925039acb7b4bd57f64c4d419c52a2a64f2f90bec01f59066c00ddad9b4cac", "query": "Consider the following code: If you try to compile this, you get the following errors: See the bug? Its because I used when I should have used It would be nice if the compiler, in its ever friendly manner, suggested the latter to me.\nI volunteer.", "positive_passages": [{"docid": "doc-en-rust-88f437e0935045a6b06b9848cd82b7f2b88a70ca156246bc905dc41121407b94", "text": "pub fn span_err(&self, sp: Span, m: &str) { self.sess.span_diagnostic.span_err(sp, m) } pub fn struct_span_err(&self, sp: Span, m: &str) -> DiagnosticBuilder<'a> { self.sess.span_diagnostic.struct_span_err(sp, m) } pub fn span_err_help(&self, sp: Span, m: &str, h: &str) { let mut err = self.sess.span_diagnostic.mut_span_err(sp, m); err.help(h);", "commid": "rust_pr_46763"}], "negative_passages": []} {"query_id": "q-en-rust-6c925039acb7b4bd57f64c4d419c52a2a64f2f90bec01f59066c00ddad9b4cac", "query": "Consider the following code: If you try to compile this, you get the following errors: See the bug? 
Its because I used when I should have used It would be nice if the compiler, in its ever friendly manner, suggested the latter to me.\nI volunteer.", "positive_passages": [{"docid": "doc-en-rust-8ef35b8315ec14874faf603b9aa8bfc831497cc339609a4a46dde65e987dab64", "text": "let lo = self.span; let hi; if self.check(&token::DotDot) { if self.check(&token::DotDot) || self.token == token::DotDotDot { if self.token == token::DotDotDot { // Issue #46718 let mut err = self.struct_span_err(self.span, \"expected field pattern, found `...`\"); err.span_suggestion(self.span, \"to omit remaining fields, use one fewer `.`\", \"..\".to_owned()); err.emit(); } self.bump(); if self.token != token::CloseDelim(token::Brace) { let token_str = self.this_token_to_string();", "commid": "rust_pr_46763"}], "negative_passages": []} {"query_id": "q-en-rust-6c925039acb7b4bd57f64c4d419c52a2a64f2f90bec01f59066c00ddad9b4cac", "query": "Consider the following code: If you try to compile this, you get the following errors: See the bug? Its because I used when I should have used It would be nice if the compiler, in its ever friendly manner, suggested the latter to me.\nI volunteer.", "positive_passages": [{"docid": "doc-en-rust-f9d5eab9f84c50bab35d6e099e86fc7b6c7cfd67a7aab63ea606173a9f2d4360", "text": " // Copyright 2017 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![allow(unused)] struct PersonalityInventory { expressivity: f32, instrumentality: f32 } impl PersonalityInventory { fn expressivity(&self) -> f32 { match *self { PersonalityInventory { expressivity: exp, ... } => exp //~^ ERROR expected field pattern, found `...` } } } fn main() {} ", "commid": "rust_pr_46763"}], "negative_passages": []} {"query_id": "q-en-rust-6c925039acb7b4bd57f64c4d419c52a2a64f2f90bec01f59066c00ddad9b4cac", "query": "Consider the following code: If you try to compile this, you get the following errors: See the bug? Its because I used when I should have used It would be nice if the compiler, in its ever friendly manner, suggested the latter to me.\nI volunteer.", "positive_passages": [{"docid": "doc-en-rust-63623d14d4deb8ff00b9c19e89ea01505680d500292eaddf2d4d568c2aebc510", "text": " error: expected field pattern, found `...` --> $DIR/issue-46718-struct-pattern-dotdotdot.rs:21:55 | 21 | PersonalityInventory { expressivity: exp, ... } => exp | ^^^ help: to omit remaining fields, use one fewer `.`: `..` error: aborting due to previous error ", "commid": "rust_pr_46763"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? 
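For readers skimming past the stripped inline code above: the report is about writing `...` instead of `..` in a struct pattern. A minimal illustration, reusing the struct from the test case added in rust_pr_46763, is:

```rust
struct PersonalityInventory {
    expressivity: f32,
    instrumentality: f32,
}

impl PersonalityInventory {
    fn expressivity(&self) -> f32 {
        match *self {
            // Three dots do not compile:
            //   error: expected field pattern, found `...`
            //   help: to omit remaining fields, use one fewer `.`: `..`
            // Two dots are the rest pattern that omits remaining fields.
            PersonalityInventory { expressivity: exp, .. } => exp,
        }
    }
}

fn main() {
    let p = PersonalityInventory { expressivity: 0.5, instrumentality: 0.5 };
    println!("{}", p.expressivity());
}
```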
It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] impl ops::TryV1 for ControlFlow { type Output = C; type Error = B;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. 
But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] mod r#try; mod try_trait; mod unsize;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] pub(crate) use self::r#try::Try as TryV1; #[unstable(feature = \"try_trait_v2\", issue = \"84277\")]", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? 
Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] pub struct NoneError; #[unstable(feature = \"try_trait\", issue = \"42327\")] #[cfg(bootstrap)] impl ops::TryV1 for Option { type Output = T; type Error = NoneError;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . 
Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] impl ops::TryV1 for Result { type Output = T; type Error = E;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. 
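As a concrete illustration of the "convert the `None` case yourself" position taken above (names and error messages are illustrative): turning an `Option` into a `Result` with `ok_or`/`ok_or_else` is what lets `?` be used in a function that returns `Result`, including the common `Box<dyn Error>` case the thread brings up.

```rust
use std::error::Error;

// Illustrative only: convert the `None` case into an error value first,
// then `?` works in a `Result`-returning function.
fn first_number(s: &str) -> Result<i64, Box<dyn Error>> {
    let token = s.split_whitespace().next().ok_or("input was empty")?;
    Ok(token.parse::<i64>()?)
}

fn main() -> Result<(), Box<dyn Error>> {
    assert_eq!(first_number("42 is the answer")?, 42);
    Ok(())
}
```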
If Option #[cfg(bootstrap)] impl ops::TryV1 for Poll> { type Output = Poll; type Error = E;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #[cfg(bootstrap)] impl ops::TryV1 for Poll>> { type Output = Poll>; type Error = E;", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-4b41f071f1d9c623d82b379f6cb048e1b15a3b2255113436e62d068d4b25b98f", "query": "Forgot it? Still for other reasons.\nSee also:\nthanks\nIt seems to me that if should exist at all, then it should implement , otherwise doesn't work.\nFacing the same issue in 2019 :/\nAfter looking through the discussion on , I\u2019m getting the impression that it is indeed an oversight that there is no way to convert to so that you can use on a function that returns .\nThis becomes a prevalent problem in the trainings I give, when people learn about: , because they also try to coerce optionals.\nCan this be prioritised? It would be upsetting if this were to be forgotten, particularly since is quite useful now.\nI don't believe this has been forgotten or de-prioritized -- at the very least, based on me re-read of , I get the impression that the general feeling is that the Try trait design is not ready and being convertible to via is not something we are sure we want. In particular, these comments are relevant:\nI actually changed my mind on this one. 
Now that I think about it, it only really makes sense to use on an if the enclosing function returns an . Otherwise if you want to return a you can use to indicate what went wrong. The error message could be much better though.\nI agree about the error message. I think one thing we missed in the Try trait design discussion was what the errors for mixing should look like -- right now you get the note about , but with the other possible design you would have explicitly gotten something about .\nI'll note that I think the existence of a is a bit strange and a bit of a code smell, and another design should be considered. But if it isn't considered, make sure is implemented at least, haha.\nIf is to exist, then it should implement , otherwise a better name should be chosen for the try trait result. But generally, I think I'd like to be able to use in functions returning on either or types (without explicit conversion), while only allowing use on types in functions that return . Does this make sense?\nI don't think should implement for . is not an error, it is just a state that may cause errors. If Option #![feature(try_trait)] #![feature(try_trait_v2)] #![feature(slice_internals)] #![feature(slice_partition_dedup)]", "commid": "rust_pr_85482"}], "negative_passages": []} {"query_id": "q-en-rust-6e54fc6e16649d13e50caefa4b187ef85ff0593171d06b44505fc2a44b9ef5c9", "query": "Originally reported here: I don't understand the following sentence: Could someone parse this for me? To me, it reads like \"to free memory allocated by a Vec v, create a new Vec w and drop it (i.e., w)\". This is obviously not what it is intended. What's the role of the second Vec? Is this only in a context where v is empty but its memory not yet freed? If so, why wouldn't shrinktofit work in that case?\nThis refers to unsafe code that e.g. uses to allocate memory, and then s the to use the buffer in different ways (e.g., passing it to C, or implement a custom data structure over the allocated memory).", "positive_passages": [{"docid": "doc-en-rust-7fe509292d390edc241c3fb8c8ca824f9cb73ba1ff66eb30bc633b241e4ba585", "text": "/// types inside a `Vec`, it will not allocate space for them. *Note that in this case /// the `Vec` may not report a [`capacity`] of 0*. `Vec` will allocate if and only /// if [`mem::size_of::`]`() * capacity() > 0`. In general, `Vec`'s allocation /// details are subtle enough that it is strongly recommended that you only /// free memory allocated by a `Vec` by creating a new `Vec` and dropping it. /// details are very subtle — if you intend to allocate memory using a `Vec` /// and use it for something else (either to pass to unsafe code, or to build your /// own memory-backed collection), be sure to deallocate this memory by using /// `from_raw_parts` to recover the `Vec` and then dropping it. /// /// If a `Vec` *has* allocated memory, then the memory it points to is on the heap /// (as defined by the allocator Rust is configured to use by default), and its", "commid": "rust_pr_46884"}], "negative_passages": []} {"query_id": "q-en-rust-383ed1a2d2dfdebc2364a641a27f9c56084fcc1b5d719f3eaf53ebaf8ef28eb7", "query": "In core, fails maybe a few times per month. Linux x86_64. I have no further information.\nalright, i don't suppose you'd happen to have any logs of the times it failed? I haven't been able to reproduce the test failing (i.e., the test function terminating successfully). 
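A short stable-Rust illustration of the point being made above, with illustrative names: `?` applied to an `Option` requires the enclosing function to return an `Option` (or another compatible type), where each `None` simply propagates to the caller.

```rust
use std::collections::HashMap;

// `?` on `Option` works because this function itself returns `Option`.
fn first_score_doubled(scores: &HashMap<String, Vec<i32>>, key: &str) -> Option<i32> {
    let first = *scores.get(key)?.first()?;
    Some(first * 2)
}

fn main() {
    let mut scores = HashMap::new();
    scores.insert("alice".to_string(), vec![3, 5]);
    assert_eq!(first_score_doubled(&scores, "alice"), Some(6));
    assert_eq!(first_score_doubled(&scores, "bob"), None);
    // Moving the same body into a function returning Result<_, E> is rejected
    // by the compiler, which is the behaviour debated in the thread above.
}
```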
Looking over the unwrap implementation I think it's basically impossible for the test to fail with a localized bug, since it's testing two linked-failure tasks who go through code that looks like this: so I think it is safe to say that exactly one of the tasks will hit the . What's more, the test case itself has the parent thread (the one the test harness will block on) block on its child using a future_result. Either the parent will hit the die above (in which case the test case will surely pass), or the child will hit it, in which case the child's failure should link to the parent through the parent's call (I checked the linked failure code to make sure taskgroup killing happens before AutoNotify messaging). So it might be just as likely the race is in getting failed out of the recv call. So far I've experimented adding delays into various parts of the unwrap implementation to see which paths this test actually exercises (the state space is not very large, so it is not a Research Problem :P ), but not seen the bug in any yet. Will continue the search another day. EDIT: reproduction successful. (I'd forgotten that there was a difference in failure semantics between when the tasks are in the main/root taskgroup and when they're running as a #[test]! Argh.) More later.\nWould you believe this is the minimized test case for this bug? No unwrap involved at all. Depending on the sleep, the test will either work or not (i.e., the problem is when a failing task sends to its futureresult before the parent even starts to receive on it, the linked failure doesn't take effect). I'm not sure if this is an intended nondeterminism of , but I definitely wrote assuming it wouldn't happen that way. anyway one possible way to fix this is to change the last line of the test case to (which doesn't weaken the test case). I will meditate on this more to decide if that's the best way.\nOK, what's going on here is linked failure doesn't propagate through a in the special case where the pipe has data and the task doesn't need to block for it. To \"fix\" this nondeterminism at its source would involve putting a check in the case of . As another example, consider this program, whose main task always succeeds. \"Fixing\" the nondetermism above would make this program sometimes succeed (if the parent recvs before the child sends) and sometimes fail (if the child sends and fails before the parent recvs). Introducing the extra killed check wouldn't require taking a lock (see ). But, I've always punted on the semantics of when a linked failure killing takes effect, and adding this check feels like a patchwork special-case rather than a principled semantics change. might have input on the matter?\nAdding the check seems reasonable to me. In the case of failure, I think it's fine if things are somewhat nondeterministic, so I don't think it's a problem if the program above sometimes failed. What if we a sleep before ? It seems like in that case you'd want this to always fail, unless you turned off link failure.\nThanks for looking into this. I understand how misses its opportunity to fail but not why the 'assert' has any effect. This is similar to a problem I frequently run into where I expect test cases like this to work (by failing):\nYeah, those are annoying cases. It might be nice to have a spawn interface that, while letting the parent go off and do its own thing, also makes the parent block on the child (on all such children spawned this way) before actually task-exiting. 
This could be done with on top of a generalised interface, or it could work by stashing s inside the parent's and having the destructor wait on each one in the destructor (which is essentially a hardcoded version of how would also work).", "positive_passages": [{"docid": "doc-en-rust-b300871df0a0886d90c105ee94d301301842ca0562447edd3f30f029389f1c71", "text": "res.recv(); } #[test] #[should_fail] #[ignore(cfg(windows))] #[test] #[should_fail] #[ignore(reason = \"random red\")] pub fn exclusive_unwrap_conflict() { let x = exclusive(~~\"hello\"); let x2 = ~mut Some(x.clone());", "commid": "rust_pr_4793"}], "negative_passages": []} {"query_id": "q-en-rust-5e23a8934c65979f8f70aef584de51376005f3d3decd586607371f7589f5dd31", "query": "This seems to be related to , though not entirely the same. I am in the process of ing my crate (multiple crates in a workspace project) v0.5.0 (from ). Instead of a normal compile process, I got this: :\nI experienced this error now multiple times. Maybe it is a good idea to include the complete build output here:\nWith I get this error in a different form:\nFor the record: works on all crates where this bug was triggered (though I did not test all of them, just some).\nFor whatever reason, this is making it effectively impossible for me to package projects with lots of dependencies for Nix.\nI can reliably hit this bug when compiling my project () with Nix\nI believe this is a bug in our rustc package in nixpkgs, we appear to build it against which is bugged: https://nix- Maybe it would be a good idea to have a hard dependency on for rustc?\nThere is an issue here: rustc version 1.22.1 has in its lockfile, and the nix build appears to reliably trigger this bug while compiling . So we have a problem in nixpkgs that we cannot bootstrap using the binary distribution from rust- you are the author of , is there something we could do to workaround this bug for bootstrapping ? Or is it necessary to use a patched for compiling ?\nthe workaround would be to remove any ambient jobservers and let cargo/rustc manage it, if any\nCan't we bootstrap using 1.23.0 binaries?\nI can reliably hit this bug as well in nix\nI can reliably hit this bug using . As far as I am aware I don't have any jobservers running.\nI realized that the following issue was only occurring on nightly. However after the latest update this is now occurring on stable as well.\nstill persists using seems to happen because of this:\nCan you please take another look at this. It appears that your diagnosis is wrong and many people are experiencing this issue. There is no workaround as far as I can tell and this is seriously workflow disrupting.\nA misconfigured jobserver can still cause issues in rustc, and this error is indicative of a misconfigured jobserver in one way or another. This indicates an environmental problem rather than a problem in rustc AFAIK. If someone else manages to fix rustc we can of course merge, but I don't personally believe there's much we can do in rustc for this.\nThere is no other jobserver in my environment. I have checked numerous ways and am highly confident.\nI am also getting the same error as when running entr with the reload flag. 
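The exchange above concerns Rust's pre-1.0 task and linked-failure model, whose APIs no longer exist, so no attempt is made here to reconstruct the original test case. As a modern analogue of the specific race it describes (a receive that never has to block never observes the sending side's failure), here is a sketch with `std::sync::mpsc`; the sleep, values, and the join at the end are illustrative.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    let (tx, rx) = mpsc::channel();

    let child = thread::spawn(move || {
        tx.send(42).unwrap(); // the value is buffered in the channel
        panic!("child fails after sending"); // panic output on stderr is expected
    });

    // Give the child time to send (and panic) first.
    thread::sleep(Duration::from_millis(50));

    // The receive still succeeds: the data was already in the pipe, so the
    // receiver never blocks and never observes the peer's failure.
    assert_eq!(rx.recv().unwrap(), 42);

    // Only joining the child surfaces its panic.
    assert!(child.join().is_err());
}
```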
Throws the following error: According to the entr documentation whats happening is : 1) immediately start the server 2) block until any of the listed files change 3) terminate the background process 4) wait for the server to exit before restarting\nI created an upstream issue for as well - it surrounds closing STDIN, which was in later versions for compatibility with some programs reasons - seems it then breaks with .\nIt sounds like the underlying issue is\nA workaround for those using : It works for me with and 1.40.0 on Linux.\nThis is no longer happening for me. I don't know what has changed but I don't see this anymore.\nIt still happens for me with and . Just tested it with and got a panic starting with the same bit.\nSo, I've found an entirely legitimate way to trigger this error (the \"early EOF on jobserver pipe\" case) on new versions of , without having stdin closed or anything other exotic setup. If you call from an outer -based build system then will receive file descriptors for the jobserver pipe through an env var called . Except that new versions of will mark those descriptors as so they will be either dead or, worse, replaced with other random files when tries to access them :D The solution is to prevent from closing the pipe by using string anywhere in the command. (Yep, that's how works - ) This is silly, but it seems like doesn't leave any other way to inherit jobserver (at least not readily available from higher level makefile generators).\nwas recently facing similar issue while trying to run cargo build inside a makefile with buildroot was facing this everytime when i run cargo build Build fails: error: failed to acquire jobserver token: Bad file descriptor (os error 9) This worked for me. MAKEFLAGS= JOBS= cargo build\nIf you are actually running cargo inside of make you are hitting a different problem and I recommend opening a different issue. This issue was about a bug where no job server was running. This issue was fixed.", "positive_passages": [{"docid": "doc-en-rust-58ceae4e766a3d946787e1d99a754d5b72f60879799106646de421d2d26dfa85", "text": "#[cfg(not(test))] pub fn init() { // The standard streams might be closed on application startup. To prevent // std::io::{stdin, stdout,stderr} objects from using other unrelated file // resources opened later, we reopen standards streams when they are closed. unsafe { sanitize_standard_fds(); } // By default, some platforms will send a *signal* when an EPIPE error // would otherwise be delivered. This runtime doesn't install a SIGPIPE // handler, causing it to kill the program, which isn't exactly what we", "commid": "rust_pr_75295"}], "negative_passages": []} {"query_id": "q-en-rust-5e23a8934c65979f8f70aef584de51376005f3d3decd586607371f7589f5dd31", "query": "This seems to be related to , though not entirely the same. I am in the process of ing my crate (multiple crates in a workspace project) v0.5.0 (from ). Instead of a normal compile process, I got this: :\nI experienced this error now multiple times. 
Maybe it is a good idea to include the complete build output here:\nWith I get this error in a different form:\nFor the record: works on all crates where this bug was triggered (though I did not test all of them, just some).\nFor whatever reason, this is making it effectively impossible for me to package projects with lots of dependencies for Nix.\nI can reliably hit this bug when compiling my project () with Nix\nI believe this is a bug in our rustc package in nixpkgs, we appear to build it against which is bugged: https://nix- Maybe it would be a good idea to have a hard dependency on for rustc?\nThere is an issue here: rustc version 1.22.1 has in its lockfile, and the nix build appears to reliably trigger this bug while compiling . So we have a problem in nixpkgs that we cannot bootstrap using the binary distribution from rust- you are the author of , is there something we could do to workaround this bug for bootstrapping ? Or is it necessary to use a patched for compiling ?\nthe workaround would be to remove any ambient jobservers and let cargo/rustc manage it, if any\nCan't we bootstrap using 1.23.0 binaries?\nI can reliably hit this bug as well in nix\nI can reliably hit this bug using . As far as I am aware I don't have any jobservers running.\nI realized that the following issue was only occurring on nightly. However after the latest update this is now occurring on stable as well.\nstill persists using seems to happen because of this:\nCan you please take another look at this. It appears that your diagnosis is wrong and many people are experiencing this issue. There is no workaround as far as I can tell and this is seriously workflow disrupting.\nA misconfigured jobserver can still cause issues in rustc, and this error is indicative of a misconfigured jobserver in one way or another. This indicates an environmental problem rather than a problem in rustc AFAIK. If someone else manages to fix rustc we can of course merge, but I don't personally believe there's much we can do in rustc for this.\nThere is no other jobserver in my environment. I have checked numerous ways and am highly confident.\nI am also getting the same error as when running entr with the reload flag. Throws the following error: According to the entr documentation whats happening is : 1) immediately start the server 2) block until any of the listed files change 3) terminate the background process 4) wait for the server to exit before restarting\nI created an upstream issue for as well - it surrounds closing STDIN, which was in later versions for compatibility with some programs reasons - seems it then breaks with .\nIt sounds like the underlying issue is\nA workaround for those using : It works for me with and 1.40.0 on Linux.\nThis is no longer happening for me. I don't know what has changed but I don't see this anymore.\nIt still happens for me with and . Just tested it with and got a panic starting with the same bit.\nSo, I've found an entirely legitimate way to trigger this error (the \"early EOF on jobserver pipe\" case) on new versions of , without having stdin closed or anything other exotic setup. If you call from an outer -based build system then will receive file descriptors for the jobserver pipe through an env var called . Except that new versions of will mark those descriptors as so they will be either dead or, worse, replaced with other random files when tries to access them :D The solution is to prevent from closing the pipe by using string anywhere in the command. 
(Yep, that's how works - ) This is silly, but it seems like doesn't leave any other way to inherit jobserver (at least not readily available from higher level makefile generators).\nwas recently facing similar issue while trying to run cargo build inside a makefile with buildroot was facing this everytime when i run cargo build Build fails: error: failed to acquire jobserver token: Bad file descriptor (os error 9) This worked for me. MAKEFLAGS= JOBS= cargo build\nIf you are actually running cargo inside of make you are hitting a different problem and I recommend opening a different issue. This issue was about a bug where no job server was running. This issue was fixed.", "positive_passages": [{"docid": "doc-en-rust-9178605905302613c7be437c27e222331862f8d45f89df0660bbed35fa739844", "text": "reset_sigpipe(); } // In the case when all file descriptors are open, the poll has been // observed to perform better than fcntl (on GNU/Linux). #[cfg(not(any( miri, target_os = \"emscripten\", target_os = \"fuchsia\", // The poll on Darwin doesn't set POLLNVAL for closed fds. target_os = \"macos\", target_os = \"ios\", target_os = \"redox\", )))] unsafe fn sanitize_standard_fds() { use crate::sys::os::errno; let pfds: &mut [_] = &mut [ libc::pollfd { fd: 0, events: 0, revents: 0 }, libc::pollfd { fd: 1, events: 0, revents: 0 }, libc::pollfd { fd: 2, events: 0, revents: 0 }, ]; while libc::poll(pfds.as_mut_ptr(), 3, 0) == -1 { if errno() == libc::EINTR { continue; } libc::abort(); } for pfd in pfds { if pfd.revents & libc::POLLNVAL == 0 { continue; } if libc::open(\"/dev/null0\".as_ptr().cast(), libc::O_RDWR, 0) == -1 { // If the stream is closed but we failed to reopen it, abort the // process. Otherwise we wouldn't preserve the safety of // operations on the corresponding Rust object Stdin, Stdout, or // Stderr. libc::abort(); } } } #[cfg(any(target_os = \"macos\", target_os = \"ios\", target_os = \"redox\"))] unsafe fn sanitize_standard_fds() { use crate::sys::os::errno; for fd in 0..3 { if libc::fcntl(fd, libc::F_GETFD) == -1 && errno() == libc::EBADF { if libc::open(\"/dev/null0\".as_ptr().cast(), libc::O_RDWR, 0) == -1 { libc::abort(); } } } } #[cfg(any( // The standard fds are always available in Miri. miri, target_os = \"emscripten\", target_os = \"fuchsia\"))] unsafe fn sanitize_standard_fds() {} #[cfg(not(any(target_os = \"emscripten\", target_os = \"fuchsia\")))] unsafe fn reset_sigpipe() { assert!(signal(libc::SIGPIPE, libc::SIG_IGN) != libc::SIG_ERR);", "commid": "rust_pr_75295"}], "negative_passages": []} {"query_id": "q-en-rust-5e23a8934c65979f8f70aef584de51376005f3d3decd586607371f7589f5dd31", "query": "This seems to be related to , though not entirely the same. I am in the process of ing my crate (multiple crates in a workspace project) v0.5.0 (from ). Instead of a normal compile process, I got this: :\nI experienced this error now multiple times. 
Maybe it is a good idea to include the complete build output here:\nWith I get this error in a different form:\nFor the record: works on all crates where this bug was triggered (though I did not test all of them, just some).\nFor whatever reason, this is making it effectively impossible for me to package projects with lots of dependencies for Nix.\nI can reliably hit this bug when compiling my project () with Nix\nI believe this is a bug in our rustc package in nixpkgs, we appear to build it against which is bugged: https://nix- Maybe it would be a good idea to have a hard dependency on for rustc?\nThere is an issue here: rustc version 1.22.1 has in its lockfile, and the nix build appears to reliably trigger this bug while compiling . So we have a problem in nixpkgs that we cannot bootstrap using the binary distribution from rust- you are the author of , is there something we could do to workaround this bug for bootstrapping ? Or is it necessary to use a patched for compiling ?\nthe workaround would be to remove any ambient jobservers and let cargo/rustc manage it, if any\nCan't we bootstrap using 1.23.0 binaries?\nI can reliably hit this bug as well in nix\nI can reliably hit this bug using . As far as I am aware I don't have any jobservers running.\nI realized that the following issue was only occurring on nightly. However after the latest update this is now occurring on stable as well.\nstill persists using seems to happen because of this:\nCan you please take another look at this. It appears that your diagnosis is wrong and many people are experiencing this issue. There is no workaround as far as I can tell and this is seriously workflow disrupting.\nA misconfigured jobserver can still cause issues in rustc, and this error is indicative of a misconfigured jobserver in one way or another. This indicates an environmental problem rather than a problem in rustc AFAIK. If someone else manages to fix rustc we can of course merge, but I don't personally believe there's much we can do in rustc for this.\nThere is no other jobserver in my environment. I have checked numerous ways and am highly confident.\nI am also getting the same error as when running entr with the reload flag. Throws the following error: According to the entr documentation whats happening is : 1) immediately start the server 2) block until any of the listed files change 3) terminate the background process 4) wait for the server to exit before restarting\nI created an upstream issue for as well - it surrounds closing STDIN, which was in later versions for compatibility with some programs reasons - seems it then breaks with .\nIt sounds like the underlying issue is\nA workaround for those using : It works for me with and 1.40.0 on Linux.\nThis is no longer happening for me. I don't know what has changed but I don't see this anymore.\nIt still happens for me with and . Just tested it with and got a panic starting with the same bit.\nSo, I've found an entirely legitimate way to trigger this error (the \"early EOF on jobserver pipe\" case) on new versions of , without having stdin closed or anything other exotic setup. If you call from an outer -based build system then will receive file descriptors for the jobserver pipe through an env var called . Except that new versions of will mark those descriptors as so they will be either dead or, worse, replaced with other random files when tries to access them :D The solution is to prevent from closing the pipe by using string anywhere in the command. 
(Yep, that's how works - ) This is silly, but it seems like doesn't leave any other way to inherit jobserver (at least not readily available from higher level makefile generators).\nwas recently facing similar issue while trying to run cargo build inside a makefile with buildroot was facing this everytime when i run cargo build Build fails: error: failed to acquire jobserver token: Bad file descriptor (os error 9) This worked for me. MAKEFLAGS= JOBS= cargo build\nIf you are actually running cargo inside of make you are hitting a different problem and I recommend opening a different issue. This issue was about a bug where no job server was running. This issue was fixed.", "positive_passages": [{"docid": "doc-en-rust-5e6e216a450ff4e78146a044fb150923fbaadb7ff04f7c2c04c68b2235049db7", "text": "return r } #[cfg(unix)] fn assert_fd_is_valid(fd: libc::c_int) { if unsafe { libc::fcntl(fd, libc::F_GETFD) == -1 } { panic!(\"file descriptor {} is not valid: {}\", fd, io::Error::last_os_error()); } } #[cfg(windows)] fn assert_fd_is_valid(_fd: libc::c_int) {} #[cfg(windows)] unsafe fn without_stdio R>(f: F) -> R { type DWORD = u32;", "commid": "rust_pr_75295"}], "negative_passages": []} {"query_id": "q-en-rust-5e23a8934c65979f8f70aef584de51376005f3d3decd586607371f7589f5dd31", "query": "This seems to be related to , though not entirely the same. I am in the process of ing my crate (multiple crates in a workspace project) v0.5.0 (from ). Instead of a normal compile process, I got this: :\nI experienced this error now multiple times. Maybe it is a good idea to include the complete build output here:\nWith I get this error in a different form:\nFor the record: works on all crates where this bug was triggered (though I did not test all of them, just some).\nFor whatever reason, this is making it effectively impossible for me to package projects with lots of dependencies for Nix.\nI can reliably hit this bug when compiling my project () with Nix\nI believe this is a bug in our rustc package in nixpkgs, we appear to build it against which is bugged: https://nix- Maybe it would be a good idea to have a hard dependency on for rustc?\nThere is an issue here: rustc version 1.22.1 has in its lockfile, and the nix build appears to reliably trigger this bug while compiling . So we have a problem in nixpkgs that we cannot bootstrap using the binary distribution from rust- you are the author of , is there something we could do to workaround this bug for bootstrapping ? Or is it necessary to use a patched for compiling ?\nthe workaround would be to remove any ambient jobservers and let cargo/rustc manage it, if any\nCan't we bootstrap using 1.23.0 binaries?\nI can reliably hit this bug as well in nix\nI can reliably hit this bug using . As far as I am aware I don't have any jobservers running.\nI realized that the following issue was only occurring on nightly. However after the latest update this is now occurring on stable as well.\nstill persists using seems to happen because of this:\nCan you please take another look at this. It appears that your diagnosis is wrong and many people are experiencing this issue. There is no workaround as far as I can tell and this is seriously workflow disrupting.\nA misconfigured jobserver can still cause issues in rustc, and this error is indicative of a misconfigured jobserver in one way or another. This indicates an environmental problem rather than a problem in rustc AFAIK. 
If someone else manages to fix rustc we can of course merge, but I don't personally believe there's much we can do in rustc for this.\nThere is no other jobserver in my environment. I have checked numerous ways and am highly confident.\nI am also getting the same error as when running entr with the reload flag. Throws the following error: According to the entr documentation whats happening is : 1) immediately start the server 2) block until any of the listed files change 3) terminate the background process 4) wait for the server to exit before restarting\nI created an upstream issue for as well - it surrounds closing STDIN, which was in later versions for compatibility with some programs reasons - seems it then breaks with .\nIt sounds like the underlying issue is\nA workaround for those using : It works for me with and 1.40.0 on Linux.\nThis is no longer happening for me. I don't know what has changed but I don't see this anymore.\nIt still happens for me with and . Just tested it with and got a panic starting with the same bit.\nSo, I've found an entirely legitimate way to trigger this error (the \"early EOF on jobserver pipe\" case) on new versions of , without having stdin closed or anything other exotic setup. If you call from an outer -based build system then will receive file descriptors for the jobserver pipe through an env var called . Except that new versions of will mark those descriptors as so they will be either dead or, worse, replaced with other random files when tries to access them :D The solution is to prevent from closing the pipe by using string anywhere in the command. (Yep, that's how works - ) This is silly, but it seems like doesn't leave any other way to inherit jobserver (at least not readily available from higher level makefile generators).\nwas recently facing similar issue while trying to run cargo build inside a makefile with buildroot was facing this everytime when i run cargo build Build fails: error: failed to acquire jobserver token: Bad file descriptor (os error 9) This worked for me. MAKEFLAGS= JOBS= cargo build\nIf you are actually running cargo inside of make you are hitting a different problem and I recommend opening a different issue. This issue was about a bug where no job server was running. This issue was fixed.", "positive_passages": [{"docid": "doc-en-rust-10955d1a056df244f67e7b6ed7dc556b2faf8372095677fed1cf2969d07d865a", "text": "fn main() { if env::args().len() > 1 { // Writing to stdout & stderr should not panic. println!(\"test\"); assert!(io::stdout().write(b\"testn\").is_ok()); assert!(io::stderr().write(b\"testn\").is_ok()); // Stdin should be at EOF. assert_eq!(io::stdin().read(&mut [0; 10]).unwrap(), 0); // Standard file descriptors should be valid on UNIX: assert_fd_is_valid(0); assert_fd_is_valid(1); assert_fd_is_valid(2); return }", "commid": "rust_pr_75295"}], "negative_passages": []} {"query_id": "q-en-rust-5e23a8934c65979f8f70aef584de51376005f3d3decd586607371f7589f5dd31", "query": "This seems to be related to , though not entirely the same. I am in the process of ing my crate (multiple crates in a workspace project) v0.5.0 (from ). Instead of a normal compile process, I got this: :\nI experienced this error now multiple times. 
Maybe it is a good idea to include the complete build output here:\nWith I get this error in a different form:\nFor the record: works on all crates where this bug was triggered (though I did not test all of them, just some).\nFor whatever reason, this is making it effectively impossible for me to package projects with lots of dependencies for Nix.\nI can reliably hit this bug when compiling my project () with Nix\nI believe this is a bug in our rustc package in nixpkgs, we appear to build it against which is bugged: https://nix- Maybe it would be a good idea to have a hard dependency on for rustc?\nThere is an issue here: rustc version 1.22.1 has in its lockfile, and the nix build appears to reliably trigger this bug while compiling . So we have a problem in nixpkgs that we cannot bootstrap using the binary distribution from rust- you are the author of , is there something we could do to workaround this bug for bootstrapping ? Or is it necessary to use a patched for compiling ?\nthe workaround would be to remove any ambient jobservers and let cargo/rustc manage it, if any\nCan't we bootstrap using 1.23.0 binaries?\nI can reliably hit this bug as well in nix\nI can reliably hit this bug using . As far as I am aware I don't have any jobservers running.\nI realized that the following issue was only occurring on nightly. However after the latest update this is now occurring on stable as well.\nstill persists using seems to happen because of this:\nCan you please take another look at this. It appears that your diagnosis is wrong and many people are experiencing this issue. There is no workaround as far as I can tell and this is seriously workflow disrupting.\nA misconfigured jobserver can still cause issues in rustc, and this error is indicative of a misconfigured jobserver in one way or another. This indicates an environmental problem rather than a problem in rustc AFAIK. If someone else manages to fix rustc we can of course merge, but I don't personally believe there's much we can do in rustc for this.\nThere is no other jobserver in my environment. I have checked numerous ways and am highly confident.\nI am also getting the same error as when running entr with the reload flag. Throws the following error: According to the entr documentation whats happening is : 1) immediately start the server 2) block until any of the listed files change 3) terminate the background process 4) wait for the server to exit before restarting\nI created an upstream issue for as well - it surrounds closing STDIN, which was in later versions for compatibility with some programs reasons - seems it then breaks with .\nIt sounds like the underlying issue is\nA workaround for those using : It works for me with and 1.40.0 on Linux.\nThis is no longer happening for me. I don't know what has changed but I don't see this anymore.\nIt still happens for me with and . Just tested it with and got a panic starting with the same bit.\nSo, I've found an entirely legitimate way to trigger this error (the \"early EOF on jobserver pipe\" case) on new versions of , without having stdin closed or anything other exotic setup. If you call from an outer -based build system then will receive file descriptors for the jobserver pipe through an env var called . Except that new versions of will mark those descriptors as so they will be either dead or, worse, replaced with other random files when tries to access them :D The solution is to prevent from closing the pipe by using string anywhere in the command. 
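Some general background on the handshake being described (stated from standard GNU make behaviour, not as a claim about the elided details above): make advertises its jobserver to child processes through the MAKEFLAGS environment variable, and only commands that make treats as recursive actually inherit the open pipe descriptors; otherwise the flag is still present but names closed descriptors, which is one way to arrive at the "Bad file descriptor" error quoted earlier. A small std-only Rust sketch of reading that handshake:

use std::env;

// Parse the descriptor pair make passes as "--jobserver-auth=R,W"
// (older make: "--jobserver-fds=R,W"). GNU make 4.4+ may instead pass
// "--jobserver-auth=fifo:PATH", which this sketch deliberately ignores.
fn jobserver_fds() -> Option<(i32, i32)> {
    let flags = env::var("MAKEFLAGS").ok()?;
    let arg = flags.split_whitespace().find_map(|f| {
        f.strip_prefix("--jobserver-auth=")
            .or_else(|| f.strip_prefix("--jobserver-fds="))
    })?;
    let (r, w) = arg.split_once(',')?;
    Some((r.parse().ok()?, w.parse().ok()?))
}

fn main() {
    match jobserver_fds() {
        Some((r, w)) => println!("make jobserver pipe: read fd {r}, write fd {w}"),
        None => println!("no usable make jobserver advertised in MAKEFLAGS"),
    }
}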
(Yep, that's how works - ) This is silly, but it seems like doesn't leave any other way to inherit jobserver (at least not readily available from higher level makefile generators).\nwas recently facing similar issue while trying to run cargo build inside a makefile with buildroot was facing this everytime when i run cargo build Build fails: error: failed to acquire jobserver token: Bad file descriptor (os error 9) This worked for me. MAKEFLAGS= JOBS= cargo build\nIf you are actually running cargo inside of make you are hitting a different problem and I recommend opening a different issue. This issue was about a bug where no job server was running. This issue was fixed.", "positive_passages": [{"docid": "doc-en-rust-553a72461ab4599584323b0ea5ffd75172b65acc206ff2fa8f827934ba599249", "text": ".stdout(Stdio::null()) .stderr(Stdio::null()) .status().unwrap(); assert!(status.success(), \"{:?} isn't a success\", status); assert!(status.success(), \"{} isn't a success\", status); // Finally, close everything then spawn a child to make sure everything is // *still* ok. let status = unsafe { without_stdio(|| Command::new(&me).arg(\"next\").status()) }.unwrap(); assert!(status.success(), \"{:?} isn't a success\", status); assert!(status.success(), \"{} isn't a success\", status); }", "commid": "rust_pr_75295"}], "negative_passages": []} {"query_id": "q-en-rust-45decd0e075fb5d0be83dbc80be1d4397ca1d6fb19bddb610c3554bcdc5f6699", "query": "Irrefutable slice patterns with a may cause the compiler to ICE. I expected to see this happen: the code compiled (however silly this example is) Instead, this happened: : rustc 1.24.0-nightly ( 2017-12-30) binary: rustc commit-hash: commit-date: 2017-12-30 host: x86_64-unknown-linux-gnu release: 1.24.0-nightly LLVM version: 4.0 Backtrace:\nIt also occurs with:", "positive_passages": [{"docid": "doc-en-rust-85ae90fc152b8170f5529441cac0dbbc36f37c328d2f4362d9cd0725eba757b4", "text": "end: RangeEnd, }, /// matches against a slice, checking the length and extracting elements /// matches against a slice, checking the length and extracting elements. /// irrefutable when there is a slice pattern and both `prefix` and `suffix` are empty. /// e.g. `&[ref xs..]`. Slice { prefix: Vec>, slice: Option>,", "commid": "rust_pr_47374"}], "negative_passages": []} {"query_id": "q-en-rust-45decd0e075fb5d0be83dbc80be1d4397ca1d6fb19bddb610c3554bcdc5f6699", "query": "Irrefutable slice patterns with a may cause the compiler to ICE. I expected to see this happen: the code compiled (however silly this example is) Instead, this happened: : rustc 1.24.0-nightly ( 2017-12-30) binary: rustc commit-hash: commit-date: 2017-12-30 host: x86_64-unknown-linux-gnu release: 1.24.0-nightly LLVM version: 4.0 Backtrace:\nIt also occurs with:", "positive_passages": [{"docid": "doc-en-rust-d53fdb8236224f3e71e942ffe37c9ed3bc0f4b8c70a032269a15d3abcb5d5b9e", "text": "Err(match_pair) } PatternKind::Range { .. } | PatternKind::Slice { .. } => { PatternKind::Range { .. 
} => { Err(match_pair) } PatternKind::Slice { ref prefix, ref slice, ref suffix } => { if prefix.is_empty() && slice.is_some() && suffix.is_empty() { // irrefutable self.prefix_slice_suffix(&mut candidate.match_pairs, &match_pair.place, prefix, slice.as_ref(), suffix); Ok(()) } else { Err(match_pair) } } PatternKind::Variant { adt_def, substs, variant_index, ref subpatterns } => { let irrefutable = adt_def.variants.iter().enumerate().all(|(i, v)| { i == variant_index || {", "commid": "rust_pr_47374"}], "negative_passages": []} {"query_id": "q-en-rust-45decd0e075fb5d0be83dbc80be1d4397ca1d6fb19bddb610c3554bcdc5f6699", "query": "Irrefutable slice patterns with a may cause the compiler to ICE. I expected to see this happen: the code compiled (however silly this example is) Instead, this happened: : rustc 1.24.0-nightly ( 2017-12-30) binary: rustc commit-hash: commit-date: 2017-12-30 host: x86_64-unknown-linux-gnu release: 1.24.0-nightly LLVM version: 4.0 Backtrace:\nIt also occurs with:", "positive_passages": [{"docid": "doc-en-rust-6a9992d9603d4cff194241164f916337acf81cc9955cdeac52a4785d35eb507b", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // #47096 #![feature(slice_patterns)] fn foo(s: &[i32]) -> &[i32] { let &[ref xs..] = s; xs } fn main() { let x = [1, 2, 3]; let y = foo(&x); assert_eq!(x, y); } ", "commid": "rust_pr_47374"}], "negative_passages": []} {"query_id": "q-en-rust-ab0d9daf1cf6da6c4b83dd4dd411a26726c8e60173d2e1ddf66e130ea49f4424", "query": "Former-XXXes say \"is this right?\"\nClearing milestone because it's not obviously blocking anything. -- what test case(s) would answer the question you asked in a comment (\"is this right?\")?\nVisiting for triage; FIXMEs are still there.", "positive_passages": [{"docid": "doc-en-rust-9762910d2b6f19293d73d3cc756c6409f3364162c3f3beea52f64e00319423c2", "text": "#[test] #[should_fail] fn test_ascii_fail_char_slice() { '\u03bb'.to_ascii(); } #[test] fn test_opt() { assert_eq!(65u8.to_ascii_opt(), Some(Ascii { chr: 65u8 })); assert_eq!(255u8.to_ascii_opt(), None);", "commid": "rust_pr_11333"}], "negative_passages": []} {"query_id": "q-en-rust-ab0d9daf1cf6da6c4b83dd4dd411a26726c8e60173d2e1ddf66e130ea49f4424", "query": "Former-XXXes say \"is this right?\"\nClearing milestone because it's not obviously blocking anything. -- what test case(s) would answer the question you asked in a comment (\"is this right?\")?\nVisiting for triage; FIXMEs are still there.", "positive_passages": [{"docid": "doc-en-rust-4f364c3a7ac175416d7de0bfa548fe5d035aa34591deab75c6e92ce09a5b477d", "text": "#[cfg(test)] mod test { use super::*; use io::net::ip::{Ipv4Addr, SocketAddr}; use io::net::ip::{SocketAddr}; use io::*; use io::test::*; use prelude::*; iotest!(fn bind_error() {", "commid": "rust_pr_11333"}], "negative_passages": []} {"query_id": "q-en-rust-ab0d9daf1cf6da6c4b83dd4dd411a26726c8e60173d2e1ddf66e130ea49f4424", "query": "Former-XXXes say \"is this right?\"\nClearing milestone because it's not obviously blocking anything. 
-- what test case(s) would answer the question you asked in a comment (\"is this right?\")?\nVisiting for triage; FIXMEs are still there.", "positive_passages": [{"docid": "doc-en-rust-5d110b67007dd1a09594f98d99756e0d9851fbde36fd5df6266cdafb756c65b0", "text": "use unstable::run_in_bare_thread; use super::*; use rt::task::Task; use rt::local_ptr; #[test] fn thread_local_task_smoke_test() {", "commid": "rust_pr_11333"}], "negative_passages": []} {"query_id": "q-en-rust-ab0d9daf1cf6da6c4b83dd4dd411a26726c8e60173d2e1ddf66e130ea49f4424", "query": "Former-XXXes say \"is this right?\"\nClearing milestone because it's not obviously blocking anything. -- what test case(s) would answer the question you asked in a comment (\"is this right?\")?\nVisiting for triage; FIXMEs are still there.", "positive_passages": [{"docid": "doc-en-rust-32df046ed9398822e0a52403416862c280c77878d1c4aa46da47d15f25d4e043", "text": "fn visit_estr_uniq(&mut self) -> bool { true } fn visit_estr_slice(&mut self) -> bool { true } fn visit_estr_fixed(&mut self, _sz: uint, _sz: uint, _sz: uint, _sz2: uint, _align: uint) -> bool { true } fn visit_box(&mut self, _mtbl: uint, _inner: *TyDesc) -> bool { true }", "commid": "rust_pr_11333"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . 
Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-abe1f8d82e63f3230f48cf2b8b49040ac3b2df93dc9d1df00f735c9c4469e5e7", "text": "&substs[..] )); } DefiningTy::Const(ty) => { DefiningTy::Const(def_id, substs) => { err.note(&format!( \"defining type: {:?}\", ty \"defining constant type: {:?} with substs {:#?}\", def_id, &substs[..] )); } }", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). 
Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-11ca6763e7e4dd0f68516f4836e5245586ad766c258c17984ea0f58522514cfc", "text": "/// The MIR represents some form of constant. The signature then /// is that it has no inputs and a single return value, which is /// the value of the constant. 
Const(Ty<'tcx>), Const(DefId, &'tcx Substs<'tcx>), } #[derive(Debug)]", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. 
This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-cedd8f1d965016247bbf172aba191e26648da5a28c164eb03b17bfdc3ad159cc", "text": "/// see `DefiningTy` for details. fn defining_ty(&self) -> DefiningTy<'tcx> { let tcx = self.infcx.tcx; let closure_base_def_id = tcx.closure_base_def_id(self.mir_def_id); let defining_ty = if self.mir_def_id == closure_base_def_id { tcx.type_of(closure_base_def_id) } else { let tables = tcx.typeck_tables_of(self.mir_def_id); tables.node_id_to_type(self.mir_hir_id) }; let defining_ty = self.infcx .replace_free_regions_with_nll_infer_vars(FR, &defining_ty); match tcx.hir.body_owner_kind(self.mir_node_id) { BodyOwnerKind::Fn => match defining_ty.sty { ty::TyClosure(def_id, substs) => DefiningTy::Closure(def_id, substs), ty::TyGenerator(def_id, substs, interior) => { DefiningTy::Generator(def_id, substs, interior) BodyOwnerKind::Fn => { let defining_ty = if self.mir_def_id == closure_base_def_id { tcx.type_of(closure_base_def_id) } else { let tables = tcx.typeck_tables_of(self.mir_def_id); tables.node_id_to_type(self.mir_hir_id) }; let defining_ty = self.infcx .replace_free_regions_with_nll_infer_vars(FR, &defining_ty); match defining_ty.sty { ty::TyClosure(def_id, substs) => DefiningTy::Closure(def_id, substs), ty::TyGenerator(def_id, substs, interior) => { DefiningTy::Generator(def_id, substs, interior) } ty::TyFnDef(def_id, substs) => DefiningTy::FnDef(def_id, substs), _ => span_bug!( tcx.def_span(self.mir_def_id), \"expected defining type for `{:?}`: `{:?}`\", self.mir_def_id, defining_ty ), } ty::TyFnDef(def_id, substs) => DefiningTy::FnDef(def_id, substs), _ => span_bug!( tcx.def_span(self.mir_def_id), \"expected defining type for `{:?}`: `{:?}`\", self.mir_def_id, defining_ty ), }, BodyOwnerKind::Const | BodyOwnerKind::Static(..) => DefiningTy::Const(defining_ty), } BodyOwnerKind::Const | BodyOwnerKind::Static(..) 
=> { assert_eq!(closure_base_def_id, self.mir_def_id); let identity_substs = Substs::identity_for_item(tcx, closure_base_def_id); let substs = self.infcx .replace_free_regions_with_nll_infer_vars(FR, &identity_substs); DefiningTy::Const(self.mir_def_id, substs) } } }", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. 
It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-ab5394f6f844e8f0bb1918668550975fa12034384454bbf5a6919a3ef1bd21d5", "text": "substs.substs } DefiningTy::FnDef(_, substs) => substs, // When we encounter a constant body, just return whatever // substitutions are in scope for that constant. DefiningTy::Const(_) => { identity_substs } DefiningTy::FnDef(_, substs) | DefiningTy::Const(_, substs) => substs, }; let global_mapping = iter::once((gcx.types.re_static, fr_static));", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. 
That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-fbfc9c9274e50b04a09269c27af72437cc912cf11c0fafd6d9644b1e79949817", "text": "sig.inputs_and_output() } // For a constant body, there are no inputs, and one // \"output\" (the type of the constant). DefiningTy::Const(ty) => ty::Binder::dummy(tcx.mk_type_list(iter::once(ty))), DefiningTy::Const(def_id, _) => { // For a constant body, there are no inputs, and one // \"output\" (the type of the constant). assert_eq!(self.mir_def_id, def_id); let ty = tcx.type_of(def_id); let ty = indices.fold_to_region_vids(tcx, &ty); ty::Binder::dummy(tcx.mk_type_list(iter::once(ty))) } } }", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) 
is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-0c24c035f54cd0270ce632b5c82f3bd4a2fa478dd1570e806d1530afc4cd581e", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. // Test cases where we put various lifetime constraints on trait // associated constants. #![feature(rustc_attrs)] use std::option::Option; trait Anything<'a: 'b, 'b> { const AC: Option<&'b str>; } struct OKStruct { } impl<'a: 'b, 'b> Anything<'a, 'b> for OKStruct { const AC: Option<&'b str> = None; } struct FailStruct1 { } impl<'a: 'b, 'b, 'c> Anything<'a, 'b> for FailStruct1 { const AC: Option<&'c str> = None; //~^ ERROR: mismatched types } struct FailStruct2 { } impl<'a: 'b, 'b> Anything<'a, 'b> for FailStruct2 { const AC: Option<&'a str> = None; //~^ ERROR: mismatched types } fn main() {} ", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-4ccd0be9ba729ec7abadb8edae413d864de4067e302533f99eb7504d3fbf79a8", "query": "is probably the same issue.\nNot a dup of\nor rather, with the fix applied, I get another ICE =)\nThat new ICE seems similar to :)\nHmm, yes, but it's definitely not an issue with\nThis probably has something to do with the fact that consts use the identity substitutions when computing universal regions substitutions: since those are always going to be generating regions: I have yet to track down the code that should be resolving those early bounds and why it isn't doing so.\nHmm, you are in the right area of the code. But I think the problem runs just a touch deeper. Actually the comments here are not that great. I should do a pass to improve them. But let me try to first explain what's happening a bit and then how to fix it. The role of the enum (which should perhaps be renamed?) is to kind of capture the \"external signature\" for a given piece of MIR, and ideally in a way that identifies all of the \"degrees of freedom\" that the code has (and just those degrees of freedom). Based just on the defining-type, we should be able to identify the following information: The universally bound regions (i.e., lifetime parameters), both early and (where relevant) late bound The types of all the MIR inputs (expressed in terms of these universally bound regions) The types of the MIR output (expressed in terms of these universally bound regions) This is perhaps easiest to explain with a function item. Assume that we are working with the MIR for this function : In that case, the defining type will be the variant: You can see that it is actually sort of minimal: just a def-id and a substs array. In this case, the substs array will consist of three things, first a lifetime variable (representing ) and then two types (, as it happens): (This array gets created in ; this will basically replace the formal lifetime parameter of (represented as an ) with a fresh variable .) Given the array, we can easily compute the set of region variables representing universally bound regions in a function signature -- it's just the region variables appearing in the substs array. That's what we use to build the map in . Note that this substs array does not, for example, include the types of the arguments etc. That's because those types can be reconstructed using queries. For example, the function includes this code, which extracts the function signature (using the query) and then basically substitutes (the call has that effect) from the formal lifetime parameters: Part of the reason it is done this way is that then the types of both arguments ( and ) will reference the same lifetime (). 
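For a self-contained illustration of the constant case, here is the passing variant of the associated-const test quoted above, trimmed so it compiles on its own; the impl's 'a and 'b are the universal regions that the constant's type must be checked against, and the failing variants in that test differ only in which lifetime the constant's type names.

trait Anything<'a: 'b, 'b> {
    // The type of this associated constant mentions 'b from the trait header.
    const AC: Option<&'b str>;
}

struct OkStruct;

impl<'a: 'b, 'b> Anything<'a, 'b> for OkStruct {
    // OK: the constant's type uses 'b, matching the trait's declaration.
    const AC: Option<&'b str> = None;
}

fn main() {
    let c = <OkStruct as Anything<'static, 'static>>::AC;
    assert!(c.is_none());
}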
If, when creating the defining type, we had on something that contained the types of those arguments separately, then they would have been rewritten to use two different lifetime variables. This would have been too many \"degrees of freedom\". For constants, our defining-type variant is currently the type of the constant: This is actually wrong, and is roughly the equivalent of storing the types of the arguments directly in the . The fault arises from in -- it was written with functions and closures in mind. It fetches the \"type\" associated with the def-id and rewrites all the regions in it. This is ok for functions and closures because the type of a function or closure is already a kind of indirection; e.g., a is . We probably want to change that variant to something more like Fn: then we want to do is to refactor somewhat. I would expect it to look more like the following. This is the same as the existing code, just rearranged so that it only executes for : meanwhile, for and , we'll do something slightly different. We don't want to start by fetching , because that gets the type associated with the static. Instead, we'll do: Then we can change the other bits of code to match. For example: becomes When we wish to compute the \"return type\" of the closure, then we can do the query: becomes: I think that ought to work. cc", "positive_passages": [{"docid": "doc-en-rust-f1eee7ab3b4ebe8597f9879fedb6d14f2d3da9a641c2e16c1bea0259c6c1cc5e", "text": " error[E0308]: mismatched types --> $DIR/trait-associated-constant.rs:31:5 | 31 | const AC: Option<&'c str> = None; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime mismatch | = note: expected type `std::option::Option<&'b str>` found type `std::option::Option<&'c str>` note: the lifetime 'c as defined on the impl at 30:1... --> $DIR/trait-associated-constant.rs:30:1 | 30 | impl<'a: 'b, 'b, 'c> Anything<'a, 'b> for FailStruct1 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ note: ...does not necessarily outlive the lifetime 'b as defined on the impl at 30:1 --> $DIR/trait-associated-constant.rs:30:1 | 30 | impl<'a: 'b, 'b, 'c> Anything<'a, 'b> for FailStruct1 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error[E0308]: mismatched types --> $DIR/trait-associated-constant.rs:38:5 | 38 | const AC: Option<&'a str> = None; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ lifetime mismatch | = note: expected type `std::option::Option<&'b str>` found type `std::option::Option<&'a str>` note: the lifetime 'a as defined on the impl at 37:1... --> $DIR/trait-associated-constant.rs:37:1 | 37 | impl<'a: 'b, 'b> Anything<'a, 'b> for FailStruct2 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ note: ...does not necessarily outlive the lifetime 'b as defined on the impl at 37:1 --> $DIR/trait-associated-constant.rs:37:1 | 37 | impl<'a: 'b, 'b> Anything<'a, 'b> for FailStruct2 { | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ error: aborting due to 2 previous errors ", "commid": "rust_pr_47957"}], "negative_passages": []} {"query_id": "q-en-rust-14619b0298e375a18f7e233cd65e1cf24f8130d969609168a70256fd6bbe212f", "query": "The code around opt-levels indicates only nightly can use or . 
Non-nightly s should not even report those values as possible options.", "positive_passages": [{"docid": "doc-en-rust-e5e9c71310a95f763f2f0377e27cc3dc83c21c9ae2c013f5eb7ae4bbb102c63e", "text": "} OptLevel::Default } else { match ( cg.opt_level.as_ref().map(String::as_ref), nightly_options::is_nightly_build(), ) { (None, _) => OptLevel::No, (Some(\"0\"), _) => OptLevel::No, (Some(\"1\"), _) => OptLevel::Less, (Some(\"2\"), _) => OptLevel::Default, (Some(\"3\"), _) => OptLevel::Aggressive, (Some(\"s\"), true) => OptLevel::Size, (Some(\"z\"), true) => OptLevel::SizeMin, (Some(\"s\"), false) | (Some(\"z\"), false) => { early_error( error_format, &format!( \"the optimizations s or z are only accepted on the nightly compiler\" ), ); } (Some(arg), _) => { match cg.opt_level.as_ref().map(String::as_ref) { None => OptLevel::No, Some(\"0\") => OptLevel::No, Some(\"1\") => OptLevel::Less, Some(\"2\") => OptLevel::Default, Some(\"3\") => OptLevel::Aggressive, Some(\"s\") => OptLevel::Size, Some(\"z\") => OptLevel::SizeMin, Some(arg) => { early_error( error_format, &format!(", "commid": "rust_pr_50265"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-3559168aea93e302adde5406e73f68afaeaf0bb160ec90cd850b417eb01f1e03", "text": "pub use run::{cmd, run, run_fail, run_with_args}; /// Helpers for checking target information. pub use targets::{is_darwin, is_msvc, is_windows, llvm_components_contain, target, uname}; pub use targets::{is_darwin, is_msvc, is_windows, llvm_components_contain, target, uname, apple_os}; /// Helpers for building names of output artifacts that are potentially target-specific. pub use artifact_names::{", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-8aedff545db014c3ef22da1aa04c708a69c31352ac18e2e6f0e1def42499d373", "text": "target().contains(\"darwin\") } /// Get the target OS on Apple operating systems. 
#[must_use] pub fn apple_os() -> &'static str { if target().contains(\"darwin\") { \"macos\" } else if target().contains(\"ios\") { \"ios\" } else if target().contains(\"tvos\") { \"tvos\" } else if target().contains(\"watchos\") { \"watchos\" } else if target().contains(\"visionos\") { \"visionos\" } else { panic!(\"not an Apple OS\") } } /// Check if `component` is within `LLVM_COMPONENTS` #[must_use] pub fn llvm_components_contain(component: &str) -> bool {", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-526637e713067f40f09a494a041d65ac005826783c4537e5b116d11878fa1343", "text": "run-make/issue-84395-lto-embed-bitcode/Makefile run-make/jobserver-error/Makefile run-make/libs-through-symlinks/Makefile run-make/macos-deployment-target/Makefile run-make/split-debuginfo/Makefile run-make/symbol-mangling-hashed/Makefile run-make/translation/Makefile", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-3b96216a4f07be510bdf607eb604a6e4d81c808a73f62dd025ac56efd69e276d", "text": " //! Test codegen when setting deployment targets on Apple platforms. //! //! This is important since its a compatibility hazard. The linker will //! generate load commands differently based on what minimum OS it can assume. //! //! See https://github.com/rust-lang/rust/pull/105123. //@ only-apple use run_make_support::{apple_os, cmd, run_in_tmpdir, rustc, target}; /// Run vtool to check the `minos` field in LC_BUILD_VERSION. /// /// On lower deployment targets, LC_VERSION_MIN_MACOSX, LC_VERSION_MIN_IPHONEOS and similar /// are used instead of LC_BUILD_VERSION - these have a `version` field, so also check that. #[track_caller] fn minos(file: &str, version: &str) { cmd(\"vtool\") .arg(\"-show-build\") .arg(file) .run() .assert_stdout_contains_regex(format!(\"(minos|version) {version}\")); } fn main() { // These versions should generally be higher than the default versions let (env_var, example_version, higher_example_version) = match apple_os() { \"macos\" => (\"MACOSX_DEPLOYMENT_TARGET\", \"12.0\", \"13.0\"), // armv7s-apple-ios and i386-apple-ios only supports iOS 10.0 \"ios\" if target() == \"armv7s-apple-ios\" || target() == \"i386-apple-ios\" => { (\"IPHONEOS_DEPLOYMENT_TARGET\", \"10.0\", \"10.0\") } \"ios\" => (\"IPHONEOS_DEPLOYMENT_TARGET\", \"15.0\", \"16.0\"), \"watchos\" => (\"WATCHOS_DEPLOYMENT_TARGET\", \"7.0\", \"9.0\"), \"tvos\" => (\"TVOS_DEPLOYMENT_TARGET\", \"14.0\", \"15.0\"), \"visionos\" => (\"XROS_DEPLOYMENT_TARGET\", \"1.1\", \"1.2\"), _ => unreachable!(), }; let default_version = rustc().target(target()).env_remove(env_var).print(\"deployment-target\").run().stdout_utf8(); let default_version = default_version.strip_prefix(\"deployment_target=\").unwrap().trim(); // Test that version makes it to the object file. 
run_in_tmpdir(|| { let rustc = || { let mut rustc = rustc(); rustc.target(target()); rustc.crate_type(\"lib\"); rustc.emit(\"obj\"); rustc.input(\"foo.rs\"); rustc.output(\"foo.o\"); rustc }; rustc().env(env_var, example_version).run(); minos(\"foo.o\", example_version); // FIXME(madsmtm): Doesn't work on Mac Catalyst and the simulator. if !target().contains(\"macabi\") && !target().contains(\"sim\") { rustc().env_remove(env_var).run(); minos(\"foo.o\", default_version); } }); // Test that version makes it to the linker when linking dylibs. run_in_tmpdir(|| { // Certain watchOS targets don't support dynamic linking, so we disable the test on those. if apple_os() == \"watchos\" { return; } let rustc = || { let mut rustc = rustc(); rustc.target(target()); rustc.crate_type(\"dylib\"); rustc.input(\"foo.rs\"); rustc.output(\"libfoo.dylib\"); rustc }; rustc().env(env_var, example_version).run(); minos(\"libfoo.dylib\", example_version); // FIXME(madsmtm): Deployment target is not currently passed properly to linker // rustc().env_remove(env_var).run(); // minos(\"libfoo.dylib\", default_version); // Test with ld64 instead rustc().arg(\"-Clinker-flavor=ld\").env(env_var, example_version).run(); minos(\"libfoo.dylib\", example_version); rustc().arg(\"-Clinker-flavor=ld\").env_remove(env_var).run(); minos(\"libfoo.dylib\", default_version); }); // Test that version makes it to the linker when linking executables. run_in_tmpdir(|| { let rustc = || { let mut rustc = rustc(); rustc.target(target()); rustc.crate_type(\"bin\"); rustc.input(\"foo.rs\"); rustc.output(\"foo\"); rustc }; // FIXME(madsmtm): Doesn't work on watchOS for some reason? if !target().contains(\"watchos\") { rustc().env(env_var, example_version).run(); minos(\"foo\", example_version); // FIXME(madsmtm): Deployment target is not currently passed properly to linker // rustc().env_remove(env_var).run(); // minos(\"foo\", default_version); } // Test with ld64 instead rustc().arg(\"-Clinker-flavor=ld\").env(env_var, example_version).run(); minos(\"foo\", example_version); rustc().arg(\"-Clinker-flavor=ld\").env_remove(env_var).run(); minos(\"foo\", default_version); }); // Test that changing the deployment target busts the incremental cache. run_in_tmpdir(|| { let rustc = || { let mut rustc = rustc(); rustc.target(target()); rustc.incremental(\"incremental\"); rustc.crate_type(\"lib\"); rustc.emit(\"obj\"); rustc.input(\"foo.rs\"); rustc.output(\"foo.o\"); rustc }; // FIXME(madsmtm): Incremental cache is not yet busted // https://github.com/rust-lang/rust/issues/118204 let higher_example_version = example_version; let default_version = example_version; rustc().env(env_var, example_version).run(); minos(\"foo.o\", example_version); rustc().env(env_var, higher_example_version).run(); minos(\"foo.o\", higher_example_version); // FIXME(madsmtm): Doesn't work on Mac Catalyst and the simulator. if !target().contains(\"macabi\") && !target().contains(\"sim\") { rustc().env_remove(env_var).run(); minos(\"foo.o\", default_version); } }); } ", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-6e80604eeae16ad6a8bca2c1f7a11ed8048968f8bd616173e2f7c0ddd0983016", "text": " # only-macos # # Check that a set deployment target actually makes it to the linker. # This is important since its a compatibility hazard. 
The linker will # generate load commands differently based on what minimum OS it can assume. include ../tools.mk ifeq ($(strip $(shell uname -m)),arm64) GREP_PATTERN = \"minos 11.0\" else GREP_PATTERN = \"version 10.13\" endif OUT_FILE=$(TMPDIR)/with_deployment_target.dylib all: env MACOSX_DEPLOYMENT_TARGET=10.13 $(RUSTC) with_deployment_target.rs -o $(OUT_FILE) # XXX: The check is for either the x86_64 minimum OR the aarch64 minimum (M1 starts at macOS 11). # They also use different load commands, so we let that change with each too. The aarch64 check # isn't as robust as the x86 one, but testing both seems unneeded. vtool -show-build $(OUT_FILE) | $(CGREP) -e $(GREP_PATTERN) ", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f561f9a368ebc48351bc3b90667d41e2188ab4af4fcf22830501876a8a71001a", "query": "Repro: The same happens for 1.23-stable.\ndoes this still reproduce?\nno repro for .", "positive_passages": [{"docid": "doc-en-rust-eea61b8e439907a97a1bf007dee4f6dfcf8c6cbb9386426e0af52d7750102253", "text": " #![crate_type = \"cdylib\"] #[allow(dead_code)] fn something_and_nothing() {} ", "commid": "rust_pr_130068"}], "negative_passages": []} {"query_id": "q-en-rust-f6350be142060682326dc17e8b230c8f7f942c78aa699579ba4514e201f5eeac", "query": "is implemented for Range step_impl_unsigned!(usize u8 u16 u32); step_impl_signed!([isize: usize] [i8: u8] [i16: u16] [i32: u32]); step_impl_unsigned!(usize u8 u16); #[cfg(not(target_pointer_witdth = \"16\"))] step_impl_unsigned!(u32); #[cfg(target_pointer_witdth = \"16\")] step_impl_no_between!(u32); step_impl_signed!([isize: usize] [i8: u8] [i16: u16]); #[cfg(not(target_pointer_witdth = \"16\"))] step_impl_signed!([i32: u32]); #[cfg(target_pointer_witdth = \"16\")] step_impl_no_between!(i32); #[cfg(target_pointer_width = \"64\")] step_impl_unsigned!(u64); #[cfg(target_pointer_width = \"64\")]", "commid": "rust_pr_53755"}], "negative_passages": []} {"query_id": "q-en-rust-f6350be142060682326dc17e8b230c8f7f942c78aa699579ba4514e201f5eeac", "query": "is implemented for Range // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(step_trait)] use std::iter::Step; #[cfg(target_pointer_width = \"16\")] fn main() { assert!(Step::steps_between(&0u32, &::std::u32::MAX).is_none()); } #[cfg(any(target_pointer_width = \"32\", target_pointer_width = \"64\"))] fn main() { assert!(Step::steps_between(&0u32, &::std::u32::MAX).is_some()); } ", "commid": "rust_pr_53755"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? 
//cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-f639295df9bd378e7048a0c34e33364f85360f91286c169e7d788fff75bccade", "text": "cfg.file(\"../rustllvm/PassWrapper.cpp\") .file(\"../rustllvm/RustWrapper.cpp\") .file(\"../rustllvm/ArchiveWrapper.cpp\") .file(\"../rustllvm/Linker.cpp\") .cpp(true) .cpp_link_stdlib(None) // we handle this below .compile(\"rustllvm\");", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-e92f6773c938b7f151b8f7097af8ee95d360c08d86c5b4163ae4c599039588bf", "text": "#[allow(missing_copy_implementations)] pub enum OperandBundleDef_opaque {} pub type OperandBundleDefRef = *mut OperandBundleDef_opaque; #[allow(missing_copy_implementations)] pub enum Linker_opaque {} pub type LinkerRef = *mut Linker_opaque; pub type DiagnosticHandler = unsafe extern \"C\" fn(DiagnosticInfoRef, *mut c_void); pub type InlineAsmDiagHandler = unsafe extern \"C\" fn(SMDiagnosticRef, *const c_void, c_uint);", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. 
link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-d62d16b9035a2079a4029414dfae1dd512cf94c42945ef0ff5b76ff998e37662", "text": "pub fn LLVMRustPrintPasses(); pub fn LLVMRustSetNormalizedTarget(M: ModuleRef, triple: *const c_char); pub fn LLVMRustAddAlwaysInlinePass(P: PassManagerBuilderRef, AddLifetimes: bool); pub fn LLVMRustLinkInExternalBitcode(M: ModuleRef, bc: *const c_char, len: size_t) -> bool; pub fn LLVMRustRunRestrictionPass(M: ModuleRef, syms: *const *const c_char, len: size_t); pub fn LLVMRustMarkAllFunctionsNounwind(M: ModuleRef);", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. 
:/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-ae1b295185e2054c7d1a5275e5347a1773c644485556cfc458cdf9f5ebb86c03", "text": "CU2: *mut *mut c_void); pub fn LLVMRustThinLTOPatchDICompileUnit(M: ModuleRef, CU: *mut c_void); pub fn LLVMRustThinLTORemoveAvailableExternally(M: ModuleRef); pub fn LLVMRustLinkerNew(M: ModuleRef) -> LinkerRef; pub fn LLVMRustLinkerAdd(linker: LinkerRef, bytecode: *const c_char, bytecode_len: usize) -> bool; pub fn LLVMRustLinkerFree(linker: LinkerRef); }", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-ddc9a7ca9f860b8334becadfc7c187c3d0a5e56291d918fb0e8eff2c6fab2cee", "text": "// know much about the memory management here so we err on the side of being // save and persist everything with the original module. let mut serialized_bitcode = Vec::new(); let mut linker = Linker::new(llmod); for (bc_decoded, name) in serialized_modules { info!(\"linking {:?}\", name); time(cgcx.time_passes, &format!(\"ll link {:?}\", name), || unsafe { time(cgcx.time_passes, &format!(\"ll link {:?}\", name), || { let data = bc_decoded.data(); if llvm::LLVMRustLinkInExternalBitcode(llmod, data.as_ptr() as *const libc::c_char, data.len() as libc::size_t) { Ok(()) } else { linker.add(&data).map_err(|()| { let msg = format!(\"failed to load bc of {:?}\", name); Err(write::llvm_err(&diag_handler, msg)) } write::llvm_err(&diag_handler, msg) }) })?; timeline.record(&format!(\"link {:?}\", name)); serialized_bitcode.push(bc_decoded); } drop(linker); cgcx.save_temp_bitcode(&module, \"lto.input\"); // Internalize everything that *isn't* in our whitelist to help strip out", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. 
link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-677854bd1a48e183a3f0017d24fc50ea5cc3c6912a4c9ca1b57bdc94009e63b8", "text": "}]) } struct Linker(llvm::LinkerRef); impl Linker { fn new(llmod: ModuleRef) -> Linker { unsafe { Linker(llvm::LLVMRustLinkerNew(llmod)) } } fn add(&mut self, bytecode: &[u8]) -> Result<(), ()> { unsafe { if llvm::LLVMRustLinkerAdd(self.0, bytecode.as_ptr() as *const libc::c_char, bytecode.len()) { Ok(()) } else { Err(()) } } } } impl Drop for Linker { fn drop(&mut self) { unsafe { llvm::LLVMRustLinkerFree(self.0); } } } /// Prepare \"thin\" LTO to get run on these modules. /// /// The general structure of ThinLTO is quite different from the structure of", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. :/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-0c3ee09cb802fcb50328876b81c79d0d7478ff2073f9856f8e409e3f9f424355", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. 
This file may not be copied, modified, or distributed // except according to those terms. #include \"llvm/Linker/Linker.h\" #include \"rustllvm.h\" using namespace llvm; struct RustLinker { Linker L; LLVMContext &Ctx; RustLinker(Module &M) : L(M), Ctx(M.getContext()) {} }; extern \"C\" RustLinker* LLVMRustLinkerNew(LLVMModuleRef DstRef) { Module *Dst = unwrap(DstRef); auto Ret = llvm::make_unique(*Dst); return Ret.release(); } extern \"C\" void LLVMRustLinkerFree(RustLinker *L) { delete L; } extern \"C\" bool LLVMRustLinkerAdd(RustLinker *L, char *BC, size_t Len) { std::unique_ptr Buf = MemoryBuffer::getMemBufferCopy(StringRef(BC, Len)); #if LLVM_VERSION_GE(4, 0) Expected> SrcOrError = llvm::getLazyBitcodeModule(Buf->getMemBufferRef(), L->Ctx); if (!SrcOrError) { LLVMRustSetLastError(toString(SrcOrError.takeError()).c_str()); return false; } auto Src = std::move(*SrcOrError); #else ErrorOr> Src = llvm::getLazyBitcodeModule(std::move(Buf), L->Ctx); if (!Src) { LLVMRustSetLastError(Src.getError().message().c_str()); return false; } #endif #if LLVM_VERSION_GE(4, 0) if (L->L.linkInModule(std::move(Src))) { #else if (L->L.linkInModule(std::move(Src.get()))) { #endif LLVMRustSetLastError(\"\"); return false; } return true; } ", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-50f374e996074db370f2382a49db413c72a0c93c188c71cffb875e937714f7bb", "query": "I noticed my project took noticeably longer to build with \"lto=true\" at some point. I benchmarked link time (touch ; cargo build --release) which appear to blow up as soon as we have several codegen units. link times: So it seem codegen-units=1 makes for bad parallelism while building but speeds up lto because less translation units need to be merged. Maybe it is possible to dynamically use as little codegen units as possible (but still enough to utilize all cores) when building? //cc\nOr maybe just don't use multiple codegen units by default when monolithic LTO is enabled.\nI think the optimal \"balance\" for linktime/buildtime with monolithic lto would be to 1) compile all crates with codgen-units=1 as long as we can compile several crates in parallel. 2) if we have to compile a single crate at a time, use ${thread_number} codegen units. However this will probably introduce indeterministic builds depending on the number of threads a host machine has. So a single core machine would optimize better than a 8 core machine when building the same crate with monolithic lto. 
:/\nI made some benchmarks (cpu: i5-2540M dual core), rustc 1.25.0-nightly ( 2018-02-11) `\nI believe this is fixed in\ner, didn't mean to close yet", "positive_passages": [{"docid": "doc-en-rust-19534572d4ab632138e20861f3ff6d2fb0a77df40f4a045e67efba8f1229c845", "text": "} } extern \"C\" bool LLVMRustLinkInExternalBitcode(LLVMModuleRef DstRef, char *BC, size_t Len) { Module *Dst = unwrap(DstRef); std::unique_ptr Buf = MemoryBuffer::getMemBufferCopy(StringRef(BC, Len)); #if LLVM_VERSION_GE(4, 0) Expected> SrcOrError = llvm::getLazyBitcodeModule(Buf->getMemBufferRef(), Dst->getContext()); if (!SrcOrError) { LLVMRustSetLastError(toString(SrcOrError.takeError()).c_str()); return false; } auto Src = std::move(*SrcOrError); #else ErrorOr> Src = llvm::getLazyBitcodeModule(std::move(Buf), Dst->getContext()); if (!Src) { LLVMRustSetLastError(Src.getError().message().c_str()); return false; } #endif std::string Err; raw_string_ostream Stream(Err); DiagnosticPrinterRawOStream DP(Stream); #if LLVM_VERSION_GE(4, 0) if (Linker::linkModules(*Dst, std::move(Src))) { #else if (Linker::linkModules(*Dst, std::move(Src.get()))) { #endif LLVMRustSetLastError(Err.c_str()); return false; } return true; } // Note that the two following functions look quite similar to the // LLVMGetSectionName function. Sadly, it appears that this function only // returns a char* pointer, which isn't guaranteed to be null-terminated. The", "commid": "rust_pr_48163"}], "negative_passages": []} {"query_id": "q-en-rust-7e992158fe76383c31be7124b9be432ac0e868f9d7c73ce3004e8939764f7c92", "query": "In , the NLL checker incorrectly judges to be borrowed more than once: I get this error: This works without NLL. This was found while trying to bootstrap rustc with NLL enabled, where it manifests as:\ncc\nThis is a case where our debug flags are very helpful for isolating the cause of a bug like this. Step 1: Remove from the example. Step 2: Incrementally work way to recreating effect of ; my hypothesis is (well, was) that this bug was injected by either mir-borrowck or, somehow, by NLL itself... lets test that hypothesis: I was scratching my head at this point... and then it hit me:\nah, good catch! I guess this is because those mutable borrows are \"activating\" at this later point? Fascinating.\nI'm going to take a look at this bug.\nHere's the MIR dump I managed to extract: // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // run-pass // revisions: lxl nll #![cfg_attr(nll, feature(nll))] struct Foo { x: u32 } impl Foo { fn twiddle(&mut self) -> &mut Self { self } fn twaddle(&mut self) -> &mut Self { self } fn emit(&mut self) { self.x += 1; } } fn main() { let mut foo = Foo { x: 0 }; match 22 { 22 => &mut foo, 44 => foo.twiddle(), _ => foo.twaddle(), }.emit(); } ", "commid": "rust_pr_48988"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . 
The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-c17c4568f9974a9c03328f640523a5e783562b3c966b102b31693d909e758eb1", "text": "FileType(self.0.file_type()) } /// Returns whether this metadata is for a directory. /// Returns whether this metadata is for a directory. The /// result is mutually exclusive to the result of /// [`is_file`], and will be false for symlink metadata /// obtained from [`symlink_metadata`]. /// /// [`is_file`]: struct.Metadata.html#method.is_file /// [`symlink_metadata`]: fn.symlink_metadata.html /// /// # Examples ///", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-4b225942ba6f6ef447ab2c46650a514217443589412302e9d1c783d6f08c0b90", "text": "#[stable(feature = \"rust1\", since = \"1.0.0\")] pub fn is_dir(&self) -> bool { self.file_type().is_dir() } /// Returns whether this metadata is for a regular file. /// Returns whether this metadata is for a regular file. The /// result is mutually exclusive to the result of /// [`is_dir`], and will be false for symlink metadata /// obtained from [`symlink_metadata`]. /// /// [`is_dir`]: struct.Metadata.html#method.is_dir /// [`symlink_metadata`]: fn.symlink_metadata.html /// /// # Examples ///", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . 
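A sketch of the sort of little test program mentioned above (my own Unix-only illustration; the temp-directory layout and file names are invented for the example). It contrasts `fs::metadata`, which follows symlinks, with `fs::symlink_metadata`, which reports on the link itself:

```rust
use std::fs::{self, File};
use std::os::unix::fs::symlink;

fn main() -> std::io::Result<()> {
    // Hypothetical scratch layout for the demo.
    let dir = std::env::temp_dir().join("filetype-demo");
    fs::create_dir_all(&dir)?;
    let target = dir.join("file.txt");
    let link = dir.join("link.txt");
    File::create(&target)?;
    // Ignore the error if the symlink is left over from a previous run.
    let _ = symlink(&target, &link);

    // metadata() follows the link, so the result looks like a regular file.
    let followed = fs::metadata(&link)?.file_type();
    assert!(followed.is_file() && !followed.is_symlink());

    // symlink_metadata() describes the link itself; is_dir/is_file/is_symlink
    // are mutually exclusive, so only is_symlink() is true here.
    let not_followed = fs::symlink_metadata(&link)?.file_type();
    assert!(not_followed.is_symlink() && !not_followed.is_file() && !not_followed.is_dir());

    Ok(())
}
```

Only one of `is_dir`, `is_file`, `is_symlink` is ever true for a given `FileType`, which is exactly the guarantee the documentation change spells out.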
The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-75391dfc100be62894a8d8ed6ba5006fd5f99155c8116acf48ebf736af74fe60", "text": "} impl FileType { /// Test whether this file type represents a directory. /// Test whether this file type represents a directory. The /// result is mutually exclusive to the results of /// [`is_file`] and [`is_symlink`]; only zero or one of these /// tests may pass. /// /// [`is_file`]: struct.FileType.html#method.is_file /// [`is_symlink`]: struct.FileType.html#method.is_symlink /// /// # Examples ///", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-8b68e3ab86d85198afd156d6f26ddb2983d40b93433b1a06c7ffe06a0ad45edf", "text": "pub fn is_dir(&self) -> bool { self.0.is_dir() } /// Test whether this file type represents a regular file. /// The result is mutually exclusive to the results of /// [`is_dir`] and [`is_symlink`]; only zero or one of these /// tests may pass. /// /// [`is_dir`]: struct.FileType.html#method.is_dir /// [`is_symlink`]: struct.FileType.html#method.is_symlink /// /// # Examples ///", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. 
Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-288de0beb712496a5ce473b849eb6e23b69a65e5183ee4df2ad17773e1743d87", "text": "pub fn is_file(&self) -> bool { self.0.is_file() } /// Test whether this file type represents a symbolic link. /// The result is mutually exclusive to the results of /// [`is_dir`] and [`is_file`]; only zero or one of these /// tests may pass. /// /// The underlying [`Metadata`] struct needs to be retrieved /// with the [`fs::symlink_metadata`] function and not the", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-841c374c26ec0bdd1a586472fd0ae2291029983d6d556f555f731cf4a1e9a2ad", "query": "should document that , , and are all mutually exclusive. (and that it may also be the case that none of them are true) Why? Because returns true for both files and symlinks to files, and this tends to make me paranoid that a will share similar characteristics and perhaps claim that both and . The only way to convince myself that FileType's behavior is indeed sane is to write a little test program---again. Surely I can't be alone in this.\nis calling not (and equivalent on Windows), so if the path is a symlink, then it returns the results based on what the symlink points to, not the symlink itself. So respects the same mutual exclusivity rules as . But yes, this ought to be documented, especially given that the mutually exclusive behavior may be surprising to Windows developers.\nSimilar goes for , which should document that these are always false for a obtained by calling on a symlink.\nWell you can always call on something that is not a symlink, in which case or could be true.", "positive_passages": [{"docid": "doc-en-rust-840a775eacf6eba96a10eadd30e82887dc901ea7fe96b648abd39ef50641f1db", "text": "/// [`Metadata`]: struct.Metadata.html /// [`fs::metadata`]: fn.metadata.html /// [`fs::symlink_metadata`]: fn.symlink_metadata.html /// [`is_dir`]: struct.FileType.html#method.is_dir /// [`is_file`]: struct.FileType.html#method.is_file /// [`is_symlink`]: struct.FileType.html#method.is_symlink /// /// # Examples", "commid": "rust_pr_49076"}], "negative_passages": []} {"query_id": "q-en-rust-f6e6ce633b62b4f539ddd22aaed994a5a47677c750da151c2f4b9b3cd14dce6c", "query": "This ICEs with 'tyregion() invoked on in appropriate ty: tybot' from Full output from : Sorry for making the poor compiler throw up with my bad code. :(\n\"Sorry for putting in effort to make your compiler better with no personal gain\", you mean! :-D\nCan't reproduce this anymore.", "positive_passages": [{"docid": "doc-en-rust-379ad63aafec8abcf1b597422071c535ed2823557aa4180060cac8b1cc0bd165", "text": "#[lang=\"return_to_mut\"] #[inline(always)] pub unsafe fn return_to_mut(a: *u8) { let a: *mut BoxRepr = transmute(a); (*a).header.ref_count &= !FROZEN_BIT; // Sometimes the box is null, if it is conditionally frozen. // See e.g. #4904. 
if !a.is_null() { let a: *mut BoxRepr = transmute(a); (*a).header.ref_count &= !FROZEN_BIT; } } #[lang=\"check_not_borrowed\"]", "commid": "rust_pr_5556"}], "negative_passages": []} {"query_id": "q-en-rust-63fc269087982e056aa6859300bf71c1cb7dfeb09caf01defe161c8f7af225fc", "query": "Tracking issue for , and . Steps: [X] Implementation [X] Stabilization PR () Unresolved questions: [X] What is a good name? ? ?\ncc", "positive_passages": [{"docid": "doc-en-rust-f32d436173e3aac9fedf0058d71da08bc8693a2e16f37b98d275f99e45efc1d2", "text": "/// # Examples /// /// ``` /// #![feature(path_ancestors)] /// /// use std::path::Path; /// /// let path = Path::new(\"/foo/bar\");", "commid": "rust_pr_50894"}], "negative_passages": []} {"query_id": "q-en-rust-63fc269087982e056aa6859300bf71c1cb7dfeb09caf01defe161c8f7af225fc", "query": "Tracking issue for , and . Steps: [X] Implementation [X] Stabilization PR () Unresolved questions: [X] What is a good name? ? ?\ncc", "positive_passages": [{"docid": "doc-en-rust-c12687901c4f1c3f5a9627fb2d1f0317054d75b36104c6f241ea37b3713df337", "text": "/// [`ancestors`]: struct.Path.html#method.ancestors /// [`Path`]: struct.Path.html #[derive(Copy, Clone, Debug)] #[unstable(feature = \"path_ancestors\", issue = \"48581\")] #[stable(feature = \"path_ancestors\", since = \"1.28.0\")] pub struct Ancestors<'a> { next: Option<&'a Path>, } #[unstable(feature = \"path_ancestors\", issue = \"48581\")] #[stable(feature = \"path_ancestors\", since = \"1.28.0\")] impl<'a> Iterator for Ancestors<'a> { type Item = &'a Path;", "commid": "rust_pr_50894"}], "negative_passages": []} {"query_id": "q-en-rust-63fc269087982e056aa6859300bf71c1cb7dfeb09caf01defe161c8f7af225fc", "query": "Tracking issue for , and . Steps: [X] Implementation [X] Stabilization PR () Unresolved questions: [X] What is a good name? ? ?\ncc", "positive_passages": [{"docid": "doc-en-rust-02f1ca640ca29f1a01bcdb1f9b86d136662247bd7e71d8039d14b263dc6a4d5e", "text": "} } #[unstable(feature = \"path_ancestors\", issue = \"48581\")] #[stable(feature = \"path_ancestors\", since = \"1.28.0\")] impl<'a> FusedIterator for Ancestors<'a> {} ////////////////////////////////////////////////////////////////////////////////", "commid": "rust_pr_50894"}], "negative_passages": []} {"query_id": "q-en-rust-63fc269087982e056aa6859300bf71c1cb7dfeb09caf01defe161c8f7af225fc", "query": "Tracking issue for , and . Steps: [X] Implementation [X] Stabilization PR () Unresolved questions: [X] What is a good name? ? ?\ncc", "positive_passages": [{"docid": "doc-en-rust-f563d20e06d48655a156e277e71359ee00f116779aea5fbe553ae55b50f7189b", "text": "/// # Examples /// /// ``` /// #![feature(path_ancestors)] /// /// use std::path::Path; /// /// let mut ancestors = Path::new(\"/foo/bar\").ancestors();", "commid": "rust_pr_50894"}], "negative_passages": []} {"query_id": "q-en-rust-63fc269087982e056aa6859300bf71c1cb7dfeb09caf01defe161c8f7af225fc", "query": "Tracking issue for , and . Steps: [X] Implementation [X] Stabilization PR () Unresolved questions: [X] What is a good name? ? 
?\ncc", "positive_passages": [{"docid": "doc-en-rust-087185a6b180bed7c52bc20838b4182eeb9380771116efab8d35cdc99aeebf13", "text": "/// /// [`None`]: ../../std/option/enum.Option.html#variant.None /// [`parent`]: struct.Path.html#method.parent #[unstable(feature = \"path_ancestors\", issue = \"48581\")] #[stable(feature = \"path_ancestors\", since = \"1.28.0\")] pub fn ancestors(&self) -> Ancestors { Ancestors { next: Some(&self),", "commid": "rust_pr_50894"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-ba999c0490d8d05204546fec4646a8861536acffd9c5d7b9bf3e8b36e7feee48", "text": "use ich::{self, CachingCodemapView, Fingerprint}; use middle::cstore::CrateStore; use ty::{TyCtxt, fast_reject}; use mir::interpret::AllocId; use session::Session; use std::cmp::Ord;", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-7faaf72f7626e142611516858132d8d2b8944556324a8f18e59f29db686e89cc", "text": "// CachingCodemapView, so we initialize it lazily. raw_codemap: &'a CodeMap, caching_codemap: Option>, pub(super) alloc_id_recursion_tracker: FxHashSet, } #[derive(PartialEq, Eq, Clone, Copy)]", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-0cc14e5b5e31ab7300b8ef6896a27113870058906259285f7361099d1f2221eb", "text": "hash_spans: hash_spans_initial, hash_bodies: true, node_id_hashing_mode: NodeIdHashingMode::HashDefPath, alloc_id_recursion_tracker: Default::default(), } }", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-ddecf83d0e8f32f368a93bd9390748a36681fddbfc7b55fcaf2cacff3e2edeb3", "text": "}); enum AllocDiscriminant { Static, Constant, Alloc, ExternStatic, Function, } impl_stable_hash_for!(enum self::AllocDiscriminant { Static, Constant, Alloc, ExternStatic, Function });", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily 
worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-81cc0a58b59bda2c713c2c7504eacc52bdd2584f5bbe139d6adacbf72330def2", "text": ") { ty::tls::with_opt(|tcx| { let tcx = tcx.expect(\"can't hash AllocIds during hir lowering\"); if let Some(def_id) = tcx.interpret_interner.get_corresponding_static_def_id(*self) { AllocDiscriminant::Static.hash_stable(hcx, hasher); // statics are unique via their DefId def_id.hash_stable(hcx, hasher); } else if let Some(alloc) = tcx.interpret_interner.get_alloc(*self) { // not a static, can't be recursive, hash the allocation AllocDiscriminant::Constant.hash_stable(hcx, hasher); alloc.hash_stable(hcx, hasher); if let Some(alloc) = tcx.interpret_interner.get_alloc(*self) { AllocDiscriminant::Alloc.hash_stable(hcx, hasher); if !hcx.alloc_id_recursion_tracker.insert(*self) { tcx .interpret_interner .get_corresponding_static_def_id(*self) .hash_stable(hcx, hasher); alloc.hash_stable(hcx, hasher); assert!(hcx.alloc_id_recursion_tracker.remove(self)); } } else if let Some(inst) = tcx.interpret_interner.get_fn(*self) { AllocDiscriminant::Function.hash_stable(hcx, hasher); inst.hash_stable(hcx, hasher); } else if let Some(def_id) = tcx.interpret_interner .get_corresponding_static_def_id(*self) { AllocDiscriminant::ExternStatic.hash_stable(hcx, hasher); def_id.hash_stable(hcx, hasher); } else { bug!(\"no allocation for {}\", self); }", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-00998d774fb97745658b898c796e3040c0717928df905b6306a209bb71caed5f", "query": "The following crash occurs during incremental compilation of the rust-url crate: https://travis- cc\nminimal repro:\nremarked on IRC that this might be temporarily worked around by not caching upstream constants.\ntriage: P-high Regression.", "positive_passages": [{"docid": "doc-en-rust-6260ae088883d08c6d357991a7ebe6ab1b15c8d1fe94b671df09c969080f8fc3", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // https://github.com/rust-lang/rust/issues/49081 // revisions:rpass1 rpass2 #[cfg(rpass1)] pub static A: &str = \"hello\"; #[cfg(rpass2)] pub static A: &str = \"xxxxx\"; #[cfg(rpass1)] fn main() { assert_eq!(A, \"hello\"); } #[cfg(rpass2)] fn main() { assert_eq!(A, \"xxxxx\"); } ", "commid": "rust_pr_49424"}], "negative_passages": []} {"query_id": "q-en-rust-c0fbd301bc4f1391e19831096895e4e5331de53784dce332949bd2b39e54495e", "query": "Documentation on states: However, this code panics on both store and load (if the call to store() is commented out) Either documentation should be updated to describe why this is not relevant to store/load, or implementation should be fixed to actually deliver on the promise.\nThe implementation is correct. AcquireRelease is an ordering for operations that both load and store. If you just have a load or a store, you can and should simply use acquire or release respectively.\nOk, makes sense. 
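A minimal standalone sketch of the distinction explained above (my own illustration, not taken from the issue or the diff): `AcqRel` is meant for read-modify-write operations such as `fetch_add`, while plain `store` and `load` take `Release` and `Acquire` respectively and panic if handed `AcqRel`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

fn main() {
    let counter = AtomicUsize::new(0);

    // Read-modify-write: both a load and a store, so AcqRel is valid here.
    counter.fetch_add(1, Ordering::AcqRel);

    // Plain store/load are one-sided; use Release / Acquire for them.
    counter.store(2, Ordering::Release);
    let value = counter.load(Ordering::Acquire);
    println!("value = {}", value);

    // `counter.store(3, Ordering::AcqRel)` or `counter.load(Ordering::AcqRel)`
    // would panic at runtime, which is what the reporter ran into.
}
```

This matches the comment above: there is no acquire-release flavour of a one-sided store or load.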
Then the documentation should reflect that more clearly.", "positive_passages": [{"docid": "doc-en-rust-c5230380a4b8fe056ef59c974448443d957682ff6f2fc7e1dfb582dcb5f3337c", "text": "/// [`Release`]: http://llvm.org/docs/Atomics.html#release #[stable(feature = \"rust1\", since = \"1.0.0\")] Acquire, /// When coupled with a load, uses [`Acquire`] ordering, and with a store /// [`Release`] ordering. /// Has the effects of both [`Acquire`] and [`Release`] together. /// /// This ordering is only applicable for operations that combine both loads and stores. /// /// For loads it uses [`Acquire`] ordering. For stores it uses the [`Release`] ordering. /// /// [`Acquire`]: http://llvm.org/docs/Atomics.html#acquire /// [`Release`]: http://llvm.org/docs/Atomics.html#release", "commid": "rust_pr_49170"}], "negative_passages": []} {"query_id": "q-en-rust-8576131ab55cd626448f0aa19abc0e44d35232f74923ed2ebd776daab0a9d955", "query": "Nobody responded to , and the rabbit hole has only gotten deeper and deeper since I've started trying to work around this . It's time for an issue. Here's a simple pattern macro: Trouble is, suppose somebody writes . Then they'll get this: Here is the simplest alternative I can come up with for working around this warning in this simple macro. Notice how proper support for and requires that the macro output uses shorthand field syntax, as opposed to a more scalable workaround like (where the is at least something that could be produced by a helper macro). A similar pattern macro for a struct with two fields basically requires an incremental muncher now, just to avoid the exponential blowup of rules. This is an awful lot of headache just for a little reminder to write more idiomatic code! Isn't that clippy's job, anyhow?\nA crude fix would be to just not trigger this lint in macro expansions (at the cost of missing out on linting macro code where the offending non-shorthand pattern doesn't come as an argument, which, ideally, should be linted):", "positive_passages": [{"docid": "doc-en-rust-1057eab5efca42a65308c01776775761737cef413571f1dcf574c4944e545527", "text": "} if let PatKind::Binding(_, _, name, None) = fieldpat.node.pat.node { if name.node == fieldpat.node.name { if let Some(_) = fieldpat.span.ctxt().outer().expn_info() { // Don't lint if this is a macro expansion: macro authors // shouldn't have to worry about this kind of style issue // (Issue #49588) return; } let mut err = cx.struct_span_lint(NON_SHORTHAND_FIELD_PATTERNS, fieldpat.span, &format!(\"the `{}:` in this pattern is redundant\",", "commid": "rust_pr_49614"}], "negative_passages": []} {"query_id": "q-en-rust-8576131ab55cd626448f0aa19abc0e44d35232f74923ed2ebd776daab0a9d955", "query": "Nobody responded to , and the rabbit hole has only gotten deeper and deeper since I've started trying to work around this . It's time for an issue. Here's a simple pattern macro: Trouble is, suppose somebody writes . Then they'll get this: Here is the simplest alternative I can come up with for working around this warning in this simple macro. Notice how proper support for and requires that the macro output uses shorthand field syntax, as opposed to a more scalable workaround like (where the is at least something that could be produced by a helper macro). A similar pattern macro for a struct with two fields basically requires an incremental muncher now, just to avoid the exponential blowup of rules. This is an awful lot of headache just for a little reminder to write more idiomatic code! 
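For concreteness, a compiling sketch of the kind of pattern macro being described (my own minimal version; the `Value` struct here is a stand-in for the elided type in the report):

```rust
#![deny(non_shorthand_field_patterns)]

struct Value<T> {
    value: T,
}

// A pattern macro like the one described: the caller's pattern is spliced in
// as the sub-pattern of the `value` field.
macro_rules! pat {
    ($a:pat) => {
        Value { value: $a }
    };
}

fn main() {
    // If the caller names the binding `value`, the expansion is
    // `Value { value: value }`, which used to trip the lint even though the
    // macro author could not avoid it.
    let pat!(value) = Value { value: 42 };
    println!("{}", value);
}
```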
Isn't that clippy's job, anyhow?\nA crude fix would be to just not trigger this lint in macro expansions (at the cost of missing out on linting macro code where the offending non-shorthand pattern doesn't come as an argument, which, ideally, should be linted):", "positive_passages": [{"docid": "doc-en-rust-fbb37b7cb07f25d7dcfbc67c30ca8c2507e1eedf5fd4c9ba849984139a07e9dc", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![deny(non_shorthand_field_patterns)] pub struct Value { pub value: A } #[macro_export] macro_rules! pat { ($a:pat) => { Value { value: $a } }; } fn main() { let pat!(value) = Value { value: () }; } ", "commid": "rust_pr_49614"}], "negative_passages": []} {"query_id": "q-en-rust-07ce3ac57c560810c3dacec65ab2ddd1dc2516f47bd80e11dc16cf075ce3d5c1", "query": "I have a Rust written using the msp430 backend. which means using the nightly compiler. Travis CI daily cron builds have been with the nightly compiler since March 27, 2018 with the following error (I am currently not in a position to get a backtrace/increase verbosity, I will update when possible): My msp430 firmware uses to create a for msp430 every time is updated, and recently a new feature has triggered a lint that causes compilation to fail due to an unused macro. Since can't be compiled, the whole build fails. A stopgap solution for now is to use a nightly before March 27, 2018 which compiles for msp430 successfully. cc: based on a in with PR might have tickled the lint into failing compilation for the msp430 backend?\nThis macro is conditionally used in code that depends on the target\u2019s pointer width. I suspect that this error happens on any 16-bit platform. Adding to the macro\u2019s definition should fix it. Would you send a PR and ping me for review? (As an aside, we should probably have CI on rust-lang/rust that libcore at least compiles for one 16-bit platform.)\nLooking at 's fix I guess my q is: why don't 16 bit targets (msp430/avr) need this macro at all?\nhas some background on the removal of conversions for some combinations of integer types. Here we have a private trait with a generic impl based on and some concrete impls to \"restore\" missing conversions. The implementation of those concrete conversion depends on the size of : we either check that the source value isn\u2019t too large for the target type, or check nothing at all because the target type covers all values of the source type. When is 16-bits, the minimum, only the latter kind of conversion is used.", "positive_passages": [{"docid": "doc-en-rust-d294cef40243cfe046cd41f4efbcdb8f76ebdfcd97554dbdebe740d9459ee9ea", "text": "} // unsigned to signed (only positive bound) #[cfg(any(target_pointer_width = \"32\", target_pointer_width = \"64\"))] macro_rules! try_from_upper_bounded { ($($target:ty),*) => {$( impl PrivateTryFromUsize for $target {", "commid": "rust_pr_49618"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. 
The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-1d2dca20d928e5886afdcc66c9796c6abe0df2f2afce9e34fefbfbbce029086b", "text": "// all span information. // // As a result, some AST nodes are annotated with the token // stream they came from. Attempt to extract these lossless // token streams before we fall back to the stringification. // stream they came from. Here we attempt to extract these // lossless token streams before we fall back to the // stringification. // // During early phases of the compiler, though, the AST could // get modified directly (e.g. attributes added or removed) and // the internal cache of tokens my not be invalidated or // updated. Consequently if the \"lossless\" token stream // disagrees with our actuall stringification (which has // historically been much more battle-tested) then we go with // the lossy stream anyway (losing span information). let mut tokens = None; match nt.0 {", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-a35cb392b3de8d12b62acb9b6ccc5d65d1ee55d857dcd95f6157ff27973fbde1", "text": "_ => {} } tokens.unwrap_or_else(|| { nt.1.force(|| { // FIXME(jseyfried): Avoid this pretty-print + reparse hack let source = pprust::token_to_string(self); parse_stream_from_source_str(FileName::MacroExpansion, source, sess, Some(span)) }) }) let tokens_for_real = nt.1.force(|| { // FIXME(#43081): Avoid this pretty-print + reparse hack let source = pprust::token_to_string(self); parse_stream_from_source_str(FileName::MacroExpansion, source, sess, Some(span)) }); if let Some(tokens) = tokens { if tokens.eq_unspanned(&tokens_for_real) { return tokens } } return tokens_for_real } }", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-9c82eab5c6f34272d4e7be0e6148f880dbc6e058da0a285c51801e6a04510a14", "text": "(&TokenTree::Token(_, ref tk), &TokenTree::Token(_, ref tk2)) => tk == tk2, (&TokenTree::Delimited(_, ref dl), &TokenTree::Delimited(_, ref dl2)) => { dl.delim == dl2.delim && dl.stream().trees().zip(dl2.stream().trees()).all(|(tt, tt2)| tt.eq_unspanned(&tt2)) dl.stream().eq_unspanned(&dl2.stream()) } (_, _) => false, }", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. 
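As background for the expected behaviour, a small self-contained sketch (mine, not from the issue) of what `cfg_attr` normally does: it is resolved during expansion, so downstream consumers of the item should see only the inner attribute, and only when the predicate holds.

```rust
// `cfg_attr(predicate, attribute)` expands to `attribute` only when the
// predicate is true; otherwise it disappears entirely, so a derive or
// attribute proc macro should never observe the `cfg_attr` wrapper itself.
#[cfg_attr(debug_assertions, derive(Debug))]
struct Config {
    retries: u32,
}

fn main() {
    let cfg = Config { retries: 3 };
    println!("retries = {}", cfg.retries);

    // Only in debug builds was `derive(Debug)` actually applied, so only
    // there can we use `{:?}` formatting.
    #[cfg(debug_assertions)]
    println!("{:?}", cfg);
}
```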
The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-b5f1c8324b1498aeec65b699737e8cb58aafca2769cad090bf5db06d2ab2a133", "text": "/// Compares two TokenStreams, checking equality without regarding span information. pub fn eq_unspanned(&self, other: &TokenStream) -> bool { for (t1, t2) in self.trees().zip(other.trees()) { let mut t1 = self.trees(); let mut t2 = other.trees(); for (t1, t2) in t1.by_ref().zip(t2.by_ref()) { if !t1.eq_unspanned(&t2) { return false; } } true t1.next().is_none() && t2.next().is_none() } /// Precondition: `self` consists of a single token tree.", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-a8ecd5c7aab5fd460bf25c2bab05ce4570d7a848e94354971f259738f6935b09", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // no-prefer-dynamic #![crate_type = \"proc-macro\"] #![feature(proc_macro)] extern crate proc_macro; use proc_macro::*; #[proc_macro_attribute] pub fn assert1(_a: TokenStream, b: TokenStream) -> TokenStream { assert_eq(b.clone(), \"pub fn foo() {}\".parse().unwrap()); b } #[proc_macro_derive(Foo, attributes(foo))] pub fn assert2(a: TokenStream) -> TokenStream { assert_eq(a, \"pub struct MyStructc { _a: i32, }\".parse().unwrap()); TokenStream::empty() } fn assert_eq(a: TokenStream, b: TokenStream) { let mut a = a.into_iter(); let mut b = b.into_iter(); for (a, b) in a.by_ref().zip(&mut b) { match (a, b) { (TokenTree::Group(a), TokenTree::Group(b)) => { assert_eq!(a.delimiter(), b.delimiter()); assert_eq(a.stream(), b.stream()); } (TokenTree::Op(a), TokenTree::Op(b)) => { assert_eq!(a.op(), b.op()); assert_eq!(a.spacing(), b.spacing()); } (TokenTree::Literal(a), TokenTree::Literal(b)) => { assert_eq!(a.to_string(), b.to_string()); } (TokenTree::Term(a), TokenTree::Term(b)) => { assert_eq!(a.to_string(), b.to_string()); } (a, b) => panic!(\"{:?} != {:?}\", a, b), } } assert!(a.next().is_none()); assert!(b.next().is_none()); } ", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-84d010489b2089d923b10eb469879c43d06f0a36f6a8f02e738ee093c1b5b381", "query": "Given a procedural macro like so: and an invocation: you get the following when compiling: In addition to the panic (oh dear!) we can see here that is showing up in the tokens by accident. The attribute should have been removed during the tokenization when passing to the procedural macro! I believe this is another instance of\ncc", "positive_passages": [{"docid": "doc-en-rust-96041a3dfe709615549233b2094a6bc5c9623f93e7ac23c4a0d2f64477d39c81", "text": " // Copyright 2018 The Rust Project Developers. 
See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // aux-build:modify-ast.rs #![feature(proc_macro)] extern crate modify_ast; use modify_ast::*; #[derive(Foo)] pub struct MyStructc { #[cfg_attr(my_cfg, foo)] _a: i32, } macro_rules! a { ($i:item) => ($i) } a! { #[assert1] pub fn foo() {} } fn main() { let _a = MyStructc { _a: 0 }; foo(); } ", "commid": "rust_pr_49852"}], "negative_passages": []} {"query_id": "q-en-rust-81ab08b9ffad03e8c51ac0a47086ee28d021a9bdb2576ca18fc7801f5169eab8", "query": "Caught while fleshing out a much beefier test suite for SliceIndex, which I plan to submit as a PR. (this Issue is here just in case that PR never happens). The unicode character boundary check here should be looking at . Better yet, the method could just delegate to instead. (note: it can't delegate to here without NLL, which seems to be a leading factor as to how this function ended up this way)\nNot technically a regression but tagging it as such to ensure that we don't forget about it.\nmy commit message accidentally closed this, which might defeat the purpose of the tag you (note: I myself can't reopen it because Github attributed the message to Bors)\nOh that's ok, the tags on the PR take over now for backporting and that's how we'll track this.\n(ahhhh, now I see. The tag is used in automation for a Github Project to track regressions. That's actually pretty cool!) ('t mind me, just talking to myself)", "positive_passages": [{"docid": "doc-en-rust-318412e66191c766f55662361e302d9cdcc1cad734406591a9dc715506109e11", "text": "} #[test] fn test_str_slice_rangetoinclusive_ok() { let s = \"abc\u03b1\u03b2\u03b3\"; assert_eq!(&s[..=2], \"abc\"); assert_eq!(&s[..=4], \"abc\u03b1\"); } #[test] #[should_panic] fn test_str_slice_rangetoinclusive_notok() { let s = \"abc\u03b1\u03b2\u03b3\"; &s[..=3]; } #[test] fn test_str_slicemut_rangetoinclusive_ok() { let mut s = \"abc\u03b1\u03b2\u03b3\".to_owned(); let s: &mut str = &mut s; assert_eq!(&mut s[..=2], \"abc\"); assert_eq!(&mut s[..=4], \"abc\u03b1\"); } #[test] #[should_panic] fn test_str_slicemut_rangetoinclusive_notok() { let mut s = \"abc\u03b1\u03b2\u03b3\".to_owned(); let s: &mut str = &mut s; &mut s[..=3]; } #[test] fn test_is_char_boundary() { let s = \"\u0e28\u0e44\u0e17\u0e22\u4e2d\u534eVi\u1ec7t Nam \u03b2-release \ud83d\udc31123\"; assert!(s.is_char_boundary(0));", "commid": "rust_pr_50039"}], "negative_passages": []} {"query_id": "q-en-rust-81ab08b9ffad03e8c51ac0a47086ee28d021a9bdb2576ca18fc7801f5169eab8", "query": "Caught while fleshing out a much beefier test suite for SliceIndex, which I plan to submit as a PR. (this Issue is here just in case that PR never happens). The unicode character boundary check here should be looking at . Better yet, the method could just delegate to instead. (note: it can't delegate to here without NLL, which seems to be a leading factor as to how this function ended up this way)\nNot technically a regression but tagging it as such to ensure that we don't forget about it.\nmy commit message accidentally closed this, which might defeat the purpose of the tag you (note: I myself can't reopen it because Github attributed the message to Bors)\nOh that's ok, the tags on the PR take over now for backporting and that's how we'll track this.\n(ahhhh, now I see. 
The tag is used in automation for a Github Project to track regressions. That's actually pretty cool!) ('t mind me, just talking to myself)", "positive_passages": [{"docid": "doc-en-rust-516a6c446f15de53052759a06f075c1dea64337d561045f705c417e957c6e78b", "text": "fn index(self, slice: &str) -> &Self::Output { assert!(self.end != usize::max_value(), \"attempted to index str up to maximum usize\"); let end = self.end + 1; self.get(slice).unwrap_or_else(|| super::slice_error_fail(slice, 0, end)) (..self.end+1).index(slice) } #[inline] fn index_mut(self, slice: &mut str) -> &mut Self::Output { assert!(self.end != usize::max_value(), \"attempted to index str up to maximum usize\"); if slice.is_char_boundary(self.end) { unsafe { self.get_unchecked_mut(slice) } } else { super::slice_error_fail(slice, 0, self.end + 1) } (..self.end+1).index_mut(slice) } }", "commid": "rust_pr_50039"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-ac438aae3c57c09f244887e754d3307745bd008a10134095b6c139d9f50db20f", "text": "continue; } let result = select.select(&Obligation::new(dummy_cause.clone(), new_env, pred)); // Call infcx.resolve_type_vars_if_possible to see if we can // get rid of any inference variables. let obligation = infcx.resolve_type_vars_if_possible( &Obligation::new(dummy_cause.clone(), new_env, pred) ); let result = select.select(&obligation); match &result { &Ok(Some(ref vtable)) => {", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-8cc47c721a66ed6034815dbbc62ed622e744b4107ea43e7c36cd136bcf9801c5", "text": "} &Ok(None) => {} &Err(SelectionError::Unimplemented) => { if self.is_of_param(pred.skip_binder().trait_ref.substs) { if self.is_param_no_infer(pred.skip_binder().trait_ref.substs) { already_visited.remove(&pred); self.add_user_pred( &mut user_computed_preds,", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. 
Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-a73c4dc3a6d1cccb98e44c0102d2c67c612b20724f550820ef7ce1748781739e", "text": "finished_map } pub fn is_of_param(&self, substs: &Substs<'_>) -> bool { if substs.is_noop() { return false; } fn is_param_no_infer(&self, substs: &Substs<'_>) -> bool { return self.is_of_param(substs.type_at(0)) && !substs.types().any(|t| t.has_infer_types()); } return match substs.type_at(0).sty { pub fn is_of_param(&self, ty: Ty<'_>) -> bool { return match ty.sty { ty::Param(_) => true, ty::Projection(p) => self.is_of_param(p.substs), ty::Projection(p) => self.is_of_param(p.self_ty()), _ => false, }; } fn is_self_referential_projection(&self, p: ty::PolyProjectionPredicate<'_>) -> bool { match p.ty().skip_binder().sty { ty::Projection(proj) if proj == p.skip_binder().projection_ty => { true }, _ => false } } pub fn evaluate_nested_obligations< 'b, 'c,", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-f256a431c8ba44aefa176c89540ed4eb6ed77f1f18890050522654ede312ebe4", "text": ") -> bool { let dummy_cause = ObligationCause::misc(DUMMY_SP, ast::DUMMY_NODE_ID); for (obligation, predicate) in nested .filter(|o| o.recursion_depth == 1) for (obligation, mut predicate) in nested .map(|o| (o.clone(), o.predicate.clone())) { let is_new_pred = fresh_preds.insert(self.clean_pred(select.infcx(), predicate.clone())); // Resolve any inference variables that we can, to help selection succeed predicate = select.infcx().resolve_type_vars_if_possible(&predicate); // We only add a predicate as a user-displayable bound if // it involves a generic parameter, and doesn't contain // any inference variables. // // Displaying a bound involving a concrete type (instead of a generic // parameter) would be pointless, since it's always true // (e.g. 
u8: Copy) // Displaying an inference variable is impossible, since they're // an internal compiler detail without a defined visual representation // // We check this by calling is_of_param on the relevant types // from the various possible predicates match &predicate { &ty::Predicate::Trait(ref p) => { let substs = &p.skip_binder().trait_ref.substs; if self.is_param_no_infer(p.skip_binder().trait_ref.substs) && !only_projections && is_new_pred { if self.is_of_param(substs) && !only_projections && is_new_pred { self.add_user_pred(computed_preds, predicate); } predicates.push_back(p.clone()); } &ty::Predicate::Projection(p) => { // If the projection isn't all type vars, then // we don't want to add it as a bound if self.is_of_param(p.skip_binder().projection_ty.substs) && is_new_pred { self.add_user_pred(computed_preds, predicate); } else { debug!(\"evaluate_nested_obligations: examining projection predicate {:?}\", predicate); // As described above, we only want to display // bounds which include a generic parameter but don't include // an inference variable. // Additionally, we check if we've seen this predicate before, // to avoid rendering duplicate bounds to the user. if self.is_param_no_infer(p.skip_binder().projection_ty.substs) && !p.ty().skip_binder().is_ty_infer() && is_new_pred { debug!(\"evaluate_nested_obligations: adding projection predicate to computed_preds: {:?}\", predicate); // Under unusual circumstances, we can end up with a self-refeential // projection predicate. For example: // ::Value == ::Value // Not only is displaying this to the user pointless, // having it in the ParamEnv will cause an issue if we try to call // poly_project_and_unify_type on the predicate, since this kind of // predicate will normally never end up in a ParamEnv. // // For these reasons, we ignore these weird predicates, // ensuring that we're able to properly synthesize an auto trait impl if self.is_self_referential_projection(p) { debug!(\"evaluate_nested_obligations: encountered a projection predicate equating a type with itself! Skipping\"); } else { self.add_user_pred(computed_preds, predicate); } } // We can only call poly_project_and_unify_type when our predicate's // Ty is an inference variable - otherwise, there won't be anything to // unify if p.ty().skip_binder().is_ty_infer() { debug!(\"Projecting and unifying projection predicate {:?}\", predicate); match poly_project_and_unify_type(select, &obligation.with(p.clone())) { Err(e) => { debug!(", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-56d79684ebad96c68a3b0fc0a74cd87c72d9cbb94b0e6a2c567e0ae45723b1a0", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. 
// // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. pub trait Signal { type Item; } pub trait Signal2 { type Item2; } impl Signal2 for B where B: Signal { type Item2 = C; } // @has issue_50159/struct.Switch.html // @has - '//code' 'impl Send for Switch where ::Item: Send' // @has - '//code' 'impl Sync for Switch where ::Item: Sync' // @count - '//*[@id=\"implementations-list\"]/*[@class=\"impl\"]' 0 // @count - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]' 2 pub struct Switch { pub inner: ::Item2, } ", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-914ef78a69ed45e94c91239e7833da626236927dd80583418fdd14bc5ef3baf3", "query": "When I tried to build docs for , I got a panic. Here is the reduced test case: With the above code, when I run I get this panic: rustc 1.27.0-nightly ( 2018-04-18) binary: rustc commit-hash: commit-date: 2018-04-18 host: x86_64-pc-windows-msvc release: 1.27.0-nightly LLVM version: 6.0\nStrange... Will take a look as soon as possible.\nSo the call is performed . I'll need help from (or anyone in the team who knows about the type) in order to fix this issue.\nReduced further:\nI'd like to work on this.\nThank you so much! I verified that with the latest Rust Nightly I no longer get any errors.", "positive_passages": [{"docid": "doc-en-rust-db5c627da3af5bbc243e9055f5b52ac4cb8f002f4e7b3b5b07859352c9b1687f", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. // Some unusual code minimized from // https://github.com/sile/handy_async/tree/7b619b762c06544fc67792c8ff8ebc24a88fdb98 pub trait Pattern { type Value; } pub struct Constrain(A, B, C); impl Pattern for Constrain where A: Pattern, B: Pattern, C: Pattern, { type Value = A::Value; } pub struct Wrapper(T); impl Pattern for Wrapper { type Value = T; } // @has self_referential/struct.WriteAndThen.html // @has - '//*[@id=\"synthetic-implementations-list\"]/*[@class=\"impl\"]//*/code' \"impl Send for // WriteAndThen where ::Value: Send\" pub struct WriteAndThen(pub P1::Value,pub > as Pattern>::Value) where P1: Pattern; ", "commid": "rust_pr_55318"}], "negative_passages": []} {"query_id": "q-en-rust-58af39f151b37a9bfd8e909b97cb75b9c38bb2363528af6e14a090f5076cf180", "query": "I came across . This is a \"safe\" feature of rust I didn't know about. I was at first confused about the docs for however. My main confusion was around the \"deinitializing\" word which made me think that is not consumed. It is consumed but it is not dropped, which is a subtle distinction. I propose here to remove the confusing language. Suggested new docs: Another possible option (I don't like it as much but it is more similar to the previous docs): As a side note, I have not seen the \"deinitialized\" qualifier in rust before. I don't think it is part of the standard rust lingo which is probably some of my confusion :smile:\nAnother option, which I think is the best of both worlds: The important points are to mention that is moved, and to use standard terminology rather than \"deinitialized\".\nI like the third option the most:\nI have to say I like yours the best. 
Maybe change \"Moves behind the mutable reference\" to \"Moves into the mutable reference\"? I'm not sure \"behind\" is standard lingo either :smile:\nopened a pr", "positive_passages": [{"docid": "doc-en-rust-beba344fda0e3a2a03dac3adba8c1c6cb43f11b4f29f2bd627bdb90659947218", "text": "} } /// Replaces the value at a mutable location with a new one, returning the old value, without /// deinitializing either one. /// Moves `src` into the referenced `dest`, returning the previous `dest` value. /// /// Neither value is dropped. /// /// # Examples ///", "commid": "rust_pr_51124"}], "negative_passages": []} {"query_id": "q-en-rust-58af39f151b37a9bfd8e909b97cb75b9c38bb2363528af6e14a090f5076cf180", "query": "I came across . This is a \"safe\" feature of rust I didn't know about. I was at first confused about the docs for however. My main confusion was around the \"deinitializing\" word which made me think that is not consumed. It is consumed but it is not dropped, which is a subtle distinction. I propose here to remove the confusing language. Suggested new docs: Another possible option (I don't like it as much but it is more similar to the previous docs): As a side note, I have not seen the \"deinitialized\" qualifier in rust before. I don't think it is part of the standard rust lingo which is probably some of my confusion :smile:\nAnother option, which I think is the best of both worlds: The important points are to mention that is moved, and to use standard terminology rather than \"deinitialized\".\nI like the third option the most:\nI have to say I like yours the best. Maybe change \"Moves behind the mutable reference\" to \"Moves into the mutable reference\"? I'm not sure \"behind\" is standard lingo either :smile:\nopened a pr", "positive_passages": [{"docid": "doc-en-rust-81ee888ac8b4a26f573e42933260fa196f8500f273efa24e0887b38843ceb13b", "text": "} } /// Replaces the value at `dest` with `src`, returning the old /// value, without dropping either. /// Moves `src` into the pointed `dest`, returning the previous `dest` value. /// /// Neither value is dropped. /// /// # Safety ///", "commid": "rust_pr_51124"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? 
I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-1bbd5515e2721ed3c8e4753f7628172281c6ef3160727c1de51e421c463bb129", "text": "use crate::resolve_imports::{ImportDirective, ImportDirectiveSubclass, ImportResolver}; use crate::{path_names_to_string, KNOWN_TOOLS}; use crate::{CrateLint, LegacyScope, Module, ModuleOrUniformRoot}; use crate::{BindingError, CrateLint, LegacyScope, Module, ModuleOrUniformRoot}; use crate::{PathResult, ParentScope, ResolutionError, Resolver, Scope, ScopeSet, Segment}; type Res = def::Res;", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . 
It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-60344106e9dd72fd60a43de48a45843b3da1630b38a9eddf447bc03767173bdf", "text": "err } ResolutionError::VariableNotBoundInPattern(binding_error) => { let target_sp = binding_error.target.iter().cloned().collect::>(); let BindingError { name, target, origin, could_be_path } = binding_error; let target_sp = target.iter().copied().collect::>(); let origin_sp = origin.iter().copied().collect::>(); let msp = MultiSpan::from_spans(target_sp.clone()); let msg = format!(\"variable `{}` is not bound in all patterns\", binding_error.name); let msg = format!(\"variable `{}` is not bound in all patterns\", name); let mut err = self.session.struct_span_err_with_code( msp, &msg, DiagnosticId::Error(\"E0408\".into()), ); for sp in target_sp { err.span_label(sp, format!(\"pattern doesn't bind `{}`\", binding_error.name)); err.span_label(sp, format!(\"pattern doesn't bind `{}`\", name)); } let origin_sp = binding_error.origin.iter().cloned(); for sp in origin_sp { err.span_label(sp, \"variable not in all patterns\"); } if *could_be_path { let help_msg = format!( \"if you meant to match on a variant or a `const` item, consider making the path in the pattern qualified: `?::{}`\", name, ); err.span_help(span, &help_msg); } err } ResolutionError::VariableBoundWithDifferentMode(variable_name,", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? 
I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-f2b4b6d2549a71fec546e9a7c8cb3c96ef30b4307a7cf93fa71ee4e2c3ac3ef7", "text": "// Checks that all of the arms in an or-pattern have exactly the // same set of bindings, with the same binding modes for each. fn check_consistent_bindings(&mut self, pats: &[P]) { if pats.is_empty() { return; } let mut missing_vars = FxHashMap::default(); let mut inconsistent_vars = FxHashMap::default(); for (i, p) in pats.iter().enumerate() { let map_i = self.binding_mode_map(&p); for (j, q) in pats.iter().enumerate() { if i == j { continue; } let map_j = self.binding_mode_map(&q); for (&key, &binding_i) in &map_i { if map_j.is_empty() { // Account for missing bindings when let binding_error = missing_vars // `map_j` has none. 
.entry(key.name) .or_insert(BindingError { name: key.name, origin: BTreeSet::new(), target: BTreeSet::new(), }); binding_error.origin.insert(binding_i.span); binding_error.target.insert(q.span); } for (&key_j, &binding_j) in &map_j { match map_i.get(&key_j) { None => { // missing binding let binding_error = missing_vars .entry(key_j.name) .or_insert(BindingError { name: key_j.name, origin: BTreeSet::new(), target: BTreeSet::new(), }); binding_error.origin.insert(binding_j.span); binding_error.target.insert(p.span); } Some(binding_i) => { // check consistent binding if binding_i.binding_mode != binding_j.binding_mode { inconsistent_vars .entry(key.name) .or_insert((binding_j.span, binding_i.span)); } for pat_outer in pats.iter() { let map_outer = self.binding_mode_map(&pat_outer); for pat_inner in pats.iter().filter(|pat| pat.id != pat_outer.id) { let map_inner = self.binding_mode_map(&pat_inner); for (&key_inner, &binding_inner) in map_inner.iter() { match map_outer.get(&key_inner) { None => { // missing binding let binding_error = missing_vars .entry(key_inner.name) .or_insert(BindingError { name: key_inner.name, origin: BTreeSet::new(), target: BTreeSet::new(), could_be_path: key_inner.name.as_str().starts_with(char::is_uppercase) }); binding_error.origin.insert(binding_inner.span); binding_error.target.insert(pat_outer.span); } Some(binding_outer) => { // check consistent binding if binding_outer.binding_mode != binding_inner.binding_mode { inconsistent_vars .entry(key_inner.name) .or_insert((binding_inner.span, binding_outer.span)); } } } } } } let mut missing_vars = missing_vars.iter().collect::>(); let mut missing_vars = missing_vars.iter_mut().collect::>(); missing_vars.sort(); for (_, v) in missing_vars { for (name, mut v) in missing_vars { if inconsistent_vars.contains_key(name) { v.could_be_path = false; } self.r.report_error( *v.origin.iter().next().unwrap(), ResolutionError::VariableNotBoundInPattern(v) ); *v.origin.iter().next().unwrap(), ResolutionError::VariableNotBoundInPattern(v)); } let mut inconsistent_vars = inconsistent_vars.iter().collect::>(); inconsistent_vars.sort(); for (name, v) in inconsistent_vars {", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? 
I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-3b916c59521a21ded6b0a8ae11baca67ba31e2d043a64834d8445f10cb25e772", "text": "self.resolve_pattern(pat, source, &mut bindings_list); } // This has to happen *after* we determine which pat_idents are variants self.check_consistent_bindings(pats); if pats.len() > 1 { self.check_consistent_bindings(pats); } } fn resolve_block(&mut self, block: &Block) {", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). 
Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-72184421f564501d9915cbe92d9712279cc8787337cf4a8663c973f6ead19ac2", "text": "name: Name, origin: BTreeSet, target: BTreeSet, could_be_path: bool } impl PartialOrd for BindingError {", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. 
Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-04890cf5d016864ab6e1de3c3c0dd4c7404a65af88e4eb3129761af571a45d39", "text": " #![allow(non_camel_case_types)] enum E { A, B, c } mod m { const CONST1: usize = 10; const Const2: usize = 20; } fn main() { let y = 1; match y { a | b => {} //~ ERROR variable `a` is not bound in all patterns //~^ ERROR variable `b` is not bound in all patterns //~| ERROR variable `b` is not bound in all patterns } let x = (E::A, E::B); match x { (A, B) | (ref B, c) | (c, A) => () //~^ ERROR variable `A` is not bound in all patterns //~| ERROR variable `B` is not bound in all patterns //~| ERROR variable `B` is bound in inconsistent ways //~| ERROR mismatched types //~| ERROR variable `c` is not bound in all patterns //~| HELP consider making the path in the pattern qualified: `?::A` } let z = (10, 20); match z { (CONST1, _) | (_, Const2) => () //~^ ERROR variable `CONST1` is not bound in all patterns //~| HELP consider making the path in the pattern qualified: `?::CONST1` //~| ERROR variable `Const2` is not bound in all patterns //~| HELP consider making the path in the pattern qualified: `?::Const2` } }", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). 
I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. :) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-dc645997c4e1ddd96d16eb3a3f77a934a740a5494316e93a126348ea9229e9f1", "text": "error[E0408]: variable `a` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:4:12 --> $DIR/resolve-inconsistent-names.rs:13:12 | LL | a | b => {} | - ^ pattern doesn't bind `a`", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-82a51dec0f1bd4901e9f0a6a4457a5a6bd725a46881286f6723ed72aba3bc40e", "query": "Given the example code: I get a really nice warning message about what I did wrong: However, if I change the main function to match multiple patterns: The error message is really confusing with no hint how to fix it: Please expand the error message for the second case to look a bit more like the first case.\nTriage: no change. We need to supply the line in E0408.\nso I thought I would implement the change you've proposed but have been thinking how best to restrict the kinds of cases where this help note would be appended to the error as I figured it may be rather noisy and/or confusing within the context of some false negatives. So during resolution we of course cannot look at the types to see if it's possible we're matching against an enum. But what do you think about having the help note kick in only if the binding starts with an upper case letter? I know this is a convention that should not really be a concern of the compiler but seeing that using it would only make the diagnostic note a bit more fine-grained, perhaps it's not a terrible idea?\nwe already do that in some places, like the parser to infer intent, but in this case how would you figure out that you want to suggest and without resolving ? I believe a more holistic solution would be to detect \"variable not bound in all patterns\" errors where all bindings are single idents, record that fact while throwing the current error and recovering as if there had been a single variable, and later when we have access to the machinery uses emit a warning suggesting and . It is a bit more involved but if we recover as if only had been present the current diagnostic should kick in suggesting (leaving without a suggestion). Edit: that won't work as things stand, as the warning won't be emitted today if there are errors, I think. That would also need to be changed to make it always be emitted :-/\nno, I didn't mean to say we need to suggest possible variants. I too concluded that it's non-trivial with the current ordering of passes and the early bailout (indeed, type checking is impossible without resolution succeeding). I was going to add the help note you proposed back in May, just that I would make it dependent on bindings having a Variant-alike naming. From what you're saying there's a precedent in , so I guess this is an acceptable solution? I'll make a PR then.\nsorry, I misunderstood. Go ahead, it sounds good.\nno, it's likely me who wasn't clear enough. 
:) I submitted a PR.", "positive_passages": [{"docid": "doc-en-rust-d7d0c67a5fe28d7c51e6406a65f3961bdaeace1aa99e4994827b27dc7f9725e1", "text": "| variable not in all patterns error[E0408]: variable `b` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:4:8 --> $DIR/resolve-inconsistent-names.rs:13:8 | LL | a | b => {} | ^ - variable not in all patterns | | | pattern doesn't bind `b` error: aborting due to 2 previous errors error[E0408]: variable `A` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:19:18 | LL | (A, B) | (ref B, c) | (c, A) => () | - ^^^^^^^^^^ - variable not in all patterns | | | | | pattern doesn't bind `A` | variable not in all patterns | help: if you meant to match on a variant or a `const` item, consider making the path in the pattern qualified: `?::A` --> $DIR/resolve-inconsistent-names.rs:19:10 | LL | (A, B) | (ref B, c) | (c, A) => () | ^ error[E0408]: variable `B` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:19:31 | LL | (A, B) | (ref B, c) | (c, A) => () | - - ^^^^^^ pattern doesn't bind `B` | | | | | variable not in all patterns | variable not in all patterns error[E0408]: variable `c` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:19:9 | LL | (A, B) | (ref B, c) | (c, A) => () | ^^^^^^ - - variable not in all patterns | | | | | variable not in all patterns | pattern doesn't bind `c` error[E0409]: variable `B` is bound in inconsistent ways within the same match arm --> $DIR/resolve-inconsistent-names.rs:19:23 | LL | (A, B) | (ref B, c) | (c, A) => () | - ^ bound in different ways | | | first binding error[E0408]: variable `CONST1` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:30:23 | LL | (CONST1, _) | (_, Const2) => () | ------ ^^^^^^^^^^^ pattern doesn't bind `CONST1` | | | variable not in all patterns | help: if you meant to match on a variant or a `const` item, consider making the path in the pattern qualified: `?::CONST1` --> $DIR/resolve-inconsistent-names.rs:30:10 | LL | (CONST1, _) | (_, Const2) => () | ^^^^^^ error[E0408]: variable `Const2` is not bound in all patterns --> $DIR/resolve-inconsistent-names.rs:30:9 | LL | (CONST1, _) | (_, Const2) => () | ^^^^^^^^^^^ ------ variable not in all patterns | | | pattern doesn't bind `Const2` | help: if you meant to match on a variant or a `const` item, consider making the path in the pattern qualified: `?::Const2` --> $DIR/resolve-inconsistent-names.rs:30:27 | LL | (CONST1, _) | (_, Const2) => () | ^^^^^^ error[E0308]: mismatched types --> $DIR/resolve-inconsistent-names.rs:19:19 | LL | (A, B) | (ref B, c) | (c, A) => () | ^^^^^ expected enum `E`, found &E | = note: expected type `E` found type `&E` error: aborting due to 9 previous errors For more information about this error, try `rustc --explain E0408`. Some errors have detailed explanations: E0308, E0408, E0409. For more information about an error, try `rustc --explain E0308`. 
", "commid": "rust_pr_63406"}], "negative_passages": []} {"query_id": "q-en-rust-56d1cfabdaa00b2c4e12bcb3c6497e352222d74922be899678c17a5d1771dc96", "query": "rustc --version; cat ; env RUSTBACKTRACE=1 rustc --emit asm --crate-type dylib -O --target thumbv7em-none-eabihf rustc 1.28.0-nightly ( 2018-05-21) #![nostd] pub fn pete() -u32 { 2 } warning: dropping unsupported crate type for target thread 'main' panicked at 'called on a value', stack backtrace: 0: std::sys::unix::backtrace::tracing::imp::unwindbacktrace at 1: std::syscommon::backtrace::print at at 2: std::panicking::defaulthook::{{closure}} at 3: std::panicking::defaulthook at 4: rustc::util::common::panichook 5: std::panicking::rustpanicwithhook at 6: std::panicking::beginpanicfmt at 7: rustbeginunwind at 8: core::panicking::panicfmt at 9: core::panicking::panic at 10: rustccodegenllvm::base::writemetadata 11: rustc::util::common::time 12: rustccodegenllvm::base::codegencrate 13: MetadataKind::Compressed, } }).max().unwrap(); }).max().unwrap_or(MetadataKind::None); if kind == MetadataKind::None { return (metadata_llcx,", "commid": "rust_pr_51035"}], "negative_passages": []} {"query_id": "q-en-rust-56d1cfabdaa00b2c4e12bcb3c6497e352222d74922be899678c17a5d1771dc96", "query": "rustc --version; cat ; env RUSTBACKTRACE=1 rustc --emit asm --crate-type dylib -O --target thumbv7em-none-eabihf rustc 1.28.0-nightly ( 2018-05-21) #![nostd] pub fn pete() -u32 { 2 } warning: dropping unsupported crate type for target thread 'main' panicked at 'called on a value', stack backtrace: 0: std::sys::unix::backtrace::tracing::imp::unwindbacktrace at 1: std::syscommon::backtrace::print at at 2: std::panicking::defaulthook::{{closure}} at 3: std::panicking::defaulthook at 4: rustc::util::common::panichook 5: std::panicking::rustpanicwithhook at 6: std::panicking::beginpanicfmt at 7: rustbeginunwind at 8: core::panicking::panicfmt at 9: core::panicking::panic at 10: rustccodegenllvm::base::writemetadata 11: rustc::util::common::time 12: rustccodegenllvm::base::codegencrate 13: // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
// compile-flags: --crate-type dylib --target thumbv7em-none-eabihf // compile-pass // error-pattern: dropping unsupported crate type `dylib` for target `thumbv7em-none-eabihf` #![feature(no_core)] #![no_std] #![no_core] ", "commid": "rust_pr_51035"}], "negative_passages": []} {"query_id": "q-en-rust-56d1cfabdaa00b2c4e12bcb3c6497e352222d74922be899678c17a5d1771dc96", "query": "rustc --version; cat ; env RUSTBACKTRACE=1 rustc --emit asm --crate-type dylib -O --target thumbv7em-none-eabihf rustc 1.28.0-nightly ( 2018-05-21) #![nostd] pub fn pete() -u32 { 2 } warning: dropping unsupported crate type for target thread 'main' panicked at 'called on a value', stack backtrace: 0: std::sys::unix::backtrace::tracing::imp::unwindbacktrace at 1: std::syscommon::backtrace::print at at 2: std::panicking::defaulthook::{{closure}} at 3: std::panicking::defaulthook at 4: rustc::util::common::panichook 5: std::panicking::rustpanicwithhook at 6: std::panicking::beginpanicfmt at 7: rustbeginunwind at 8: core::panicking::panicfmt at 9: core::panicking::panic at 10: rustccodegenllvm::base::writemetadata 11: rustc::util::common::time 12: rustccodegenllvm::base::codegencrate 13: warning: dropping unsupported crate type `dylib` for target `thumbv7em-none-eabihf` ", "commid": "rust_pr_51035"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-f5231b8a9335563d55fcb100a9ea19eb224a9ed40c5fbc8a340643cb82f14472", "text": " // ignore-emscripten no asm! support #![feature(asm)] fn main() { unsafe { asm! {\"mov $0,$1\"::\"0\"(\"bx\"),\"1\"(0x00)} //~^ ERROR: invalid value for constraint in inline assembly } } ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-d485b68aecc027e13818c1f565d410ed64979746064d00f684bd3cc59e06b104", "text": " error[E0669]: invalid value for constraint in inline assembly --> $DIR/issue-51431.rs:7:32 | LL | asm! 
{\"mov $0,$1\"::\"0\"(\"bx\"),\"1\"(0x00)} | ^^^^ error: aborting due to previous error ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-10072589dc44876c2810ecad0e934f281f7867c3665844febb0ce84c810a9e0b", "text": " trait A { const C: usize; fn f() -> ([u8; A::C], [u8; A::C]); //~^ ERROR: type annotations needed: cannot resolve //~| ERROR: type annotations needed: cannot resolve } fn main() {} ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-2e704ed255e8ea3a352b473bd48aab2c1c087509f9540ccfa1013a1e3b63c46f", "text": " error[E0283]: type annotations needed: cannot resolve `_: A` --> $DIR/issue-63496.rs:4:21 | LL | const C: usize; | --------------- required by `A::C` LL | LL | fn f() -> ([u8; A::C], [u8; A::C]); | ^^^^ error[E0283]: type annotations needed: cannot resolve `_: A` --> $DIR/issue-63496.rs:4:33 | LL | const C: usize; | --------------- required by `A::C` LL | LL | fn f() -> ([u8; A::C], [u8; A::C]); | ^^^^ error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0283`. 
", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-bedbb3c212c440f528f318e68fd64ac16d9a7fad10797b8f1c8152ebf67a6fff", "text": " trait T<'x> { type V; } impl<'g> T<'g> for u32 { type V = u16; } fn main() { (&|_|()) as &dyn for<'x> Fn(>::V); //~^ ERROR: type mismatch in closure arguments //~| ERROR: type mismatch resolving } ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-49495d13f8bb647acc528f871f1a4b34412ba5a9a5a4eee571e23ac77077764e", "text": " error[E0631]: type mismatch in closure arguments --> $DIR/issue-41366.rs:10:5 | LL | (&|_|()) as &dyn for<'x> Fn(>::V); | ^^-----^ | | | | | found signature of `fn(_) -> _` | expected signature of `for<'x> fn(>::V) -> _` | = note: required for the cast to the object type `dyn for<'x> std::ops::Fn(>::V)` error[E0271]: type mismatch resolving `for<'x> <[closure@$DIR/issue-41366.rs:10:7: 10:12] as std::ops::FnOnce<(>::V,)>>::Output == ()` --> $DIR/issue-41366.rs:10:5 | LL | (&|_|()) as &dyn for<'x> Fn(>::V); | ^^^^^^^^ expected bound lifetime parameter 'x, found concrete lifetime | = note: required for the cast to the object type `dyn for<'x> std::ops::Fn(>::V)` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0271`. 
", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-8f6aa14951294b9b5e61325734e4242bb1be9a1352ea2ebb6ce9bc0f92d5946f", "text": " fn main() { [(); &(&'static: loop { |x| {}; }) as *const _ as usize] //~^ ERROR: invalid label name `'static` //~| ERROR: type annotations needed } ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-7e17bb901c180d1be32a1d1ab9f6f077a3fa3af0a633fd7e5290ca30c9d89ade", "query": "Running the following code triggers an internal compiler error and the error message says that it's a bug and I should report it, so I'm doing that now:\nmcve:\nNote, that this does not ICE anymore on nightly, instead it gives me: Can this be closed", "positive_passages": [{"docid": "doc-en-rust-5c1c20ed8c81b498f7f76af1453e94c766fa9b5ec3be2f0e8f0256fbd662e7ec", "text": " error: invalid label name `'static` --> $DIR/issue-52437.rs:2:13 | LL | [(); &(&'static: loop { |x| {}; }) as *const _ as usize] | ^^^^^^^ error[E0282]: type annotations needed --> $DIR/issue-52437.rs:2:30 | LL | [(); &(&'static: loop { |x| {}; }) as *const _ as usize] | ^ consider giving this closure parameter a type error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0282`. ", "commid": "rust_pr_65688"}], "negative_passages": []} {"query_id": "q-en-rust-379e360c57e4bb4b392f57b4240610d7ab72f7a3b6a872ab7cbe596e14b514e7", "query": "This is similar to the same buggy behavior we saw for matchers fixed in 1.20.0. For example in Rust 1.19 the following code fails to compile with the message , but correctly prints followed by as of 1.20. The matcher is currently broken in the same way. rustc 1.28.0-nightly ( 2018-06-09)\nThe ident issue was , so mentioning who fixed that one in . Would you be interested in taking a look at this similar case?\nDuplicate of . should still be working on it I think?\nThanks! Let's continue to track this as part of .", "positive_passages": [{"docid": "doc-en-rust-5e932b9a1b0198ca985a9a5bee9beefd6e0777347053c309acef7dadd8191b04", "text": "Token::Interpolated(ref nt) => may_be_ident(&nt.0), _ => false, }, \"lifetime\" => match *token { Token::Lifetime(_) => true, Token::Interpolated(ref nt) => match nt.0 { token::NtLifetime(_) | token::NtTT(_) => true, _ => false, }, _ => false, }, _ => match *token { token::CloseDelim(_) => false, _ => true,", "commid": "rust_pr_51480"}], "negative_passages": []} {"query_id": "q-en-rust-379e360c57e4bb4b392f57b4240610d7ab72f7a3b6a872ab7cbe596e14b514e7", "query": "This is similar to the same buggy behavior we saw for matchers fixed in 1.20.0. For example in Rust 1.19 the following code fails to compile with the message , but correctly prints followed by as of 1.20. The matcher is currently broken in the same way. rustc 1.28.0-nightly ( 2018-06-09)\nThe ident issue was , so mentioning who fixed that one in . Would you be interested in taking a look at this similar case?\nDuplicate of . should still be working on it I think?\nThanks! 
Let's continue to track this as part of .", "positive_passages": [{"docid": "doc-en-rust-2208cd1484c76342c69116dd802666c72e82505675edd2017d36d92f4684c957", "text": "fn main() { m!(a); //~^ ERROR expected a lifetime, found `a` //~^ ERROR no rules expected the token `a` }", "commid": "rust_pr_51480"}], "negative_passages": []} {"query_id": "q-en-rust-379e360c57e4bb4b392f57b4240610d7ab72f7a3b6a872ab7cbe596e14b514e7", "query": "This is similar to the same buggy behavior we saw for matchers fixed in 1.20.0. For example in Rust 1.19 the following code fails to compile with the message , but correctly prints followed by as of 1.20. The matcher is currently broken in the same way. rustc 1.28.0-nightly ( 2018-06-09)\nThe ident issue was , so mentioning who fixed that one in . Would you be interested in taking a look at this similar case?\nDuplicate of . should still be working on it I think?\nThanks! Let's continue to track this as part of .", "positive_passages": [{"docid": "doc-en-rust-85d510b757d245a8b1fdb7da7b3eb45579460c99749e27504d058f456b4113e4", "text": "//}}} //{{{ issue 50903 ============================================================== macro_rules! foo_50903 { ($($lif:lifetime ,)* #) => {}; } foo_50903!('a, 'b, #); foo_50903!('a, #); foo_50903!(#); //}}} //{{{ issue 51477 ============================================================== macro_rules! foo_51477 { ($lifetime:lifetime) => { \"last token is lifetime\" }; ($other:tt) => { \"last token is other\" }; ($first:tt $($rest:tt)*) => { foo_51477!($($rest)*) }; } fn test_51477() { assert_eq!(\"last token is lifetime\", foo_51477!('a)); assert_eq!(\"last token is other\", foo_51477!(@)); assert_eq!(\"last token is lifetime\", foo_51477!(@ {} 'a)); } //}}} //{{{ some more tests ========================================================== macro_rules! test_block {", "commid": "rust_pr_51480"}], "negative_passages": []} {"query_id": "q-en-rust-379e360c57e4bb4b392f57b4240610d7ab72f7a3b6a872ab7cbe596e14b514e7", "query": "This is similar to the same buggy behavior we saw for matchers fixed in 1.20.0. For example in Rust 1.19 the following code fails to compile with the message , but correctly prints followed by as of 1.20. The matcher is currently broken in the same way. rustc 1.28.0-nightly ( 2018-06-09)\nThe ident issue was , so mentioning who fixed that one in . Would you be interested in taking a look at this similar case?\nDuplicate of . should still be working on it I think?\nThanks! Let's continue to track this as part of .", "positive_passages": [{"docid": "doc-en-rust-6c2d0fc6cae7f66946caf4a9ef74ef0d9a1987820eef4339842d1526c67b2845", "text": "test_meta_block!(windows {}); macro_rules! test_lifetime { (1. $($l:lifetime)* $($b:block)*) => {}; (2. $($b:block)* $($l:lifetime)*) => {}; } test_lifetime!(1. 'a 'b {} {}); test_lifetime!(2. {} {} 'a 'b); //}}} fn main() {", "commid": "rust_pr_51480"}], "negative_passages": []} {"query_id": "q-en-rust-379e360c57e4bb4b392f57b4240610d7ab72f7a3b6a872ab7cbe596e14b514e7", "query": "This is similar to the same buggy behavior we saw for matchers fixed in 1.20.0. For example in Rust 1.19 the following code fails to compile with the message , but correctly prints followed by as of 1.20. The matcher is currently broken in the same way. rustc 1.28.0-nightly ( 2018-06-09)\nThe ident issue was , so mentioning who fixed that one in . Would you be interested in taking a look at this similar case?\nDuplicate of . should still be working on it I think?\nThanks! 
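A self-contained sketch of the behavior the `lifetime` matcher fix above is meant to restore: a `$l:lifetime` fragment should still match after the tokens have been forwarded through a `$($rest:tt)*` recursion. This mirrors the `foo_51477` regression test added in rust_pr_51480; the macro name `last_token_kind` is illustrative, not part of the original patch.

```rust
// Mirrors the foo_51477 regression test from rust_pr_51480 (renamed):
// a lifetime must still be recognised by a `:lifetime` matcher after being
// re-forwarded as a `tt` through macro recursion.
macro_rules! last_token_kind {
    ($l:lifetime) => { "lifetime" };
    ($other:tt) => { "other" };
    ($first:tt $($rest:tt)*) => { last_token_kind!($($rest)*) };
}

fn main() {
    assert_eq!(last_token_kind!('a), "lifetime");
    assert_eq!(last_token_kind!(@), "other");
    // With the pre-fix matcher, the forwarded `'a` was not recognised
    // as a lifetime here.
    assert_eq!(last_token_kind!(@ {} 'a), "lifetime");
}
```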
Let's continue to track this as part of .", "positive_passages": [{"docid": "doc-en-rust-e2e995eb82dd075ca3629490238ea0a99c8a1173d7c0fe6a18a51caba81cbcb0", "text": "test_40569(); test_35650(); test_24189(); test_51477(); }", "commid": "rust_pr_51480"}], "negative_passages": []} {"query_id": "q-en-rust-3243bc8f69ddcb545fea14fc56ecf46c064642d30448c829a66128784655b4f2", "query": "When I build this code on nightly... ...I get these errors: The first error is fine, but the second one is missing the first character of . It looks like rustc assumes the first character is an and removes it without checking. :\nCurrently working on a fix here: Still need to add UI tests, which I'll do once I figure out how they work.\nAdding to the Rust 2018 Edition Preview milestone. Has a pending PR, too.", "positive_passages": [{"docid": "doc-en-rust-d0cc170cfd3bf30d907ba83cf8ff2dd94affe4e83e47570dd5253b24306570d4", "text": "// highlighted text will always be `&` and // thus can transform to `&mut` by slicing off // first ASCII character and prepending \"&mut \". let borrowed_expr = src[1..].to_string(); return (assignment_rhs_span, format!(\"&mut {}\", borrowed_expr)); if src.starts_with('&') { let borrowed_expr = src[1..].to_string(); return (assignment_rhs_span, format!(\"&mut {}\", borrowed_expr)); } } }", "commid": "rust_pr_51612"}], "negative_passages": []} {"query_id": "q-en-rust-3243bc8f69ddcb545fea14fc56ecf46c064642d30448c829a66128784655b4f2", "query": "When I build this code on nightly... ...I get these errors: The first error is fine, but the second one is missing the first character of . It looks like rustc assumes the first character is an and removes it without checking. :\nCurrently working on a fix here: Still need to add UI tests, which I'll do once I figure out how they work.\nAdding to the Rust 2018 Edition Preview milestone. Has a pending PR, too.", "positive_passages": [{"docid": "doc-en-rust-1b22ef3a47795ade4957a2f858457eab959e80682fdb5896ed9b44666fbd8ff0", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. #![feature(nll)] fn main() { let foo = &16; //~^ HELP consider changing this to be a mutable reference //~| SUGGESTION &mut 16 *foo = 32; //~^ ERROR cannot assign to `*foo` which is behind a `&` reference let bar = foo; //~^ HELP consider changing this to be a mutable reference //~| SUGGESTION &mut i32 *bar = 64; //~^ ERROR cannot assign to `*bar` which is behind a `&` reference } ", "commid": "rust_pr_51612"}], "negative_passages": []} {"query_id": "q-en-rust-3243bc8f69ddcb545fea14fc56ecf46c064642d30448c829a66128784655b4f2", "query": "When I build this code on nightly... ...I get these errors: The first error is fine, but the second one is missing the first character of . It looks like rustc assumes the first character is an and removes it without checking. :\nCurrently working on a fix here: Still need to add UI tests, which I'll do once I figure out how they work.\nAdding to the Rust 2018 Edition Preview milestone. 
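The rust_pr_51612 diff above guards the suggestion with `starts_with('&')` before slicing off the first character. A standalone sketch of that guard outside of rustc — the function name `make_mut_suggestion` and its `Option` return are illustrative, not the compiler's actual signature:

```rust
// Only rewrite `&expr` into `&mut expr` when the snippet really starts with
// `&`; otherwise slicing unconditionally drops the first character of the
// expression, which is the mangled suggestion reported above.
fn make_mut_suggestion(src: &str) -> Option<String> {
    if src.starts_with('&') {
        // '&' is a single byte, so slicing at index 1 is always valid here.
        Some(format!("&mut {}", &src[1..]))
    } else {
        None
    }
}

fn main() {
    assert_eq!(make_mut_suggestion("&16"), Some("&mut 16".to_string()));
    assert_eq!(make_mut_suggestion("foo"), None);
}
```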
Has a pending PR, too.", "positive_passages": [{"docid": "doc-en-rust-01b442ed645a89986e9a6a8e6db711505701346f192c00a865c0904d3daf4ccd", "text": " error[E0594]: cannot assign to `*foo` which is behind a `&` reference --> $DIR/issue-51515.rs:17:5 | LL | let foo = &16; | --- help: consider changing this to be a mutable reference: `&mut 16` ... LL | *foo = 32; | ^^^^^^^^^ `foo` is a `&` reference, so the data it refers to cannot be written error[E0594]: cannot assign to `*bar` which is behind a `&` reference --> $DIR/issue-51515.rs:22:5 | LL | let bar = foo; | --- help: consider changing this to be a mutable reference: `&mut i32` ... LL | *bar = 64; | ^^^^^^^^^ `bar` is a `&` reference, so the data it refers to cannot be written error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0594`. ", "commid": "rust_pr_51612"}], "negative_passages": []} {"query_id": "q-en-rust-2f14500a1df18a6a7ae0b2def3ef33038558f3a63e237d5e1d2b8c77b66abbe8", "query": "If I return a direct type that is , I get a helpful compiler warning: The equivalent (and I believe preferred) code using gives no such useful feedback: Amusingly, this isn't a new problem, as returning a boxed trait object also has the same problem. It's arguably worse there because you've performed an allocation for no good reason, but it's also less likely because people are more reluctant to introduce unneeded allocations.\nSince Rust 1.27 stabilizes (on functions), one possible solution would be a lint that checks the compiler-known type underneath the and the function returning it. If the type has the annotation but the function does not, it could suggest adding the attribute to the function.\nI think it would make sense for to quickly review and agree on how we should handle this. A lint seems plausible. Inside the same module, where we know the concrete underlying type, it would definitely make sense to pass through the . I'd love to know if it makes sense to automatically pass through the on the concrete type when used from outside the module, so that this can work automatically without needing a lint and a manual additional on the function.\nWhy is the not on, say, ?\nI agree with that traits seem like the better solution here than trying to \"leak\" must-used-ness.\nIn that case, is it fine that returning an of real type will drop the must-used-ness? Maybe we should suggest to the author of the function to add the attribute to the function if the real type is\nRight, there's nothing meaningfully tying such a function to , if is hidden.\nThe new thing being stabilized in 1.27 is specifically on functions and methods (). on types, including the iterator-adapter structs like , are .\nWe may need to add an arm for in this match?\nSo, I'd absolutely support adding for traits, which would have the same effect on a function returning as on a type T would have for a function returning . However, I also think we ought to handle this case somehow, preferably in a way that doesn't require the user to manually propagate outward on every function that returns a type.\nI don't think we should. The type, including its must-use'dness, is not a part of the public API in that context. If you want the function return to be treated as must-use, the solution in that case is to tag the function as must-use.\nThen in that case, would it make sense to have a lint suggesting propagation of from the type to the function that returns that concrete type?\nI don't think so. 
The point of returning impl Trait is to hide the return type and the facts of its API, the fact that the type is must-use seems no different from other facts about the return type.\nAs alluded to above, I argue that this is a bug in the must-use lint due to its antedating impl-Trait; I don't see sufficient cause here to complicate the language with another lint or feature. A pull request is forthcoming.\nHm, perhaps this was too confidently worded: see description of the new pull request .\nSo let me summarize to see if I understand correctly what you're planning for must_use for traits:\nI think expanding the applicability of mustuse should probably be introduced using an RFC. It seems to me that there are two different approaches proposed and an RFC could clarify why we chose a certain approach: proposes to evaluate the lint using the concrete type and provided a PR to this effect. This is incompatible with statement with which I agree with: \"The type, including its must-use'dness, is not a part of the public API in that context [i.e. return impl Trait]\". I also agree with that mustuse for traits is something we should investigate and pursue. This would be extremely helpful for futures because forgetting to poll needs to be obvious.\nCan you provide an example of a case where it is the correct thing to return a concrete type that is tagged as through and not use the result of the function? My original premise is that this is an unintentional mistake in 95-99% of the cases, which feels like the ideal case for a lint. In the (rare) case where the author of the -returning function deliberately wants to allow forgetting to use the return value, they can allow the lint. I don't know exactly which \"other facts\" you mean, but I can think of two primary things: \"sub\" traits, which are not automagically propagated: traits, which are automagically propagated: I think that the case is different from the \"sub trait\" case because you will rather quickly run into \"well, my code doesn't compile\" errors when you forget to return the \"sub trait\". You then have to decide if you meant to expose that interface. I know the auto trait aspect was much discussed, so I don't wish to re-litigate it. My point here is that we : I think there's a case to be made that is already an ergonomics-based feature as it helps prevent the user from writing code they didn't mean to.\nAs a concrete side note, I opened this because I did it myself: I wrote some code that returned and was confused for a minute because the compiler didn't have any warnings. I'm used to those warnings and they are part of my development flow. Having them \"suddenly go missing\" just because I wanted to use felt like trying to step on the last stair that isn't there. For me, it's right up there with in that my \"normal\" flow of programming that I've established since ~Rust 0.12 doesn't quite work the same with .\nIMO the more conservative approach would be to not leak mustuse on the concrete type. If that turns out to be the wrong call, it can always be later. I think defining mustuse on traits (e.g. on and ) will already be enough to catch all the bugs we're trying to prevent. The big advantage of defining mustuse on the trait is that this system can also work for unnameable types like the return type of an async function. 
Additionally it also avoids all the boilerplate code that defines each concrete iterator or future type as muchuse.\nI think the current approach of \"stick on the dozens of types implementing and \" just doesn't scale and this is an example where it goes wrong. Also consider type parameters or associated types that are known implement those traits. IMO they should get the same treatment by , without knowing the concrete type.\nTo be clear, I'm totally happy with implementing for traits, but that particular option has been available \"forever\" but doesn't seem to be implemented for whichever reason. On the flip side, fairly quickly. I really just don't want to see this particular feature held up by a months (years?) long discussion of the ideal Platonic implementation of for traits. Isn't that what the current implementation does? Am I missing something? I believe that's what I'm proposing... Yes, I believe that that's a better option, and I'll add one more: presumably it will also work for trait objects. I just don't want to lose what existing ergonomics we have until someone implements for traits. These don't even have to be mutually exclusive options! The lint can catch cases that a trait wouldn't \u2014 such as when a given instance of a trait should be must use but not the trait itself. The lint could even be removed once the better solution is created, if we felt it didn't carry its weight anymore.\nI also want to suggest a more aggressive approach: existential is always no matter of its underlying type. If a function is returning an thing, you were probably meant to use it in some way.\nI wonder how hard it would be to turn such a thing on to see the impact.\nThanks for the compelling argument! I agree with what I think is the implication of your post: this question boils down to \"Is more like an auto trait, or a regular API fact?\" I'm not sure of the answer. What is the distinction with vs any other return type? Isn't it plausible that something that returns also has a useful side effect, and the return is not relevant all the time?", "positive_passages": [{"docid": "doc-en-rust-4aa94c0b0762ebd2a1c88198d3c2de59763c4d7c8b9b6ba33d3b26b1b9db8bde", "text": "} let t = cx.tables.expr_ty(&expr); // FIXME(varkor): replace with `t.is_unit() || t.conservative_is_uninhabited()`. 
let type_permits_no_use = match t.sty { ty::Tuple(ref tys) if tys.is_empty() => true, ty::Never => true, ty::Adt(def, _) => { if def.variants.is_empty() { true } else { check_must_use(cx, def.did, s.span, \"\") let type_permits_lack_of_use = if t.is_unit() || cx.tcx.is_ty_uninhabited_from(cx.tcx.hir.get_module_parent(expr.id), t) { true } else { match t.sty { ty::Adt(def, _) => check_must_use(cx, def.did, s.span, \"\", \"\"), ty::Opaque(def, _) => { let mut must_use = false; for (predicate, _) in &cx.tcx.predicates_of(def).predicates { if let ty::Predicate::Trait(ref poly_trait_predicate) = predicate { let trait_ref = poly_trait_predicate.skip_binder().trait_ref; if check_must_use(cx, trait_ref.def_id, s.span, \"implementer of \", \"\") { must_use = true; break; } } } must_use } ty::Dynamic(binder, _) => { let mut must_use = false; for predicate in binder.skip_binder().iter() { if let ty::ExistentialPredicate::Trait(ref trait_ref) = predicate { if check_must_use(cx, trait_ref.def_id, s.span, \"\", \" trait object\") { must_use = true; break; } } } must_use } _ => false, } _ => false, }; let mut fn_warned = false;", "commid": "rust_pr_55663"}], "negative_passages": []} {"query_id": "q-en-rust-2f14500a1df18a6a7ae0b2def3ef33038558f3a63e237d5e1d2b8c77b66abbe8", "query": "If I return a direct type that is , I get a helpful compiler warning: The equivalent (and I believe preferred) code using gives no such useful feedback: Amusingly, this isn't a new problem, as returning a boxed trait object also has the same problem. It's arguably worse there because you've performed an allocation for no good reason, but it's also less likely because people are more reluctant to introduce unneeded allocations.\nSince Rust 1.27 stabilizes (on functions), one possible solution would be a lint that checks the compiler-known type underneath the and the function returning it. If the type has the annotation but the function does not, it could suggest adding the attribute to the function.\nI think it would make sense for to quickly review and agree on how we should handle this. A lint seems plausible. Inside the same module, where we know the concrete underlying type, it would definitely make sense to pass through the . I'd love to know if it makes sense to automatically pass through the on the concrete type when used from outside the module, so that this can work automatically without needing a lint and a manual additional on the function.\nWhy is the not on, say, ?\nI agree with that traits seem like the better solution here than trying to \"leak\" must-used-ness.\nIn that case, is it fine that returning an of real type will drop the must-used-ness? Maybe we should suggest to the author of the function to add the attribute to the function if the real type is\nRight, there's nothing meaningfully tying such a function to , if is hidden.\nThe new thing being stabilized in 1.27 is specifically on functions and methods (). on types, including the iterator-adapter structs like , are .\nWe may need to add an arm for in this match?\nSo, I'd absolutely support adding for traits, which would have the same effect on a function returning as on a type T would have for a function returning . However, I also think we ought to handle this case somehow, preferably in a way that doesn't require the user to manually propagate outward on every function that returns a type.\nI don't think we should. The type, including its must-use'dness, is not a part of the public API in that context. 
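The `ty::Opaque` and `ty::Dynamic` arms above are what make `#[must_use]` on a *trait* reach through `impl Trait` returns and trait objects. A minimal sketch of the user-visible effect, closely following the `must_use-trait.rs` test that rust_pr_55663 adds (shown in a later record); `Critical`, `Anon`, and `get_critical` come from that test:

```rust
// `#[must_use]` sits on the trait, not on the hidden concrete type.
#[must_use]
trait Critical {}

struct Anon;
impl Critical for Anon {}

// The concrete type is hidden behind `impl Critical`, so the old lint
// (which only inspected the concrete `ty::Adt`) had nothing to fire on.
fn get_critical() -> impl Critical {
    Anon
}

fn main() {
    // With the new `ty::Opaque` arm this is reported as
    // "unused implementer of `Critical` that must be used"
    // (a warning by default; the UI test denies it via
    // `#![deny(unused_must_use)]`).
    get_critical();
}
```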
If you want the function return to be treated as must-use, the solution in that case is to tag the function as must-use.\nThen in that case, would it make sense to have a lint suggesting propagation of from the type to the function that returns that concrete type?\nI don't think so. The point of returning impl Trait is to hide the return type and the facts of its API, the fact that the type is must-use seems no different from other facts about the return type.\nAs alluded to above, I argue that this is a bug in the must-use lint due to its antedating impl-Trait; I don't see sufficient cause here to complicate the language with another lint or feature. A pull request is forthcoming.\nHm, perhaps this was too confidently worded: see description of the new pull request .\nSo let me summarize to see if I understand correctly what you're planning for must_use for traits:\nI think expanding the applicability of mustuse should probably be introduced using an RFC. It seems to me that there are two different approaches proposed and an RFC could clarify why we chose a certain approach: proposes to evaluate the lint using the concrete type and provided a PR to this effect. This is incompatible with statement with which I agree with: \"The type, including its must-use'dness, is not a part of the public API in that context [i.e. return impl Trait]\". I also agree with that mustuse for traits is something we should investigate and pursue. This would be extremely helpful for futures because forgetting to poll needs to be obvious.\nCan you provide an example of a case where it is the correct thing to return a concrete type that is tagged as through and not use the result of the function? My original premise is that this is an unintentional mistake in 95-99% of the cases, which feels like the ideal case for a lint. In the (rare) case where the author of the -returning function deliberately wants to allow forgetting to use the return value, they can allow the lint. I don't know exactly which \"other facts\" you mean, but I can think of two primary things: \"sub\" traits, which are not automagically propagated: traits, which are automagically propagated: I think that the case is different from the \"sub trait\" case because you will rather quickly run into \"well, my code doesn't compile\" errors when you forget to return the \"sub trait\". You then have to decide if you meant to expose that interface. I know the auto trait aspect was much discussed, so I don't wish to re-litigate it. My point here is that we : I think there's a case to be made that is already an ergonomics-based feature as it helps prevent the user from writing code they didn't mean to.\nAs a concrete side note, I opened this because I did it myself: I wrote some code that returned and was confused for a minute because the compiler didn't have any warnings. I'm used to those warnings and they are part of my development flow. Having them \"suddenly go missing\" just because I wanted to use felt like trying to step on the last stair that isn't there. For me, it's right up there with in that my \"normal\" flow of programming that I've established since ~Rust 0.12 doesn't quite work the same with .\nIMO the more conservative approach would be to not leak mustuse on the concrete type. If that turns out to be the wrong call, it can always be later. I think defining mustuse on traits (e.g. on and ) will already be enough to catch all the bugs we're trying to prevent. 
The big advantage of defining mustuse on the trait is that this system can also work for unnameable types like the return type of an async function. Additionally it also avoids all the boilerplate code that defines each concrete iterator or future type as muchuse.\nI think the current approach of \"stick on the dozens of types implementing and \" just doesn't scale and this is an example where it goes wrong. Also consider type parameters or associated types that are known implement those traits. IMO they should get the same treatment by , without knowing the concrete type.\nTo be clear, I'm totally happy with implementing for traits, but that particular option has been available \"forever\" but doesn't seem to be implemented for whichever reason. On the flip side, fairly quickly. I really just don't want to see this particular feature held up by a months (years?) long discussion of the ideal Platonic implementation of for traits. Isn't that what the current implementation does? Am I missing something? I believe that's what I'm proposing... Yes, I believe that that's a better option, and I'll add one more: presumably it will also work for trait objects. I just don't want to lose what existing ergonomics we have until someone implements for traits. These don't even have to be mutually exclusive options! The lint can catch cases that a trait wouldn't \u2014 such as when a given instance of a trait should be must use but not the trait itself. The lint could even be removed once the better solution is created, if we felt it didn't carry its weight anymore.\nI also want to suggest a more aggressive approach: existential is always no matter of its underlying type. If a function is returning an thing, you were probably meant to use it in some way.\nI wonder how hard it would be to turn such a thing on to see the impact.\nThanks for the compelling argument! I agree with what I think is the implication of your post: this question boils down to \"Is more like an auto trait, or a regular API fact?\" I'm not sure of the answer. What is the distinction with vs any other return type? Isn't it plausible that something that returns also has a useful side effect, and the return is not relevant all the time?", "positive_passages": [{"docid": "doc-en-rust-efd5b50f712d99ec664e953c2467a5ea133b9e1a6a628f86988302b8c0a8b39f", "text": "}; if let Some(def) = maybe_def { let def_id = def.def_id(); fn_warned = check_must_use(cx, def_id, s.span, \"return value of \"); } else if type_permits_no_use { fn_warned = check_must_use(cx, def_id, s.span, \"return value of \", \"\"); } else if type_permits_lack_of_use { // We don't warn about unused unit or uninhabited types. // (See https://github.com/rust-lang/rust/issues/43806 for details.) return;", "commid": "rust_pr_55663"}], "negative_passages": []} {"query_id": "q-en-rust-2f14500a1df18a6a7ae0b2def3ef33038558f3a63e237d5e1d2b8c77b66abbe8", "query": "If I return a direct type that is , I get a helpful compiler warning: The equivalent (and I believe preferred) code using gives no such useful feedback: Amusingly, this isn't a new problem, as returning a boxed trait object also has the same problem. It's arguably worse there because you've performed an allocation for no good reason, but it's also less likely because people are more reluctant to introduce unneeded allocations.\nSince Rust 1.27 stabilizes (on functions), one possible solution would be a lint that checks the compiler-known type underneath the and the function returning it. 
If the type has the annotation but the function does not, it could suggest adding the attribute to the function.\nI think it would make sense for to quickly review and agree on how we should handle this. A lint seems plausible. Inside the same module, where we know the concrete underlying type, it would definitely make sense to pass through the . I'd love to know if it makes sense to automatically pass through the on the concrete type when used from outside the module, so that this can work automatically without needing a lint and a manual additional on the function.\nWhy is the not on, say, ?\nI agree with that traits seem like the better solution here than trying to \"leak\" must-used-ness.\nIn that case, is it fine that returning an of real type will drop the must-used-ness? Maybe we should suggest to the author of the function to add the attribute to the function if the real type is\nRight, there's nothing meaningfully tying such a function to , if is hidden.\nThe new thing being stabilized in 1.27 is specifically on functions and methods (). on types, including the iterator-adapter structs like , are .\nWe may need to add an arm for in this match?\nSo, I'd absolutely support adding for traits, which would have the same effect on a function returning as on a type T would have for a function returning . However, I also think we ought to handle this case somehow, preferably in a way that doesn't require the user to manually propagate outward on every function that returns a type.\nI don't think we should. The type, including its must-use'dness, is not a part of the public API in that context. If you want the function return to be treated as must-use, the solution in that case is to tag the function as must-use.\nThen in that case, would it make sense to have a lint suggesting propagation of from the type to the function that returns that concrete type?\nI don't think so. The point of returning impl Trait is to hide the return type and the facts of its API, the fact that the type is must-use seems no different from other facts about the return type.\nAs alluded to above, I argue that this is a bug in the must-use lint due to its antedating impl-Trait; I don't see sufficient cause here to complicate the language with another lint or feature. A pull request is forthcoming.\nHm, perhaps this was too confidently worded: see description of the new pull request .\nSo let me summarize to see if I understand correctly what you're planning for must_use for traits:\nI think expanding the applicability of mustuse should probably be introduced using an RFC. It seems to me that there are two different approaches proposed and an RFC could clarify why we chose a certain approach: proposes to evaluate the lint using the concrete type and provided a PR to this effect. This is incompatible with statement with which I agree with: \"The type, including its must-use'dness, is not a part of the public API in that context [i.e. return impl Trait]\". I also agree with that mustuse for traits is something we should investigate and pursue. This would be extremely helpful for futures because forgetting to poll needs to be obvious.\nCan you provide an example of a case where it is the correct thing to return a concrete type that is tagged as through and not use the result of the function? My original premise is that this is an unintentional mistake in 95-99% of the cases, which feels like the ideal case for a lint. 
In the (rare) case where the author of the -returning function deliberately wants to allow forgetting to use the return value, they can allow the lint. I don't know exactly which \"other facts\" you mean, but I can think of two primary things: \"sub\" traits, which are not automagically propagated: traits, which are automagically propagated: I think that the case is different from the \"sub trait\" case because you will rather quickly run into \"well, my code doesn't compile\" errors when you forget to return the \"sub trait\". You then have to decide if you meant to expose that interface. I know the auto trait aspect was much discussed, so I don't wish to re-litigate it. My point here is that we : I think there's a case to be made that is already an ergonomics-based feature as it helps prevent the user from writing code they didn't mean to.\nAs a concrete side note, I opened this because I did it myself: I wrote some code that returned and was confused for a minute because the compiler didn't have any warnings. I'm used to those warnings and they are part of my development flow. Having them \"suddenly go missing\" just because I wanted to use felt like trying to step on the last stair that isn't there. For me, it's right up there with in that my \"normal\" flow of programming that I've established since ~Rust 0.12 doesn't quite work the same with .\nIMO the more conservative approach would be to not leak mustuse on the concrete type. If that turns out to be the wrong call, it can always be later. I think defining mustuse on traits (e.g. on and ) will already be enough to catch all the bugs we're trying to prevent. The big advantage of defining mustuse on the trait is that this system can also work for unnameable types like the return type of an async function. Additionally it also avoids all the boilerplate code that defines each concrete iterator or future type as muchuse.\nI think the current approach of \"stick on the dozens of types implementing and \" just doesn't scale and this is an example where it goes wrong. Also consider type parameters or associated types that are known implement those traits. IMO they should get the same treatment by , without knowing the concrete type.\nTo be clear, I'm totally happy with implementing for traits, but that particular option has been available \"forever\" but doesn't seem to be implemented for whichever reason. On the flip side, fairly quickly. I really just don't want to see this particular feature held up by a months (years?) long discussion of the ideal Platonic implementation of for traits. Isn't that what the current implementation does? Am I missing something? I believe that's what I'm proposing... Yes, I believe that that's a better option, and I'll add one more: presumably it will also work for trait objects. I just don't want to lose what existing ergonomics we have until someone implements for traits. These don't even have to be mutually exclusive options! The lint can catch cases that a trait wouldn't \u2014 such as when a given instance of a trait should be must use but not the trait itself. The lint could even be removed once the better solution is created, if we felt it didn't carry its weight anymore.\nI also want to suggest a more aggressive approach: existential is always no matter of its underlying type. If a function is returning an thing, you were probably meant to use it in some way.\nI wonder how hard it would be to turn such a thing on to see the impact.\nThanks for the compelling argument! 
I agree with what I think is the implication of your post: this question boils down to \"Is more like an auto trait, or a regular API fact?\" I'm not sure of the answer. What is the distinction with vs any other return type? Isn't it plausible that something that returns also has a useful side effect, and the return is not relevant all the time?", "positive_passages": [{"docid": "doc-en-rust-9f071304879564d3a89e506a93ba95f2f5ad2a6bbbe47bb75bcb30e011156ad3", "text": "op_warned = true; } if !(type_permits_no_use || fn_warned || op_warned) { if !(type_permits_lack_of_use || fn_warned || op_warned) { cx.span_lint(UNUSED_RESULTS, s.span, \"unused result\"); } fn check_must_use(cx: &LateContext, def_id: DefId, sp: Span, describe_path: &str) -> bool { fn check_must_use( cx: &LateContext, def_id: DefId, sp: Span, descr_pre_path: &str, descr_post_path: &str, ) -> bool { for attr in cx.tcx.get_attrs(def_id).iter() { if attr.check_name(\"must_use\") { let msg = format!(\"unused {}`{}` that must be used\", describe_path, cx.tcx.item_path_str(def_id)); let msg = format!(\"unused {}`{}`{} that must be used\", descr_pre_path, cx.tcx.item_path_str(def_id), descr_post_path); let mut err = cx.struct_span_lint(UNUSED_MUST_USE, sp, &msg); // check for #[must_use = \"...\"] if let Some(note) = attr.value_str() {", "commid": "rust_pr_55663"}], "negative_passages": []} {"query_id": "q-en-rust-2f14500a1df18a6a7ae0b2def3ef33038558f3a63e237d5e1d2b8c77b66abbe8", "query": "If I return a direct type that is , I get a helpful compiler warning: The equivalent (and I believe preferred) code using gives no such useful feedback: Amusingly, this isn't a new problem, as returning a boxed trait object also has the same problem. It's arguably worse there because you've performed an allocation for no good reason, but it's also less likely because people are more reluctant to introduce unneeded allocations.\nSince Rust 1.27 stabilizes (on functions), one possible solution would be a lint that checks the compiler-known type underneath the and the function returning it. If the type has the annotation but the function does not, it could suggest adding the attribute to the function.\nI think it would make sense for to quickly review and agree on how we should handle this. A lint seems plausible. Inside the same module, where we know the concrete underlying type, it would definitely make sense to pass through the . I'd love to know if it makes sense to automatically pass through the on the concrete type when used from outside the module, so that this can work automatically without needing a lint and a manual additional on the function.\nWhy is the not on, say, ?\nI agree with that traits seem like the better solution here than trying to \"leak\" must-used-ness.\nIn that case, is it fine that returning an of real type will drop the must-used-ness? Maybe we should suggest to the author of the function to add the attribute to the function if the real type is\nRight, there's nothing meaningfully tying such a function to , if is hidden.\nThe new thing being stabilized in 1.27 is specifically on functions and methods (). on types, including the iterator-adapter structs like , are .\nWe may need to add an arm for in this match?\nSo, I'd absolutely support adding for traits, which would have the same effect on a function returning as on a type T would have for a function returning . 
However, I also think we ought to handle this case somehow, preferably in a way that doesn't require the user to manually propagate outward on every function that returns a type.\nI don't think we should. The type, including its must-use'dness, is not a part of the public API in that context. If you want the function return to be treated as must-use, the solution in that case is to tag the function as must-use.\nThen in that case, would it make sense to have a lint suggesting propagation of from the type to the function that returns that concrete type?\nI don't think so. The point of returning impl Trait is to hide the return type and the facts of its API, the fact that the type is must-use seems no different from other facts about the return type.\nAs alluded to above, I argue that this is a bug in the must-use lint due to its antedating impl-Trait; I don't see sufficient cause here to complicate the language with another lint or feature. A pull request is forthcoming.\nHm, perhaps this was too confidently worded: see description of the new pull request .\nSo let me summarize to see if I understand correctly what you're planning for must_use for traits:\nI think expanding the applicability of mustuse should probably be introduced using an RFC. It seems to me that there are two different approaches proposed and an RFC could clarify why we chose a certain approach: proposes to evaluate the lint using the concrete type and provided a PR to this effect. This is incompatible with statement with which I agree with: \"The type, including its must-use'dness, is not a part of the public API in that context [i.e. return impl Trait]\". I also agree with that mustuse for traits is something we should investigate and pursue. This would be extremely helpful for futures because forgetting to poll needs to be obvious.\nCan you provide an example of a case where it is the correct thing to return a concrete type that is tagged as through and not use the result of the function? My original premise is that this is an unintentional mistake in 95-99% of the cases, which feels like the ideal case for a lint. In the (rare) case where the author of the -returning function deliberately wants to allow forgetting to use the return value, they can allow the lint. I don't know exactly which \"other facts\" you mean, but I can think of two primary things: \"sub\" traits, which are not automagically propagated: traits, which are automagically propagated: I think that the case is different from the \"sub trait\" case because you will rather quickly run into \"well, my code doesn't compile\" errors when you forget to return the \"sub trait\". You then have to decide if you meant to expose that interface. I know the auto trait aspect was much discussed, so I don't wish to re-litigate it. My point here is that we : I think there's a case to be made that is already an ergonomics-based feature as it helps prevent the user from writing code they didn't mean to.\nAs a concrete side note, I opened this because I did it myself: I wrote some code that returned and was confused for a minute because the compiler didn't have any warnings. I'm used to those warnings and they are part of my development flow. Having them \"suddenly go missing\" just because I wanted to use felt like trying to step on the last stair that isn't there. 
For me, it's right up there with in that my \"normal\" flow of programming that I've established since ~Rust 0.12 doesn't quite work the same with .\nIMO the more conservative approach would be to not leak mustuse on the concrete type. If that turns out to be the wrong call, it can always be later. I think defining mustuse on traits (e.g. on and ) will already be enough to catch all the bugs we're trying to prevent. The big advantage of defining mustuse on the trait is that this system can also work for unnameable types like the return type of an async function. Additionally it also avoids all the boilerplate code that defines each concrete iterator or future type as muchuse.\nI think the current approach of \"stick on the dozens of types implementing and \" just doesn't scale and this is an example where it goes wrong. Also consider type parameters or associated types that are known implement those traits. IMO they should get the same treatment by , without knowing the concrete type.\nTo be clear, I'm totally happy with implementing for traits, but that particular option has been available \"forever\" but doesn't seem to be implemented for whichever reason. On the flip side, fairly quickly. I really just don't want to see this particular feature held up by a months (years?) long discussion of the ideal Platonic implementation of for traits. Isn't that what the current implementation does? Am I missing something? I believe that's what I'm proposing... Yes, I believe that that's a better option, and I'll add one more: presumably it will also work for trait objects. I just don't want to lose what existing ergonomics we have until someone implements for traits. These don't even have to be mutually exclusive options! The lint can catch cases that a trait wouldn't \u2014 such as when a given instance of a trait should be must use but not the trait itself. The lint could even be removed once the better solution is created, if we felt it didn't carry its weight anymore.\nI also want to suggest a more aggressive approach: existential is always no matter of its underlying type. If a function is returning an thing, you were probably meant to use it in some way.\nI wonder how hard it would be to turn such a thing on to see the impact.\nThanks for the compelling argument! I agree with what I think is the implication of your post: this question boils down to \"Is more like an auto trait, or a regular API fact?\" I'm not sure of the answer. What is the distinction with vs any other return type? 
Isn't it plausible that something that returns also has a useful side effect, and the return is not relevant all the time?", "positive_passages": [{"docid": "doc-en-rust-078fa241d8dadb3c74afdf4ca26e0b6c1edddff36749c2db695383c629302072", "text": " #![deny(unused_must_use)] #[must_use] trait Critical {} trait NotSoCritical {} trait DecidedlyUnimportant {} struct Anon; impl Critical for Anon {} impl NotSoCritical for Anon {} impl DecidedlyUnimportant for Anon {} fn get_critical() -> impl NotSoCritical + Critical + DecidedlyUnimportant { Anon {} } fn main() { get_critical(); //~ ERROR unused implementer of `Critical` that must be used } ", "commid": "rust_pr_55663"}], "negative_passages": []} {"query_id": "q-en-rust-2f14500a1df18a6a7ae0b2def3ef33038558f3a63e237d5e1d2b8c77b66abbe8", "query": "If I return a direct type that is , I get a helpful compiler warning: The equivalent (and I believe preferred) code using gives no such useful feedback: Amusingly, this isn't a new problem, as returning a boxed trait object also has the same problem. It's arguably worse there because you've performed an allocation for no good reason, but it's also less likely because people are more reluctant to introduce unneeded allocations.\nSince Rust 1.27 stabilizes (on functions), one possible solution would be a lint that checks the compiler-known type underneath the and the function returning it. If the type has the annotation but the function does not, it could suggest adding the attribute to the function.\nI think it would make sense for to quickly review and agree on how we should handle this. A lint seems plausible. Inside the same module, where we know the concrete underlying type, it would definitely make sense to pass through the . I'd love to know if it makes sense to automatically pass through the on the concrete type when used from outside the module, so that this can work automatically without needing a lint and a manual additional on the function.\nWhy is the not on, say, ?\nI agree with that traits seem like the better solution here than trying to \"leak\" must-used-ness.\nIn that case, is it fine that returning an of real type will drop the must-used-ness? Maybe we should suggest to the author of the function to add the attribute to the function if the real type is\nRight, there's nothing meaningfully tying such a function to , if is hidden.\nThe new thing being stabilized in 1.27 is specifically on functions and methods (). on types, including the iterator-adapter structs like , are .\nWe may need to add an arm for in this match?\nSo, I'd absolutely support adding for traits, which would have the same effect on a function returning as on a type T would have for a function returning . However, I also think we ought to handle this case somehow, preferably in a way that doesn't require the user to manually propagate outward on every function that returns a type.\nI don't think we should. The type, including its must-use'dness, is not a part of the public API in that context. If you want the function return to be treated as must-use, the solution in that case is to tag the function as must-use.\nThen in that case, would it make sense to have a lint suggesting propagation of from the type to the function that returns that concrete type?\nI don't think so. 
The point of returning impl Trait is to hide the return type and the facts of its API, the fact that the type is must-use seems no different from other facts about the return type.\nAs alluded to above, I argue that this is a bug in the must-use lint due to its antedating impl-Trait; I don't see sufficient cause here to complicate the language with another lint or feature. A pull request is forthcoming.\nHm, perhaps this was too confidently worded: see description of the new pull request .\nSo let me summarize to see if I understand correctly what you're planning for must_use for traits:\nI think expanding the applicability of mustuse should probably be introduced using an RFC. It seems to me that there are two different approaches proposed and an RFC could clarify why we chose a certain approach: proposes to evaluate the lint using the concrete type and provided a PR to this effect. This is incompatible with statement with which I agree with: \"The type, including its must-use'dness, is not a part of the public API in that context [i.e. return impl Trait]\". I also agree with that mustuse for traits is something we should investigate and pursue. This would be extremely helpful for futures because forgetting to poll needs to be obvious.\nCan you provide an example of a case where it is the correct thing to return a concrete type that is tagged as through and not use the result of the function? My original premise is that this is an unintentional mistake in 95-99% of the cases, which feels like the ideal case for a lint. In the (rare) case where the author of the -returning function deliberately wants to allow forgetting to use the return value, they can allow the lint. I don't know exactly which \"other facts\" you mean, but I can think of two primary things: \"sub\" traits, which are not automagically propagated: traits, which are automagically propagated: I think that the case is different from the \"sub trait\" case because you will rather quickly run into \"well, my code doesn't compile\" errors when you forget to return the \"sub trait\". You then have to decide if you meant to expose that interface. I know the auto trait aspect was much discussed, so I don't wish to re-litigate it. My point here is that we : I think there's a case to be made that is already an ergonomics-based feature as it helps prevent the user from writing code they didn't mean to.\nAs a concrete side note, I opened this because I did it myself: I wrote some code that returned and was confused for a minute because the compiler didn't have any warnings. I'm used to those warnings and they are part of my development flow. Having them \"suddenly go missing\" just because I wanted to use felt like trying to step on the last stair that isn't there. For me, it's right up there with in that my \"normal\" flow of programming that I've established since ~Rust 0.12 doesn't quite work the same with .\nIMO the more conservative approach would be to not leak mustuse on the concrete type. If that turns out to be the wrong call, it can always be later. I think defining mustuse on traits (e.g. on and ) will already be enough to catch all the bugs we're trying to prevent. The big advantage of defining mustuse on the trait is that this system can also work for unnameable types like the return type of an async function. 
Additionally it also avoids all the boilerplate code that defines each concrete iterator or future type as muchuse.\nI think the current approach of \"stick on the dozens of types implementing and \" just doesn't scale and this is an example where it goes wrong. Also consider type parameters or associated types that are known implement those traits. IMO they should get the same treatment by , without knowing the concrete type.\nTo be clear, I'm totally happy with implementing for traits, but that particular option has been available \"forever\" but doesn't seem to be implemented for whichever reason. On the flip side, fairly quickly. I really just don't want to see this particular feature held up by a months (years?) long discussion of the ideal Platonic implementation of for traits. Isn't that what the current implementation does? Am I missing something? I believe that's what I'm proposing... Yes, I believe that that's a better option, and I'll add one more: presumably it will also work for trait objects. I just don't want to lose what existing ergonomics we have until someone implements for traits. These don't even have to be mutually exclusive options! The lint can catch cases that a trait wouldn't \u2014 such as when a given instance of a trait should be must use but not the trait itself. The lint could even be removed once the better solution is created, if we felt it didn't carry its weight anymore.\nI also want to suggest a more aggressive approach: existential is always no matter of its underlying type. If a function is returning an thing, you were probably meant to use it in some way.\nI wonder how hard it would be to turn such a thing on to see the impact.\nThanks for the compelling argument! I agree with what I think is the implication of your post: this question boils down to \"Is more like an auto trait, or a regular API fact?\" I'm not sure of the answer. What is the distinction with vs any other return type? Isn't it plausible that something that returns also has a useful side effect, and the return is not relevant all the time?", "positive_passages": [{"docid": "doc-en-rust-e60f9f8bfbe08a2ba4317b8001294dd11c04f328a928e5f3363a9dc921cd0347", "text": " error: unused implementer of `Critical` that must be used --> $DIR/must_use-trait.rs:21:5 | LL | get_critical(); //~ ERROR unused implementer of `Critical` that must be used | ^^^^^^^^^^^^^^^ | note: lint level defined here --> $DIR/must_use-trait.rs:1:9 | LL | #![deny(unused_must_use)] | ^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_55663"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. 
This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-2185c11d719580ae6a96bf23adb79035c9d3125dbd763d73071a729c1a03be63", "text": "let crate_types = if options.proc_macro_crate { vec![config::CrateType::ProcMacro] } else { vec![config::CrateType::Dylib] vec![config::CrateType::Rlib] }; let sessopts = config::Options {", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-730d8af471a097ca87de7e62ba799a568736de53310b9d3a004bfae9da5412d0", "text": "intravisit::walk_crate(this, krate); }); }); compiler.session().abort_if_errors(); let ret: Result<_, ErrorReported> = Ok(collector.tests); ret }) }) .expect(\"compiler aborted in rustdoc!\"); }); let tests = match tests { Ok(tests) => tests, Err(ErrorReported) => return 1, }; test_args.insert(0, \"rustdoctest\".to_string());", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. 
The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-4bd020103b9b30d63ef8d769e2f702ddd6246afe24ff491873a5cc734f4597a8", "text": " // compile-flags:--test /// ``` /// assert!(true) /// ``` pub fn f() {} pub fn f() {} ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-bd8314732989e38719c888c03ebe6b03e87ad1ef2f90c26583ed3be69bc29d23", "text": " error[E0428]: the name `f` is defined multiple times --> $DIR/test-compile-fail1.rs:8:1 | 6 | pub fn f() {} | ---------- previous definition of the value `f` here 7 | 8 | pub fn f() {} | ^^^^^^^^^^ `f` redefined here | = note: `f` must be defined only once in the value namespace of this module error: aborting due to previous error For more information about this error, try `rustc --explain E0428`. ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... 
Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-a0351748c9cbf73d77aeccbb984243628302e70247d814c6850fe281764deec1", "text": " error: expected one of `!` or `::`, found `` --> $DIR/test-compile-fail2.rs:3:1 | 3 | fail | ^^^^ expected one of `!` or `::` error: aborting due to previous error ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-c6dadb5ef56592fc4032b296f90f31832569128a65787342d91d4c06f060a997", "text": " error: unterminated double quote string --> $DIR/test-compile-fail3.rs:3:1 | 3 | \"fail | ^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... 
Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-58f1a427da5d5eca90f5c857c2cc049cf337d3d53e01e4b769953b5e75829ab3", "text": " // compile-flags:--test // normalize-stdout-test: \"src/test/rustdoc-ui\" -> \"$$DIR\" // build-pass #![no_std] extern crate alloc; /// ``` /// assert!(true) /// ``` pub fn f() {} ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-7cdf2149b8a9da64a8d63d2bcc06669ae9a3c4070061fcd35a8fe6bdff973a4e", "query": "I noticed this in the output running locally: and that didn't make tests fail. Cc:\nSame with panic_unwind:\nThis error message is printed when running , but before running any test. The code printing it is in . It looks like rustdoc is using that code in order to load the crate and find doctests in it. But since it\u2019s not actually compiling this crate, not having an allocator is not a problem. So I think there are two issues: rustdoc keeps doing its thing even though there are errors. It probably should call something like on its instance at some point. This error should not occur in the first place. One way to inhibit it would be to tell that we\u2019re \"compiling\" an rlib () rather than some other crate type. rlibs do not need (yet) to have an allocator defined. This is possibly this line: The first issue could in some other situation hide a real problem, but in this case it makes the second one mostly harmless. The only bad consequence is some noise in the output of a test run.\nRelated:\nmodify labels: +T-rustdoc\nError: Parsing label command in failed: labels +T-rustdo... Please let know if you're having trouble with this bot.", "positive_passages": [{"docid": "doc-en-rust-9e3779ba40badfef10340e5a4878404bd118f489f87b3f18b3c8188c14387641", "text": " running 1 test test $DIR/test-no_std.rs - f (line 9) ... ok test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out ", "commid": "rust_pr_68357"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-80e55fd209c223fb00aed2ff7ea3ec1d0a53c463c93b8182cb57d55f2841cf74", "text": "let mut cargo = Command::new(&self.initial_cargo); let out_dir = self.stage_out(compiler, mode); // command specific path, we call clear_if_dirty with this let mut my_out = match cmd { \"build\" => self.cargo_out(compiler, mode, target), // This is the intended out directory for crate documentation. \"doc\" | \"rustdoc\" => self.crate_doc_out(target), _ => self.stage_out(compiler, mode), }; // This is for the original compiler, but if we're forced to use stage 1, then // std/test/rustc stamps won't exist in stage 2, so we need to get those from stage 1, since // we copy the libs forward. 
let cmp = self.compiler_for(compiler.stage, compiler.host, target); let libstd_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libstd_stamp(self, cmp, target), _ => compile::libstd_stamp(self, cmp, target), }; let libtest_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::libtest_stamp(self, cmp, target), _ => compile::libtest_stamp(self, cmp, target), }; let librustc_stamp = match cmd { \"check\" | \"clippy\" | \"fix\" => check::librustc_stamp(self, cmp, target), _ => compile::librustc_stamp(self, cmp, target), }; // Codegen backends are not yet tracked by -Zbinary-dep-depinfo, // so we need to explicitly clear out if they've been updated. for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&out_dir, &backend); } if cmd == \"doc\" || cmd == \"rustdoc\" { if mode == Mode::Rustc || mode == Mode::ToolRustc || mode == Mode::Codegen { let my_out = match mode { // This is the intended out directory for compiler documentation. my_out = self.compiler_doc_out(target); } Mode::Rustc | Mode::ToolRustc | Mode::Codegen => self.compiler_doc_out(target), _ => self.crate_doc_out(target), }; let rustdoc = self.rustdoc(compiler); self.clear_if_dirty(&my_out, &rustdoc); } else if cmd != \"test\" { match mode { Mode::Std => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); for backend in self.codegen_backends(compiler) { self.clear_if_dirty(&my_out, &backend); } }, Mode::Test => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::Rustc => { self.clear_if_dirty(&my_out, &self.rustc(compiler)); self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::Codegen => { self.clear_if_dirty(&my_out, &librustc_stamp); }, Mode::ToolBootstrap => { }, Mode::ToolStd => { self.clear_if_dirty(&my_out, &libstd_stamp); }, Mode::ToolTest => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); }, Mode::ToolRustc => { self.clear_if_dirty(&my_out, &libstd_stamp); self.clear_if_dirty(&my_out, &libtest_stamp); self.clear_if_dirty(&my_out, &librustc_stamp); }, } } cargo", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-9d639d84c2f0fe53329d4b551bedabea3c30bcb4f1a8908092c0c607b18583a7", "text": "}, } // This tells Cargo (and in turn, rustc) to output more complete // dependency information. Most importantly for rustbuild, this // includes sysroot artifacts, like libstd, which means that we don't // need to track those in rustbuild (an error prone process!). This // feature is currently unstable as there may be some bugs and such, but // it represents a big improvement in rustbuild's reliability on // rebuilds, so we're using it here. // // For some additional context, see #63470 (the PR originally adding // this), as well as #63012 which is the tracking issue for this // feature on the rustc side. 
cargo.arg(\"-Zbinary-dep-depinfo\"); cargo.arg(\"-j\").arg(self.jobs().to_string()); // Remove make-related flags to ensure Cargo can correctly set things up cargo.env_remove(\"MAKEFLAGS\");", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-f515d4963da911b191f7faae357370e1be6dc99e30fd6ccb2209db0947d434f6", "text": "let libdir = builder.sysroot_libdir(compiler, target); let hostdir = builder.sysroot_libdir(compiler, compiler.host); add_to_sysroot(&builder, &libdir, &hostdir, &rustdoc_stamp(builder, compiler, target)); builder.cargo(compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-432d9631a7ef91d129d2cfaff55f38dadb459167bf99bc7744fd5bc512bfc321", "text": "use std::process::{Command, Stdio, exit}; use std::str; use build_helper::{output, mtime, t, up_to_date}; use build_helper::{output, t, up_to_date}; use filetime::FileTime; use serde::Deserialize; use serde_json;", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-093ad82386bc6f2c9804225303004b7374d3bd3f54cc107e31b82a46c2e29d8e", "text": "// for reason why the sanitizers are not built in stage0. 
copy_apple_sanitizer_dylibs(builder, &builder.native_dir(target), \"osx\", &libdir); } builder.cargo(target_compiler, Mode::ToolStd, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-4f02897f85d8e9bfdaeae6766e030d8555e33e4613fcd3fb04fedf0d0990a802", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &libtest_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolTest, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-d2343b61ee6096e2e13dd91893e26549fb3a997a5898079b8bbabeb6dce96bb6", "text": "&builder.sysroot_libdir(target_compiler, compiler.host), &librustc_stamp(builder, compiler, target) ); builder.cargo(target_compiler, Mode::ToolRustc, target, \"clean\"); } }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-56d66dd8e210f89be0bd1a7f05632aa224d9cbce7732f2c7541235f5b4384edb", "query": "I was in the middle of bisecting on system llvm and running when I noticed that, in fact, while each step was rebuilding the compiler with the changed llvm correctly, the test that was being run was never recompiled, such that the same one was always running, and my bisect was guided by wrong results. STR: Run once Touch anything that is part of e.g. , e.g. Run again Expected result: Both and the test are rebuilt, and the rebuilt test executed. Actual result: is rebuilt, the test is not, and the test built before the codegen changes is executed.", "positive_passages": [{"docid": "doc-en-rust-d645c75ba90a4081207db3103439b1bcb0c9033b5d6299a205f1ce0b23845741", "text": "deps.push((path_to_add.into(), false)); } // Now we want to update the contents of the stamp file, if necessary. First // we read off the previous contents along with its mtime. If our new // contents (the list of files to copy) is different or if any dep's mtime // is newer then we rewrite the stamp file. 
deps.sort(); let stamp_contents = fs::read(stamp); let stamp_mtime = mtime(&stamp); let mut new_contents = Vec::new(); let mut max = None; let mut max_path = None; for (dep, proc_macro) in deps.iter() { let mtime = mtime(dep); if Some(mtime) > max { max = Some(mtime); max_path = Some(dep.clone()); } new_contents.extend(if *proc_macro { b\"h\" } else { b\"t\" }); new_contents.extend(dep.to_str().unwrap().as_bytes()); new_contents.extend(b\"0\"); } let max = max.unwrap(); let max_path = max_path.unwrap(); let contents_equal = stamp_contents .map(|contents| contents == new_contents) .unwrap_or_default(); if contents_equal && max <= stamp_mtime { builder.verbose(&format!(\"not updating {:?}; contents equal and {:?} <= {:?}\", stamp, max, stamp_mtime)); return deps.into_iter().map(|(d, _)| d).collect() } if max > stamp_mtime { builder.verbose(&format!(\"updating {:?} as {:?} changed\", stamp, max_path)); } else { builder.verbose(&format!(\"updating {:?} as deps changed\", stamp)); } t!(fs::write(&stamp, &new_contents)); deps.into_iter().map(|(d, _)| d).collect() }", "commid": "rust_pr_63470"}], "negative_passages": []} {"query_id": "q-en-rust-c9d9cf71add6acbe433ecba78decd8cf4c6be3f181389d3332bc6e8a02889923", "query": "Playing with the nighly's existential types, I stumbled upon a compiler stack overflow: Error message:\nA better example might be this: removing the from the existential type decleration just compiles fine\nCan you try running the compiler inside gdb and produce a backtrace?\n #![feature(existential_type)] existential type Foo: Fn() -> Foo; //~^ ERROR: cycle detected when processing `Foo` fn crash(x: Foo) -> Foo { x } fn main() { } ", "commid": "rust_pr_59459"}], "negative_passages": []} {"query_id": "q-en-rust-c9d9cf71add6acbe433ecba78decd8cf4c6be3f181389d3332bc6e8a02889923", "query": "Playing with the nighly's existential types, I stumbled upon a compiler stack overflow: Error message:\nA better example might be this: removing the from the existential type decleration just compiles fine\nCan you try running the compiler inside gdb and produce a backtrace?\n error[E0391]: cycle detected when processing `Foo` --> $DIR/existential-types-with-cycle-error.rs:3:1 | LL | existential type Foo: Fn() -> Foo; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | note: ...which requires processing `crash`... --> $DIR/existential-types-with-cycle-error.rs:6:25 | LL | fn crash(x: Foo) -> Foo { | _________________________^ LL | | x LL | | } | |_^ = note: ...which again requires processing `Foo`, completing the cycle note: cycle used when collecting item types in top-level module --> $DIR/existential-types-with-cycle-error.rs:1:1 | LL | / #![feature(existential_type)] LL | | LL | | existential type Foo: Fn() -> Foo; LL | | ... | LL | | LL | | } | |_^ error: aborting due to previous error For more information about this error, try `rustc --explain E0391`. 
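For contrast with the cyclic `existential type Foo: Fn() -> Foo;` case rejected above, the reporter notes that dropping the self-reference compiles. A rough sketch of that non-cyclic shape, written against the old nightly-only `existential_type` feature this issue targets; it will not build on a current compiler (the feature was later reworked as `type_alias_impl_trait`), and `define` is a hypothetical defining use.

```rust
#![feature(existential_type)]

// Non-cyclic: the hidden type is just some closure implementing
// `Fn() -> u32`, and `define` acts as the defining use that pins it down.
existential type Foo: Fn() -> u32;

fn define() -> Foo {
    || 42
}

fn main() {
    assert_eq!(define()(), 42);
}
```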
", "commid": "rust_pr_59459"}], "negative_passages": []} {"query_id": "q-en-rust-c9d9cf71add6acbe433ecba78decd8cf4c6be3f181389d3332bc6e8a02889923", "query": "Playing with the nighly's existential types, I stumbled upon a compiler stack overflow: Error message:\nA better example might be this: removing the from the existential type decleration just compiles fine\nCan you try running the compiler inside gdb and produce a backtrace?\n #![feature(existential_type)] pub trait Bar { type Item; } existential type Foo: Bar; //~^ ERROR: cycle detected when processing `Foo` fn crash(x: Foo) -> Foo { x } fn main() { } ", "commid": "rust_pr_59459"}], "negative_passages": []} {"query_id": "q-en-rust-c9d9cf71add6acbe433ecba78decd8cf4c6be3f181389d3332bc6e8a02889923", "query": "Playing with the nighly's existential types, I stumbled upon a compiler stack overflow: Error message:\nA better example might be this: removing the from the existential type decleration just compiles fine\nCan you try running the compiler inside gdb and produce a backtrace?\n error[E0391]: cycle detected when processing `Foo` --> $DIR/existential-types-with-cycle-error2.rs:7:1 | LL | existential type Foo: Bar; | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | note: ...which requires processing `crash`... --> $DIR/existential-types-with-cycle-error2.rs:10:25 | LL | fn crash(x: Foo) -> Foo { | _________________________^ LL | | x LL | | } | |_^ = note: ...which again requires processing `Foo`, completing the cycle note: cycle used when collecting item types in top-level module --> $DIR/existential-types-with-cycle-error2.rs:1:1 | LL | / #![feature(existential_type)] LL | | LL | | pub trait Bar { LL | | type Item; ... | LL | | LL | | } | |_^ error: aborting due to previous error For more information about this error, try `rustc --explain E0391`. ", "commid": "rust_pr_59459"}], "negative_passages": []} {"query_id": "q-en-rust-1a53424ab950f2197d21bea962940f1b32a11045bf939a9ceaf6fefc61588ee0", "query": "These are the most common tools for concurrency in Rust. They should always be at hand.\nAlso\nIf we add we should also add and .\nAgreed\nFixed by brson's commit.", "positive_passages": [{"docid": "doc-en-rust-7abb2fef10aa85f2f4f89b3b442b1d096cd25b4b525d395f9815f73913839c24", "text": "pub use vec::{ImmutableEqVector, ImmutableCopyableVector}; pub use vec::{OwnedVector, OwnedCopyableVector}; /* Reexported runtime types */ pub use comm::{stream, Port, Chan, GenericChan, GenericSmartChan, GenericPort, Peekable}; pub use task::spawn; /* Reexported modules */ pub use at_vec;", "commid": "rust_pr_5367"}], "negative_passages": []} {"query_id": "q-en-rust-6425f1153921b70291eda1d163ac5aecec3f486b43804ab1cff6833802fa6c23", "query": "As of we'll be updating to a version of the RLS which depends on racer which depends on . Unfortunately we're bringing in two different versions of , 209 and 211, right now. These should be deduplicated as they take a good deal of time to build on CI, and then we should also add a check which ensures that there's only one version of in our workspace. 
cc\nI've sent a PR to update racer as part of\nThis is fixed by", "positive_passages": [{"docid": "doc-en-rust-d0a8eafb49a65744ff4220a167579c233d5095e8d86f35abca7f8b2c60cda137", "text": "// versions of them accidentally sneak into our dependency graph to // ensure we keep our CI times under control // \"cargo\", // FIXME(#53005) // \"rustc-ap-syntax\", // FIXME(#53006) \"rustc-ap-syntax\", ]; let mut name_to_id = HashMap::new(); for node in resolve.nodes.iter() {", "commid": "rust_pr_53210"}], "negative_passages": []} {"query_id": "q-en-rust-4880d19369b37fc6327b2d38c0e4b2ceb8716429da0a8121989862d010b041fc", "query": "(edit: much simpler reproducer at second post) (see edit history for the initial issue text) Potential duplicate of and cc\n(edit: updated to nightly from 2018-08-14, as the error message changed) Another simpler reproducer: Backtrace: The backtrace looks quite different though, so I guess a fix for this one should also be checked to fix the first example.\nLatest nightly appears to have fixed the ICE, however it's still making unbounded recursion :)\nAsync-await traige: Marking as blocking pending investigation to understand what is happening.\nAlright, so here's what I've worked out so far: I was able to reproduce the ICE from the original code snippet using the compiler version and library versions from when the issue was reported ( was the closest version I could find and I had to use patch to make Cargo download versions of libraries that worked). I was then able to confirm that the reproduction supplied in the next comment also triggered the ICE. I was then able to get rid of the external dependencies (mostly by copying in parts of that version of futures) and still get the ICE - . After that, I upgraded to as mentioned in the next comment and confirmed that the ICE had been fixed and the unbounded recursion was happening with that same test case. Then, I took that test case that was confirmed to have the problem and tried to run it on the latest nightly - I had to make some minor adjustments as the pin module had changed since August, but I couldn't get the recursion error to happen. I'm inclined to say that this has been fixed at some point - .\nI've submitted with a regression test for this issue. if you are able to reproduce this failure on a more recent nightly then feel free to re-open or file a new issue.\nSounds good to me, thanks!", "positive_passages": [{"docid": "doc-en-rust-2c2a59dbf64a1412f224450ae9df6f3e46305774e5d0fedad748a34d28b229ce", "text": " // compile-pass // edition:2018 #![feature(arbitrary_self_types, async_await, await_macro)] use std::task::{self, Poll}; use std::future::Future; use std::marker::Unpin; use std::pin::Pin; // This is a regression test for a ICE/unbounded recursion issue relating to async-await. 
#[derive(Debug)] #[must_use = \"futures do nothing unless polled\"] pub struct Lazy { f: Option } impl Unpin for Lazy {} pub fn lazy(f: F) -> Lazy where F: FnOnce(&mut task::Context) -> R, { Lazy { f: Some(f) } } impl Future for Lazy where F: FnOnce(&mut task::Context) -> R, { type Output = R; fn poll(mut self: Pin<&mut Self>, cx: &mut task::Context) -> Poll { Poll::Ready((self.f.take().unwrap())(cx)) } } async fn __receive(want: WantFn) -> () where Fut: Future, WantFn: Fn(&Box) -> Fut, { await!(lazy(|_| ())); } pub fn basic_spawn_receive() { async { await!(__receive(|_| async { () })) }; } fn main() {} ", "commid": "rust_pr_60243"}], "negative_passages": []} {"query_id": "q-en-rust-33996eeb4e6692aa17d062f50ff98244029ecf4a978f251e75f0e6515305c0a1", "query": "The following code: Produces the following error message: This error message tells you that the types are mismatched, but doesn't fully explain why. To fix it, you need to use a floating point literal instead of an integer literal. This isn't clear from the error message. If you're new to programming or if you come from a language that doesn't draw the same distinction between integer and floating point literals, it may not be clear why the literal is not the same as . It would be great if we could add something in our error message that would suggest one or more of the valid ways to fix this: - - type could be in another context", "positive_passages": [{"docid": "doc-en-rust-821638dbb94fa27234684f735494b3f36f062fea14d0615f05c9949c21fcfab9", "text": "use std::fmt; use rustc_target::spec::abi; use syntax::ast; use errors::DiagnosticBuilder; use errors::{Applicability, DiagnosticBuilder}; use syntax_pos::Span; use hir;", "commid": "rust_pr_53283"}], "negative_passages": []} {"query_id": "q-en-rust-33996eeb4e6692aa17d062f50ff98244029ecf4a978f251e75f0e6515305c0a1", "query": "The following code: Produces the following error message: This error message tells you that the types are mismatched, but doesn't fully explain why. To fix it, you need to use a floating point literal instead of an integer literal. This isn't clear from the error message. If you're new to programming or if you come from a language that doesn't draw the same distinction between integer and floating point literals, it may not be clear why the literal is not the same as . 
It would be great if we could add something in our error message that would suggest one or more of the valid ways to fix this: - - type could be in another context", "positive_passages": [{"docid": "doc-en-rust-2ff11411f28cf5373cdcb3397a9631e235accd20837b458164346db94089419d", "text": "db.note(\"no two closures, even if identical, have the same type\"); db.help(\"consider boxing your closure and/or using it as a trait object\"); } match (&values.found.sty, &values.expected.sty) { // Issue #53280 (ty::TyInfer(ty::IntVar(_)), ty::TyFloat(_)) => { if let Ok(snippet) = self.sess.codemap().span_to_snippet(sp) { if snippet.chars().all(|c| c.is_digit(10) || c == '-' || c == '_') { db.span_suggestion_with_applicability( sp, \"use a float literal\", format!(\"{}.0\", snippet), Applicability::MachineApplicable ); } } }, _ => {} } }, OldStyleLUB(err) => { db.note(\"this was previously accepted by the compiler but has been phased out\");", "commid": "rust_pr_53283"}], "negative_passages": []} {"query_id": "q-en-rust-33996eeb4e6692aa17d062f50ff98244029ecf4a978f251e75f0e6515305c0a1", "query": "The following code: Produces the following error message: This error message tells you that the types are mismatched, but doesn't fully explain why. To fix it, you need to use a floating point literal instead of an integer literal. This isn't clear from the error message. If you're new to programming or if you come from a language that doesn't draw the same distinction between integer and floating point literals, it may not be clear why the literal is not the same as . It would be great if we could add something in our error message that would suggest one or more of the valid ways to fix this: - - type could be in another context", "positive_passages": [{"docid": "doc-en-rust-bfeb57377dab3f0c492bbc40ca0672bf3e2abcc647618a0728a0599320560755", "text": "--> $DIR/catch-block-type-error.rs:18:9 | LL | 42 | ^^ expected f32, found integral variable | ^^ | | | expected f32, found integral variable | help: use a float literal: `42.0` | = note: expected type `f32` found type `{integer}`", "commid": "rust_pr_53283"}], "negative_passages": []} {"query_id": "q-en-rust-33996eeb4e6692aa17d062f50ff98244029ecf4a978f251e75f0e6515305c0a1", "query": "The following code: Produces the following error message: This error message tells you that the types are mismatched, but doesn't fully explain why. To fix it, you need to use a floating point literal instead of an integer literal. This isn't clear from the error message. If you're new to programming or if you come from a language that doesn't draw the same distinction between integer and floating point literals, it may not be clear why the literal is not the same as . It would be great if we could add something in our error message that would suggest one or more of the valid ways to fix this: - - type could be in another context", "positive_passages": [{"docid": "doc-en-rust-981800a85e59b3f872c59949783b88c469a3fa72570f85c3d23519178cb4a3a8", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
fn main() { let sixteen: f32 = 16; //~^ ERROR mismatched types //~| HELP use a float literal let a_million_and_seventy: f64 = 1_000_070; //~^ ERROR mismatched types //~| HELP use a float literal let negative_nine: f32 = -9; //~^ ERROR mismatched types //~| HELP use a float literal // only base-10 literals get the suggestion let sixteen_again: f64 = 0x10; //~^ ERROR mismatched types let and_once_more: f32 = 0o20; //~^ ERROR mismatched types } ", "commid": "rust_pr_53283"}], "negative_passages": []} {"query_id": "q-en-rust-33996eeb4e6692aa17d062f50ff98244029ecf4a978f251e75f0e6515305c0a1", "query": "The following code: Produces the following error message: This error message tells you that the types are mismatched, but doesn't fully explain why. To fix it, you need to use a floating point literal instead of an integer literal. This isn't clear from the error message. If you're new to programming or if you come from a language that doesn't draw the same distinction between integer and floating point literals, it may not be clear why the literal is not the same as . It would be great if we could add something in our error message that would suggest one or more of the valid ways to fix this: - - type could be in another context", "positive_passages": [{"docid": "doc-en-rust-4cd453487e5fa3a45012b5762e11ce463fa8d802ae6dc0268a235b1c22c78695", "text": " error[E0308]: mismatched types --> $DIR/issue-53280-expected-float-found-integer-literal.rs:12:24 | LL | let sixteen: f32 = 16; | ^^ | | | expected f32, found integral variable | help: use a float literal: `16.0` | = note: expected type `f32` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-53280-expected-float-found-integer-literal.rs:15:38 | LL | let a_million_and_seventy: f64 = 1_000_070; | ^^^^^^^^^ | | | expected f64, found integral variable | help: use a float literal: `1_000_070.0` | = note: expected type `f64` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-53280-expected-float-found-integer-literal.rs:18:30 | LL | let negative_nine: f32 = -9; | ^^ | | | expected f32, found integral variable | help: use a float literal: `-9.0` | = note: expected type `f32` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-53280-expected-float-found-integer-literal.rs:25:30 | LL | let sixteen_again: f64 = 0x10; | ^^^^ expected f64, found integral variable | = note: expected type `f64` found type `{integer}` error[E0308]: mismatched types --> $DIR/issue-53280-expected-float-found-integer-literal.rs:27:30 | LL | let and_once_more: f32 = 0o20; | ^^^^ expected f32, found integral variable | = note: expected type `f32` found type `{integer}` error: aborting due to 5 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_53283"}], "negative_passages": []} {"query_id": "q-en-rust-b6ed28e355d7ad2031e85bed094107dee3961e955feecccfeab34da8693d329a", "query": "We should only pass stage0 in general, should be sufficient for all use cases. cc for awareness and in case there's any concerns", "positive_passages": [{"docid": "doc-en-rust-19c3dfa8bc32bc9a9584f2e3bafd8a4a997be1399d8b29b2018a77b098620cf3", "text": "let mut cmd = Command::new(rustc); cmd.args(&args) .arg(\"--cfg\") .arg(format!(\"stage{}\", stage)) .env(bootstrap::util::dylib_path_var(), env::join_paths(&dylib_path).unwrap()); let mut maybe_crate = None; // Non-zero stages must all be treated uniformly to avoid problems when attempting to uplift // compiler libraries and such from stage 1 to 2. 
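To restate the suggestion in runnable form: the test file above is written to fail, while appending `.0` — exactly what the machine-applicable suggestion proposes — makes the bindings type-check. A tiny sketch reusing the test's variable names:

```rust
fn main() {
    // `let sixteen: f32 = 16;` is rejected: integer literals are never
    // implicitly converted to floats, which is what E0308 reports above.
    // The suggested fix is simply to write a float literal instead.
    let sixteen: f32 = 16.0;
    let a_million_and_seventy: f64 = 1_000_070.0;
    assert_eq!(sixteen, 16.0_f32);
    assert_eq!(a_million_and_seventy, 1_000_070.0_f64);
}
```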
if stage == \"0\" { cmd.arg(\"--cfg\").arg(\"bootstrap\"); } // Print backtrace in case of ICE if env::var(\"RUSTC_BACKTRACE_ON_ICE\").is_ok() && env::var(\"RUST_BACKTRACE\").is_err() { cmd.env(\"RUST_BACKTRACE\", \"1\");", "commid": "rust_pr_61494"}], "negative_passages": []} {"query_id": "q-en-rust-b6ed28e355d7ad2031e85bed094107dee3961e955feecccfeab34da8693d329a", "query": "We should only pass stage0 in general, should be sufficient for all use cases. cc for awareness and in case there's any concerns", "positive_passages": [{"docid": "doc-en-rust-1dbeda62b6c0057fe1bf796294382bf907a2c954f68a1c0b7d65a2b983beb7ff", "text": "if !up_to_date(src_file, dst_file) { let mut cmd = Command::new(&builder.initial_rustc); builder.run(cmd.env(\"RUSTC_BOOTSTRAP\", \"1\") .arg(\"--cfg\").arg(\"stage0\") .arg(\"--cfg\").arg(\"bootstrap\") .arg(\"--target\").arg(target) .arg(\"--emit=obj\") .arg(\"-o\").arg(dst_file)", "commid": "rust_pr_61494"}], "negative_passages": []} {"query_id": "q-en-rust-b6ed28e355d7ad2031e85bed094107dee3961e955feecccfeab34da8693d329a", "query": "We should only pass stage0 in general, should be sufficient for all use cases. cc for awareness and in case there's any concerns", "positive_passages": [{"docid": "doc-en-rust-2e59dab19486ef3b0a58bde31e33fe9a2c5887e33a1694173a7719ca60ec4a48", "text": "/// Returns the result of an unchecked addition, resulting in /// undefined behavior when `x + y > T::max_value()` or `x + y < T::min_value()`. #[cfg(not(stage0))] #[cfg(not(bootstrap))] pub fn unchecked_add(x: T, y: T) -> T; /// Returns the result of an unchecked substraction, resulting in /// undefined behavior when `x - y > T::max_value()` or `x - y < T::min_value()`. #[cfg(not(stage0))] #[cfg(not(bootstrap))] pub fn unchecked_sub(x: T, y: T) -> T; /// Returns the result of an unchecked multiplication, resulting in /// undefined behavior when `x * y > T::max_value()` or `x * y < T::min_value()`. #[cfg(not(stage0))] #[cfg(not(bootstrap))] pub fn unchecked_mul(x: T, y: T) -> T; /// Performs rotate left.", "commid": "rust_pr_61494"}], "negative_passages": []} {"query_id": "q-en-rust-b6ed28e355d7ad2031e85bed094107dee3961e955feecccfeab34da8693d329a", "query": "We should only pass stage0 in general, should be sufficient for all use cases. cc for awareness and in case there's any concerns", "positive_passages": [{"docid": "doc-en-rust-cc011ff0668bd2072e9de041c221f8ca0fe1d41c19577ea919454185c88153b7", "text": "#[derive(Copy, Clone, Eq, PartialEq, Ord, PartialOrd, Hash)] #[repr(transparent)] #[rustc_layout_scalar_valid_range_start(1)] #[cfg_attr(not(stage0), rustc_nonnull_optimization_guaranteed)] #[cfg_attr(not(bootstrap), rustc_nonnull_optimization_guaranteed)] pub struct $Ty($Int); }", "commid": "rust_pr_61494"}], "negative_passages": []} {"query_id": "q-en-rust-b6ed28e355d7ad2031e85bed094107dee3961e955feecccfeab34da8693d329a", "query": "We should only pass stage0 in general, should be sufficient for all use cases. 
cc for awareness and in case there's any concerns", "positive_passages": [{"docid": "doc-en-rust-f6d2dd2b46d27d24a9fdbb1535240e3889309298fdc1d6a6a64a0ee873ac2b23", "text": "#[stable(feature = \"nonnull\", since = \"1.25.0\")] #[repr(transparent)] #[rustc_layout_scalar_valid_range_start(1)] #[cfg_attr(not(stage0), rustc_nonnull_optimization_guaranteed)] #[cfg_attr(not(bootstrap), rustc_nonnull_optimization_guaranteed)] pub struct NonNull { pointer: *const T, }", "commid": "rust_pr_61494"}], "negative_passages": []} {"query_id": "q-en-rust-8d232c1bc8597181a68fbe6b5c9637ac3470f849adf48cac95261107cdb2a7fb", "query": "Let's say you have the following (simplified) code: () This results in the following error: The suggested fix will work, but already copies the data so the extra is wasteful. The error message should instead suggest . This is an error you can run into if was previously of type and you changed it to a slice after writing the rest of the code.\nNote: this kind of improvement applies to too: Err:", "positive_passages": [{"docid": "doc-en-rust-8ae11538bc4c12fa714430adecf84234046102af1e7ab41715005d23d939e9bb", "text": "if receiver.ends_with(&method_call) { None // do not suggest code that is already there (#53348) } else { Some(format!(\"{}{}\", receiver, method_call)) /* methods defined in `method_call_list` will overwrite `.clone()` in copy of `receiver` */ let method_call_list = [\".to_vec()\", \".to_string()\"]; if receiver.ends_with(\".clone()\") && method_call_list.contains(&method_call.as_str()){ // created copy of `receiver` because we don't want other // suggestion to get affected let mut new_receiver = receiver.clone(); let max_len = new_receiver.rfind(\".\").unwrap(); new_receiver.truncate(max_len); Some(format!(\"{}{}\", new_receiver, method_call)) } else { Some(format!(\"{}{}\", receiver, method_call)) } } }) .collect::>(); if !suggestions.is_empty() {", "commid": "rust_pr_54080"}], "negative_passages": []} {"query_id": "q-en-rust-8d232c1bc8597181a68fbe6b5c9637ac3470f849adf48cac95261107cdb2a7fb", "query": "Let's say you have the following (simplified) code: () This results in the following error: The suggested fix will work, but already copies the data so the extra is wasteful. The error message should instead suggest . This is an error you can run into if was previously of type and you changed it to a slice after writing the rest of the code.\nNote: this kind of improvement applies to too: Err:", "positive_passages": [{"docid": "doc-en-rust-30d510eaee38def757036da73a32135e0f9bc82b8e01c2314cfd0a981cbbac3e", "text": " // Copyright 2018 The Rust Project Developers. See the COPYRIGHT // file at the top-level directory of this distribution and at // http://rust-lang.org/COPYRIGHT. // // Licensed under the Apache License, Version 2.0 or the MIT license // , at your // option. This file may not be copied, modified, or distributed // except according to those terms. 
fn main() { let items = vec![1, 2, 3]; let ref_items: &[i32] = &items; let items_clone: Vec = ref_items.clone(); // in that case no suggestion will be triggered let items_clone_2:Vec = items.clone(); let s = \"hi\"; let string: String = s.clone(); // in that case no suggestion will be triggered let s2 = \"hi\"; let string_2: String = s2.to_string(); } ", "commid": "rust_pr_54080"}], "negative_passages": []} {"query_id": "q-en-rust-8d232c1bc8597181a68fbe6b5c9637ac3470f849adf48cac95261107cdb2a7fb", "query": "Let's say you have the following (simplified) code: () This results in the following error: The suggested fix will work, but already copies the data so the extra is wasteful. The error message should instead suggest . This is an error you can run into if was previously of type and you changed it to a slice after writing the rest of the code.\nNote: this kind of improvement applies to too: Err:", "positive_passages": [{"docid": "doc-en-rust-f17d79c8ff3435d5acbe2c9bdff658e15ecf333a3bffc532e0c2e9b8622b5cb7", "text": " error[E0308]: mismatched types --> $DIR/issue-53692.rs:13:37 | LL | let items_clone: Vec = ref_items.clone(); | ^^^^^^^^^^^^^^^^^ | | | expected struct `std::vec::Vec`, found &[i32] | help: try using a conversion method: `ref_items.to_vec()` | = note: expected type `std::vec::Vec` found type `&[i32]` error[E0308]: mismatched types --> $DIR/issue-53692.rs:19:30 | LL | let string: String = s.clone(); | ^^^^^^^^^ | | | expected struct `std::string::String`, found &str | help: try using a conversion method: `s.to_string()` | = note: expected type `std::string::String` found type `&str` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_54080"}], "negative_passages": []} {"query_id": "q-en-rust-071a6f23d74e89545056bec04a86fa4c3e0ba768d58aebabc11e9dcef69d9917", "query": "This example (reduced from ) either crashes LLVM or makes it emit an assertion: I suspect the loops aren't needed, but the control-flow there affects the bug in finicky ways. The name of the module appears to be important and tied to the fact that we end up emitting in a symbol name (because of the ), which gets mangled as (note the two dots). Among other things, changing the we use to mangle , to makes the bug go away. Assertion message (if LLVM assertions are enabled): Stack backtrace for SIGSEGV (if LLVM assertions are disabled): cc\nFound the culprit in : LLVM implicitly assumes that cannot show up randomly in symbol names, and rustc breaks that assumption - but LLVM never checks it, so it shares part of the blame here, IMO. Probably the correct thing to do for LLVM is to split the string at the last , instead of the first.\nWhy are computers\nThe work around I introduced with the above commit is a proposition of and fixes this issue with the segfault.\nSanitizing to and adding a debug assertion would also be a good idea.", "positive_passages": [{"docid": "doc-en-rust-965352e527425cb2e2428513ec0566e8d991617c8402e6b553c9518ec06ed767", "text": "// for ':' and '-' '-' | ':' => self.path.temp_buf.push('.'), // Avoid segmentation fault on some platforms, see #60925. 'm' if self.path.temp_buf.ends_with(\".llv\") => self.path.temp_buf.push_str(\"$6d$\"), // These are legal symbols 'a'..='z' | 'A'..='Z' | '0'..='9' | '_' | '.' 
| '$' => self.path.temp_buf.push(c),", "commid": "rust_pr_61195"}], "negative_passages": []} {"query_id": "q-en-rust-071a6f23d74e89545056bec04a86fa4c3e0ba768d58aebabc11e9dcef69d9917", "query": "This example (reduced from ) either crashes LLVM or makes it emit an assertion: I suspect the loops aren't needed, but the control-flow there affects the bug in finicky ways. The name of the module appears to be important and tied to the fact that we end up emitting in a symbol name (because of the ), which gets mangled as (note the two dots). Among other things, changing the we use to mangle , to makes the bug go away. Assertion message (if LLVM assertions are enabled): Stack backtrace for SIGSEGV (if LLVM assertions are disabled): cc\nFound the culprit in : LLVM implicitly assumes that cannot show up randomly in symbol names, and rustc breaks that assumption - but LLVM never checks it, so it shares part of the blame here, IMO. Probably the correct thing to do for LLVM is to split the string at the last , instead of the first.\nWhy are computers\nThe work around I introduced with the above commit is a proposition of and fixes this issue with the segfault.\nSanitizing to and adding a debug assertion would also be a good idea.", "positive_passages": [{"docid": "doc-en-rust-77f5d451307b19de0cd129129217f9d2bbe14fcca4d0b330af5b1f965f0d3ae4", "text": " // compile-pass // This test is the same code as in ui/symbol-names/issue-60925.rs but this checks that the // reproduction compiles successfully and doesn't segfault, whereas that test just checks that the // symbol mangling fix produces the correct result. fn dummy() {} mod llvm { pub(crate) struct Foo; } mod foo { pub(crate) struct Foo(T); impl Foo<::llvm::Foo> { pub(crate) fn foo() { for _ in 0..0 { for _ in &[::dummy()] { ::dummy(); ::dummy(); ::dummy(); } } } } pub(crate) fn foo() { Foo::foo(); Foo::foo(); } } pub fn foo() { foo::foo(); } fn main() {} ", "commid": "rust_pr_61195"}], "negative_passages": []} {"query_id": "q-en-rust-071a6f23d74e89545056bec04a86fa4c3e0ba768d58aebabc11e9dcef69d9917", "query": "This example (reduced from ) either crashes LLVM or makes it emit an assertion: I suspect the loops aren't needed, but the control-flow there affects the bug in finicky ways. The name of the module appears to be important and tied to the fact that we end up emitting in a symbol name (because of the ), which gets mangled as (note the two dots). Among other things, changing the we use to mangle , to makes the bug go away. Assertion message (if LLVM assertions are enabled): Stack backtrace for SIGSEGV (if LLVM assertions are disabled): cc\nFound the culprit in : LLVM implicitly assumes that cannot show up randomly in symbol names, and rustc breaks that assumption - but LLVM never checks it, so it shares part of the blame here, IMO. 
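The escaping trick in the mangler diff above can be shown outside the compiler. The toy function below is not rustc's code; it only demonstrates why rewriting an `m` that follows `.llv` as `$6d$` keeps the reserved `.llvm.` marker from ever appearing in an emitted name. The catch-all `$u$` fallback is a placeholder of my own.

```rust
// Toy sanitizer mirroring the idea in the quoted fix: '-' and ':' become '.',
// and an 'm' that would complete ".llvm" is written out as "$6d$" instead.
fn sanitize(name: &str) -> String {
    let mut out = String::new();
    for c in name.chars() {
        match c {
            '-' | ':' => out.push('.'),
            'm' if out.ends_with(".llv") => out.push_str("$6d$"),
            'a'..='z' | 'A'..='Z' | '0'..='9' | '_' | '.' | '$' => out.push(c),
            _ => out.push_str("$u$"), // purely illustrative fallback escape
        }
    }
    out
}

fn main() {
    assert_eq!(sanitize("foo-bar..llvm..Foo"), "foo.bar..llv$6d$..Foo");
    assert!(!sanitize("foo-bar..llvm..Foo").contains(".llvm."));
}
```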
Probably the correct thing to do for LLVM is to split the string at the last , instead of the first.\nWhy are computers\nThe work around I introduced with the above commit is a proposition of and fixes this issue with the segfault.\nSanitizing to and adding a debug assertion would also be a good idea.", "positive_passages": [{"docid": "doc-en-rust-39e443e712ffb8bcdbd2441c3cc3be7781dbf146dc4d371976481a7cba3543ad", "text": " #![feature(rustc_attrs)] // This test is the same code as in ui/issue-53912.rs but this test checks that the symbol mangling // fix produces the correct result, whereas that test just checks that the reproduction compiles // successfully and doesn't segfault fn dummy() {} mod llvm { pub(crate) struct Foo; } mod foo { pub(crate) struct Foo(T); impl Foo<::llvm::Foo> { #[rustc_symbol_name] //~^ ERROR _ZN11issue_609253foo36Foo$LT$issue_60925..llv$6d$..Foo$GT$3foo17h059a991a004536adE pub(crate) fn foo() { for _ in 0..0 { for _ in &[::dummy()] { ::dummy(); ::dummy(); ::dummy(); } } } } pub(crate) fn foo() { Foo::foo(); Foo::foo(); } } pub fn foo() { foo::foo(); } fn main() {} ", "commid": "rust_pr_61195"}], "negative_passages": []} {"query_id": "q-en-rust-071a6f23d74e89545056bec04a86fa4c3e0ba768d58aebabc11e9dcef69d9917", "query": "This example (reduced from ) either crashes LLVM or makes it emit an assertion: I suspect the loops aren't needed, but the control-flow there affects the bug in finicky ways. The name of the module appears to be important and tied to the fact that we end up emitting in a symbol name (because of the ), which gets mangled as (note the two dots). Among other things, changing the we use to mangle , to makes the bug go away. Assertion message (if LLVM assertions are enabled): Stack backtrace for SIGSEGV (if LLVM assertions are disabled): cc\nFound the culprit in : LLVM implicitly assumes that cannot show up randomly in symbol names, and rustc breaks that assumption - but LLVM never checks it, so it shares part of the blame here, IMO. Probably the correct thing to do for LLVM is to split the string at the last , instead of the first.\nWhy are computers\nThe work around I introduced with the above commit is a proposition of and fixes this issue with the segfault.\nSanitizing to and adding a debug assertion would also be a good idea.", "positive_passages": [{"docid": "doc-en-rust-46624c17f2abdd3cd74cc525304007f027a75abc1012a191f5ba70026e7a42eb", "text": " error: symbol-name(_ZN11issue_609253foo36Foo$LT$issue_60925..llv$6d$..Foo$GT$3foo17h059a991a004536adE) --> $DIR/issue-60925.rs:16:9 | LL | #[rustc_symbol_name] | ^^^^^^^^^^^^^^^^^^^^ error: aborting due to previous error ", "commid": "rust_pr_61195"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-b111cee88113aab44e0e7a620b5688b5e609d8d74eabb1430ecdb1a13081dc57", "text": " // check-pass // ignore-emscripten no llvm_asm! 
support #![feature(llvm_asm)] pub fn boot(addr: Option) { unsafe { llvm_asm!(\"mov sp, $0\"::\"r\" (addr)); } } fn main() {} ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-9bbda2951bd1d422a0299f321743b1ffc80223d3d2b4feb84ae91e1615131b62", "text": " // edition:2018 use std::sync::{Arc, Mutex}; pub async fn f(_: ()) {} pub async fn run() { let x: Arc> = unimplemented!(); f(*x.lock().unwrap()).await; } ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-0171650462e4b54ca96211827890dfed43585caa9d06c87d9b38aa84282859f0", "text": " // aux-build: issue_67893.rs // edition:2018 // dont-check-compiler-stderr // FIXME(#71222): Add above flag because of the difference of stderrs on some env. extern crate issue_67893; fn g(_: impl Send) {} fn main() { g(issue_67893::run()) //~^ ERROR: `std::sync::MutexGuard<'_, ()>` cannot be sent between threads safely } ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-2ddf6c58ddf4b601141935c77f885ff7fc81a67e0a3f28728e31de8e60e347d2", "text": " #![feature(intrinsics)] extern \"C\" { pub static FOO: extern \"rust-intrinsic\" fn(); } fn main() { FOO() //~ ERROR: use of extern static is unsafe } ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. 
Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-347db88b9583cffb99f514a6abdc7080778fc486bcce9801ef67e394bb032380", "text": " error[E0133]: use of extern static is unsafe and requires unsafe function or block --> $DIR/issue-28575.rs:8:5 | LL | FOO() | ^^^ use of extern static | = note: extern statics are not controlled by the Rust type system: invalid data, aliasing violations or data races will cause undefined behavior error: aborting due to previous error For more information about this error, try `rustc --explain E0133`. ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-39cfaec0d2fbd441504c7b401a9cf6dbd1948d7e157379fc88934e375e7cfb63", "text": " pub static TEST_STR: &'static str = \"Hello world\"; ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-36393fc0942fda621374bc29f1fc43a3537108ec899e36d6b2125f94ffa7d3ea", "query": "Take the following code: Attempting to compile this results in the following: I think the compiler should fail with something along the lines of \"you attempted to use a complex type () where I expected an immediate\".\nedit: not sure if these symptoms are the same, but here's what I stumbled across. The following code generates a similar result for me: The and struct seem to be required, else the ICE is not triggered. Output\nNo longer reproduces:", "positive_passages": [{"docid": "doc-en-rust-5c74c3295b7cbb090e71d42c4c3e57b5d4abae463790a7a89870ca49933cbeae", "text": " // aux-build: issue_24843.rs // check-pass extern crate issue_24843; static _TEST_STR_2: &'static str = &issue_24843::TEST_STR; fn main() {} ", "commid": "rust_pr_71182"}], "negative_passages": []} {"query_id": "q-en-rust-f6f81194b04e02f36c2b9b5ef5a1ab9c3091a225566ade2ff3ad7f4e3bffe465", "query": "This page: It's not obvious that reference equality is accomplished via implicit reference-pointer coercion plus . The docs mention this, but I would not look there to find reference equality. The reference primitive page is someplace I would look for info about reference equality. Update this page to discuss reference equality, and link to .\nCan I help with this issue?\nStill waiting for your reply\nSorry, it looks like got to this now, in", "positive_passages": [{"docid": "doc-en-rust-350e274ef6e82a00ba99357a3982b6ed9d0b64494cf2da840510109239afbd59", "text": "/// `&mut T` references can be freely coerced into `&T` references with the same referent type, and /// references with longer lifetimes can be freely coerced into references with shorter ones. /// /// Reference equality by address, instead of comparing the values pointed to, is accomplished via /// implicit reference-pointer coercion and raw pointer equality via [`ptr::eq`], while /// [`PartialEq`] compares values. 
/// /// [`ptr::eq`]: ptr/fn.eq.html /// [`PartialEq`]: cmp/trait.PartialEq.html /// /// ``` /// use std::ptr; /// /// let five = 5; /// let other_five = 5; /// let five_ref = &five; /// let same_five_ref = &five; /// let other_five_ref = &other_five; /// /// assert!(five_ref == same_five_ref); /// assert!(five_ref == other_five_ref); /// /// assert!(ptr::eq(five_ref, same_five_ref)); /// assert!(!ptr::eq(five_ref, other_five_ref)); /// ``` /// /// For more information on how to use references, see [the book's section on \"References and /// Borrowing\"][book-refs]. /// /// [book-refs]: ../book/second-edition/ch04-02-references-and-borrowing.html /// /// # Trait implementations /// /// The following traits are implemented for all `&T`, regardless of the type of its referent: /// /// * [`Copy`]", "commid": "rust_pr_54755"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) 
I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. 
Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-5bdedc812132e683bb84fce1ed07393ea44f05998df134b2dc1b437b1eafea15", "text": "}; if self.can_coerce(ref_ty, expected) { if let Ok(src) = cm.span_to_snippet(sp) { let sugg_expr = match expr.node { // parenthesize if needed (Issue #46756) let needs_parens = match expr.node { // parenthesize if needed (Issue #46756) hir::ExprKind::Cast(_, _) | hir::ExprKind::Binary(_, _, _) => format!(\"({})\", src), _ => src, hir::ExprKind::Binary(_, _, _) => true, // parenthesize borrows of range literals (Issue #54505) _ if self.is_range_literal(expr) => true, _ => false, }; let sugg_expr = if needs_parens { format!(\"({})\", src) } else { src }; if let Some(sugg) = self.can_use_as_ref(expr) { return Some(sugg); }", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . 
Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. 
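The desugarings described in the exchange above have straightforward user-visible equivalents, which is what the later run-rustfix tests rely on. A small sketch in plain Rust (no compiler internals) showing that the range literals and their explicit `std::ops` forms produce equal values; only `..=` goes through a constructor call rather than a struct literal:

```rust
fn main() {
    // `start..end` is the same value as the explicit struct literal.
    let a = 0..1;
    let b = std::ops::Range { start: 0, end: 1 };
    assert_eq!(a, b);

    // `start..=end` desugars through `RangeInclusive::new`, not a struct literal.
    let c = 0..=1;
    let d = std::ops::RangeInclusive::new(0, 1);
    assert_eq!(c, d);

    // `..` is just the unit struct `RangeFull`.
    let _e = ..;
    let _f = std::ops::RangeFull;
}
```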
It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-010a7dda86dbc7d94aff92302d4fd01089656e021988847660ef6a4996612f9f", "text": "None } /// This function checks if the specified expression is a built-in range literal. /// (See: `LoweringContext::lower_expr()` in `src/librustc/hir/lowering.rs`). fn is_range_literal(&self, expr: &hir::Expr) -> bool { use hir::{Path, QPath, ExprKind, TyKind}; // We support `::std::ops::Range` and `::core::ops::Range` prefixes let is_range_path = |path: &Path| { let mut segs = path.segments.iter() .map(|seg| seg.ident.as_str()); if let (Some(root), Some(std_core), Some(ops), Some(range), None) = (segs.next(), segs.next(), segs.next(), segs.next(), segs.next()) { // \"{{root}}\" is the equivalent of `::` prefix in Path root == \"{{root}}\" && (std_core == \"std\" || std_core == \"core\") && ops == \"ops\" && range.starts_with(\"Range\") } else { false } }; let span_is_range_literal = |span: &Span| { // Check whether a span corresponding to a range expression // is a range literal, rather than an explicit struct or `new()` call. 
let source_map = self.tcx.sess.source_map(); let end_point = source_map.end_point(*span); if let Ok(end_string) = source_map.span_to_snippet(end_point) { !(end_string.ends_with(\"}\") || end_string.ends_with(\")\")) } else { false } }; match expr.node { // All built-in range literals but `..=` and `..` desugar to Structs ExprKind::Struct(QPath::Resolved(None, ref path), _, _) | // `..` desugars to its struct path ExprKind::Path(QPath::Resolved(None, ref path)) => { return is_range_path(&path) && span_is_range_literal(&expr.span); } // `..=` desugars into `::std::ops::RangeInclusive::new(...)` ExprKind::Call(ref func, _) => { if let ExprKind::Path(QPath::TypeRelative(ref ty, ref segment)) = func.node { if let TyKind::Path(QPath::Resolved(None, ref path)) = ty.node { let call_to_new = segment.ident.as_str() == \"new\"; return is_range_path(&path) && span_is_range_literal(&expr.span) && call_to_new; } } } _ => {} } false } pub fn check_for_cast(&self, err: &mut DiagnosticBuilder<'tcx>, expr: &hir::Expr,", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) 
I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. 
Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-a5261fce819135d2e62386b006710dc10c88fb761e13852cd7a21c4da565b082", "text": " // run-rustfix // Regression test for changes introduced while fixing #54505 // This test uses non-literals for Ranges // (expecting no parens with borrow suggestion) use std::ops::RangeBounds; // take a reference to any built-in range fn take_range(_r: &impl RangeBounds) {} fn main() { take_range(&std::ops::Range { start: 0, end: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::Range { start: 0, end: 1 } take_range(&::std::ops::Range { start: 0, end: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::Range { start: 0, end: 1 } take_range(&std::ops::RangeFrom { start: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeFrom { start: 1 } take_range(&::std::ops::RangeFrom { start: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeFrom { start: 1 } take_range(&std::ops::RangeFull {}); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeFull {} take_range(&::std::ops::RangeFull {}); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeFull {} take_range(&std::ops::RangeInclusive::new(0, 1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeInclusive::new(0, 1) take_range(&::std::ops::RangeInclusive::new(0, 1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeInclusive::new(0, 1) take_range(&std::ops::RangeTo { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeTo { end: 5 } take_range(&::std::ops::RangeTo { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeTo { end: 5 } take_range(&std::ops::RangeToInclusive { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeToInclusive { end: 5 
} take_range(&::std::ops::RangeToInclusive { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeToInclusive { end: 5 } } ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. 
The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. 
This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-fabdcc7d6d506a1ff52e772db76cc5afd2d4f6ddac47dc034c9d34aa36bae083", "text": " // run-rustfix // Regression test for changes introduced while fixing #54505 // This test uses non-literals for Ranges // (expecting no parens with borrow suggestion) use std::ops::RangeBounds; // take a reference to any built-in range fn take_range(_r: &impl RangeBounds) {} fn main() { take_range(std::ops::Range { start: 0, end: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::Range { start: 0, end: 1 } take_range(::std::ops::Range { start: 0, end: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::Range { start: 0, end: 1 } take_range(std::ops::RangeFrom { start: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeFrom { start: 1 } take_range(::std::ops::RangeFrom { start: 1 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeFrom { start: 1 } take_range(std::ops::RangeFull {}); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeFull {} take_range(::std::ops::RangeFull {}); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeFull {} take_range(std::ops::RangeInclusive::new(0, 1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeInclusive::new(0, 1) take_range(::std::ops::RangeInclusive::new(0, 1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeInclusive::new(0, 1) take_range(std::ops::RangeTo { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeTo { end: 5 } take_range(::std::ops::RangeTo { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeTo { end: 5 } take_range(std::ops::RangeToInclusive { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &std::ops::RangeToInclusive { end: 5 } take_range(::std::ops::RangeToInclusive { end: 5 }); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &::std::ops::RangeToInclusive { end: 5 } } ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). 
The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. 
Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. 
The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-91aa829566a5df7a09c8791c153248e9d3fbd45e1b398560fdad1754e8b5f6f6", "text": " error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:16:16 | LL | take_range(std::ops::Range { start: 0, end: 1 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::Range` | help: consider borrowing here: `&std::ops::Range { start: 0, end: 1 }` | = note: expected type `&_` found type `std::ops::Range<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:21:16 | LL | take_range(::std::ops::Range { start: 0, end: 1 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::Range` | help: consider borrowing here: `&::std::ops::Range { start: 0, end: 1 }` | = note: expected type `&_` found type `std::ops::Range<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:26:16 | LL | take_range(std::ops::RangeFrom { start: 1 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeFrom` | help: consider borrowing here: `&std::ops::RangeFrom { start: 1 }` | = note: expected type `&_` found type `std::ops::RangeFrom<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:31:16 | LL | take_range(::std::ops::RangeFrom { start: 1 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeFrom` | help: consider borrowing here: `&::std::ops::RangeFrom { start: 1 }` | = note: expected type `&_` found type `std::ops::RangeFrom<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:36:16 | LL | take_range(std::ops::RangeFull {}); | ^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeFull` | help: consider borrowing here: `&std::ops::RangeFull {}` | = note: expected type `&_` found type `std::ops::RangeFull` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:41:16 | LL | take_range(::std::ops::RangeFull {}); | ^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeFull` | help: consider borrowing here: `&::std::ops::RangeFull {}` | = note: expected type `&_` found type `std::ops::RangeFull` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:46:16 | LL | take_range(std::ops::RangeInclusive::new(0, 1)); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeInclusive` | help: consider borrowing here: `&std::ops::RangeInclusive::new(0, 1)` | = note: expected type `&_` found type `std::ops::RangeInclusive<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:51:16 | LL | take_range(::std::ops::RangeInclusive::new(0, 1)); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeInclusive` | help: consider borrowing here: `&::std::ops::RangeInclusive::new(0, 1)` | = note: expected type `&_` found type `std::ops::RangeInclusive<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:56:16 | LL | take_range(std::ops::RangeTo { end: 5 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeTo` | help: consider borrowing here: `&std::ops::RangeTo { end: 5 }` | = note: expected type `&_` found type `std::ops::RangeTo<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:61:16 | LL | take_range(::std::ops::RangeTo { end: 5 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | 
| expected reference, found struct `std::ops::RangeTo` | help: consider borrowing here: `&::std::ops::RangeTo { end: 5 }` | = note: expected type `&_` found type `std::ops::RangeTo<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:66:16 | LL | take_range(std::ops::RangeToInclusive { end: 5 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeToInclusive` | help: consider borrowing here: `&std::ops::RangeToInclusive { end: 5 }` | = note: expected type `&_` found type `std::ops::RangeToInclusive<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-literals.rs:71:16 | LL | take_range(::std::ops::RangeToInclusive { end: 5 }); | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | | | expected reference, found struct `std::ops::RangeToInclusive` | help: consider borrowing here: `&::std::ops::RangeToInclusive { end: 5 }` | = note: expected type `&_` found type `std::ops::RangeToInclusive<{integer}>` error: aborting due to 12 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) 
I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. 
Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-cd4d8604ae155a76c3a701af59bd5abb29f82a20a0ace52c2091a5ddc27bdc8f", "text": " // error-pattern: `#[panic_handler]` function required, but not found // Regression test for #54505 - range borrowing suggestion had // incorrect syntax (missing parentheses). // This test doesn't use std // (so all Ranges resolve to core::ops::Range...) #![no_std] #![feature(lang_items)] use core::ops::RangeBounds; #[cfg(not(target_arch = \"wasm32\"))] #[lang = \"eh_personality\"] extern fn eh_personality() {} #[cfg(target_os = \"windows\")] #[lang = \"eh_unwind_resume\"] extern fn eh_unwind_resume() {} // take a reference to any built-in range fn take_range(_r: &impl RangeBounds) {} fn main() { take_range(0..1); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..1) take_range(1..); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(1..) take_range(..); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..) take_range(0..=1); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..=1) take_range(..5); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..5) take_range(..=42); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..=42) } ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. 
When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. 
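Not the actual compiler code, but a rough sketch of the shape being discussed: hard-code the handful of built-in range paths and wrap the borrow suggestion in parentheses whenever the borrowed expression is one of them. Every name below is made up for illustration; the real fix operates on HIR expressions rather than path strings.

```rust
/// Made-up stand-in for the compiler-side check: keep a hard-coded list of
/// the built-in range paths and, when the expression being borrowed is one
/// of them, wrap the suggested snippet in parentheses.
const RANGE_PATHS: &[&str] = &[
    "std::ops::Range",
    "std::ops::RangeFrom",
    "std::ops::RangeFull",
    "std::ops::RangeInclusive",
    "std::ops::RangeTo",
    "std::ops::RangeToInclusive",
];

fn is_builtin_range(path: &str) -> bool {
    // Accept the `core::ops::*` spelling too, for `#![no_std]` crates.
    let normalized = path.replacen("core::ops::", "std::ops::", 1);
    RANGE_PATHS.contains(&normalized.as_str())
}

fn borrow_suggestion(expr_path: &str, snippet: &str) -> String {
    if is_builtin_range(expr_path) {
        format!("&({})", snippet) // `..` binds more loosely than `&`
    } else {
        format!("&{}", snippet)
    }
}

fn main() {
    assert_eq!(borrow_suggestion("core::ops::Range", "0..1"), "&(0..1)");
    assert_eq!(borrow_suggestion("my::Thing", "value"), "&value");
}
```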
Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. 
The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-09a06697fbc4231a846de2c13d7cd29f61f5b8aad2cfd53cf82133cd865a5f3e", "text": " error: `#[panic_handler]` function required, but not found error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:28:16 | LL | take_range(0..1); | ^^^^ | | | expected reference, found struct `core::ops::Range` | help: consider borrowing here: `&(0..1)` | = note: expected type `&_` found type `core::ops::Range<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:33:16 | LL | take_range(1..); | ^^^ | | | expected reference, found struct `core::ops::RangeFrom` | help: consider borrowing here: `&(1..)` | = note: expected type `&_` found type `core::ops::RangeFrom<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:38:16 | LL | take_range(..); | ^^ | | | expected reference, found struct `core::ops::RangeFull` | help: consider borrowing here: `&(..)` | = note: expected type `&_` found type `core::ops::RangeFull` error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:43:16 | LL | take_range(0..=1); | ^^^^^ | | | expected reference, found struct `core::ops::RangeInclusive` | help: consider borrowing here: `&(0..=1)` | = note: expected type `&_` found type `core::ops::RangeInclusive<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:48:16 | LL | take_range(..5); | ^^^ | | | expected reference, found struct `core::ops::RangeTo` | help: consider borrowing here: `&(..5)` | = note: expected type `&_` found type `core::ops::RangeTo<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505-no-std.rs:53:16 | LL | take_range(..=42); | ^^^^^ | | | expected reference, found struct `core::ops::RangeToInclusive` | help: consider borrowing here: `&(..=42)` | = note: expected type `&_` found type `core::ops::RangeToInclusive<{integer}>` error: aborting due to 7 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . 
Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. 
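For anyone trying to see the non-uniformity for themselves, the surface syntax and the forms it desugars to can be compared directly in ordinary code using only the public `std::ops` API (this is not the HIR itself, just the observable equivalence):

```rust
use std::ops::{Range, RangeFull, RangeInclusive, RangeTo};

fn main() {
    // The explicit forms on the right are what the literals on the left
    // desugar to, which is why the compiler sees several different shapes
    // (struct literals vs. a `new` call) for uniform-looking surface syntax.
    assert_eq!(0..1, Range { start: 0, end: 1 });
    assert_eq!(..5, RangeTo { end: 5 });
    assert_eq!(0..=1, RangeInclusive::new(0, 1));
    let _full: RangeFull = ..;
}
```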
It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-d83ff6bebabce62b6f19ddf7d7d80425f810fcee1d4f1c009680bef6f8e641e7", "text": " // run-rustfix // Regression test for #54505 - range borrowing suggestion had // incorrect syntax (missing parentheses). use std::ops::RangeBounds; // take a reference to any built-in range fn take_range(_r: &impl RangeBounds) {} fn main() { take_range(&(0..1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..1) take_range(&(1..)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(1..) take_range(&(..)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..) take_range(&(0..=1)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..=1) take_range(&(..5)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..5) take_range(&(..=42)); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..=42) } ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . 
It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. 
Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. 
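The "needless parentheses" concern can be reproduced outside the compiler: the literal syntax needs the parentheses after `&`, while the explicit struct form it desugars to does not, so a fix that always parenthesises would be noisier than necessary for hand-written struct literals. A sketch only, with `take_range` again mirroring the regression test:

```rust
use std::ops::{Range, RangeBounds};

fn take_range(_r: &impl RangeBounds<i32>) {}

fn main() {
    // Literal syntax: the parentheses are required.
    take_range(&(0..1));
    // Hand-written desugared form: `&` applies directly, so a suggestion of
    // `&(std::ops::Range { start: 0, end: 1 })` would add parentheses that
    // the code does not need.
    take_range(&Range { start: 0, end: 1 });
}
```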
The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-3e1d92c8ffc40aa82d752a257b29ea2abfe0bcdf3bad19633b08b741135b7677", "text": " // run-rustfix // Regression test for #54505 - range borrowing suggestion had // incorrect syntax (missing parentheses). use std::ops::RangeBounds; // take a reference to any built-in range fn take_range(_r: &impl RangeBounds) {} fn main() { take_range(0..1); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..1) take_range(1..); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(1..) take_range(..); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..) take_range(0..=1); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(0..=1) take_range(..5); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..5) take_range(..=42); //~^ ERROR mismatched types [E0308] //~| HELP consider borrowing here //~| SUGGESTION &(..=42) } ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-cb5ba9fb918e6d473b3423374e7b2ae159e02888b70f5c535e473ff4893edd3d", "query": "is an incorrect suggestion because has higher precedence than . It should suggest .\nIf someone wants to tackle this issue, it should be straightforward to fix. In: there should be another branch for , where the path of the call is . This code is a good reference: This issue affects the other range syntaxes too, so those should also be fixed (although these are not \u2014 look at for clues).\nHi I'd like to tackle this and take the plunge into the Rust compiler code :)\ngreat! Let me know if you hit any snags or need any more tips. You'll want to add tests for these (in ). The is a good place for general information about the compiler. When you're done, include in the pull request and I'll review it!\nyou've anticipated most of my newbie questions already :) Thanks for that info, very useful! I'm now in the process of stressing my PC with lots of compiling to actually get started. Skimming the test code I'm really impressed by how easy it is to add \"UI\" tests for these sorts of things (compile this, expect this error on this line). Snazzy! That's all great, but do you anticipate I should also add \"deeper\" tests? I guess I'm trying to understand if the scope of this is just pure UI or do I need to touch some syntax parsing/immediate representations, since you mentioned code in . Because the compiler is emitting a syntax suggestion, we might want to assert that the suggestion is correct syntax in the first place. I don't know if that's how deep the rabbit hole goes and if there's a facility to help out, but my hunch is that since we have all of the compiler at hand, why not use it to validate the suggestion syntax? EDIT: OK, reading up on adding new tests to the compiler I encountered mention of , and it seems to be sort-of what I was after.\nThe issue here is just a diagnostic issue, so checking that we're getting the updated hint is enough. The code in is just helpful to see what ranges like and are desugared into, so you can catch them in \u2014 you shouldn't need to modify anything (apart from the test) outside of . It would technically be possible to do something like this, but it would probably be more trouble than it's worth. (I'm not aware of a facility to do this easily already.) 
I think manually going through the possible s and checking that the parenthesisation behaviour is correct in each case would be a lot simpler: if we handle the different s in an exhaustive now (it's likely that if the ranges were handled incorrectly, there could be some others that are also missed at the moment), then this should be a one-time fix and the tests should prevent it regressing. A simple UI test should be sufficient here: will generate the output files, which will include all the relevant notes.\nThere was a similar problem with the suggestions, and it was fixed in . Further use of the operator precedence list can be seen in (which introduced the error fixed in the other PR).\nThanks for your guidance and much appreciated. I have dug into HIR and built-in ranges a bit more. Decided to check all the built-in range types and concluded they are all indeed affected by this problem. The following test I came up with demonstrates my intended fix (fails with nightly as well as stage 1 compiled from master): None of the fix suggestions, when applied, constitute valid syntax. My test checks for syntax I believe should be suggested by the compiler. I'm keen to tackle all of the range types at once in the scope of this issue. The problem I see though is just how wildly different each of the range types is represented in HIR. Adding a call just above this line: Revealed the following: for isfor isfor isfor isfor isfor is I'm quite bewildered by this non-uniformity and I guess I need some help unifying all of these into a elegant condition to tell \"is this a built-in range?\". I could probably brute-force it in a with explicit checks for , but maybe we can do better. Thanks in advance! EDIT realized that this non-uniformity was actually mentioned in the second comment from , but my plea still stands. Thanks for your patience.\nI think you can match on all of these with something along the lines of (pseudocode):\nRight, that's what I had in mind, but was also wondering if something like a check for trait would work better. I guess we want to match against built-in range literals though, and explicitly. Here's how I rationalize it: because they are parsed and desugared by the compiler, the check needs to be explicit and hardcode all paths to builtin ranges. Handle a special case with a special case.\nYes, this is where the lowering code comes in handy, because you can see exactly how the different syntax is desugared differently, to make sure you really are catching all the cases. Yes, I agree. It'd be nice if there was a more uniform way to handle them, but it's not convenient at the moment. You could include a reference back to in a comment to where the desugaring takes place, which will provide some motivation for matching in .\nAll clear now, thanks. I have succeeded in matching all range types and getting the test to pass. Needs some cleanups, but hopefully should be ready for PR soon. I have a few more questions before that though :smile: I have noticed that most of my code would actually be irrelevant if I re-used functions and constants defined in , e.g.: Can't deny that I have used these as inspiration / guideline (albeit I did not do a stupid copy-paste). What's the policy on using clippy code (clippy is a submodule of main repo)? This is purely in the interest of pursuing . The diff is not that big (52 new lines to , fairly specific to matching paths) -- but still, my clean code sense tingles because of possible DRY violation. 
Another conundrum I have is whether we should address and paths (and if so, how to work out we're not using at compile time)?\nUnfortunately, there's not much sharing from clippy to rustc. I wouldn't worry about it. If you can encapsulate it easily, it's possible that clippy could make use of the code in rustc instead. The function in picks the correct path for / \u2014 you should be able to use that.\nThanks for the suggestion for , unfortunately I can't really make the association of and . It seems to me they are disjoint -- in : lowering happens in , whereas typechecking happens in . in takes : The it uses needs an awful lot of things to be instantiated. Not sure if I can get the hold of it all in . Another problem I encountered while writing tests is that the changes I have made will affect suggestions for code like this: With my changes, the suggestion would be . I believe the correct suggestion would not involve the needless parentheses. This is due to not differentiating between \"de-sugared\" form coming from and between these de-sugared forms being supplied explicitly as input source code. With this in mind, I think current code is not PR worthy just yet. If of interest, here's the branch:\nLeft a couple of comments on possible approaches on that commit. The code looks fine.", "positive_passages": [{"docid": "doc-en-rust-deefe676370d584647586c621f1a526d83a84e2c7c25a2c6360858c1953fe721", "text": " error[E0308]: mismatched types --> $DIR/issue-54505.rs:14:16 | LL | take_range(0..1); | ^^^^ | | | expected reference, found struct `std::ops::Range` | help: consider borrowing here: `&(0..1)` | = note: expected type `&_` found type `std::ops::Range<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505.rs:19:16 | LL | take_range(1..); | ^^^ | | | expected reference, found struct `std::ops::RangeFrom` | help: consider borrowing here: `&(1..)` | = note: expected type `&_` found type `std::ops::RangeFrom<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505.rs:24:16 | LL | take_range(..); | ^^ | | | expected reference, found struct `std::ops::RangeFull` | help: consider borrowing here: `&(..)` | = note: expected type `&_` found type `std::ops::RangeFull` error[E0308]: mismatched types --> $DIR/issue-54505.rs:29:16 | LL | take_range(0..=1); | ^^^^^ | | | expected reference, found struct `std::ops::RangeInclusive` | help: consider borrowing here: `&(0..=1)` | = note: expected type `&_` found type `std::ops::RangeInclusive<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505.rs:34:16 | LL | take_range(..5); | ^^^ | | | expected reference, found struct `std::ops::RangeTo` | help: consider borrowing here: `&(..5)` | = note: expected type `&_` found type `std::ops::RangeTo<{integer}>` error[E0308]: mismatched types --> $DIR/issue-54505.rs:39:16 | LL | take_range(..=42); | ^^^^^ | | | expected reference, found struct `std::ops::RangeToInclusive` | help: consider borrowing here: `&(..=42)` | = note: expected type `&_` found type `std::ops::RangeToInclusive<{integer}>` error: aborting due to 6 previous errors For more information about this error, try `rustc --explain E0308`. ", "commid": "rust_pr_54734"}], "negative_passages": []} {"query_id": "q-en-rust-f6a2487dfe6c6ad2d3368030f01d2978c73ed3e529469a986a5d27e3cc8f4722", "query": "The following code emits an unsquelchable warning. This warning should be a part of .\nempty trait list in should have a lint name ... 
and that name is .\nDoes not appear to squelch the warning for me...\nIt seems eminently reasonable to me to include in since it is an attribute applied with no effect; so let's kick off fcp for that. merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nImplementing this as part of the existing would be complicated by , but it might also be possible to implement where the existing \"empty trait list\" warning is issued (possibly as a ).\nI need to make a buffered early lint for the same thing with .\nSeems reasonable. And a macro that might end up with an empty derive list should have no problem turning off locally.\nreviewed I certainly agree with moving warnings to lints.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.", "positive_passages": [{"docid": "doc-en-rust-b1e8bfb7983ffc24a235c4315cc4f0052233aeb451593610c91d42a1700d3b02", "text": "match attr.parse_list(cx.parse_sess, |parser| parser.parse_path_allowing_meta(PathStyle::Mod)) { Ok(ref traits) if traits.is_empty() => { cx.span_warn(attr.span, \"empty trait list in `derive`\"); false } Ok(traits) => { result.extend(traits); true", "commid": "rust_pr_62051"}], "negative_passages": []} {"query_id": "q-en-rust-f6a2487dfe6c6ad2d3368030f01d2978c73ed3e529469a986a5d27e3cc8f4722", "query": "The following code emits an unsquelchable warning. This warning should be a part of .\nempty trait list in should have a lint name ... and that name is .\nDoes not appear to squelch the warning for me...\nIt seems eminently reasonable to me to include in since it is an attribute applied with no effect; so let's kick off fcp for that. merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nImplementing this as part of the existing would be complicated by , but it might also be possible to implement where the existing \"empty trait list\" warning is issued (possibly as a ).\nI need to make a buffered early lint for the same thing with .\nSeems reasonable. And a macro that might end up with an empty derive list should have no problem turning off locally.\nreviewed I certainly agree with moving warnings to lints.\n:bell: This is now entering its final comment period, as per the . 
:bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.", "positive_passages": [{"docid": "doc-en-rust-ff4c4961f90f9ede2604b1e03a1a640b056b00519a7c57d87a4e59071c5e1bb1", "text": " // compile-pass #![deny(unused)] #[derive()] //~ WARNING empty trait list in `derive` struct Bar; #[derive()] //~ ERROR unused attribute struct _Bar; pub fn main() {}", "commid": "rust_pr_62051"}], "negative_passages": []} {"query_id": "q-en-rust-f6a2487dfe6c6ad2d3368030f01d2978c73ed3e529469a986a5d27e3cc8f4722", "query": "The following code emits an unsquelchable warning. This warning should be a part of .\nempty trait list in should have a lint name ... and that name is .\nDoes not appear to squelch the warning for me...\nIt seems eminently reasonable to me to include in since it is an attribute applied with no effect; so let's kick off fcp for that. merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nImplementing this as part of the existing would be complicated by , but it might also be possible to implement where the existing \"empty trait list\" warning is issued (possibly as a ).\nI need to make a buffered early lint for the same thing with .\nSeems reasonable. And a macro that might end up with an empty derive list should have no problem turning off locally.\nreviewed I certainly agree with moving warnings to lints.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.", "positive_passages": [{"docid": "doc-en-rust-8483b7ca7bfacacadfe5611c6b8f0b5ec642263cdf8381fbadd80c47dc249fa5", "text": " warning: empty trait list in `derive` error: unused attribute --> $DIR/deriving-meta-empty-trait-list.rs:3:1 | LL | #[derive()] | ^^^^^^^^^^^ | note: lint level defined here --> $DIR/deriving-meta-empty-trait-list.rs:1:9 | LL | #![deny(unused)] | ^^^^^^ = note: #[deny(unused_attributes)] implied by #[deny(unused)] error: aborting due to previous error ", "commid": "rust_pr_62051"}], "negative_passages": []} {"query_id": "q-en-rust-f6a2487dfe6c6ad2d3368030f01d2978c73ed3e529469a986a5d27e3cc8f4722", "query": "The following code emits an unsquelchable warning. This warning should be a part of .\nempty trait list in should have a lint name ... and that name is .\nDoes not appear to squelch the warning for me...\nIt seems eminently reasonable to me to include in since it is an attribute applied with no effect; so let's kick off fcp for that. merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! 
See for info about what commands tagged team members can give me.\nImplementing this as part of the existing would be complicated by , but it might also be possible to implement where the existing \"empty trait list\" warning is issued (possibly as a ).\nI need to make a buffered early lint for the same thing with .\nSeems reasonable. And a macro that might end up with an empty derive list should have no problem turning off locally.\nreviewed I certainly agree with moving warnings to lints.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.", "positive_passages": [{"docid": "doc-en-rust-1848e1e7afe437f78b18aff5165f39a3a77e6a781c23e7abdcdb9d55605dc510", "text": "#[derive(Copy=\"bad\")] //~ ERROR expected one of `)`, `,`, or `::`, found `=` struct Test2; #[derive()] //~ WARNING empty trait list struct Test3; #[derive] //~ ERROR malformed `derive` attribute input struct Test4;", "commid": "rust_pr_62051"}], "negative_passages": []} {"query_id": "q-en-rust-f6a2487dfe6c6ad2d3368030f01d2978c73ed3e529469a986a5d27e3cc8f4722", "query": "The following code emits an unsquelchable warning. This warning should be a part of .\nempty trait list in should have a lint name ... and that name is .\nDoes not appear to squelch the warning for me...\nIt seems eminently reasonable to me to include in since it is an attribute applied with no effect; so let's kick off fcp for that. merge\nTeam member has proposed to merge this. The next step is review by the rest of the tagged teams: [x] [x] [x] [x] [x] [ ] [x] [x] [x] [ ] No concerns currently listed. Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up! See for info about what commands tagged team members can give me.\nImplementing this as part of the existing would be complicated by , but it might also be possible to implement where the existing \"empty trait list\" warning is issued (possibly as a ).\nI need to make a buffered early lint for the same thing with .\nSeems reasonable. And a macro that might end up with an empty derive list should have no problem turning off locally.\nreviewed I certainly agree with moving warnings to lints.\n:bell: This is now entering its final comment period, as per the . :bell:\nThe final comment period, with a disposition to merge, as per the , is now complete.", "positive_passages": [{"docid": "doc-en-rust-5b2a5b41e6f8e84913ace0fafeb8d838d49e837f1b1294ba0f4d507c7b97995d", "text": "LL | #[derive(Copy=\"bad\")] | ^ expected one of `)`, `,`, or `::` here warning: empty trait list in `derive` --> $DIR/malformed-derive-entry.rs:7:1 | LL | #[derive()] | ^^^^^^^^^^^ error: malformed `derive` attribute input --> $DIR/malformed-derive-entry.rs:10:1 --> $DIR/malformed-derive-entry.rs:7:1 | LL | #[derive] | ^^^^^^^^^ help: missing traits to be derived: `#[derive(Trait1, Trait2, ...)]`", "commid": "rust_pr_62051"}], "negative_passages": []} {"query_id": "q-en-rust-5efd07353549697879a475ea121fa180dd2b22aa126045980aeccf8659ef586e", "query": "The documentation for with say \"no re-ordering of reads and writes across this point is allowed\". As I understand it, for a single thread doesn't give anything more than . E.g., a store before the fence could be moved past a load after then fence. Is that correct? cc\nHmm, that's a good point, though the details are a little subtle. 
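As a concrete picture of what the proposal buys, hedged on the assumption that the empty-derive report really is folded into `unused_attributes` as discussed here: once it is an ordinary lint, the usual allow/deny machinery applies, which is exactly what a macro author with a possibly-empty derive list needs.

```rust
// Sketch of the intended ergonomics, assuming the empty-derive report is
// reported through the `unused_attributes` lint as proposed in this thread.

#[allow(unused_attributes)] // a macro that may expand to an empty list can opt out locally
#[derive()]
struct GeneratedMaybeEmpty;

#[derive(Debug)] // non-empty lists keep working as before
struct Normal;

fn main() {
    let _ = GeneratedMaybeEmpty;
    println!("{:?}", Normal);
}
```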
From the : I think what this translates into is that guarantees that: no subsequent operations can be moved before this operation if it is a load, and no operations can be moved after this operation if it is a store. The italic parts there aren't present in the current documentation, so that should probably be rectified. I think some annotated source code would really help here!\nIt actually depends on the other operation, does it not? stores after the fence can be moved up, but stores cannot. TBH I have a hard time thinking in terms of reorderings; I usually think in terms of what actually happens in the underlying memory model. Some of these models can be fully characterized by the allowed reorderings, but not all of them can (as far as we know). And anyway, SC fences are broken in the C11 memory model and probably going to change in the next version, so...^^ On top of that, all are in a total order, and that is the entire point of . (The website says it but you left it away in your quote. Just making sure nobody thinks that a load is the same as an load. It is not.)\nI'm actually not entirely sure about that. Another store to the same memory location shouldn't be ordered before, but I don't see from the reference text why an unrelated store with couldn't be. I'd be happy for this to be rephrased this in terms of the memory model, but I'm not entirely sure how we'd even do that. Proposals welcome. I think talking about what can and cannot happen, with examples, is probably the most understandable way to present what the different orderings do. I left off the total order part because it didn't seem directly related to the original question. But you're right that it's a vital part of that shouldn't get lost in the docs!\nIt says The sequentially-consistent ordering consists of all operation, across all locations, and the order in which they are observed must be consistent across all threads. In particular, if one thread performs two operations (i.e., they are ordered by program-order), then the order in which it does them must match the order in which every other thread observes them. That rules out reordering them. This is the sole justification for the existence of .\nYes, good point. So how about we move the bullet point to bottom and change it to say", "positive_passages": [{"docid": "doc-en-rust-6be86a4b17407b66591be990ebb5061a521c69a6a117335881d19c20e8b0685b", "text": "/// An atomic fence. /// /// Depending on the specified order, a fence prevents the compiler and CPU from /// reordering certain types of memory operations around it. /// That creates synchronizes-with relationships between it and atomic operations /// or fences in other threads. /// Fences create synchronization between themselves and atomic operations or fences in other /// threads. To achieve this, a fence prevents the compiler and CPU from reordering certain types of /// memory operations around it. /// /// A fence 'A' which has (at least) [`Release`] ordering semantics, synchronizes /// with a fence 'B' with (at least) [`Acquire`] semantics, if and only if there", "commid": "rust_pr_129856"}], "negative_passages": []} {"query_id": "q-en-rust-5efd07353549697879a475ea121fa180dd2b22aa126045980aeccf8659ef586e", "query": "The documentation for with say \"no re-ordering of reads and writes across this point is allowed\". As I understand it, for a single thread doesn't give anything more than . E.g., a store before the fence could be moved past a load after then fence. Is that correct? 
cc\nHmm, that's a good point, though the details are a little subtle. From the : I think what this translates into is that guarantees that: no subsequent operations can be moved before this operation if it is a load, and no operations can be moved after this operation if it is a store. The italic parts there aren't present in the current documentation, so that should probably be rectified. I think some annotated source code would really help here!\nIt actually depends on the other operation, does it not? stores after the fence can be moved up, but stores cannot. TBH I have a hard time thinking in terms of reorderings; I usually think in terms of what actually happens in the underlying memory model. Some of these models can be fully characterized by the allowed reorderings, but not all of them can (as far as we know). And anyway, SC fences are broken in the C11 memory model and probably going to change in the next version, so...^^ On top of that, all are in a total order, and that is the entire point of . (The website says it but you left it away in your quote. Just making sure nobody thinks that a load is the same as an load. It is not.)\nI'm actually not entirely sure about that. Another store to the same memory location shouldn't be ordered before, but I don't see from the reference text why an unrelated store with couldn't be. I'd be happy for this to be rephrased this in terms of the memory model, but I'm not entirely sure how we'd even do that. Proposals welcome. I think talking about what can and cannot happen, with examples, is probably the most understandable way to present what the different orderings do. I left off the total order part because it didn't seem directly related to the original question. But you're right that it's a vital part of that shouldn't get lost in the docs!\nIt says The sequentially-consistent ordering consists of all operation, across all locations, and the order in which they are observed must be consistent across all threads. In particular, if one thread performs two operations (i.e., they are ordered by program-order), then the order in which it does them must match the order in which every other thread observes them. That rules out reordering them. This is the sole justification for the existence of .\nYes, good point. So how about we move the bullet point to bottom and change it to say", "positive_passages": [{"docid": "doc-en-rust-97274b5e8277dd4b054060cc97f794d1bdd8dc967f7658c80bfe4ddba57790e0", "text": "/// } /// ``` /// /// Note that in the example above, it is crucial that the accesses to `x` are atomic. Fences cannot /// be used to establish synchronization among non-atomic accesses in different threads. However, /// thanks to the happens-before relationship between A and B, any non-atomic accesses that /// happen-before A are now also properly synchronized with any non-atomic accesses that /// happen-after B. /// /// Atomic operations with [`Release`] or [`Acquire`] semantics can also synchronize /// with a fence. ///", "commid": "rust_pr_129856"}], "negative_passages": []} {"query_id": "q-en-rust-5efd07353549697879a475ea121fa180dd2b22aa126045980aeccf8659ef586e", "query": "The documentation for with say \"no re-ordering of reads and writes across this point is allowed\". As I understand it, for a single thread doesn't give anything more than . E.g., a store before the fence could be moved past a load after then fence. Is that correct? cc\nHmm, that's a good point, though the details are a little subtle. 
From the : I think what this translates into is that guarantees that: no subsequent operations can be moved before this operation if it is a load, and no operations can be moved after this operation if it is a store. The italic parts there aren't present in the current documentation, so that should probably be rectified. I think some annotated source code would really help here!\nIt actually depends on the other operation, does it not? stores after the fence can be moved up, but stores cannot. TBH I have a hard time thinking in terms of reorderings; I usually think in terms of what actually happens in the underlying memory model. Some of these models can be fully characterized by the allowed reorderings, but not all of them can (as far as we know). And anyway, SC fences are broken in the C11 memory model and probably going to change in the next version, so...^^ On top of that, all are in a total order, and that is the entire point of . (The website says it but you left it away in your quote. Just making sure nobody thinks that a load is the same as an load. It is not.)\nI'm actually not entirely sure about that. Another store to the same memory location shouldn't be ordered before, but I don't see from the reference text why an unrelated store with couldn't be. I'd be happy for this to be rephrased this in terms of the memory model, but I'm not entirely sure how we'd even do that. Proposals welcome. I think talking about what can and cannot happen, with examples, is probably the most understandable way to present what the different orderings do. I left off the total order part because it didn't seem directly related to the original question. But you're right that it's a vital part of that shouldn't get lost in the docs!\nIt says The sequentially-consistent ordering consists of all operation, across all locations, and the order in which they are observed must be consistent across all threads. In particular, if one thread performs two operations (i.e., they are ordered by program-order), then the order in which it does them must match the order in which every other thread observes them. That rules out reordering them. This is the sole justification for the existence of .\nYes, good point. So how about we move the bullet point to bottom and change it to say", "positive_passages": [{"docid": "doc-en-rust-f2ded8438f775bc738785c63a82c3fb960f5d59312583f0db4c45e6e078cd021", "text": "} } /// A compiler memory fence. /// A \"compiler-only\" atomic fence. /// /// `compiler_fence` does not emit any machine code, but restricts the kinds /// of memory re-ordering the compiler is allowed to do. Specifically, depending on /// the given [`Ordering`] semantics, the compiler may be disallowed from moving reads /// or writes from before or after the call to the other side of the call to /// `compiler_fence`. Note that it does **not** prevent the *hardware* /// from doing such re-ordering. This is not a problem in a single-threaded, /// execution context, but when other threads may modify memory at the same /// time, stronger synchronization primitives such as [`fence`] are required. /// Like [`fence`], this function establishes synchronization with other atomic operations and /// fences. However, unlike [`fence`], `compiler_fence` only establishes synchronization with /// operations *in the same thread*. This may at first sound rather useless, since code within a /// thread is typically already totally ordered and does not need any further synchronization. 
/// However, there are cases where code can run on the same thread without being ordered: /// - The most common case is that of a *signal handler*: a signal handler runs in the same thread /// as the code it interrupted, but it is not ordered with respect to that code. `compiler_fence` /// can be used to establish synchronization between a thread and its signal handler, the same way /// that `fence` can be used to establish synchronization across threads. /// - Similar situations can arise in embedded programming with interrupt handlers, or in custom /// implementations of preemptive green threads. In general, `compiler_fence` can establish /// synchronization with code that is guaranteed to run on the same hardware CPU. /// /// The re-ordering prevented by the different ordering semantics are: /// See [`fence`] for how a fence can be used to achieve synchronization. Note that just like /// [`fence`], synchronization still requires atomic operations to be used in both threads -- it is /// not possible to perform synchronization entirely with fences and non-atomic operations. /// /// - with [`SeqCst`], no re-ordering of reads and writes across this point is allowed. /// - with [`Release`], preceding reads and writes cannot be moved past subsequent writes. /// - with [`Acquire`], subsequent reads and writes cannot be moved ahead of preceding reads. /// - with [`AcqRel`], both of the above rules are enforced. /// `compiler_fence` does not emit any machine code, but restricts the kinds of memory re-ordering /// the compiler is allowed to do. `compiler_fence` corresponds to [`atomic_signal_fence`] in C and /// C++. /// /// `compiler_fence` is generally only useful for preventing a thread from /// racing *with itself*. That is, if a given thread is executing one piece /// of code, and is then interrupted, and starts executing code elsewhere /// (while still in the same thread, and conceptually still on the same /// core). In traditional programs, this can only occur when a signal /// handler is registered. In more low-level code, such situations can also /// arise when handling interrupts, when implementing green threads with /// pre-emption, etc. Curious readers are encouraged to read the Linux kernel's /// discussion of [memory barriers]. /// [`atomic_signal_fence`]: https://en.cppreference.com/w/cpp/atomic/atomic_signal_fence /// /// # Panics ///", "commid": "rust_pr_129856"}], "negative_passages": []} {"query_id": "q-en-rust-5efd07353549697879a475ea121fa180dd2b22aa126045980aeccf8659ef586e", "query": "The documentation for with say \"no re-ordering of reads and writes across this point is allowed\". As I understand it, for a single thread doesn't give anything more than . E.g., a store before the fence could be moved past a load after then fence. Is that correct? cc\nHmm, that's a good point, though the details are a little subtle. From the : I think what this translates into is that guarantees that: no subsequent operations can be moved before this operation if it is a load, and no operations can be moved after this operation if it is a store. The italic parts there aren't present in the current documentation, so that should probably be rectified. I think some annotated source code would really help here!\nIt actually depends on the other operation, does it not? stores after the fence can be moved up, but stores cannot. TBH I have a hard time thinking in terms of reorderings; I usually think in terms of what actually happens in the underlying memory model. 
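The rewritten `compiler_fence` documentation above singles out the signal-handler case. The sketch below is a freely adapted illustration of that same-thread pattern, not code from the PR: the names are ours, and the "handler" is called directly at the end only so the example stays runnable on its own.

```rust
use std::sync::atomic::{compiler_fence, AtomicBool, AtomicUsize, Ordering};

static IMPORTANT: AtomicUsize = AtomicUsize::new(0);
static IS_READY: AtomicBool = AtomicBool::new(false);

fn prepare() {
    IMPORTANT.store(42, Ordering::Relaxed);
    // Keep the compiler from sinking the data store below the flag store:
    // a signal handler interrupting this thread must never observe the flag
    // set while the data write is still pending.
    compiler_fence(Ordering::Release);
    IS_READY.store(true, Ordering::Relaxed);
}

fn signal_handler() {
    if IS_READY.load(Ordering::Relaxed) {
        compiler_fence(Ordering::Acquire);
        assert_eq!(IMPORTANT.load(Ordering::Relaxed), 42);
    }
}

fn main() {
    prepare();
    // In real code this would run asynchronously in a signal handler registered
    // for this thread; it is invoked directly here only to keep the sketch runnable.
    signal_handler();
}
```

Because both functions run on the same hardware CPU, only compiler reordering has to be prevented, which is exactly the gap `compiler_fence` fills; across different CPUs a full `fence` would be required instead.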
Some of these models can be fully characterized by the allowed reorderings, but not all of them can (as far as we know). And anyway, SC fences are broken in the C11 memory model and probably going to change in the next version, so...^^ On top of that, all are in a total order, and that is the entire point of . (The website says it but you left it away in your quote. Just making sure nobody thinks that a load is the same as an load. It is not.)\nI'm actually not entirely sure about that. Another store to the same memory location shouldn't be ordered before, but I don't see from the reference text why an unrelated store with couldn't be. I'd be happy for this to be rephrased this in terms of the memory model, but I'm not entirely sure how we'd even do that. Proposals welcome. I think talking about what can and cannot happen, with examples, is probably the most understandable way to present what the different orderings do. I left off the total order part because it didn't seem directly related to the original question. But you're right that it's a vital part of that shouldn't get lost in the docs!\nIt says The sequentially-consistent ordering consists of all operation, across all locations, and the order in which they are observed must be consistent across all threads. In particular, if one thread performs two operations (i.e., they are ordered by program-order), then the order in which it does them must match the order in which every other thread observes them. That rules out reordering them. This is the sole justification for the existence of .\nYes, good point. So how about we move the bullet point to bottom and change it to say", "positive_passages": [{"docid": "doc-en-rust-e27299259d8c0d9b110a533e632ead76661da767298bdb6195e85e307d9cad45", "text": "/// } /// } /// ``` /// /// [memory barriers]: https://www.kernel.org/doc/Documentation/memory-barriers.txt #[inline] #[stable(feature = \"compiler_fences\", since = \"1.21.0\")] #[rustc_diagnostic_item = \"compiler_fence\"]", "commid": "rust_pr_129856"}], "negative_passages": []} {"query_id": "q-en-rust-06b9f91040abc21cb9e390950d5bca38e42618b939816b97113b347662bbb9cb", "query": "Given: results in: Is this intentional due to some soundness issue with having it otherwise? (I cannot think of any...) We could fix this by attaching to manually. While having be structurally matchable is quite boring and useless on its lonesome it prevents other types that contain it from being structurally matchable. (not likely to occur very often, but when it occur it seems like it would be annoying...) r? cc", "positive_passages": [{"docid": "doc-en-rust-12e059053b5009502918c280af5a9d9acc15cc8163b1d6ad059652d6523c87e9", "text": "#![feature(const_transmute)] #![feature(reverse_bits)] #![feature(non_exhaustive)] #![feature(structural_match)] #[prelude_import] #[allow(unused)]", "commid": "rust_pr_55837"}], "negative_passages": []} {"query_id": "q-en-rust-06b9f91040abc21cb9e390950d5bca38e42618b939816b97113b347662bbb9cb", "query": "Given: results in: Is this intentional due to some soundness issue with having it otherwise? (I cannot think of any...) We could fix this by attaching to manually. While having be structurally matchable is quite boring and useless on its lonesome it prevents other types that contain it from being structurally matchable. (not likely to occur very often, but when it occur it seems like it would be annoying...) r? 
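To make the structural-match question above concrete, here is a small, self-contained example (the type names are ours, not from the issue) of the kind of `const` pattern that only compiles once `PhantomData` itself is treated as structurally matchable — the run-pass test further below exercises the same point directly.

```rust
use std::marker::PhantomData;

// A type that embeds `PhantomData` and derives the traits required for
// structural matching of its other fields.
#[derive(PartialEq, Eq)]
struct Tagged<T>(u32, PhantomData<T>);

const ZERO: Tagged<String> = Tagged(0, PhantomData);

fn is_zero(t: Tagged<String>) -> bool {
    // Using `ZERO` as a pattern requires `Tagged<String>` — and therefore its
    // `PhantomData` field — to be considered structurally matchable.
    match t {
        ZERO => true,
        _ => false,
    }
}

fn main() {
    assert!(is_zero(Tagged(0, PhantomData)));
    assert!(!is_zero(Tagged(1, PhantomData)));
}
```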
cc", "positive_passages": [{"docid": "doc-en-rust-ab132f350e2f8a1fa17eaefe802f367ba0b0d8dba0f1fc038588174fbbad910d", "text": "/// /// [drop check]: ../../nomicon/dropck.html #[lang = \"phantom_data\"] #[structural_match] #[stable(feature = \"rust1\", since = \"1.0.0\")] pub struct PhantomData;", "commid": "rust_pr_55837"}], "negative_passages": []} {"query_id": "q-en-rust-06b9f91040abc21cb9e390950d5bca38e42618b939816b97113b347662bbb9cb", "query": "Given: results in: Is this intentional due to some soundness issue with having it otherwise? (I cannot think of any...) We could fix this by attaching to manually. While having be structurally matchable is quite boring and useless on its lonesome it prevents other types that contain it from being structurally matchable. (not likely to occur very often, but when it occur it seems like it would be annoying...) r? cc", "positive_passages": [{"docid": "doc-en-rust-3fcf1c61841cd6438265c773e8ed21a8e2942c8546ef2d559ccb6e0775ee978a", "text": " // run-pass // This file checks that `PhantomData` is considered structurally matchable. use std::marker::PhantomData; fn main() { let mut count = 0; // A type which is not structurally matchable: struct NotSM; // And one that is: #[derive(PartialEq, Eq)] struct SM; // Check that SM is #[structural_match]: const CSM: SM = SM; match SM { CSM => count += 1, }; // Check that PhantomData is #[structural_match] even if T is not. const CPD1: PhantomData = PhantomData; match PhantomData { CPD1 => count += 1, }; // Check that PhantomData is #[structural_match] when T is. const CPD2: PhantomData = PhantomData; match PhantomData { CPD2 => count += 1, }; // Check that a type which has a PhantomData is `#[structural_match]`. #[derive(PartialEq, Eq, Default)] struct Foo { alpha: PhantomData, beta: PhantomData, } const CFOO: Foo = Foo { alpha: PhantomData, beta: PhantomData, }; match Foo::default() { CFOO => count += 1, }; // Final count must be 4 now if all assert_eq!(count, 4); } ", "commid": "rust_pr_55837"}], "negative_passages": []} {"query_id": "q-en-rust-35176f59529ba2a4da909a0c962ea85a7437375653bfbb72e9cf1a6e34f48a18", "query": "The output from , at the top of the page for a or a , etc., includes a line like, Clicking the on a browser w/ restrictive cookie settings (which typically includes ) results in an error similar to the following: Additionally, the section fails to expand or contract, as the exception isn't handled. is attempting to store the collapse/expand state, s.t. later visits to the page presumably maintain that state for the user's convenience. I browse the web with an admittedly somewhat atypical setup: I whitelist cookies. , in most browsers, counts, since it is a similar mechanism of persistence. I think Rust's use of here is fine, but I would request that, if is forbidden by the user agent, that the documentation gracefully \"degrade\" to just expanding or contracting the requested item, but not persist it. As it is, the JS crashes while storing the state, and appears to never make it to actually expanding/contracting the item.", "positive_passages": [{"docid": "doc-en-rust-a4ac48e9ae704c33e4aad05f47c61ab97cea2d56aef5d94da5fc389456104691", "text": "return false; } function usableLocalStorage() { // Check if the browser supports localStorage at all: if (typeof(Storage) === \"undefined\") { return false; } // Check if we can access it; this access will fail if the browser // preferences deny access to localStorage, e.g., to prevent storage of // \"cookies\" (or cookie-likes, as is the case here). 
try { window.localStorage; } catch(err) { // Storage is supported, but browser preferences deny access to it. return false; } return true; } function updateLocalStorage(name, value) { if (typeof(Storage) !== \"undefined\") { if (usableLocalStorage()) { localStorage[name] = value; } else { // No Web Storage support so we do nothing", "commid": "rust_pr_55080"}], "negative_passages": []} {"query_id": "q-en-rust-35176f59529ba2a4da909a0c962ea85a7437375653bfbb72e9cf1a6e34f48a18", "query": "The output from , at the top of the page for a or a , etc., includes a line like, Clicking the on a browser w/ restrictive cookie settings (which typically includes ) results in an error similar to the following: Additionally, the section fails to expand or contract, as the exception isn't handled. is attempting to store the collapse/expand state, s.t. later visits to the page presumably maintain that state for the user's convenience. I browse the web with an admittedly somewhat atypical setup: I whitelist cookies. , in most browsers, counts, since it is a similar mechanism of persistence. I think Rust's use of here is fine, but I would request that, if is forbidden by the user agent, that the documentation gracefully \"degrade\" to just expanding or contracting the requested item, but not persist it. As it is, the JS crashes while storing the state, and appears to never make it to actually expanding/contracting the item.", "positive_passages": [{"docid": "doc-en-rust-b16cad38b400ffbc9ab4fae63c3573b922ea6efea0aa18c0a58489cede765b84", "text": "} function getCurrentValue(name) { if (typeof(Storage) !== \"undefined\" && localStorage[name] !== undefined) { if (usableLocalStorage() && localStorage[name] !== undefined) { return localStorage[name]; } return null;", "commid": "rust_pr_55080"}], "negative_passages": []} {"query_id": "q-en-rust-6ce1708b31d20ac5183d838b450f78197f13395dab8c0a1f30e0eea91549061d", "query": "Spawned off of 's on :\nto be clear: we don't need to use to reproduce the problem, right? I.e. hypothetically examples exist that don't use that feature? In particular I know this problem is in some way related to , which is definitely a true regression of stable code.\nRight, this here is not really a regression. It is just an ICE that should be an error and it is fixed by We can probably do this without const_let, but I couldn't come up with an example.\nA user is the\nI have tried to reduce the problem. This repository is smaller, but still fails on : . The same crate builds on stable.\nI think you are encountering which has a fix in (making everything work again) This issue is different, even though the symptom is the same right now. When this issue is fixed (via ), the ICE will become a hard error.", "positive_passages": [{"docid": "doc-en-rust-57cab936b516ca5b41ac32c9ed7195e3df266e7c99af343b379f8ca24312ec76", "text": "if self.alloc_map.contains_key(&alloc) { // Not yet interned, so proceed recursively self.intern_static(alloc, mutability)?; } else if self.dead_alloc_map.contains_key(&alloc) { // dangling pointer return err!(ValidationFailure( \"encountered dangling pointer in final constant\".into(), )) } } Ok(())", "commid": "rust_pr_55262"}], "negative_passages": []} {"query_id": "q-en-rust-6ce1708b31d20ac5183d838b450f78197f13395dab8c0a1f30e0eea91549061d", "query": "Spawned off of 's on :\nto be clear: we don't need to use to reproduce the problem, right? I.e. hypothetically examples exist that don't use that feature? 
In particular I know this problem is in some way related to , which is definitely a true regression of stable code.\nRight, this here is not really a regression. It is just an ICE that should be an error and it is fixed by We can probably do this without const_let, but I couldn't come up with an example.\nA user is the\nI have tried to reduce the problem. This repository is smaller, but still fails on : . The same crate builds on stable.\nI think you are encountering which has a fix in (making everything work again) This issue is different, even though the symptom is the same right now. When this issue is fixed (via ), the ICE will become a hard error.", "positive_passages": [{"docid": "doc-en-rust-936292c88968680d8a96ec04b02cdda61ecf866d36e3f94acff2deb95639c52c", "text": " // https://github.com/rust-lang/rust/issues/55223 #![feature(const_let)] union Foo<'a> { y: &'a (), long_live_the_unit: &'static (), } const FOO: &() = { //~ ERROR any use of this value will cause an error let y = (); unsafe { Foo { y: &y }.long_live_the_unit } }; fn main() {} ", "commid": "rust_pr_55262"}], "negative_passages": []} {"query_id": "q-en-rust-6ce1708b31d20ac5183d838b450f78197f13395dab8c0a1f30e0eea91549061d", "query": "Spawned off of 's on :\nto be clear: we don't need to use to reproduce the problem, right? I.e. hypothetically examples exist that don't use that feature? In particular I know this problem is in some way related to , which is definitely a true regression of stable code.\nRight, this here is not really a regression. It is just an ICE that should be an error and it is fixed by We can probably do this without const_let, but I couldn't come up with an example.\nA user is the\nI have tried to reduce the problem. This repository is smaller, but still fails on : . The same crate builds on stable.\nI think you are encountering which has a fix in (making everything work again) This issue is different, even though the symptom is the same right now. When this issue is fixed (via ), the ICE will become a hard error.", "positive_passages": [{"docid": "doc-en-rust-b26aa9eed4a12c5a1de38eb4ad1e87d0dc5310eb98bb1a922e30b0788a3b8207", "text": " error: any use of this value will cause an error --> $DIR/dangling-alloc-id-ice.rs:10:1 | LL | / const FOO: &() = { //~ ERROR any use of this value will cause an error LL | | let y = (); LL | | unsafe { Foo { y: &y }.long_live_the_unit } LL | | }; | |__^ type validation failed: encountered dangling pointer in final constant | = note: #[deny(const_err)] on by default error: aborting due to previous error ", "commid": "rust_pr_55262"}], "negative_passages": []} {"query_id": "q-en-rust-6ce1708b31d20ac5183d838b450f78197f13395dab8c0a1f30e0eea91549061d", "query": "Spawned off of 's on :\nto be clear: we don't need to use to reproduce the problem, right? I.e. hypothetically examples exist that don't use that feature? In particular I know this problem is in some way related to , which is definitely a true regression of stable code.\nRight, this here is not really a regression. It is just an ICE that should be an error and it is fixed by We can probably do this without const_let, but I couldn't come up with an example.\nA user is the\nI have tried to reduce the problem. This repository is smaller, but still fails on : . The same crate builds on stable.\nI think you are encountering which has a fix in (making everything work again) This issue is different, even though the symptom is the same right now. 
When this issue is fixed (via ), the ICE will become a hard error.", "positive_passages": [{"docid": "doc-en-rust-9346453625005d0aa1cfd1569ec9e738d3b291025fbf01f3acb1aa941f8988f1", "text": " #![feature(const_let)] const FOO: *const u32 = { //~ ERROR any use of this value will cause an error let x = 42; &x }; fn main() { let x = FOO; } ", "commid": "rust_pr_55262"}], "negative_passages": []} {"query_id": "q-en-rust-6ce1708b31d20ac5183d838b450f78197f13395dab8c0a1f30e0eea91549061d", "query": "Spawned off of 's on :\nto be clear: we don't need to use to reproduce the problem, right? I.e. hypothetically examples exist that don't use that feature? In particular I know this problem is in some way related to , which is definitely a true regression of stable code.\nRight, this here is not really a regression. It is just an ICE that should be an error and it is fixed by We can probably do this without const_let, but I couldn't come up with an example.\nA user is the\nI have tried to reduce the problem. This repository is smaller, but still fails on : . The same crate builds on stable.\nI think you are encountering which has a fix in (making everything work again) This issue is different, even though the symptom is the same right now. When this issue is fixed (via ), the ICE will become a hard error.", "positive_passages": [{"docid": "doc-en-rust-100ae8fdd2b0311d50a0be5587916efa2d3bb089d4720e828144e08c862e9ca1", "text": " error: any use of this value will cause an error --> $DIR/dangling_raw_ptr.rs:3:1 | LL | / const FOO: *const u32 = { //~ ERROR any use of this value will cause an error LL | | let x = 42; LL | | &x LL | | }; | |__^ type validation failed: encountered dangling pointer in final constant | = note: #[deny(const_err)] on by default error: aborting due to previous error ", "commid": "rust_pr_55262"}], "negative_passages": []} {"query_id": "q-en-rust-7c93159c6bbe4889bea3a6b6d6a52ffd5da84e496442c6d7f659a99a5f7ebeaa", "query": "On the surface, these functions seem very similar. When is it appropriate to use one vs. the other?\nAnyone able to describe the differences and use-cases for these functions? I'd be willing to add docs for this, I just don't know when to use either\nMy understanding for these two functions is: Both are probably not needed in general. While can be a debugging utility in some situations they're largely only needed when writing some form of a spin loop. The function is for very tight spin loops. It is ideally used in situations where you know that a contended lock is held by another thread and you're sure the other thread is running on a different CPU. If the other thread holding a lock (or whatever condition is being waited on in a spin loop) isn't actually running then spinning with this function will eat the entire time slice allocated for the OS until that thread starts running again. This is a CPU instruction on some platforms, and a noop on other platforms. The function invokes the OS scheduler, asking the current thread to basically be put into the back of the run queue. This is best for spin loops where you're pretty certain the lock/condition will become available soon, but you're not sure if the thread holding the lock is running or not. I believe the general wisdom is that on lock contention you for a bit, then a little bit, and finally sleep for real. 
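The "spin a bit, then yield, then sleep" guidance given just above can be sketched roughly as follows. This is a toy illustration with our own names and an arbitrary spin budget, not a production lock, and it uses today's stable `std::hint::spin_loop`, the renamed form of the `spin_loop_hint` being discussed in this thread.

```rust
use std::hint;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;

/// Illustrative spin-then-yield acquire loop.
fn acquire(lock: &AtomicBool) {
    let mut spins = 0u32;
    while lock
        .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
        .is_err()
    {
        if spins < 100 {
            // Busy-wait briefly: cheap if the holder is running on another CPU.
            hint::spin_loop();
            spins += 1;
        } else {
            // Give the scheduler a chance to run the thread holding the lock.
            thread::yield_now();
        }
    }
}

fn release(lock: &AtomicBool) {
    lock.store(false, Ordering::Release);
}

fn main() {
    static LOCK: AtomicBool = AtomicBool::new(false);
    acquire(&LOCK);
    release(&LOCK);
}
```

A real implementation would additionally fall back to blocking (a futex, parking, or `std::sync::Mutex`) after yielding a few times — that is the "sleep for real" step mentioned above.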
All of these are pretty advanced functions, though, and the documentation should be very clear that these are largely just current-practice-guidelines and benchmarking and further understanding is highly encouraged\nOne use case for is a bunch of threads blocked on . potentially needs a large number of cycles to execute but it is also executed in performance-sensitive code (graphics, crypto, etc.) where a full or would not be wanted.\nAre you still interested in writing docs for this?\nI still don't feel confident to write them given my current level of understanding with these APIs. But if you want to try, go for it!\nI will give it a try! I am sure others will correct me if I write some things that are wrong :) I hope I'll be able to deliver a PR somewhere this week.\nIt seems like you always pop up on my screen when people are talking about technical details so I thought you might be the right man for this question :) When diving into the details of I found this: So basically does a call to which is defined like this: Question: What is the difference in semantics between and ? Or to ask it differently, why should we not deprecate and only use ? The documentation of both functions is exactly the same as well. (Aside being experimental). Or we could maybe re-export as in ?\nI did an attempt! I hope it clarifies some things. I didn't edit the documentation to not bloat the documentation there too much. But if that is desired as well, please let me know. The pull is here:", "positive_passages": [{"docid": "doc-en-rust-afeac0229c024d09c60d43228f5ca428e5c330076b402c9a5d1719471f2f9425", "text": "intrinsics::unreachable() } /// Save power or switch hyperthreads in a busy-wait spin-loop. /// Signals the processor that it is entering a busy-wait spin-loop. /// /// This function is deliberately more primitive than /// [`std::thread::yield_now`](../../std/thread/fn.yield_now.html) and /// does not directly yield to the system's scheduler. /// In some cases it might be useful to use a combination of both functions. /// Careful benchmarking is advised. /// Upon receiving spin-loop signal the processor can optimize its behavior by, for example, saving /// power or switching hyper-threads. /// /// On some platforms this function may not do anything at all. /// This function is different than [`std::thread::yield_now`] which directly yields to the /// system's scheduler, whereas `spin_loop` only signals the processor that it is entering a /// busy-wait spin-loop without yielding control to the system's scheduler. /// /// Using a busy-wait spin-loop with `spin_loop` is ideally used in situations where a /// contended lock is held by another thread executed on a different CPU and where the waiting /// times are relatively small. Because entering busy-wait spin-loop does not trigger the system's /// scheduler, no overhead for switching threads occurs. However, if the thread holding the /// contended lock is running on the same CPU, the spin-loop is likely to occupy an entire CPU slice /// before switching to the thread that holds the lock. If the contending lock is held by a thread /// on the same CPU or if the waiting times for acquiring the lock are longer, it is often better to /// use [`std::thread::yield_now`]. /// /// **Note**: On platforms that do not support receiving spin-loop hints this function does not /// do anything at all. 
/// /// [`std::thread::yield_now`]: ../../std/thread/fn.yield_now.html #[inline] #[unstable(feature = \"renamed_spin_loop\", issue = \"55002\")] pub fn spin_loop() {", "commid": "rust_pr_59664"}], "negative_passages": []} {"query_id": "q-en-rust-7c93159c6bbe4889bea3a6b6d6a52ffd5da84e496442c6d7f659a99a5f7ebeaa", "query": "On the surface, these functions seem very similar. When is it appropriate to use one vs. the other?\nAnyone able to describe the differences and use-cases for these functions? I'd be willing to add docs for this, I just don't know when to use either\nMy understanding for these two functions is: Both are probably not needed in general. While can be a debugging utility in some situations they're largely only needed when writing some form of a spin loop. The function is for very tight spin loops. It is ideally used in situations where you know that a contended lock is held by another thread and you're sure the other thread is running on a different CPU. If the other thread holding a lock (or whatever condition is being waited on in a spin loop) isn't actually running then spinning with this function will eat the entire time slice allocated for the OS until that thread starts running again. This is a CPU instruction on some platforms, and a noop on other platforms. The function invokes the OS scheduler, asking the current thread to basically be put into the back of the run queue. This is best for spin loops where you're pretty certain the lock/condition will become available soon, but you're not sure if the thread holding the lock is running or not. I believe the general wisdom is that on lock contention you for a bit, then a little bit, and finally sleep for real. All of these are pretty advanced functions, though, and the documentation should be very clear that these are largely just current-practice-guidelines and benchmarking and further understanding is highly encouraged\nOne use case for is a bunch of threads blocked on . potentially needs a large number of cycles to execute but it is also executed in performance-sensitive code (graphics, crypto, etc.) where a full or would not be wanted.\nAre you still interested in writing docs for this?\nI still don't feel confident to write them given my current level of understanding with these APIs. But if you want to try, go for it!\nI will give it a try! I am sure others will correct me if I write some things that are wrong :) I hope I'll be able to deliver a PR somewhere this week.\nIt seems like you always pop up on my screen when people are talking about technical details so I thought you might be the right man for this question :) When diving into the details of I found this: So basically does a call to which is defined like this: Question: What is the difference in semantics between and ? Or to ask it differently, why should we not deprecate and only use ? The documentation of both functions is exactly the same as well. (Aside being experimental). Or we could maybe re-export as in ?\nI did an attempt! I hope it clarifies some things. I didn't edit the documentation to not bloat the documentation there too much. But if that is desired as well, please let me know. The pull is here:", "positive_passages": [{"docid": "doc-en-rust-7d14533da2cddab2a30caa44a6cd65e182da9604c504d79e32def88ad4be3e3f", "text": "use hint::spin_loop; /// Save power or switch hyperthreads in a busy-wait spin-loop. /// Signals the processor that it is entering a busy-wait spin-loop. 
/// /// This function is deliberately more primitive than /// [`std::thread::yield_now`](../../../std/thread/fn.yield_now.html) and /// does not directly yield to the system's scheduler. /// In some cases it might be useful to use a combination of both functions. /// Careful benchmarking is advised. /// Upon receiving spin-loop signal the processor can optimize its behavior by, for example, saving /// power or switching hyper-threads. /// /// On some platforms this function may not do anything at all. /// This function is different than [`std::thread::yield_now`] which directly yields to the /// system's scheduler, whereas `spin_loop_hint` only signals the processor that it is entering a /// busy-wait spin-loop without yielding control to the system's scheduler. /// /// Using a busy-wait spin-loop with `spin_loop_hint` is ideally used in situations where a /// contended lock is held by another thread executed on a different CPU and where the waiting /// times are relatively small. Because entering busy-wait spin-loop does not trigger the system's /// scheduler, no overhead for switching threads occurs. However, if the thread holding the /// contended lock is running on the same CPU, the spin-loop is likely to occupy an entire CPU slice /// before switching to the thread that holds the lock. If the contending lock is held by a thread /// on the same CPU or if the waiting times for acquiring the lock are longer, it is often better to /// use [`std::thread::yield_now`]. /// /// **Note**: On platforms that do not support receiving spin-loop hints this function does not /// do anything at all. /// /// [`std::thread::yield_now`]: ../../../std/thread/fn.yield_now.html #[inline] #[stable(feature = \"spin_loop_hint\", since = \"1.24.0\")] pub fn spin_loop_hint() {", "commid": "rust_pr_59664"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-1cff204ee024d86311e185e27fef27fe28c8e1a83f0bcad40ba64e49c61c2bdc", "text": "self.unclosed_delims.extend(snapshot.unclosed_delims.clone()); } pub fn unclosed_delims(&self) -> &[UnmatchedBrace] { &self.unclosed_delims } /// Create a snapshot of the `Parser`. 
pub(super) fn create_snapshot_for_diagnostic(&self) -> SnapshotParser<'a> { let mut snapshot = self.clone();", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-bab0575fdcd2f6ee5b54ddf7e8c7a464951c3c3400dcb61651b68f6e898d4d9d", "text": "use rustc_middle::hir::map::Map; use rustc_middle::hir::nested_filter; use rustc_middle::ty::TyCtxt; use rustc_parse::maybe_new_parser_from_source_str; use rustc_parse::parser::attr::InnerAttrPolicy; use rustc_session::config::{self, CrateType, ErrorOutputType}; use rustc_session::parse::ParseSess; use rustc_session::{lint, DiagnosticOutput, Session}; use rustc_span::edition::Edition; use rustc_span::source_map::SourceMap;", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. 
The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-7a63ff71512b5b6bcb2a09db07a90c8fd066913e5977f8f05661160360d8b01b", "text": "edition: Edition, test_id: Option<&str>, ) -> (String, usize, bool) { let (crate_attrs, everything_else, crates) = partition_source(s); let (crate_attrs, everything_else, crates) = partition_source(s, edition); let everything_else = everything_else.trim(); let mut line_offset = 0; let mut prog = String::new();", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-1fadbbb7f740fe6904a6c06be595309aa753d4674888bc38278d8edcb13354db", "text": "rustc_span::create_session_if_not_set_then(edition, |_| { use rustc_errors::emitter::{Emitter, EmitterWriter}; use rustc_errors::Handler; use rustc_parse::maybe_new_parser_from_source_str; use rustc_parse::parser::ForceCollect; use rustc_session::parse::ParseSess; use rustc_span::source_map::FilePathMapping; let filename = FileName::anon_source_code(s);", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. 
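The actual fix, visible in the surrounding diffs, asks the real parser (via `parse_attribute` and its unclosed-delimiter bookkeeping) whether a scanned crate-level attribute is complete. As a deliberately naive illustration of the idea only — not the rustdoc code — one can picture it as checking whether the line that opens `#![...]` also closes it:

```rust
/// Simplified sketch: does this line close the inner attribute it opens?
/// The real implementation parses the attribute instead of counting brackets.
fn attr_is_complete(line: &str) -> bool {
    let mut depth = 0i32;
    for c in line.chars() {
        match c {
            '[' => depth += 1,
            ']' => depth -= 1,
            _ => {}
        }
    }
    depth == 0
}

fn main() {
    assert!(attr_is_complete("#![allow(dead_code)]"));
    // Mirrors the shape of the multi-line attribute from the report: the
    // attribute continues on the next line, so it is not complete yet.
    assert!(!attr_is_complete("#![cfg_attr(not(dox), feature(cfg_target_feature,"));
}
```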
The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-b608f387d3a29e9464fad8d4b426a3281abee8ba7efa04da6e0250542a6d0713", "text": "(prog, line_offset, supports_color) } // FIXME(aburka): use a real parser to deal with multiline attributes fn partition_source(s: &str) -> (String, String, String) { fn check_if_attr_is_complete(source: &str, edition: Edition) -> bool { if source.is_empty() { // Empty content so nothing to check in here... return true; } rustc_span::create_session_if_not_set_then(edition, |_| { let filename = FileName::anon_source_code(source); let sess = ParseSess::with_silent_emitter(None); let mut parser = match maybe_new_parser_from_source_str(&sess, filename, source.to_owned()) { Ok(p) => p, Err(_) => { debug!(\"Cannot build a parser to check mod attr so skipping...\"); return true; } }; // If a parsing error happened, it's very likely that the attribute is incomplete. if !parser.parse_attribute(InnerAttrPolicy::Permitted).is_ok() { return false; } // We now check if there is an unclosed delimiter for the attribute. To do so, we look at // the `unclosed_delims` and see if the opening square bracket was closed. parser .unclosed_delims() .get(0) .map(|unclosed| { unclosed.unclosed_span.map(|s| s.lo()).unwrap_or(BytePos(0)) != BytePos(2) }) .unwrap_or(true) }) } fn partition_source(s: &str, edition: Edition) -> (String, String, String) { #[derive(Copy, Clone, PartialEq)] enum PartitionState { Attrs,", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-89ce9b77d1c626a8bdee7e85c92d3d185d26e4173dd8009a175240d12718c84f", "text": "let mut crates = String::new(); let mut after = String::new(); let mut mod_attr_pending = String::new(); for line in s.lines() { let trimline = line.trim();", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. 
rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-e8b10fba0bf3ce71442463ee4e1ea706bfc2486713b7aee2fb1055159893e3ec", "text": "// shunted into \"everything else\" match state { PartitionState::Attrs => { state = if trimline.starts_with(\"#![\") || trimline.chars().all(|c| c.is_whitespace()) state = if trimline.starts_with(\"#![\") { if !check_if_attr_is_complete(line, edition) { mod_attr_pending = line.to_owned(); } else { mod_attr_pending.clear(); } PartitionState::Attrs } else if trimline.chars().all(|c| c.is_whitespace()) || (trimline.starts_with(\"//\") && !trimline.starts_with(\"///\")) { PartitionState::Attrs", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-84b1d62e4e2906a4846aa571951139728bc41eec8ee9e2f97e2dff747710b06b", "text": "{ PartitionState::Crates } else { PartitionState::Other // First we check if the previous attribute was \"complete\"... if !mod_attr_pending.is_empty() { // If not, then we append the new line into the pending attribute to check // if this time it's complete... mod_attr_pending.push_str(line); if !trimline.is_empty() && check_if_attr_is_complete(line, edition) { // If it's complete, then we can clear the pending content. mod_attr_pending.clear(); } // In any case, this is considered as `PartitionState::Attrs` so it's // prepended before rustdoc's inserts. 
PartitionState::Attrs } else { PartitionState::Other } }; } PartitionState::Crates => {", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-d0544d36ebafca414b763b7a31d293c2ba326ca6df4db81fb9c840b47e33ac04", "text": " // compile-flags:--test // normalize-stdout-test: \"src/test/rustdoc-ui\" -> \"$$DIR\" // normalize-stdout-test \"finished in d+.d+s\" -> \"finished in $$TIME\" // check-pass /// ``` /// # #![cfg_attr(not(dox), deny(missing_abi, /// # non_ascii_idents))] /// /// pub struct Bar; /// ``` pub struct Bar; ", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-ede4c332a68dde0898d1b618b5ee1d639f2b3e081b76796cddf3540bf7fbdcb8", "query": "With following doctest, where the cfg attribute spans multiple lines, the final program is invalid: the wrapping method doesn't take into account the dangling line that closes the attribute. rust /// # #![cfgattr(not(dox), feature(cfgtargetfeature, targetfeature, /// # stdsimd))] /// /// fn foo() { /// let mut dst = [0]; /// addquickly(&[1], &[2], &mut dst); /// asserteq!(dst[0], 3); /// } ///\ncc\ncc\nI don't see a message associated with the stack trace, is that missing?\nThe problem lies in how rustdoc parses doctests. It scans lines for crate-level attributes and statements, but it doesn't do anything more intelligent than checking the beginning of each line: We recently changed the parsing for and detection to use the libsyntax parser, but this initial scan is still a string-based search. The way to fix the test in the OP is to actually remove the part of it - rustdoc doesn't pass on any information into doctest compilation, so those features are going to be enabled regardless.\nThis is the associated message:", "positive_passages": [{"docid": "doc-en-rust-140e61bf4b286a1444dd2ee17651cb20bf05ee799147805ee5099ed325034323", "text": " running 1 test test $DIR/doc-comment-multi-line-cfg-attr.rs - Bar (line 6) ... ok test result: ok. 
1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in $TIME ", "commid": "rust_pr_95590"}], "negative_passages": []} {"query_id": "q-en-rust-c91419fbc9c04efe0ce4be4b9a1c1145472a60c982af690f2dc42c14c5466c93", "query": "This is intended to be a tracking issue to incorporate into the current beta branch for the Rust 2018 release, as we'll be relying on that in documentation that we're writing.\nThe Cargo backport is happening here - , and after that needs a PR to the beta branch in this repository\nThis is now submitted as\nDone!", "positive_passages": [{"docid": "doc-en-rust-44a90984de736df9a4c14cd90592e41e21577c4dec40c2cc66ea6dc3eb4fab39", "text": " Subproject commit 5d967343acae16fd7b53a61e6cd4e93a1ac6f199 Subproject commit 569507ef2beefcce4b666af41e09276f06445e96 ", "commid": "rust_pr_55958"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-5163ea1321219f56eab1bec8a61eef05960487c410618220785dfa1c9c9ec723", "text": "/// kept separate because of issue #42760. #[derive(Clone, RustcEncodable, RustcDecodable, PartialEq, Eq, Debug, Hash)] pub enum DocFragment { // FIXME #44229 (misdreavus): sugared and raw doc comments can be brought back together once // hoedown is completely removed from rustdoc. /// A doc fragment created from a `///` or `//!` doc comment. SugaredDoc(usize, syntax_pos::Span, String), /// A doc fragment created from a \"raw\" `#[doc=\"\"]` attribute.", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-2f1bbe5b7e2ed3e9e1a845498decf01cddf7f6b9a2bfb43b156e4fa6ed129319", "text": "start.to(end) } /// Reports a resolution failure diagnostic. /// /// Ideally we can report the diagnostic with the actual span in the source where the link failure /// occurred. However, there's a mismatch between the span in the source code and the span in the /// markdown, so we have to do a bit of work to figure out the correspondence. 
/// /// It's not too hard to find the span for sugared doc comments (`///` and `/**`), because the /// source will match the markdown exactly, excluding the comment markers. However, it's much more /// difficult to calculate the spans for unsugared docs, because we have to deal with escaping and /// other source features. So, we attempt to find the exact source span of the resolution failure /// in sugared docs, but use the span of the documentation attributes themselves for unsugared /// docs. Because this span might be overly large, we display the markdown line containing the /// failure as a note. fn resolution_failure( cx: &DocContext, attrs: &Attributes,", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-c01781df398f704eae83e7aa64917894a174256b9b207453ddfd127e2f26ecd0", "text": "let sp = span_of_attrs(attrs); let msg = format!(\"`[{}]` cannot be resolved, ignoring it...\", path_str); let code_dox = sp.to_src(cx); let doc_comment_padding = 3; let mut diag = if let Some(link_range) = link_range { // blah blah blahnblahnblah [blah] blah blahnblah blah // ^ ~~~~~~ // | link_range // last_new_line_offset let mut diag; if dox.lines().count() == code_dox.lines().count() { let line_offset = dox[..link_range.start].lines().count(); // The span starts in the `///`, so we don't have to account for the leading whitespace. let code_dox_len = if line_offset <= 1 { doc_comment_padding } else { // The first `///`. doc_comment_padding + // Each subsequent leading whitespace and `///`. code_dox.lines().skip(1).take(line_offset - 1).fold(0, |sum, line| { sum + doc_comment_padding + line.len() - line.trim_start().len() }) }; let src = cx.sess().source_map().span_to_snippet(sp); let is_all_sugared_doc = attrs.doc_strings.iter().all(|frag| match frag { DocFragment::SugaredDoc(..) => true, _ => false, }); if let (Ok(src), true) = (src, is_all_sugared_doc) { // The number of markdown lines up to and including the resolution failure. let num_lines = dox[..link_range.start].lines().count(); // We use `split_terminator('n')` instead of `lines()` when counting bytes to ensure // that DOS-style line endings do not cause the spans to be calculated incorrectly. let mut src_lines = src.split_terminator('n'); let mut md_lines = dox.split_terminator('n').take(num_lines).peekable(); // The number of bytes from the start of the source span to the resolution failure that // are *not* part of the markdown, like comment markers. 
let mut extra_src_bytes = 0; while let Some(md_line) = md_lines.next() { loop { let source_line = src_lines .next() .expect(\"could not find markdown line in source\"); match source_line.find(md_line) { Some(offset) => { extra_src_bytes += if md_lines.peek().is_some() { source_line.len() - md_line.len() } else { offset }; break; } None => { // Since this is a source line that doesn't include a markdown line, // we have to count the newline that we split from earlier. extra_src_bytes += source_line.len() + 1; } } } } // Extract the specific span. let sp = sp.from_inner_byte_pos( link_range.start + code_dox_len, link_range.end + code_dox_len, link_range.start + extra_src_bytes, link_range.end + extra_src_bytes, ); diag = cx.tcx.struct_span_lint_node(lint::builtin::INTRA_DOC_LINK_RESOLUTION_FAILURE, NodeId::from_u32(0), sp, &msg); let mut diag = cx.tcx.struct_span_lint_node( lint::builtin::INTRA_DOC_LINK_RESOLUTION_FAILURE, NodeId::from_u32(0), sp, &msg, ); diag.span_label(sp, \"cannot be resolved, ignoring\"); diag } else { diag = cx.tcx.struct_span_lint_node(lint::builtin::INTRA_DOC_LINK_RESOLUTION_FAILURE, NodeId::from_u32(0), sp, &msg); let mut diag = cx.tcx.struct_span_lint_node( lint::builtin::INTRA_DOC_LINK_RESOLUTION_FAILURE, NodeId::from_u32(0), sp, &msg, ); // blah blah blahnblahnblah [blah] blah blahnblah blah // ^ ~~~~ // | link_range // last_new_line_offset let last_new_line_offset = dox[..link_range.start].rfind('n').map_or(0, |n| n + 1); let line = dox[last_new_line_offset..].lines().next().unwrap_or(\"\");", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-5ba1a8e4926ec55d579d926094d671c9fabe4edd3a4393a6911dc411e9545ade", "text": "before=link_range.start - last_new_line_offset, found=link_range.len(), )); diag } diag } else { cx.tcx.struct_span_lint_node(lint::builtin::INTRA_DOC_LINK_RESOLUTION_FAILURE, NodeId::from_u32(0),", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. 
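The report above describes the three-byte padding assumption only in prose, since its original reproduction snippet did not survive extraction. As a hedged illustration (not part of the quoted issue or the dataset), a minimal file of the same shape might look like the sketch below; the file name, struct name, and link target are invented here, and the behaviour is only expected on a rustdoc build affected by the bug.

```rust
// Hypothetical reproduction file, e.g. `repro.rs` (not taken from the original report).
// Running `rustdoc repro.rs` should emit the `intra_doc_link_resolution_failure`
// warning for the unresolved link; on affected versions the reported column is
// shifted, because every markdown line was assumed to be preceded by a
// three-byte `///` marker, which is not true of block-style continuation lines.

/** A block-style doc comment.
 *
 * Its continuation lines start with ` * `, not `///`, so the old
 * offset arithmetic misplaces the span of [this_broken_link].
 */
pub struct Repro;
```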
However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-c76b77689b2777473f4dfe86e5014afb8c297b5ad21ef5cee971ab162d27b7f3", "text": " intra-links-warning-crlf.rs eol=crlf ", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-7530d342f35e83ba5a192c93d49e19170458f5715257a01eedd1bac23044efaf", "text": " // ignore-tidy-cr // compile-pass // This file checks the spans of intra-link warnings in a file with CRLF line endings. The // .gitattributes file in this directory should enforce it. /// [error] pub struct A; /// /// docs [error1] /// docs [error2] /// pub struct B; /** * This is a multi-line comment. * * It also has an [error]. */ pub struct C; ", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-a9520eb8dfcb0320cbcfc9dcb9b80eb899916be2c2b0d8c8e20cccaa1c2b3a7b", "text": " warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning-crlf.rs:8:6 | LL | /// [error] | ^^^^^ cannot be resolved, ignoring | = note: #[warn(intra_doc_link_resolution_failure)] on by default = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error1]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning-crlf.rs:12:11 | LL | /// docs [error1] | ^^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error2]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning-crlf.rs:14:11 | LL | /// docs [error2] | ^^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning-crlf.rs:21:20 | LL | * It also has an [error]. 
| ^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` ", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-5e2b6bad927a12ab4dbf470639d1e70eb5e517f9325eb9f3e18b619c51f43a49", "text": "} } f!(\"Foonbar [BarF] barnbaz\"); /** # for example, * * time to introduce a link [error]*/ pub struct A; /** * # for example, * * time to introduce a link [error] */ pub struct B; #[doc = \"single line [error]\"] pub struct C; #[doc = \"single line with \"escaping\" [error]\"] pub struct D; /// Item docs. #[doc=\"Hello there!\"] /// [error] pub struct E; /// /// docs [error1] /// docs [error2] /// pub struct F; ", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-c7ea45eaf0e5d3b5d1fbd52959cb353dd4f3ee6944c711697aa6da44a4b759f6", "text": "| = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:61:30 | LL | * time to introduce a link [error]*/ | ^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:67:30 | LL | * time to introduce a link [error] | ^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:71:1 | LL | #[doc = \"single line [error]\"] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the link appears in this line: single line [error] ^^^^^ = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... 
--> $DIR/intra-links-warning.rs:74:1 | LL | #[doc = \"single line with /\"escaping/\" [error]\"] | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ | = note: the link appears in this line: single line with \"escaping\" [error] ^^^^^ = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:77:1 | LL | / /// Item docs. LL | | #[doc=\"Hello there!\"] LL | | /// [error] | |___________^ | = note: the link appears in this line: [error] ^^^^^ = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error1]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:83:11 | LL | /// docs [error1] | ^^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[error2]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:85:11 | LL | /// docs [error2] | ^^^^^^ cannot be resolved, ignoring | = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[BarA]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:24:10 |", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-330c9ca016c2443df0e7a4f9170ed7ed9e93dc04d12e50f52f017a98848bb347", "query": "Given the following source: The span conversion calculation in the error reporting will give the wrong offset when reporting the resolution failure, leading to a confusing report: Strangely enough, putting either the leading or the closing on its own line will cause the report to change to something that looks correct, if a little too broad: I believe the code at fault is this section here: Because it assumes that every doc comment will have a padding of 3 characters (the or ), it includes that in its calculation of each line's offset. However, for block-style comments, this assumption is false.\nI'm working on this.\nYou can also reproduce this with unsugared docs:", "positive_passages": [{"docid": "doc-en-rust-f20ef48b16ccc4ec1fbb14b6f4f6ff2fe760eefdb32bc6dff23eeb36c9dd74cc", "text": "= help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[BarB]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:28:1 --> $DIR/intra-links-warning.rs:30:9 | LL | / /** LL | | * Foo LL | | * bar [BarB] bar LL | | * baz LL | | */ | |___^ LL | * bar [BarB] bar | ^^^^ cannot be resolved, ignoring | = note: the link appears in this line: bar [BarB] bar ^^^^ = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[BarC]` cannot be resolved, ignoring it... --> $DIR/intra-links-warning.rs:35:1 --> $DIR/intra-links-warning.rs:37:6 | LL | / /** Foo LL | | LL | | bar [BarC] bar LL | | baz ... | LL | | LL | | */ | |__^ LL | bar [BarC] bar | ^^^^ cannot be resolved, ignoring | = note: the link appears in this line: bar [BarC] bar ^^^^ = help: to escape `[` and `]` characters, just add '/' before them like `/[` or `/]` warning: `[BarD]` cannot be resolved, ignoring it...", "commid": "rust_pr_56010"}], "negative_passages": []} {"query_id": "q-en-rust-642b98b82520709ed4c0252d0e4257c3c6dbe8ce2477daa312654a27479ce8c9", "query": "